How TrueAccord Creates High Performing Compliant Content

By on July 31st, 2018 in Compliance, Product and Technology, User Experience
TrueAccord Blog

In debt collection, the language one uses in customer communications makes a big difference in liquidation rates. At TrueAccord, compliant content is the lifeline of our system. We continuously create, test, and revise our content to engage consumers more personably, which drives better results for our clients.

Our Goal: Create a Better Customer Experience

Communication styles in the debt collection industry are typically stiff and unapproachable. Most of the time, they sound like “legalese,” which can be off-putting, if not intimidating, to many customers. TrueAccord takes a digital-first approach to debt collection, primarily via emails, supplemented by SMS and phone calls, to effectively engage with our customers. We strive to make our content informative, actionable, and compassionate.

Our mission is to transform the debt collection industry by helping people regain their financial health. Thus, our content is written to reflect that. It’s not accusatory or condescending, but respectful and empowering. We focus on finding solutions and helping people by presenting options on how to resolve their debt.

How We Experiment with Content and Continually Improve It

Our proprietary content management system (CMS) was designed to help us craft and edit content based on massive amounts of dynamic data. We track everything from the customer’s balance, creditor, where they are in the debt lifecycle, if they’re in a payment plan, and how long we’ve been communicating with them to craft customized emails.

We constantly run experiments to generate the right content for each person. We try new subject lines to see if we can get more people to open emails. We write different calls to action on our buttons to see what drives better engagement. We also consider how far a consumer has to scroll down in an email or a landing page to get to the call-to-action button. If something’s not working well, we try something else. And our machine learning engine, which continuously learns from these experiments, helps us craft specific customer follow-ups that resonate. All of these small experiments add up to consistently high open and click rates from customers.
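As an illustration only (the post includes no code, and `assign_variant` and its parameters are hypothetical names of my own), a common way to run this kind of subject-line experiment is to bucket each customer deterministically by hashing a stable ID, so repeat sends stay consistent without storing per-customer state:

```python
import hashlib

def assign_variant(customer_id: str, experiment: str, variants: list) -> str:
    """Deterministically bucket a customer into one experiment variant.

    Hashing (experiment, customer_id) gives a stable, roughly even split
    without storing per-customer assignments.
    """
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical subject-line test: the same customer always sees the same line.
subject_lines = [
    "A quick update on your account",
    "Flexible options to resolve your balance",
]
chosen = assign_variant("customer-123", "subject-line-test", subject_lines)
assert chosen == assign_variant("customer-123", "subject-line-test", subject_lines)
```

Seeding the hash with the experiment name means a customer can land in different buckets across different experiments, which keeps experiments independent of one another.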

How We Keep Content Compliant

The debt collection industry is heavily regulated and is inherently protective of consumers, as it should be. We always look at communications content through a customer-focused and thorough compliance lens.

Our system provides code-driven compliance, automatically appending the appropriate disclosures and text for each user, such as disclosures for out-of-statute debts or state-specific disclosures. Our compliance rules dictate the content parameters for each customer, making it easier for our content writers to focus on writing compelling content. And because there is wide variation in our writing styles, syntax, and payment options, our content remains engaging.
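The CMS itself is proprietary, but rule-driven disclosure appending can be sketched roughly like this. The `Account` fields, rule predicates, and disclosure strings below are illustrative placeholders, not TrueAccord’s real rules or legal language:

```python
from dataclasses import dataclass

@dataclass
class Account:
    state: str
    out_of_statute: bool

# Each rule pairs a predicate with the disclosure it triggers. These are
# placeholder rules and placeholder text, not real legal language.
DISCLOSURE_RULES = [
    (lambda acct: acct.out_of_statute,
     "(placeholder disclosure for out-of-statute debt)"),
    (lambda acct: acct.state == "NY",
     "(placeholder New York-specific disclosure)"),
]

def required_disclosures(account):
    """Collect every disclosure whose rule matches this account."""
    return [text for matches, text in DISCLOSURE_RULES if matches(account)]

def render_email(body, account):
    """Append the required disclosures after the message body."""
    return "\n\n".join([body, *required_disclosures(account)])
```

The point of the design is that writers only produce `body`; the disclosure list is computed per account, so compliance text can never be forgotten.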

Our legal team gives our content a final review, and we get very granular to ensure the message is clear for every type of customer. We look at the actual message, the email layout and design (including button placement) and even the size of the font for our disclosures. We write content that engages customers but also clearly lays out the customer’s rights and responsibilities.

This process is highly collaborative. Our content and legal teams work in concert, continuously testing new scenarios to see how different options might come across. Our communications library constantly evolves as we keep improving our customer engagement.

Think About What You Can Say

Most of the industry is focused on what you can’t say, not on what you can say. That’s why we spend so much time perfecting our content, and why we end up with such strong response rates and overall results.

Scaling TrueAccord’s Infrastructure

By on April 12th, 2018 in Industry Insights, Machine Learning, Product and Technology

TrueAccord’s machine learning based system handles millions of consumer interactions a month and is growing fast. In this podcast, hear our Head of Engineering Mike Higuera talk about scaling challenges, prioritizing work on bugs vs. features, and other pressing topics he’s had to deal with while building our system.

Conversion At TrueAccord: Tuning A Machine Learning Engine

By on April 3rd, 2018 in Industry Insights, Machine Learning, Product and Technology

TrueAccord’s system is machine learning based, but every new product type requires a little bit of tuning to beat the competition. In this short podcast, hear our CSO and VP of Finance talk about the Conversion Team and what it does to make sure TrueAccord stays ahead of the competition.

 

Building An Experimentation Engine

By on March 20th, 2018 in Product and Technology

TrueAccord beats the competition on many levels, and does that through rigorous testing and improvement. Hear a talk from our CTO Paul Lucas and Director of Product Roger Lai on our approach to experimentation.

 

To download a transcript of this post, click here.

TrueAccord’s 2018 Customer Survey: Net Promoter Score and Digital Trends

By on February 27th, 2018 in Company News, Product and Technology, User Experience

 

We just posted our 2018 Customer Survey and the results are incredibly interesting.

Consumers in debt are increasingly feeling like TrueAccord customers, giving us a Net Promoter Score of 40, a new record for us and for the industry. We have also uncovered several interesting trends in customer preferences – not new, but definitely eye-opening.

Click here to download the infographic summarizing our findings.

Podcast: Creating a Positive Impact in Debt Collection Using Technology and Building Consumer-Centric Experiences

By on January 29th, 2018 in Compliance, Industry Insights, Machine Learning, Product and Technology

Our CEO, Ohad Samet, recorded a podcast with Lend Academy discussing the positive impact technology is creating in the collections space and the need for more innovation. They discuss TrueAccord’s unique approach to debt collection, using data-driven, digital communications to create deeply personalized consumer experiences.

The podcast also covers the current state of the collections industry and where it’s likely headed as regulatory pressure, consumer preferences, and compliance requirements converge, as well as how TrueAccord is using machine learning to deliver deeply personalized and engaging experiences for consumers while achieving higher recovery rates across various debt types.

Tune in and learn:

  • The state of the debt collection industry today and where it’s headed
  • How the use of machine learning is personalizing the debt collections experience for greater conversions
  • Why code-driven compliance outperforms traditional collections practices by reducing risk to organizations
  • How understanding consumers’ preferences for easy, self-service options with flexibility empowers  more consumers to pay off their debt and get on a path to financial health 

If you’d rather read the transcript, download it here.

The Results Are In: TrueAccord Consumer Satisfaction Survey

By on July 17th, 2017 in Company News, Industry Insights, Product and Technology

Today, 80 million consumers are in debt. They are often not treated well by collectors, and are subjected to harassment, intimidation, and an overall bad user experience that does not encourage or empower resolution. According to a recent CFPB survey, 1 in 4 consumers felt threatened by collectors, 3 in 4 reported that a collector did not honor a request to cease contact, over ⅓ reported being contacted at inconvenient times, and 40% reported being contacted 4+ times per week. These results are disheartening, and demonstrate that traditional debt collection agencies have neither adopted user-centric practices nor integrated technology to adapt to changing consumer needs. They are stuck making large volumes of phone calls to uninterested consumers who end up complaining.

When we set out to survey our consumers about their experience with TrueAccord, we weren’t quite sure what to expect, or if they would even respond. On one hand, we believe our data-driven, consumer-centric, digital-first experience is reinventing the debt collection process and will replace legacy agencies, and that consumers will appreciate it. On the other, we are still talking about debt collection, and most of these consumers have likely had multiple negative collections experiences and have low expectations of the process. They aren’t likely to recommend a debt collector, and as we’ve seen above, are highly likely to have had a bad experience.

Overall satisfaction

What we found was both exciting and inspiring: 80% of respondents were satisfied with their experience with TrueAccord. It’s an unprecedented number in an industry that, for decades, has only attracted negative attention. TrueAccord is building a product and brand focused on delivering great user experiences and helping consumers rebuild financial health, and consumers are responding to that. Traditional agencies’ behaviors have hurt liquidation, damaged brand reputation, and created significant compliance risk, yet they haven’t changed their ways. We are showing that working differently is possible – and that it yields better results.

Tone

81% of consumers stated that the tone and personalized offers in our messages were appropriate for their individual needs. Our content is personalized and tailored to empower and motivate consumers to pay off their debt, combined with a wide selection of custom payment plans. Consumers’ needs are served and they are treated like customers. Our clients understand that debt collection is part of a natural consumer life cycle; at one point or another, most of us will encounter debt collectors. Unfortunately, traditional agencies lack the technology and best practices to deliver good user experiences, leaving consumers feeling frustrated, angry, and wronged. This does not have to be the case.

User experience

80% of our users had an overall positive experience with TrueAccord and recognized TrueAccord as different and better than other agencies. A large proportion of the other 20% resolved their debt by disputing it, so even though they may not feel great about their experience, they were able to dispute and discharge a debt electronically and with minimum hassle. It’s exciting to see that consumers see our brand the way we see ourselves, as innovators focused on great user experiences. We believe helping people get out of debt has positive impact for everyone involved, even (and sometimes more so) if getting out of debt means it can’t be collected.

What consumers had to say:

“You were easy to work with and the payment plan worked for me. Even when I had to make a small change, it was no problem. I’m glad to have the debt behind me. I appreciate the email correspondence as opposed to numerous phone calls.”

“They worked with me and I needed that.”

“It is always a pleasant experience dealing with True Accord.”

“Wish you could handle all my debts.”

“I love the fact that TrueAccord was kind and polite! I wanted to pay my debt but needed a plan that wouldn’t leave me over spent or struggling every month. TrueAccord was happy to accept the payment plan I requested. Thank you!”

“TrueAccord provided me a way to be true to my word.”

“The agents are all very friendly and accommodating. It doesn’t feel like you are dealing with a collection agency.”

“The best collection agency ever!”

 

How Much Testing is Enough Testing?

By on February 2nd, 2017 in Product and Technology

(Photo: the Golden Gate Bridge by night)


One hundred years ago, a proposal took hold to build a bridge across the Golden Gate Strait at the mouth of San Francisco Bay.  For more than a decade, engineer Joseph Strauss drummed up support for the bridge throughout Northern California.  Before the first concrete was poured, his original double-cantilever design was replaced with Leon Moisseiff’s suspension design.  Construction on the latter began in 1933, seventeen years after the bridge was conceived.  Four years later, the first vehicles drove across the bridge.  With the exception of a retrofit in 2012, there have been no structural changes since.  21 years in the making.  Virtually no changes for the next 80.

Now, compare that with a modern Silicon Valley software startup.  Year one: build an MVP.  Year two: funding and product-market fit.  Year three: profitability?…growth? Year four: make it or break it.  Year five: if the company still exists at this point, you’re lucky.

Software in a startup environment is a drastically different engineering problem than building a bridge.  So is the testing component of that problem.  The bridge will endure 100+ years of heavy use and people’s lives depend upon it.  One would be hard-pressed to over-test it.  A software startup endeavor, however, is prone to monthly changes and usually has far milder consequences when it fails (although being in a regulated environment dealing with financial data raises the stakes a bit).  Over-testing could burn through limited developer time and leave the company with an empty bank account and a fantastic product that no one wants.

I want to propose a framework to answer the question of how much testing is enough. I’ll outline six criteria, then throw them at a few examples. Skip to the charts at the end and come back if you are a highly visual person like me. In general, I am proposing that testing efforts be assessed on a spectrum according to the nature of the product under test. A bridge would sit at one end of the spectrum, whereas a prototype for a free app that makes funny noises would sit at the other.

Assessment Criteria

Cost of Failure

What is the material impact if this thing fails?  If a bridge collapses, it’s life and death and a ton of money.  Similarly, in a stock trading app, there are potentially big dollar and legal impacts when the numbers are wrong.  On the contrary, an occasional failure in a dating app would annoy customers and maybe drive a few of them away, but wouldn’t be catastrophic. Bridges and stock trading have higher costs of failure and thus merit more rigorous testing.

Amount of Use

How often is this thing used and by how many people?  In other words, if a failure happens in this component, how widespread will the impact be?  A custom report that runs once a month gets far less use than the login page.  If the latter fails, a great number of users will feel the impact immediately.  Thus, I really want to make sure my login page (and similar) are well-tested.

Visibility

How visible is the component?  How easy will it be for customers to see that it’s broken?  If it’s a backend component that only affects engineers, then customers may not know it’s broken until they start to see second-order side effects down the road.  I have some leeway in how I go about fixing such a problem.  In contrast, a payment processing form would have high visibility.  If it breaks, it will give the impression that my app is broken big-time and will cause a fire drill until it is fixed.  I want to increase testing with increased visibility.

Lifespan

This is a matter of return on effort.  If the thing I’ve built is a run-once job, then any bugs will only show up once.  On the other hand, a piece of code that is core to my application will last for years (and produce bugs for years).  Longer lifespans give me greater returns on my testing efforts.  If a little extra testing can avoid a single bug per month, then that adds up to a lot of time savings when the code lasts for years.

Difficulty of Repair

Back to the bridge example, imagine there is a radio transmitter at the top.  If it breaks, a trained technician would have to make the climb (several hours) to the top, diagnose the problem, swap out some components (if he has them on hand), then make the climb down.  Compare that with a small crack in the road.  A worker spends 30 minutes squirting some tar into it at 3am.  The point here is that things which are more difficult to repair will result in a higher cost if they break.  Thus, it’s worth the larger investment of testing up front.  It is also worth mentioning that this can be inversely related to visibility.  That is, low visibility functionality can go unnoticed for long stretches and accumulate a huge pile of bad data.

Complexity

Complex pieces of code tend to be easier to break than simple code.  There are more edge cases and more paths to consider.  In other words, greater complexity translates to greater probability of bugs.  Hence, complex code merits greater testing.

Examples

Golden Gate Bridge

This is a large last-forever sort of project.  If we get it wrong, we have a monumental (literally) problem to deal with.  Test continually as much as possible.

Cost of failure: 5
Amount of use: 5
Visibility: 5
Lifespan: 5
Difficulty of repair: 5
Complexity: 4

Cat Dating App

Once the word gets out, all of the cats in the neighborhood will be swiping in a cat-like unpredictable manner on this hot new dating app.  No words, just pictures.  Expect it to go viral then die just as quickly.  This thing will not last long and the failure modes are incredibly minor.  Not worth much time spent on testing.

Cost of failure: 1
Amount of use: 4
Visibility: 4
Lifespan: 1
Difficulty of repair: 1
Complexity: 1

Enterprise App — AMEX Payment Processing Integration

Now, we get into the nuance.  Consider an American Express payment processing integration i.e. the part of a larger app that sends data to AMEX and receives confirmations that the payments were successful.  For this example, let’s assume that only 1% of your customers are AMEX users and they are all monthly auto-pay transactions.  In other words, it’s a small group that will not see payment failures immediately.  Even though this is a money-related feature, it will not merit as much testing as perhaps a VISA integration since it is lightly used with low visibility.

Cost of failure: 2
Amount of use: 1
Visibility: 1
Lifespan: 5
Difficulty of repair: 2
Complexity: 2

Enterprise App — De-duplication of Persons Based on Demographic Info

This is a real problem for TrueAccord.  Our app imports “people” from various sources.  Sometimes, we get two versions of the same “person”.  It is to our advantage to know this and take action accordingly in other parts of our system.  Person-matching can be quite complex given that two people can easily look very similar from a demographic standpoint (same name, city, zip code, etc.) yet truly be different people.  If we get it wrong, we could inadvertently cross-pollinate private financial information.  To top it all off, we don’t know what shape this will take long term and are in a pre-prototyping phase. In this case, I am dividing the testing assessment into two parts: prototyping phase and production phase.
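To see why this is hard, consider a deliberately naive matcher. The `Person` fields and matching logic below are hypothetical, and this sketch would happily merge two genuinely different people who share a name, city, and zip code, which is exactly the failure mode described above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Person:
    name: str
    city: str
    zip_code: str
    ssn_last4: Optional[str] = None  # strong identifier, often missing

def likely_same_person(a, b):
    """Naive demographic match; a real matcher would be probabilistic."""
    if a.ssn_last4 and b.ssn_last4 and a.ssn_last4 != b.ssn_last4:
        return False  # a strong identifier disagrees: different people
    return (a.name.lower() == b.name.lower()
            and a.city.lower() == b.city.lower()
            and a.zip_code == b.zip_code)
```

When strong identifiers are missing, this rule can only express “probably the same,” and the cost of a false merge (cross-pollinating private financial data) is far higher than the cost of a missed match, which is why the production version warrants real testing.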

Prototyping

The functionality will be in dry-run mode.  Other parts of the app will not know it exists and will not take action based on its results.  Complexity alone drives light testing here.

Cost of failure: 1
Amount of use: 1
Visibility: 1
Lifespan: 1
Difficulty of repair: 1
Complexity: 4

Production

Once adopted, this would become rather core functionality with a wide-sweeping impact.  If it is wrong, then other wrong data will be built upon it, creating a heavy cleanup burden and further customer impact.  That being said, it will still have low visibility since it is an asynchronous backend process.  Moderate to heavy testing is needed here.

Cost of failure: 4
Amount of use: 3
Visibility: 1
Lifespan: 3
Difficulty of repair: 4
Complexity: 4

Testing at TrueAccord

TrueAccord is three years old. We’ve found product-market fit and are on the road to success (fingers crossed). At this juncture, engineering time is a bit scarce, so we have to be wise in how it is allocated. That means we don’t have the luxury of 100% test coverage. Though we don’t formally apply the above heuristics, they are evident in the automated tests that exist in our system. For example, two of our larger test suites are PaymentPlanHelpersSpec and PaymentPlanScannerSpec, at 1500 and 1200 lines respectively. As you might guess, these are related to handling customers’ payment plans. This is fairly complex, highly visible, highly used core functionality for us. Contrast that with TwilioClientSpec at 30 lines. We use Twilio very lightly, with low visibility and a low cost of failure. Since we are only calling a single endpoint on their API, this is a very simple piece of code. In fact, the testing that exists is just for a helper function, not the API call itself.

I’d love to hear about other real world examples, and I’d love to hear if this way of thinking about testing would work for your software startup.  Please leave us a comment with your point of view!

Applying Machine Learning to Reinvent Debt Collection

By on January 24th, 2017 in Product and Technology

Our Head of Data Science, Richard Yeung, gave a talk at the Global Big Data Conference. The talk focused on the first steps from heuristics to a probabilistic model when building a machine learning system based on expert knowledge. This feedback loop is what allowed our automated system to replace the old-school call center model with a modernized, personalized approach.

You can find the slides here.

Skipping Photoshop: How we made ID Badge creation 10x faster by using facial recognition

By on November 1st, 2016 in Product and Technology

Recently TrueAccord has grown to the size where our compliance stance requires the addition of photo ID badges. It’s a rite of passage all small-but-growing companies endure and ours is no different.

Since I have previous experience setting up badge systems and dealing with the printers, I volunteered to kick off this process. I’ve evaluated pre-existing badge creation software in the past and found it all significantly lacking. In a previous environment, I wrote my own badge creation software, which fit the needs at the time. The key phrase being “at the time”. For tech startups, it’s not unusual to go from onboarding one person every other week to 10 people a week in a year or two. That means every manual onboarding step goes from “oh well, it’s just once every other week” to “we need to dedicate several hours of someone’s time every week to this process.” That same growth period also tends to be when your operations organizations (IT, Facilities, and Office Admin) are the most short-staffed and the least likely to have the free time to do that. “Where is this going?” and “How much work does this mean for me?”, you ask? Allow me to share with you how I automated our badge system – Photoshop included.
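As a sketch of one piece of such a pipeline (the post’s actual code isn’t shown here, and the function and parameter names below are my own): once any off-the-shelf face detector returns a face bounding box, turning it into a badge-photo crop is just geometry, padding for headroom and clamping to the image:

```python
def badge_crop(face_box, image_size, headroom=0.6, side=0.5):
    """Expand a detected face box into a badge-photo crop rectangle.

    face_box is (x, y, w, h) as returned by a typical detector;
    headroom and side are padding fractions of the face height/width.
    The crop is clamped to the image bounds.
    """
    x, y, w, h = face_box
    img_w, img_h = image_size
    left = max(0, int(x - side * w))
    top = max(0, int(y - headroom * h))
    right = min(img_w, int(x + w + side * w))
    bottom = min(img_h, int(y + h + headroom * h))
    return left, top, right, bottom
```

In practice the detection step could come from something like OpenCV’s `CascadeClassifier.detectMultiScale`, which reports faces in this same (x, y, w, h) form; the crop rectangle then feeds straight into an image-resize step sized for the badge template.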
