I’m Excited to Join the CFPB’s Consumer Advisory Board.

I’m honored and excited to have been appointed to the CFPB’s Consumer Advisory Board. With this appointment, the CFPB is sending a strong message about how it views technology's role in shaping the future of consumer finance in general, and debt collection in particular. I'm proud to be able to represent the industry's point of view while making sure we usher in a new era of great user experience and technology innovation.

When we founded TrueAccord in 2013, we set a goal for ourselves: to go to Washington and influence policy making in the debt collection space. Since then, we have engaged with the CFPB in various ways: quarterly meetings through Project Catalyst, participating in the SBREFA panel for the proposed debt collection rule, and even a potential data exchange. We view policy making that enables better debt collection as part of our mission, and this appointment is another step in that process.

This appointment isn't about me. When I attend these meetings, I will represent the industry, TrueAccord, its team, and our consumers. I will take it very seriously, just as we all take our mission. This could not have happened without the TrueAccord team’s hard work and laser focus on making a difference.

Read More

Default Rates Are Going Up As Bad Collection Practices Continue to Ignore Debtors

The US economy has taken a turn for the better in the past year. Unemployment has plummeted, the Federal Reserve is raising rates, and the stock market is soaring. However, for the past two quarters, several issuers reported an increase in charge-off rates. While banks may be changing their underwriting standards to encourage growth, there is another contributing factor: a fundamental shift in the way consumers live and work, one that the credit card industry has failed to adjust to.

2008-2009 was a turning point for the US economy. Millions of jobs were lost across all industries, without much hope of recovery. College grads joined a crippled job market and felt they needed to “hustle” and find alternative means to sustain themselves. Uber, founded in 2009, created an opportunity as standard-bearer of the gig economy, and many others have followed suit. At the same time, social media became prevalent as Facebook went international in 2007. These processes created a new kind of consumer: the millennial cohort.

Millennials are on the move, working several unsteady jobs, managing their own time, and relying heavily on social media and digital communications. They use traditional financial solutions like credit cards, but the dominance of mobile and digital in their lives drives their preferences for how they communicate and interact with people and businesses. However, if they default, they are effectively sent back in time to a world that knows nothing about them and does little to service them effectively. When a system that “always worked” faces a new type of consumer behavior, it breaks - and leads to increased defaults and losses.

Consumers expect a better user experience - even in collections

As digital, always-connected users, millennials expect their bank - or the bank’s collection vendor - to fit their lifestyle and preferences. Unfortunately, the debt collection and recovery industry hasn’t changed in decades. There has been little investment in moving away from phone calls and letters to a more digital, technology-driven process that can deliver a better user experience for those in debt.

Contact through digital channels is table stakes for the digital consumer. Many have never visited a bank branch, and most will not answer a call from an unidentified number or respond to a letter. According to Accenture’s “Banking Customer 2020”, 58% of consumers use their mobile device when seeking support from their bank, 53% report going to their online banking center at least once per year to sort out an issue, and 78% report doing so to make a payment. More than half of the population has adopted digital channels to manage their lives, and will not respond to cold calls and letters in nondescript white envelopes. Call center-based collection approaches fail to get these consumers on the phone, and debts make their way to charge-off without any meaningful engagement from the consumer.

Once contacted, millennials expect clear communications. The common disclosures used in debt collection, for example, feel onerous and obscure, causing consumers to disengage (the CFPB recognized this and is planning a survey regarding disclosures). The dispute process - asking for more information about their debt - is onerous and slow. Consumers need, and deserve, communication that drives them to action rather than intimidates and coerces them. Collectors are pressured to cold call and create instant rapport with unwilling debtors - and they are failing at this task in growing numbers.
Finally, consumers need flexible payment options that fit their work schedules. As Robert Reich notes, while 1099 workers may earn a slightly higher hourly wage when they are working, their hours are irregular and difficult to schedule. That means irregular paychecks that vary in size, and disposable income that varies with them. A consumer might be able to pay $100 this pay period, $150 next time, and only $50 the one after that. Traditional approaches fail to adjust to these realities, focusing on fixed payment plans that these consumers cannot always keep up with.
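To make that concrete, here is a minimal sketch of what a flexible payment arrangement could look like as a data model - hypothetical names and simplified logic, not TrueAccord's actual system - where each installment can differ in amount and date, and the plan simply tracks the remaining balance:

```scala
import java.time.LocalDate

// Hypothetical sketch: a payment plan where installments can vary in
// amount and timing, instead of a fixed monthly amount.
case class Installment(dueDate: LocalDate, amountCents: Long)

case class FlexiblePlan(balanceCents: Long, installments: List[Installment]) {
  // Total committed so far across all scheduled installments.
  def committedCents: Long = installments.map(_.amountCents).sum

  // Remaining balance after the scheduled installments are paid.
  def remainingCents: Long = math.max(0L, balanceCents - committedCents)

  // Add whatever the consumer can afford this pay period.
  def addInstallment(dueDate: LocalDate, amountCents: Long): FlexiblePlan =
    copy(installments = installments :+ Installment(dueDate, amountCents))
}

object FlexiblePlanExample extends App {
  // A $300 balance paid down as $100, then $150, then $50 - matching the
  // irregular paychecks described above.
  val plan = FlexiblePlan(30000L, Nil)
    .addInstallment(LocalDate.of(2017, 4, 1), 10000L)
    .addInstallment(LocalDate.of(2017, 4, 15), 15000L)
    .addInstallment(LocalDate.of(2017, 5, 1), 5000L)

  println(s"Remaining: ${plan.remainingCents} cents") // Remaining: 0 cents
}
```

The point of the sketch is simply that the schedule adapts to the consumer's cash flow rather than forcing the consumer's cash flow to adapt to the schedule.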

Read More

Fintech Companies Are Learning to Work with Regulators

This article, written by our In-House Counsel Adam Gottlieb, first appeared in RMA Insights Magazine. The word “startup” conjures images of stereotypical open offices, complete with ping pong tables, standing desks, and people in hoodies feverishly hammering at keyboards. Startups are often associated with high risk, scrappiness, and the ability to break things and move fast - all a stark contrast to the bureaucratic and highly regulated environment that most debt buyers and collectors operate in. Yet, as startups venture into financial technology, they have had to adjust to new operating principles and new stakeholders, with the government chief among them.

Read More

How Tax Season Affects Debt Collection – and TrueAccord

Tax Season in Debt Collection

Tax season is to debt collection as holiday season is to retail. According to the National Retail Federation, of the 66% of consumers who are expecting a tax refund this year, 35.5% plan to spend their refund on paying down debt. For this reason, mid-February through May is considered by many in the industry to be the most productive time of the year for debt collection.

Read More

Live from LendIt: TrueAccord on AI in FinTech

In case you missed it, our CEO Ohad Samet spoke on a panel at the LendIt Conference about the use of artificial intelligence in FinTech. He was joined by industry leaders for a compelling discussion - this video is not to be missed.

Read More

Live from LendIt: TrueAccord on Breaking Banks

This week, Breaking Banks host Brett King chatted with Ohad Samet, CEO of TrueAccord, about debt rehabilitation and how machine learning and AI can help people fix their credit situations.

Read More

Hear our CEO talk about AI in Fintech at LendIt

Our CEO, Ohad Samet, will be part of a panel discussing Artificial Intelligence Uses in Fintech. The panel will be held at 2:15pm Eastern on Tuesday, 3/7.

Read More

On American Banker: Real issue for debt collectors is the irrelevance of telephones

In a recent American Banker article, our team argues that the regulatory discussion around phone calls in debt collection is rapidly becoming irrelevant for one very important reason: consumers don't answer their phones.

Read More

How Much Testing is Enough Testing?

One hundred years ago, a proposal took hold to build a bridge across the Golden Gate Strait at the mouth of San Francisco Bay. For more than a decade, engineer Joseph Strauss drummed up support for the bridge throughout Northern California. Before the first concrete was poured, his original double-cantilever design was replaced with Leon Moisseiff's suspension design. Construction on the latter began in 1933, seventeen years after the bridge was conceived. Four years later, the first vehicles drove across the bridge. With the exception of a retrofit in 2012, there have been no structural changes since. Twenty-one years in the making. Virtually no changes for the next eighty.

Now, compare that with a modern Silicon Valley software startup. Year one: build an MVP. Year two: funding and product-market fit. Year three: profitability? growth? Year four: make it or break it. Year five: if the company still exists at this point, you're lucky.

Software in a startup environment is a drastically different engineering problem than building a bridge. So is the testing component of that problem. The bridge will endure 100+ years of heavy use and people's lives depend upon it. One would be hard-pressed to over-test it. A software startup endeavor, however, is prone to monthly changes and usually has far milder consequences when it fails (although being in a regulated environment dealing with financial data raises the stakes a bit). Over-testing could burn through limited developer time and leave the company with an empty bank account and a fantastic product that no one wants.

I want to propose a framework to answer the question of how much testing is enough. I'll outline six criteria, then throw them at a few examples. Skip to the scores at the end and come back if you are a highly visual person like me. In general, I am proposing that testing efforts be assessed on a spectrum according to the nature of the product under test. A bridge would be on one end of the spectrum, whereas a prototype for a free app that makes funny noises would be on the other.

Assessment Criteria

Cost of Failure

What is the material impact if this thing fails? If a bridge collapses, it's life and death and a ton of money. Similarly, in a stock trading app, there are potentially big dollar and legal impacts when the numbers are wrong. On the contrary, an occasional failure in a dating app would annoy customers and maybe drive a few of them away, but wouldn’t be catastrophic. Bridges and stock trading have higher costs of failure and thus merit more rigorous testing.

Amount of Use

How often is this thing used, and by how many people? In other words, if a failure happens in this component, how widespread will the impact be? A custom report that runs once a month gets far less use than the login page. If the latter fails, a great number of users will feel the impact immediately. Thus, I really want to make sure my login page (and similar) are well tested.

Visibility

How visible is the component? How easy will it be for customers to see that it's broken? If it's a backend component that only affects engineers, then customers may not know it's broken until they start to see second-order side effects down the road. I have some leeway in how I go about fixing such a problem. In contrast, a payment processing form has high visibility. If it breaks, it will give the impression that my app is broken big-time, and will cause a fire drill until it is fixed. I want to increase testing with increased visibility.
Lifespan

This is a matter of return on effort. If the thing I've built is a run-once job, then any bugs will only show up once. On the other hand, a piece of code that is core to my application will last for years (and produce bugs for years). Longer lifespans give me greater returns on my testing efforts. If a little extra testing can avoid a single bug per month, that adds up to a lot of time savings when the code lasts for years.

Difficulty of Repair

Back to the bridge example: imagine there is a radio transmitter at the top. If it breaks, a trained technician has to make the climb (several hours) to the top, diagnose the problem, swap out some components (if he has them on hand), then make the climb down. Compare that with a small crack in the road: a worker spends 30 minutes squirting some tar into it at 3am. The point here is that things which are more difficult to repair result in a higher cost when they break. Thus, they are worth a larger investment in testing up front. It is also worth mentioning that this can be inversely related to visibility. That is, low-visibility functionality can go unnoticed for long stretches and accumulate a huge pile of bad data.

Complexity

Complex pieces of code tend to be easier to break than simple code. There are more edge cases and more paths to consider. In other words, greater complexity translates to greater probability of bugs. Hence, complex code merits greater testing.
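Before diving into the examples, here is a minimal sketch of how the six criteria and a rough testing-effort recommendation could be expressed in code. The names and thresholds are hypothetical illustrations, not part of our actual codebase; the scores plugged in below are the ones from the examples that follow.

```scala
// Hypothetical sketch: score a component on the six criteria (1-5 each)
// and turn the total into a rough testing-effort recommendation.
case class TestingAssessment(
  costOfFailure: Int,
  amountOfUse: Int,
  visibility: Int,
  lifespan: Int,
  difficultyOfRepair: Int,
  complexity: Int
) {
  def total: Int =
    costOfFailure + amountOfUse + visibility + lifespan + difficultyOfRepair + complexity

  // Arbitrary thresholds, purely for illustration.
  def recommendation: String =
    if (total >= 24) "test continually, as much as possible"
    else if (total >= 15) "moderate to heavy testing"
    else "light testing"
}

object AssessmentExample extends App {
  val goldenGateBridge = TestingAssessment(5, 5, 5, 5, 5, 4)
  val catDatingApp     = TestingAssessment(1, 4, 4, 1, 1, 1)

  println(s"Bridge: ${goldenGateBridge.recommendation}") // test continually...
  println(s"Cat app: ${catDatingApp.recommendation}")    // light testing
}
```

The thresholds are arbitrary; the point is that the testing effort follows the combined score rather than any single criterion.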
Examples

Golden Gate Bridge

This is a large, last-forever sort of project. If we get it wrong, we have a monumental (literally) problem to deal with. Test continually, as much as possible.

Cost of failure: 5
Amount of use: 5
Visibility: 5
Lifespan: 5
Difficulty of repair: 5
Complexity: 4

Cat Dating App

Once the word gets out, all of the cats in the neighborhood will be swiping in a cat-like, unpredictable manner on this hot new dating app. No words, just pictures. Expect it to go viral, then die just as quickly. This thing will not last long and the failure modes are incredibly minor. Not worth much time spent on testing.

Cost of failure: 1
Amount of use: 4
Visibility: 4
Lifespan: 1
Difficulty of repair: 1
Complexity: 1

Enterprise App -- AMEX Payment Processing Integration

Now we get into the nuance. Consider an American Express payment processing integration, i.e. the part of a larger app that sends data to AMEX and receives confirmations that the payments were successful. For this example, let’s assume that only 1% of your customers are AMEX users and they are all monthly auto-pay transactions. In other words, it’s a small group that will not see payment failures immediately. Even though this is a money-related feature, it will not merit as much testing as, say, a VISA integration, since it is lightly used with low visibility.

Cost of failure: 2
Amount of use: 1
Visibility: 1
Lifespan: 5
Difficulty of repair: 2
Complexity: 2

Enterprise App -- De-duplication of Persons Based on Demographic Info

This is a real problem for TrueAccord. Our app imports “people” from various sources. Sometimes, we get two versions of the same “person”. It is to our advantage to know this and take action accordingly in other parts of our system. Person-matching can be quite complex, given that two people can easily look very similar from a demographic standpoint (same name, city, zip code, etc.) yet truly be different people. If we get it wrong, we could inadvertently cross-pollinate private financial information. To top it all off, we don’t know what shape this will take long term, and we are in a pre-prototyping phase. In this case, I am dividing the testing assessment into two parts: a prototyping phase and a production phase.

Prototyping

The functionality will be in dry-run mode. Other parts of the app will not know it exists and will not take action based on its results. Complexity alone drives light testing here.

Cost of failure: 1
Amount of use: 1
Visibility: 1
Lifespan: 1
Difficulty of repair: 1
Complexity: 4

Production

Once adopted, this would become rather core functionality with a wide-sweeping impact. If it is wrong, then other wrong data will be built upon it, creating a heavy cleanup burden and further customer impact. That being said, it will still have low visibility since it is an asynchronous backend process. Moderate to heavy testing is needed here.

Cost of failure: 4
Amount of use: 3
Visibility: 1
Lifespan: 3
Difficulty of repair: 4
Complexity: 4

Testing at TrueAccord

TrueAccord is three years old. We’ve found product-market fit and are on the road to success (fingers crossed). At this juncture, engineering time is a bit scarce, so we have to be wise in how it is allocated. That means we don’t have the luxury of 100% test coverage. Though we don’t formally apply the above heuristics, they are evident in the automated tests that exist in our system. For example, two of our larger test suites are PaymentPlanHelpersSpec and PaymentPlanScannerSpec, at 1500 and 1200 lines respectively. As you might guess, these are related to handling customers’ payment plans. This is fairly complex, highly visible, highly used core functionality for us. Contrast that with TwilioClientSpec at 30 lines. We use Twilio very lightly, with low visibility and low cost of failure. Since we are only calling a single endpoint on their API, this is a very simple piece of code. In fact, the testing that exists is just for a helper function, not the API call itself.

I’d love to hear about other real-world examples, and I’d love to hear whether this way of thinking about testing would work for your software startup. Please leave us a comment with your point of view!

Read More

Applying Machine Learning to Reinvent Debt Collection

Our Head of Data Science, Richard Yeung, gave a talk at the Global Big Data conference. The talk focused on the first steps from heuristics to a probabilistic model when building a machine learning system based on expert knowledge. This feedback loop - encoding expert knowledge as heuristics, then learning from the outcomes - is what allowed our automated system to replace the old-school, call center-based model with a modernized, personalized approach. You can find the slides here.
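For readers who won't get to the slides, here is a toy sketch of what that first step can look like: an expert rule re-expressed as a weighted probabilistic score whose parameters can later be re-fit from observed outcomes. The feature names and weights are invented for illustration and are not taken from the talk or from TrueAccord's system.

```scala
// Hypothetical illustration of the "heuristics to probabilistic model" step:
// start from an expert rule, then re-express it as a weighted model whose
// weights can be updated from observed outcomes. Feature names are made up.
case class Features(openedLastEmail: Boolean, daysSincePlacement: Int)

object HeuristicVsModel {
  // Step 1: the expert heuristic - a hard yes/no rule.
  def expertRule(f: Features): Boolean =
    f.openedLastEmail && f.daysSincePlacement < 30

  // Step 2: the same intuition as a logistic score. Initial weights encode
  // the expert's belief; a training loop would later re-fit them from data.
  def probabilityOfResponse(f: Features,
                            bias: Double = -1.0,
                            wOpened: Double = 2.0,
                            wRecency: Double = -0.05): Double = {
    val z = bias +
      wOpened * (if (f.openedLastEmail) 1.0 else 0.0) +
      wRecency * f.daysSincePlacement
    1.0 / (1.0 + math.exp(-z)) // sigmoid
  }

  def main(args: Array[String]): Unit = {
    val consumer = Features(openedLastEmail = true, daysSincePlacement = 10)
    println(expertRule(consumer))            // true
    println(probabilityOfResponse(consumer)) // roughly 0.62
  }
}
```

The shift from the Boolean rule to the probability is what creates the feedback loop mentioned above: outcomes can adjust the weights, while the expert's knowledge still shapes the starting point.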

Read More