Artificial intelligence has become a major buzzword in the accounts receivable management industry over the past couple of years. Companies are using AI to identify which accounts to contact and when, and relying on algorithms to make more decisions and handle more collection inquiries. Yesterday, the Federal Trade Commission dropped another buzzword in a warning about the growth of AI: racism.
Studies have indicated that AI tools meant to benefit all individuals are actually making things worse for minorities, and the FTC wanted to remind all of the companies it regulates — including those in the ARM industry — that it will not tolerate such outcomes. Companies that use technology that ultimately discriminates against different groups of people may find themselves in violation of a number of statutes, including the Fair Credit Reporting Act.
To minimize the risk of this happening, the FTC laid out several principles to help companies use AI “truthfully, fairly, and equitably.” The principles include:
- Start with the right foundation
- Watch out for discriminatory outcomes
- Embrace transparency and independence
- Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results
- Tell the truth about how you use the data
- Do more good than harm
- Hold yourself accountable
“…keep in mind that if you don’t hold yourself accountable, the FTC may do it for you,” writes FTC staff attorney Elisa Jillson. “For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and [Equal Credit Opportunity Act]. Whether caused by a biased algorithm or by human misconduct of the more prosaic variety, the FTC takes allegations of credit discrimination very seriously.”