Sophisticated vs. Unsophisticated Markets
Bar-Gill distinguishes between:
- Sophisticated markets, where consumers are generally able to make informed decisions
- Unsophisticated markets, where consumers are more likely to misunderstand complex products
In sophisticated markets, AI-driven personalization, such as individualized pricing, can increase efficiency and expand access to products by offering lower prices to consumers with lower willingness to pay.
In contrast, in markets involving complex financial products, such as credit cards, mortgages, or insurance, AI-powered personalization may harm consumers who misjudge product costs or benefits.
For example, if a consumer mistakenly overestimates the value of a financial product, an AI system may set the price just below that mistaken valuation, leading the consumer to pay more than the product is actually worth.
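The pricing mechanic described above can be made concrete with a small sketch. All numbers, names, and the fixed margin below are hypothetical, chosen only to illustrate how pricing to a mistaken valuation produces an overpayment:

```python
# Illustrative sketch (hypothetical numbers): how personalized pricing can
# exploit a consumer's mistaken valuation of a financial product.

def personalized_price(perceived_value: float, margin: float = 0.05) -> float:
    """Set the price just below what the consumer believes the product is worth."""
    return perceived_value * (1 - margin)

true_value = 100.0       # what the product is actually worth to the consumer
perceived_value = 150.0  # the consumer's mistaken (inflated) valuation

price = personalized_price(perceived_value)
overpayment = price - true_value  # the harm: 142.50 charged vs. 100.00 of value

print(f"price charged: {price:.2f}")
print(f"overpayment:   {overpayment:.2f}")
```

In a sophisticated market, competition and accurate valuations push `perceived_value` toward `true_value` and the overpayment toward zero; the harm arises precisely when the consumer's valuation is mistaken.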
Algorithmic Price Discrimination
One area of growing concern is AI-enabled price discrimination, where algorithms tailor prices to each consumer’s willingness to pay.
Examples cited during the discussion included:
- Airlines experimenting with AI-based pricing strategies
- Online retail platforms offering individualized prices for identical products
- Insurance companies using algorithms to optimize premiums
While pricing based on individual risk, such as in insurance underwriting, is widely accepted, pricing based on willingness to pay raises significant consumer protection concerns.
As these practices expand, they are likely to attract increased attention from regulators and lawmakers, particularly at the state level.
AI Use Cases in Consumer Finance
The panel also highlighted several areas where AI is already being deployed across the consumer financial services lifecycle.
Marketing and Customer Acquisition
Financial institutions are using AI to analyze large data sets and create highly personalized marketing campaigns. Large language models can generate customized messaging tailored to specific demographic groups or individual consumers.
While this personalization improves targeting and engagement, it also creates compliance challenges related to:
- Misleading advertising
- Disclosure requirements
- Potential discriminatory targeting
Underwriting and Credit Decisions
AI-driven underwriting tools allow lenders to analyze alternative data, such as cash-flow information, to assess creditworthiness. These tools may expand access to credit for consumers who previously lacked traditional credit histories.
However, they also raise fair lending concerns under laws such as the Equal Credit Opportunity Act and its implementing regulation, Regulation B.
Because many AI models operate as “black boxes,” institutions may struggle to explain how decisions are made, an issue that can complicate discrimination analyses and regulatory oversight.
Fraud Detection
AI is particularly powerful in fraud detection, where pattern recognition is essential. Advanced models can analyze transaction behavior in real time to identify suspicious activity while minimizing unnecessary transaction declines.
These tools also allow financial institutions to communicate with customers instantly, confirming transactions or investigating suspicious activity through automated interactions.
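The pattern-recognition idea behind these tools can be sketched in a few lines. This is a deliberately simplified illustration (hypothetical amounts and threshold), not a description of any institution's actual model, which would use far richer features than transaction size:

```python
# Minimal sketch (hypothetical data and threshold) of real-time transaction
# scoring: flag amounts that deviate sharply from a customer's history,
# while approving routine spending to minimize unnecessary declines.
from statistics import mean, stdev

def fraud_score(history: list[float], amount: float) -> float:
    """Z-score of a new transaction amount against the customer's past amounts."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma if sigma else 0.0

history = [42.0, 55.0, 38.0, 61.0, 47.0]  # past transaction amounts
THRESHOLD = 3.0                            # hypothetical review threshold

for amount in (52.0, 900.0):
    flagged = fraud_score(history, amount) > THRESHOLD
    print(f"{amount:>7.2f} -> {'review' if flagged else 'approve'}")
```

Here a $52 charge scores well under the threshold and is approved, while a $900 outlier is routed for review, the same trade-off the panel described between catching fraud and avoiding false declines.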
Servicing and Collections
Agentic AI may soon conduct both inbound and outbound customer interactions, including:
- Customer service conversations
- Dispute resolution
- Collections calls
In some cases, AI-driven voice systems can conduct conversations that are indistinguishable from human interactions.
While this technology may improve efficiency and reduce costs, it raises legal concerns about consumer deception, harassment, and compliance with debt collection laws.
Core Legal Risks
Despite the novelty of the technology, many of the key legal risks arise from existing laws, not new AI-specific statutes.
Liability for AI Actions
As Joseph Schuster emphasized, AI is a tool, not a liability shield. Institutions remain responsible for the actions of AI systems just as they would for the actions of employees or third-party vendors.
Traditional legal doctrines, including agency law, vicarious liability, and unfair or deceptive acts or practices, continue to apply.
UDAP Risks
AI systems interacting with consumers may create risks under federal and state UDAP laws if they:
- Provide inaccurate information (“hallucinations”)
- Fail to deliver required disclosures
- Exhibit overconfidence in uncertain responses
- Engage in manipulative behavioral targeting
Fair Lending and Discrimination
AI models can unintentionally produce discriminatory outcomes, even when protected characteristics are not used as inputs.
As Professor Bar-Gill noted, future litigation may increasingly focus on disparate impact analysis, which examines whether outcomes disproportionately affect protected classes regardless of the model’s internal logic.
Governance and Risk Management
Given these risks, institutions are increasingly adopting governance frameworks for AI deployment.
Common practices include:
- AI governance committees with cross-functional participation
- Model inventories and risk-tiering systems
- Vendor due diligence for AI providers
- Data mapping and validation processes
- Continuous monitoring of AI outputs
Financial regulators are already asking supervised institutions detailed questions about how AI is being used. Institutions that implement structured governance processes are better positioned to respond to these inquiries.
The Rise of Agentic Commerce
One emerging application of agentic AI involves autonomous purchasing.
For example, a consumer might instruct an AI assistant to plan and purchase supplies for a birthday party. The AI would then select vendors, place orders, and initiate payments using the consumer’s stored payment credentials.
But what happens if the AI makes a mistake, such as ordering supplies for 1,000 guests instead of 10?
Such scenarios raise difficult questions involving:
- Consumer authorization
- Merchant liability
- Payment network rules
- Dispute resolution
These issues are only beginning to receive attention from regulators and industry participants.
Key Takeaways for Financial Institutions
The panel concluded with several recommendations for institutions exploring AI deployment.
First, distinguish beneficial uses from harmful ones. AI can deliver significant consumer benefits, but firms must remain vigilant about potential misuse or unintended harm.
Second, prioritize governance. Robust policies, oversight structures, and risk management processes are essential.
Third, remember that existing laws still apply. AI systems must comply with the same consumer protection, fair lending, and disclosure requirements that govern traditional processes.
Finally, institutions must recognize that failing to adopt AI also carries risks. As fraudsters increasingly deploy advanced technology, financial institutions may need AI tools simply to keep pace.
As AI technology continues to evolve, the legal framework governing its use in financial services will also develop. For now, however, the most important lesson is that innovation must proceed hand-in-hand with careful legal and compliance oversight.
Consumer Finance Monitor is hosted by Alan Kaplinsky, Senior Counsel at Ballard Spahr, and the founder and former chair of the firm's Consumer Financial Services Group. We encourage listeners to subscribe to the podcast on their preferred platform for weekly insights into developments in the consumer finance industry.
View the recording transcript here.