
AI in Housing Finance: Navigating Benefits, Risks, and the Need for Transparent Data

May 19, 2025
Author:
Polygon Research

Artificial intelligence is no longer a laboratory curiosity in housing finance. From selecting borrowers and pricing credit to powering anomaly-detection systems for regulators, AI is reshaping mortgage origination at an unprecedented pace. The Government Accountability Office’s new study, Artificial Intelligence: Use and Oversight in Financial Services, released May 19, 2025, provides a comprehensive look at this transformation. It confirms that advancements in algorithms, vast data availability, and the surge in AI enthusiasm post-ChatGPT have embedded AI deeply into financial services, including the mortgage sector.

At Polygon Research, we see AI’s rapid adoption as both an opportunity and a challenge. The GAO report highlights significant benefits like cost savings and faster loan approvals alongside critical risks, such as bias and regulatory gaps. Our tools, HMDAVision and CensusVision, built on granular microdata, offer a solution: transparent, high-quality data to ensure AI in housing finance is fair, defensible, and effective. Here’s how the GAO’s findings align with our mission, and why our tools are essential for navigating this AI-driven landscape.

AI's Promise: Efficiency, Inclusion, and Speed

The GAO report details how AI is delivering tangible benefits to housing finance. Lenders are leveraging AI to streamline mortgage origination, with industry estimates suggesting that AI-driven underwriting can reduce loan processing times by up to 50%. This efficiency translates into lower operational costs and faster approvals, making homeownership more accessible for borrowers. For example, AI-powered chatbots can handle routine inquiries such as balance checks or address updates, saving an estimated $0.70 per interaction compared to human agents, according to a 2017 Juniper Research study cited by GAO stakeholders.

Beyond efficiency, AI holds the potential to enhance financial inclusion. The report notes that some AI vendors report a 40% increase in loan approvals for women and borrowers of color after replacing traditional scorecards with AI models. By incorporating alternative data like rent and utility payments, AI can assess creditworthiness for those with thin or no credit histories, a group often excluded from conventional lending. This aligns with broader industry trends: the Financial Stability Board, referenced in the GAO report, highlights AI’s ability to extend credit to underserved populations, potentially reshaping access to housing finance.

The Risks: Bias, Opacity, and Data-Quality Pitfalls

Yet, the GAO report is clear that AI’s benefits come with significant risks. One of the most pressing is bias. AI models trained on historical lending data can inadvertently perpetuate past discrimination. For instance, if legacy practices favored certain demographics (say, by prioritizing zip codes with higher property values), an AI model might learn to replicate those patterns, leading to unfair credit denials or higher rates for protected classes. The report cites testimony from FinRegLab’s CEO, Melissa Koide, warning that complex AI models can infer race or gender from seemingly neutral data, amplifying disparate impacts that are harder to detect as models grow more sophisticated.

Data quality is another concern. Poor or incomplete inputs can lead to inaccurate predictions, risking financial losses for lenders and unfair outcomes for borrowers. The GAO also flags “hallucinations” in generative AI (credible but erroneous outputs), a particular worry when such models influence high-stakes decisions like mortgage approvals. Explainability compounds these issues: opaque AI systems make it difficult to justify adverse actions, potentially violating fair lending laws like the Equal Credit Opportunity Act (ECOA).

These risks aren’t abstract. They’re already prompting scrutiny from regulators and lenders alike, underscoring the need for robust data and oversight to keep AI in check.

Uneven Oversight: A Regulatory Patchwork

The GAO report tells a story of a fragmented regulatory landscape for AI in financial services, with implications for housing finance. Most banking regulators, such as the Federal Reserve, FDIC, and OCC, rely on existing model-risk management and fair-lending frameworks to oversee AI. Some have issued AI-specific guidance or conducted targeted examinations, but the approach varies. The National Credit Union Administration (NCUA), which oversees credit unions critical to affordable housing, stands out for its limitations:

  • Limited Guidance. Unlike its peers, the NCUA lacks detailed model-risk management guidance tailored to AI. Its current framework, as noted in the report, doesn’t provide examiners or credit unions with sufficient tools to manage AI-related risks.
  • Third-Party Gap. The NCUA has no statutory authority to examine third-party AI vendors, despite credit unions’ growing reliance on outsourced analytics. This gap, first flagged by GAO in 2015 (GAO-15-509), leaves thousands of institutions vulnerable to unscrutinized AI risks.

The GAO reiterates its call for Congress to grant NCUA third-party examination powers and recommends updating its guidance. Until these changes materialize, credit unions and their borrowers face heightened exposure to AI’s pitfalls—a gap our tools are designed to bridge.

Regulators Embrace AI with Caution

Regulators aren’t just overseeing AI; they’re using it. The GAO report details how agencies like the Federal Reserve, FDIC, and SEC deploy AI to enhance supervision, searching documents, detecting legal violations, and flagging outliers in financial reports. For example, the Federal Reserve uses AI to sift through bank examination files, cutting manual review time for documents that can span hundreds of pages. The NCUA employs AI to predict credit union financial health, pairing these insights with human judgment. Crucially, regulators treat AI as a decision-support tool, not an autonomous arbiter. Outputs are cross-checked against traditional supervisory data, ensuring accountability. This cautious adoption mirrors the industry’s need for balance: leveraging AI’s power while grounding it in transparent, reliable data.

The Solution: Transparent, High-Quality Data

The GAO report consistently returns to a core principle: fair and effective AI requires transparent, high-quality data. This is where Polygon Research excels. Our tools, HMDAVision and CensusVision, bundled as Polygon Vision, provide the granular, authoritative data needed to audit, understand, and improve AI models in housing finance.

  • HMDAVision. This tool integrates every loan from the public Home Mortgage Disclosure Act Loan Application Register (HMDA LAR), 124 million applications and purchased loans from 2018-2024, with FFIEC census-tract demographic files. Unlike aggregated datasets, HMDAVision retains raw, lender-reported fields, enriched with engineered metrics such as credit scores and APRs. Analysts can trace a model’s prediction back to a single loan and census tract, pinpointing potential biases or errors with precision. For example, if an AI model flags a loan as high-risk, HMDAVision lets you verify whether demographic or geographic factors skewed the outcome, which is essential for fair lending compliance.
  • CensusVision. Our second tool models five years of American Community Survey Public-Use Microdata Sample (PUMS) files at the person- and household-record level. Updated annually with each new Census Bureau release, CensusVision ensures users have the latest socio-economic microdata while preserving long-term comparability. This granularity is ideal for benchmarking AI models against real-world demographics, for example, checking whether a model’s approval rates align with a region’s income or racial composition.

Together, HMDAVision and CensusVision make Polygon Vision a transparent test bed for AI in housing finance. Lenders and regulators can:

  1. Audit with Precision. Examine every variable an AI model uses, identifying biases or data-quality issues before they impact borrowers.
  2. Build Data Literacy. Engage with authentic micro-records, not pre-digested tables, to develop the skills needed to critically assess AI outputs.
  3. Benchmark Vendor Models. Compare black-box predictions against open, authoritative data. This is an indispensable control, especially given the NCUA’s oversight limitations.
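One common form such a vendor-model benchmark takes is an adverse-impact ratio check, sketched here on synthetic counts (the 0.80 threshold is the familiar four-fifths rule of thumb, not a legal bright line):

```python
# Hypothetical decision counts from a black-box vendor model.
applied  = {"group_a": 1000, "group_b": 1000}
approved = {"group_a": 720,  "group_b": 480}

rates = {g: approved[g] / applied[g] for g in applied}
best = max(rates.values())

# Adverse-impact ratio: each group's approval rate relative to the
# highest-rate group; ratios below 0.80 commonly trigger fair-lending review.
air = {g: rate / best for g, rate in rates.items()}
flagged = sorted(g for g, ratio in air.items() if ratio < 0.80)
print(air, flagged)
```

Run against open, authoritative microdata rather than a vendor's own summaries, a check like this turns a black-box claim into a testable number.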

In an era where AI drives high-stakes decisions, data literacy and transparency are non-negotiable. Polygon Vision delivers both, empowering users to harness AI’s potential while mitigating its risks.

Why This Matters Now

The GAO’s recommendations (expanded NCUA authority and updated guidance) may take months or years to become policy. Meanwhile, AI’s role in housing finance is accelerating. Lenders can’t afford to wait for regulatory clarity; they need tools today to ensure their AI models are fair, compliant, and effective.

Polygon Vision provides that edge. With our platform, you can explore the newly modeled 2024 HMDA data, released annually by the CFPB; link loan-level insights to census-tract and household microdata in seconds; and stress-test AI underwriting or marketing models for inclusion, bias, and performance.

Take Control of Your AI-Driven Operations

Don’t let regulatory gaps or opaque data slow you down. Start a free trial of Polygon Vision today and gain the clarity and confidence to lead in an AI-powered housing finance landscape.

  • Audit with Confidence: Ensure your models meet fair lending and compliance standards.
  • Learn with Real Data: Master AI’s underpinnings with hands-on microdata.
  • Stay Ahead: Outpace competitors with faster, fairer decisions.

Visit polygonresearch.com/pricing to activate your account. AI in housing finance is evolving rapidly. Polygon Vision ensures you set the standard.