NewBridge Pathway · Research note

Before mortgage AI can underwrite, the file must be decision-ready.

Why the bottleneck in mortgage AI is not model intelligence – it is evidence architecture.

Edited by Kenpachi Serendip (Founder) · Published 13 May 2026 · Updated 15 May 2026


Scope

The harder question comes before the model.

The mortgage industry is debating whether artificial intelligence can safely underwrite a loan. That debate is important, but it skips over a harder question about evidence.

Before any model – human or algorithmic – can reason over a mortgage file, the institution must preserve the facts, policy references, conditions, exceptions, compensating factors, and decision rationale that make the outcome reconstructable.

This note is not a statistical survey of lender files, an allegation of non-compliance, or an evaluation of AI vendors. It synthesizes public QC findings, GSE requirements, AI governance developments, and NewBridge's evidence-readiness framework to examine a narrower question: what must a mortgage file contain before any AI-assisted underwriting process can be trusted, audited, or defended?

The answer is not just more documents. It is a decision-ready file.


The underwriting evidence gap

Evidence of credit judgment does not easily write itself into the file.

The evidence that a credit decision was sound must be captured at the point of decision. Across the industry, the gap between what happened and what can be proved shows up in defect rates, repurchase requests, and QC findings.

ACES Quality Management reported that the overall critical defect rate rose in Q2 2025. Income and employment remained the largest defect category at 18.45%, while borrower and mortgage eligibility defects more than doubled from 6.90% to 15.87%, and legal, regulatory, and compliance defects rose to 16.24%. These are the categories where a decision-evidence layer matters most: income, employment, eligibility, compliance, and the data trail behind them.

Fannie Mae's Q4 2024 findings identified rental income documentation, missing employment verification at closing, and miscalculated debt payments as top defects. The common thread is not a complex judgment error, but the absence or incompleteness of evidence in the file. Fannie Mae separately noted that when lenders validate all four Day 1 Certainty components, repurchase risk drops by 64%.

The regulatory expectation is similar. HUD requires that for HECM loans, the mortgagee document in the Financial Assessment any compensating factors it considered. The obligation is not simply to consider compensating factors. It is to document them.

Fannie Mae makes this operational: lenders performing post-closing QC must verify the accuracy and integrity of the information supporting the underwriting decision, confirm it is supported in the loan file, review Desktop Underwriter findings and conditions, review the outputs of third-party data-analysis tools, and ensure approval conditions are resolved and documented. The loan file must also include the documents, records, and reports used to support the underwriting decision, as required by the Lender Contract.

Third-party due diligence firms, which review loan files on behalf of securitization investors, test for the same thing: compensating factors, exceptions, approvals, and support. When compensating factors are insufficient or undocumented, the loan may fail review, not because the credit decision was necessarily wrong, but because it cannot be proved.


AI's evidence problem

AI adoption is gated by provability, not model performance alone.

Recent regulatory developments make clear that AI adoption in mortgage finance will not be gated by model performance alone. It will also depend on the institution's ability to prove what the model did and why.

Freddie Mac's updated AI/ML governance framework became effective in March 2026. Fannie Mae followed in April 2026 with Lender Letter LL-2026-04, establishing a governance framework for seller/servicers' use of AI/ML in origination and servicing practices. Together, the Enterprises are moving AI governance from policy conversation toward supervisory requirements.

The scope risk is practical rather than semantic: AI/ML capabilities may appear inside vendor tools, workflow features, analytics products, document tools, and decision-support systems. Seller/servicers therefore need evidence not only of their internal models, but of the AI-enabled systems and third-party capabilities that touch loan origination, servicing, or quality-control workflows.

State-level attention is building in parallel. Michigan DIFS has warned that when AI systems are used to make decisions or take actions affecting consumers, those uses must comply with the law and may be examined through documentation requests. The bulletin names risks including inaccuracy, discrimination, data vulnerability, lack of transparency, and inability to map decision processes.


Data infrastructure bottleneck

AI exposes evidence weaknesses that manual processes can hide.

MBA NewsLink contributor Mark Dangelo argues that after years of AI experimentation, the industry now understands that the promised value of AI will not scale without a different relationship with data. The problem is not a shortage of AI pilots. It is the inability to scale them without generating downstream exceptions, compromising reporting, or relying on tightly coupled data tracing.

The fragmentation is structural. Data sits in departmental silos – originations, risk, QC, fraud, servicing, investor reporting. Unstructured files remain opaque. Lineage breaks during extraction, transformation, and vendor touch. Model outputs are not always traceable to original data sources.

AI has made the problem acutely visible. When a human underwriter overrides a guideline based on a compensating factor and leaves a thin note, a peer reviewer may infer the intent. When an AI-assisted process does the same thing, the thin record becomes harder to explain. Explanations degrade when models encounter inconsistent definitions, and compliance teams struggle to validate AI-assisted outcomes.


From decision telemetry to decision receipts

Telemetry is an activity. Receipts are evidence.

A decision-ready file is not created by storing more documents. It is created by preserving the signals that show how a decision moved from facts to policy to judgment to outcome.

In servicing communications, those signals can become customer-understanding and outcome receipts: evidence that the customer was shown the right information, had support routes, interacted with the communication, and reached a recorded outcome.

In underwriting, the equivalent is a decision evidence receipt: a bounded, timestamped record that preserves which facts were available, which policy or investor rule applied, what exception or compensating factor was considered, who or what made the decision, which conditions were attached, and how the outcome was reached.

The distinction matters. Telemetry is raw activity. A signal is interpreted activity. A receipt is evidence that can be reconstructed later.
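As an illustration only, the receipt described above can be sketched as a small data structure. This is a hypothetical shape, not NewBridge's schema or any released specification; every field name here is an assumption chosen to mirror the elements the text lists (facts, policy version, decider, conditions, outcome).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class DecisionReceipt:
    """Hypothetical sketch of a decision-evidence receipt.

    Preserves what the surrounding text calls for: the facts available,
    the policy version applied, any exception or compensating factor,
    who or what decided, the conditions attached, and the outcome --
    all bound to a timestamp so the decision can be reconstructed later.
    """
    loan_id: str
    facts: dict                      # facts available at decision time
    policy_ref: str                  # guideline or investor rule applied
    policy_version: str              # version current at decision time
    decider: str                     # human underwriter or system/model id
    outcome: str                     # approve / suspend / decline / ...
    conditions: tuple = ()           # conditions attached to the decision
    exception_rationale: Optional[str] = None
    compensating_factors: tuple = ()
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Telemetry would be the raw event stream; the receipt is the bounded,
# interpreted record that survives for later reconstruction.
receipt = DecisionReceipt(
    loan_id="L-1001",
    facts={"dti": 0.44, "reserves_months": 9},
    policy_ref="DTI-overlay-2.3",
    policy_version="2026-03",
    decider="underwriter:jdoe",
    outcome="approve",
    conditions=("verify employment at closing",),
    exception_rationale="DTI above overlay; strong reserves",
    compensating_factors=("9 months reserves",),
)
```

The frozen dataclass is a deliberate choice in the sketch: a receipt, unlike telemetry, should be immutable once written.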


Regulatory direction

From process to effectiveness means more reconstruction pressure, not less.

In March 2026, Executive Order 14393, Promoting Access to Mortgage Credit, directed federal regulators to consider changes that move mortgage oversight toward policies focused on ability-to-repay and prudent underwriting, rather than process or technical compliance alone.

If oversight shifts from “did you follow the prescribed process?” to “can you prove the process was effective?”, the evidence burden does not disappear. It becomes more reconstruction-heavy. The implication is that an institution must be able to demonstrate, with an auditable record, that its underwriting policies produced sound outcomes – not merely that the prescribed rules were applied.

CFPB mortgage-origination examination materials already treat underwriting as an examination domain. That makes the operational question practical: can the institution produce the records, data, and rationale needed to show how the underwriting decision was reached?


Toward a decision-ready file

The components are requirements for reconstruction.

At NewBridge Pathway, we focus on the layer underneath: the evidence infrastructure that helps institutions capture what happened, under which policy, through which system or actor, and with what supporting proof – so that the decision can be reconstructed later, by any person or model that needs to.

Each component below maps to the servicing evidence framework NewBridge has been developing through its evidence-readiness work, applied through the open Evidence Portability Framework. Extending that framework to underwriting is an exploratory next step. The components are not a product specification, schema release, certification program, or software requirement; they describe what a decision-ready file must eventually contain so that a credit or servicing judgment can be reconstructed without relying on one provider's interface as the complete record of truth.

We maintain a controlled proof-bundle reference for qualified buyer, partner, and counsel conversations. The detailed schema, licensed-data retention model, and implementation reference remain controlled until our planned licensed-data review and schema-hardening work is complete.

Borrower and policy context

Borrower context

Why the file is not purely mechanical: non-standard income, gaps, assets, hardship context, layered risk, or other borrower-specific facts.

Policy trigger

Which guideline, overlay, investor rule, or tolerance was applied, and which version was current at decision time.

Compensating factors

The specific evidence that supported a layered-risk decision, not just the note that a factor existed.

Data and system lineage

Data lineage

Which LOS, AUS, vendor report, document, disclosure, income tool, credit file, asset report, or third-party feed informed the decision.

AUS / model output

The automated recommendation, conditions, messages, limits, or validation outputs used by the underwriter or workflow.

Judgment and exception record

Human judgment note

The underwriter's structured explanation of the why behind a manual decision, override, suspension, or condition.

Exception rationale

Why an exception was granted or denied, which compensating factors were considered, and who approved it.

Conditions receipt

The conditions attached to the decision, the evidence used to clear them, and the outcome.

Reconstruction and audit proof

Decision receipt

A timestamped record of approve, suspend, decline, withdraw, counteroffer, or refer, tied to the policy and evidence state at that moment.

Reconstruction packet

A hash-bound, timestamped, exportable bundle that can be reviewed without relying on the original provider interface.
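One way to read "hash-bound" is a manifest that fingerprints each included record and then fingerprints the manifest itself, so any later alteration is detectable without access to the original provider interface. The sketch below is a minimal illustration under that reading; the record names and bundle layout are assumptions, not a published packet format, and a real packet would also carry a trusted timestamp and a defined export encoding.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()

def build_packet(records: dict) -> dict:
    """Bind a set of evidence records into a hash-bound bundle.

    `records` maps a record name (e.g. "aus_output", "decision_receipt")
    to JSON-serializable content. The manifest stores one digest per
    record, canonically serialized, plus a digest over the manifest
    itself, so the packet can be verified offline.
    """
    manifest = {
        name: sha256_hex(json.dumps(content, sort_keys=True).encode())
        for name, content in sorted(records.items())
    }
    packet_hash = sha256_hex(json.dumps(manifest, sort_keys=True).encode())
    return {"records": records, "manifest": manifest, "packet_hash": packet_hash}

def verify_packet(packet: dict) -> bool:
    """Recompute all digests; any tampered record breaks verification."""
    rebuilt = build_packet(packet["records"])
    return rebuilt["packet_hash"] == packet["packet_hash"]

packet = build_packet({
    "decision_receipt": {"outcome": "approve", "policy": "DTI-overlay-2.3"},
    "aus_output": {"recommendation": "Approve/Eligible"},
})
assert verify_packet(packet)

# A tampered record is detectable without the source system:
packet["records"]["aus_output"]["recommendation"] = "Refer"
assert not verify_packet(packet)
```

Canonical serialization (`sort_keys=True`) matters in the sketch: without a deterministic byte form, the same logical record could hash differently on export and on review.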


What this means

The practical question is whether current decisions can readily and reliably be proved.

Several indicators suggest that a structural evidence gap exists.

  • Exception approvals are granted without a standardized, searchable rationale record.
  • Compensating factors are discussed in underwriting conversations but inconsistently documented in the file of record.
  • A complete decision reconstruction packet requires retrieving records from the LOS, AUS, and vendor systems individually.
  • QC findings concentrate on income, employment, asset, or borrower eligibility documentation – categories that depend heavily on human judgment and vendor data feeds.
  • AI governance exists at the policy level but has not been tested against a specific loan file: can the institution produce the evidence trail that would satisfy a GSE, insurer, or examiner?

Pathway

The decision-evidence layer surrounds the systems already in use.

NewBridge Pathway helps mortgage institutions capture, preserve, and prove what happened across the systems they already operate. Our work sits beside the POS, LOS, AUS, and the underwriter. We build the decision-evidence layer around them – the missing decision context that turns a file of documents into a file that can be reconstructed, audited, and defended.

Our focus is on servicing evidence: regulated notices, loss-mitigation communications, servicing-file reconstruction, and provider evidence continuity. But the same evidence problem appears in underwriting. Before AI can safely reason over a mortgage file, the institution must be able to prove that the file itself is complete.

A file that does not preserve borrower context, policy triggers, exception rationale, compensating-factor evidence, condition history, and decision outcome cannot be made explainable, even by an AI model that reads it.

The bottleneck in mortgage AI is not model intelligence. It is evidence architecture, and that architecture has to be built before the model arrives.


Sources

References for this note.

These references support the research framing. This note does not provide legal advice, regulatory interpretation, or vendor evaluation.



Tier 0 · Evidence Posture Snapshot

Request a Tier 0 Evidence Posture Snapshot.

A one-week diagnostic for mortgage servicers, subservicers, TPAs, specialist lenders, and regulated servicing teams assessing whether critical communications and servicing actions can be reconstructed across systems and vendors. Findings are delivered privately. Published research does not publish named-organization conclusions. No product purchase is required.

The Evidence Posture Snapshot is a diagnostic instrument, not a legal opinion or regulatory determination. Your organization should consult its own counsel on regulatory obligations.

By submitting this form, you will receive a response from us about your Evidence Posture Snapshot request.

We review snapshot inquiries in batches and respond within three business days. Findings are delivered privately and are never published. See the privacy notice for how your information is processed, retained, and shared.