System Architecture
Architecture Overview
LoanPilot is built on a modern, serverless-first stack designed for high scalability and secure financial data processing. The system leverages Next.js 14 with the App Router architecture, using Supabase as the unified backend for authentication, relational data, and file storage.
The core value proposition of the architecture is its AI Provider Layer, which abstracts complex OCR and financial analysis tasks behind a standardized interface, allowing the system to process bank statements, GST returns, and financial statements through various AI engines.
Core Components
1. Next.js App Router Structure
The application is organized into three primary logical segments:
- `(auth)`: Handles user onboarding, login, and organization setup.
- `(dashboard)`: The primary user interface for loan officers to manage applications, view analytics, and generate credit memos.
- `api/`: A collection of stateless Route Handlers that bridge the frontend with AI providers and the database.
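Assuming the conventional App Router layout, these segments might map onto a directory tree like the following (the individual route names are illustrative, not taken from the codebase):

```
app/
├── (auth)/          # login, onboarding, organization setup
├── (dashboard)/     # loan officer UI: applications, analytics, credit memos
└── api/             # stateless Route Handlers (AI providers, database)
```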
2. AI Provider Layer
The system utilizes a factory pattern to manage document extraction. The createAIProvider utility instantiates a specific AI engine (e.g., OpenAI, Anthropic, or specialized financial OCRs) based on configuration or user selection.
Usage Example (Internal API):
```typescript
import { createAIProvider } from '@/lib/ai/provider';

const provider = createAIProvider('provider-name');
const result = await provider.extractBankStatement(base64File, 'application/pdf');
```
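A minimal sketch of this factory pattern is shown below. The `createAIProvider` and `extractBankStatement` names come from this document; the provider classes, the interface name, and the result shape are illustrative assumptions:

```typescript
// Illustrative factory sketch; the concrete classes and result shape are assumptions.
interface ExtractionResult {
  transactions: unknown[];
  openingBalance: number;
  closingBalance: number;
}

interface AIProvider {
  readonly name: string;
  extractBankStatement(base64File: string, mimeType: string): Promise<ExtractionResult>;
}

class OpenAIProvider implements AIProvider {
  readonly name = "openai";
  async extractBankStatement(_base64File: string, _mimeType: string): Promise<ExtractionResult> {
    // A real implementation would send the encoded document to the OpenAI API.
    return { transactions: [], openingBalance: 0, closingBalance: 0 };
  }
}

class AnthropicProvider implements AIProvider {
  readonly name = "anthropic";
  async extractBankStatement(_base64File: string, _mimeType: string): Promise<ExtractionResult> {
    // A real implementation would send the encoded document to the Anthropic API.
    return { transactions: [], openingBalance: 0, closingBalance: 0 };
  }
}

export function createAIProvider(name: string): AIProvider {
  switch (name) {
    case "openai":
      return new OpenAIProvider();
    case "anthropic":
      return new AnthropicProvider();
    default:
      throw new Error(`Unknown AI provider: ${name}`);
  }
}
```

Because every engine sits behind the same interface, swapping providers is a configuration change rather than a code change.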
3. Data & Storage Integration
LoanPilot uses Supabase for all persistence requirements:
- PostgreSQL: Stores relational data including `loan_applications`, `borrowers`, and `bank_statement_analysis`.
- Supabase Auth: Manages session-based authentication and organization-level Row Level Security (RLS).
- Smart Storage Service: A custom abstraction (`smartUpload`/`smartDownload`) that manages file transfers between the client and Supabase buckets, ensuring files are sanitized and organized by `organization_id`.
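The sanitization and organization logic behind `smartUpload` might look roughly like this. Only the "sanitized and organized by `organization_id`" requirement comes from this document; the exact replacement rules and path layout are assumptions:

```typescript
// Illustrative sketch of the storage path convention; not the actual implementation.
export function sanitizeFilename(name: string): string {
  // Keep alphanumerics, dots, underscores, and hyphens; replace everything else.
  return name.replace(/[^a-zA-Z0-9._-]/g, "_");
}

export function buildStoragePath(
  organizationId: string,
  applicationId: string,
  filename: string,
): string {
  // Prefixing every object key with the organization ID keeps buckets
  // partitioned per tenant, mirroring the RLS model in the database.
  return `${organizationId}/${applicationId}/${sanitizeFilename(filename)}`;
}
```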
API Reference: Document Analysis
The analysis engine is exposed via a set of POST endpoints. All analysis requests require a valid session and organization context.
Bank Statement Analysis
POST /api/analyze/bank-statement
Processes a bank statement to extract transactions, balances, and fraud metrics.
Request Body:
| Field | Type | Description |
| :--- | :--- | :--- |
| documentId | string | The ID of the uploaded document in loan_documents. |
| applicationId | string | The target loan application ID. |
| organizationId | string | The user's organization ID. |
| aiProvider | string? | Optional override for the AI model used. |
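A client-side call to this endpoint could be assembled as follows. The endpoint path and field names come from the table above; the `buildAnalysisRequest` helper is a hypothetical convenience:

```typescript
// Field names match the request-body table; the helper itself is hypothetical.
interface BankStatementAnalysisRequest {
  documentId: string;
  applicationId: string;
  organizationId: string;
  aiProvider?: string; // optional override for the AI model used
}

export function buildAnalysisRequest(body: BankStatementAnalysisRequest) {
  return {
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };
}

// Usage:
// await fetch("/api/analyze/bank-statement", buildAnalysisRequest({
//   documentId: "doc_123", applicationId: "app_456", organizationId: "org_789",
// }));
```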
Financial Statement Analysis
POST /api/analyze/financial-statement
Analyzes Balance Sheets, Profit & Loss (P&L) statements, or Income Tax Returns (ITR).
Request Body:
| Field | Type | Description |
| :--- | :--- | :--- |
| statementType | string | One of balance_sheet, profit_loss, or itr. |
| documentId | string | The ID of the document to analyze. |
| applicationId | string | The associated application. |
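Because `statementType` is a closed set of three values, a Route Handler would typically validate it before dispatching. This type guard is an illustrative sketch; only the accepted values come from the table above:

```typescript
// Accepted values taken from the request-body table; the guard is illustrative.
const STATEMENT_TYPES = ["balance_sheet", "profit_loss", "itr"] as const;
export type StatementType = (typeof STATEMENT_TYPES)[number];

export function isStatementType(value: string): value is StatementType {
  return (STATEMENT_TYPES as readonly string[]).includes(value);
}
```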
System Workflows
Document Processing Lifecycle
- Upload: The frontend sends a file to `/api/documents/upload`. The `smartUpload` service saves the file to storage and creates a record in the `loan_documents` table.
- Extraction: The user triggers an analysis. The backend downloads the file, converts it to base64, and passes it to the AI Provider.
- Normalization: The AI returns a standardized JSON object. The system calculates additional metrics (e.g., `debt_to_equity_ratio`, `cashflow_volatility`).
- Persistence: Data is saved into specific analysis tables (e.g., `bank_statement_analysis`), allowing for fast querying without re-running AI extraction.
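The Normalization step mentions derived metrics such as `debt_to_equity_ratio` and `cashflow_volatility`. The formulas below (a standard leverage ratio and the population standard deviation of monthly net cashflow) are illustrative assumptions, not LoanPilot's exact definitions:

```typescript
// Illustrative metric formulas; LoanPilot's actual definitions may differ.
export function debtToEquityRatio(totalLiabilities: number, shareholderEquity: number): number {
  return totalLiabilities / shareholderEquity;
}

export function cashflowVolatility(monthlyNetCashflows: number[]): number {
  // Population standard deviation of monthly net cashflow.
  const n = monthlyNetCashflows.length;
  const mean = monthlyNetCashflows.reduce((a, b) => a + b, 0) / n;
  const variance = monthlyNetCashflows.reduce((acc, x) => acc + (x - mean) ** 2, 0) / n;
  return Math.sqrt(variance);
}
```

Computing these once at normalization time is what lets later queries skip re-running AI extraction.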
Credit Memo Generation
The Credit Memo is a composite analysis. The endpoint /api/analyze/credit-memo aggregates:
- Borrower Profile: Basic business data.
- Bank Data: Average balances and net income.
- Financial Ratios: Derived from P&L and Balance Sheet analysis.
- Reconciliation: A cross-check between GST returns, Bank Statements, and reported P&L revenue to ensure data consistency.
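The reconciliation step can be sketched as a three-way tolerance check. The averaging formula and the 10% default tolerance are illustrative assumptions; only the idea of cross-checking GST, bank, and P&L revenue comes from this document:

```typescript
// Illustrative three-way revenue cross-check; formula and tolerance are assumptions.
export interface ReconciliationResult {
  consistent: boolean;
  maxDeviation: number; // largest relative deviation from the three-way mean
}

export function reconcileRevenue(
  gstTurnover: number,
  bankCredits: number,
  pnlRevenue: number,
  tolerance = 0.1,
): ReconciliationResult {
  const values = [gstTurnover, bankCredits, pnlRevenue];
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const maxDeviation = Math.max(...values.map((v) => Math.abs(v - mean) / mean));
  return { consistent: maxDeviation <= tolerance, maxDeviation };
}
```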
Security & Authentication
Authentication is handled via Supabase Auth. Secure sessions are maintained using the PKCE flow for server-side operations.
- Server Components: Use `createClient` from `@/lib/supabase/server` to access the database with user context.
- Client Components: Use `@/lib/supabase/client` for real-time updates and UI-bound data fetching.
- Protection: All API routes (except Auth callbacks) verify the user's session and organization membership before processing data.
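The per-route protection described above reduces to a small authorization decision. In the real routes the session comes from Supabase; here that lookup is abstracted into plain inputs so the control flow stands alone, and the status codes are conventional choices rather than confirmed behavior:

```typescript
// Illustrative route guard; Supabase session resolution is abstracted away.
export interface SessionContext {
  userId: string;
  organizationId: string;
}

export function authorizeRequest(
  session: SessionContext | null,
  requestedOrganizationId: string,
): { ok: boolean; status: number } {
  if (!session) {
    return { ok: false, status: 401 }; // no valid session
  }
  if (session.organizationId !== requestedOrganizationId) {
    return { ok: false, status: 403 }; // not a member of the target organization
  }
  return { ok: true, status: 200 };
}
```

Running this check before any data access complements the database-level RLS policies: a request that slips past one layer is still stopped by the other.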