Usage Analytics & Insights
Metrics that tell you which forms get filled most, how long fills take, where the AI leaves fields blank, and which workflows are actually being used
Overview
Usage analytics in Instafill.ai measures form automation performance — not generic "user engagement." The metrics that matter in a form-filling platform differ from those in a CRM or project management tool: fill success rate (did the AI fill all fields, or did it leave some blank?), processing time by form type (a 200-field mortgage application takes longer than a W-9 — how much longer, and is it within expectations?), source document quality (do PDF sources produce better fill accuracy than pasted text?), and which forms are actually being used versus which ones were uploaded and forgotten.
The analytics dashboard aggregates this data across sessions, batch jobs, and API calls and surfaces it as time-series trends, per-form breakdowns, and per-user activity summaries. Event data is collected via OpenTelemetry instrumentation across both the .NET and Python services, indexed in Elasticsearch 7.17.9 for fast aggregation, and tracked for product behavior analysis via Amplitude. Administrators see organization-wide metrics; workspace admins see their workspace; members see their own activity.
Key Capabilities
- Fill session metrics: total sessions per day/week/month, completion rate (sessions where the AI filled all required fields vs. sessions with blank fields remaining), average fill duration by form type
- Per-form analytics: which forms are filled most frequently, average field completion rate per form, average processing time, most common source document types used
- Batch job metrics: batch jobs initiated, rows processed, rows failed, average rows-per-minute throughput, most common failure reasons
- AI fill accuracy indicators: fields filled vs. fields total per session (a proxy for AI confidence on that form/source combination); trend over time showing whether fine-tuning improved completion rates
- Source document performance: which source types (PDF, Word, pasted text, profile data) produce the highest field completion rates for specific form types
- API usage: calls per endpoint, API key activity, response time percentiles, error rate by endpoint
- User and workspace activity: active users, sessions per user, most-used forms by workspace, workspace-level comparisons
- Time-series trends: day-over-day and week-over-week comparisons for all key metrics
- Exportable reports: CSV, Excel, JSON export for any metric set and date range
- Scheduled email summaries: daily, weekly, or monthly reports sent automatically to configured recipients
How It Works
Data Collection
Every form fill session, batch job, API call, and user action produces structured events that flow through OpenTelemetry instrumentation. Both the .NET web service (sessions, auth, file operations) and the Python processing API (AI fill jobs, source processing, PDF generation) contribute telemetry to a shared event stream. Key metrics captured per session:
- Session start and end timestamps
- Form ID and form name
- Fields total / fields filled / fields left blank
- Source document count and types
- AI fill duration (source processing time + fill time + PDF generation time)
- Success/failure status and failure reason if applicable
For batch jobs, the same metrics are captured per row plus aggregate totals for the batch.
Aggregation and Storage
Raw events are aggregated into metrics at hourly, daily, and monthly granularities and stored in Elasticsearch 7.17.9. Dashboard queries run against pre-aggregated indices rather than raw event data, which keeps query performance fast even as event volume grows. A workspace that has processed 50,000 fill sessions loads its analytics dashboard in the same time as a workspace with 500 sessions.
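As a sketch of what a dashboard query against a pre-aggregated daily index might look like — the index schema, field names, and date format here are assumptions, not the actual mapping — a daily fill-volume query could be shaped like this:

```python
def daily_fill_volume_query(workspace_id: str, start: str, end: str) -> dict:
    """Build an Elasticsearch query body for daily session totals.

    Targets a hypothetical pre-aggregated daily index rather than raw
    events, mirroring how the dashboard keeps queries fast at scale.
    """
    return {
        "size": 0,  # aggregations only, no raw documents returned
        "query": {
            "bool": {
                "filter": [
                    {"term": {"workspaceId": workspace_id}},
                    {"range": {"date": {"gte": start, "lte": end}}},
                ]
            }
        },
        "aggs": {
            "sessions_per_day": {
                "date_histogram": {"field": "date", "calendar_interval": "day"},
                "aggs": {"total_sessions": {"sum": {"field": "session_count"}}},
            }
        },
    }
```

Summing a pre-computed `session_count` per bucket is what makes the 50,000-session workspace as fast to query as the 500-session one.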
Dashboard Display
The main analytics dashboard shows:
Summary cards (selected date range): total forms filled, average fill duration, overall field completion rate, active users
Fill volume trend: line graph of sessions per day over the selected period, with batch rows counted separately from individual sessions
Form leaderboard: top 10 most-filled forms in the period with fill count, average duration, and average field completion rate per form
Fill duration distribution: histogram of fill duration across all sessions in the period — shows whether most fills complete in under 30 seconds or whether there's a tail of slow sessions worth investigating
Failure analysis: sessions and batch rows that failed, grouped by failure reason (source processing error, AI timeout, PDF generation error, user abandoned)
User activity table: sessions per user, broken down by form type, sortable by volume or by completion rate
Use Cases
Identifying forms that need fine-tuning: The per-form analytics table shows field completion rate alongside fill volume. A form that gets filled 50 times per month but only achieves 80% field completion on average (meaning 20% of fields are left blank for humans to fill manually) is a candidate for AI fine-tuning. After fine-tuning, the completion rate trend shows whether accuracy improved.
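This triage is easy to reproduce from an analytics export. A minimal sketch — the thresholds and record shape are illustrative, not a fixed rule:

```python
def fine_tuning_candidates(form_stats, min_fills=30, max_completion=0.90):
    """Flag high-volume forms whose average field completion rate is low.

    form_stats: list of dicts like
      {"form": "Mortgage Application", "fills": 50, "avg_completion": 0.80}
    Returns candidates sorted by fill volume, busiest first.
    """
    return sorted(
        (f for f in form_stats
         if f["fills"] >= min_fills and f["avg_completion"] < max_completion),
        key=lambda f: f["fills"],
        reverse=True,
    )
```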
Measuring the impact of adding new source documents to profiles: If the team adds a new document type to their standard profile (e.g., adding a credentialing certificate to a healthcare worker profile), the analytics show whether field completion rates improved on the credentialing forms filled with that profile in subsequent sessions.
Batch processing throughput monitoring: Operations teams processing large weekly batches (500+ permit applications, monthly compliance filings) can see rows-per-minute throughput and failure rates per batch job. If a batch is running slower than usual, the per-row timing data surfaces where the slowdown occurred.
Demonstrating ROI: Organizations that adopted Instafill.ai to replace manual form entry can pull session volume and average fill duration data to quantify time saved. If the platform fills 200 forms per week at an average of 45 seconds each vs. 15 minutes each for manual entry, the analytics export provides the raw numbers for that calculation.
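Using the numbers above (200 forms per week, 45 seconds automated vs. 15 minutes manual), the time saved works out as:

```python
FORMS_PER_WEEK = 200
AUTOMATED_SECONDS = 45
MANUAL_SECONDS = 15 * 60   # 900 seconds per manual fill

seconds_saved = FORMS_PER_WEEK * (MANUAL_SECONDS - AUTOMATED_SECONDS)
hours_saved_per_week = seconds_saved / 3600   # 200 * 855 / 3600 = 47.5 hours
```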
API integration health monitoring: Teams using the REST API to automate form filling can monitor API call error rates, response time percentiles, and throughput per API key to detect degradation before it affects downstream systems.
Benefits
- Field completion rate tells you whether the AI is actually filling the form or leaving work for humans — a metric not visible in any generic platform analytics tool
- Per-form breakdowns identify which specific form types are performing well and which need attention (fine-tuning, better source templates, or revised source document formats)
- Source document performance data shows which input formats produce the best fill results for specific form types, helping teams standardize their source document preparation
- Batch job analytics give operations teams the data to plan processing windows — if 100-row batches consistently take 8 minutes, teams can schedule jobs accordingly
- Trend comparisons show whether fine-tuning runs, new source document types, or workflow changes actually moved the metrics in the expected direction
Security & Privacy
Analytics data contains aggregated metrics, not form content or source document data. Session records include form names, field counts, and timing data — not field values or the PII contained in filled forms. User activity is visible to workspace and organization admins; individual members can see only their own session history.
User anonymization is available for analytics exports: reports can show "User A" and "User B" instead of email addresses, so results can be shared with executives or external auditors without exposing staff identities. When a user account is deleted, their sessions are anonymized in historical analytics — aggregate counts are preserved, but the user identity is removed.
All analytics data is scoped to workspaceId and protected via the shared JWT authentication middleware running in both the .NET and Python service layers.
Common Questions
What form-filling-specific metrics are available?
The metrics most specific to form automation workflows:
Field completion rate: the percentage of form fields that received a value from the AI (vs. left blank for manual entry). A 95% completion rate on a 100-field form means 5 fields need human input. Tracked per session and averaged per form over the selected period.
Fill duration by form type: average seconds from fill job start to filled PDF available, broken down by form. Useful for setting user expectations and identifying forms where fill time is unexpectedly high.
Source document hit rate: for each source type (uploaded PDF, Word document, profile data, pasted text), how many fields in a session were filled from that source. Shows which source types are doing the most work.
Blank field frequency by field name: across all fills of a specific form, which fields are most commonly left blank by the AI. High blank rates on specific fields indicate where fine-tuning or better source templates would help most.
Batch success rate per job: the percentage of rows in a batch job that completed without error. Trend over time shows whether batch job reliability is improving or degrading.
Processing pipeline timing: for session fills, the time split between source processing (OCR, vector indexing), AI fill, and PDF generation — useful for diagnosing whether slowness is in document parsing or in the fill step.
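The blank-field-frequency metric, for example, reduces to a simple count across sessions of the same form. A sketch assuming exported session records list their blank field names (the record shape is an assumption, not the actual export schema):

```python
from collections import Counter

def blank_field_frequency(sessions):
    """Count how often each field name is left blank across fills of one form.

    sessions: iterable of session records, each with a "blank_fields" list.
    Returns (field_name, count) pairs, most frequently blank first.
    """
    counts = Counter()
    for s in sessions:
        counts.update(s.get("blank_fields", []))
    return counts.most_common()
```

Fields at the top of this list are the ones where fine-tuning or better source templates would pay off most.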
Can I export analytics data for external analysis?
Yes. Every metric visible in the dashboard can be exported:
CSV: tabular data with one row per session, batch row, or API call — for import into Excel, Google Sheets, or BI tools. Includes all metric fields (form name, field counts, duration, source types, status, user identifier).
Excel (.xlsx): pre-formatted with column headers and date formatting — directly usable in most analytics workflows without data cleaning.
JSON: structured event data — suitable for programmatic analysis, loading into Python or R, or ingestion into a data warehouse.
Date range and filter scope: exports honor the currently applied filters — export data for a specific workspace, specific form, specific user, or specific date range. Combine filters before exporting to get exactly the slice needed.
For Enterprise plans, a REST API endpoint (/analytics/export) provides programmatic access to analytics data for integration with Tableau, Power BI, Looker, or custom dashboards.
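A hedged sketch of calling that endpoint — only the `/analytics/export` path comes from the docs above; the base URL, query parameter names, and auth header shape are assumptions:

```python
import urllib.parse

def build_export_request(base_url: str, token: str, workspace_id: str,
                         start: str, end: str, fmt: str = "json"):
    """Assemble the URL and headers for an analytics export call.

    Sending the request is left to your HTTP client of choice, e.g.
    requests.get(url, headers=headers).
    """
    params = {"workspaceId": workspace_id, "start": start,
              "end": end, "format": fmt}
    url = f"{base_url.rstrip('/')}/analytics/export?" + urllib.parse.urlencode(params)
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers
```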
How does analytics compare to quota tracking?
Usage quotas and usage analytics serve different purposes:
Usage Quotas (the Usage Quotas feature):
- Shows current-month consumption against plan limits: forms filled this month, storage used, API calls made
- Alerts when you're approaching a limit
- The question it answers: "Will I run out of quota this month?"
Usage Analytics (this feature):
- Shows historical performance data: trends over weeks and months, per-form breakdowns, AI accuracy indicators
- Identifies optimization opportunities and workflow patterns
- The question it answers: "How well is the platform working for my team, and where should I focus improvement?"
Both draw from the same underlying event data. Analytics provides the historical context that helps interpret quota usage — if storage is growing fast, analytics shows which workspace or form type is driving the growth.
Can I set up automated reports?
Yes. Scheduled reports send a configured analytics view to specified email recipients on a recurring schedule:
Configuration:
- Set up a dashboard view with the filters and metrics relevant to your audience (e.g., executive summary: total fills, field completion rate, top 5 forms — no per-user detail)
- Set the schedule: daily, weekly (specify day and time), or monthly (specify day of month)
- Specify recipients: any email addresses, not just Instafill.ai account holders
- Choose format: PDF attachment (with charts) or CSV attachment (raw data)
Useful scheduled report patterns:
- Weekly operations summary for team managers: batch job counts, failure rates, processing volume
- Monthly executive summary: total forms processed, trend vs. prior month, top form types
- Form-specific reports: weekly breakdown of a high-volume form (fill count, completion rate, average duration) sent to the team that owns that workflow
Alert conditions (Professional+ plans) can supplement scheduled reports: send an immediate notification when the field completion rate on a specific form drops below a threshold, or when a batch job's failure rate exceeds 5%.
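These alert conditions amount to threshold checks over the latest metrics. An illustrative sketch — the thresholds mirror the examples above, but the record shape is an assumption:

```python
def check_alerts(form_metrics, completion_floor=0.90, batch_failure_ceiling=0.05):
    """Return alert messages for metrics that crossed their thresholds."""
    alerts = []
    for m in form_metrics:
        if m.get("completion_rate", 1.0) < completion_floor:
            alerts.append(
                f"{m['form']}: completion rate {m['completion_rate']:.0%} "
                f"below threshold")
        if m.get("batch_failure_rate", 0.0) > batch_failure_ceiling:
            alerts.append(
                f"{m['form']}: batch failure rate {m['batch_failure_rate']:.0%} "
                f"above {batch_failure_ceiling:.0%}")
    return alerts
```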
Is real-time analytics available?
Near-real-time metrics are available for active monitoring:
Under 1-minute latency: active sessions currently processing, recent fill completions (last 10 sessions), current API call rate
1–5 minute latency: forms filled today, error rate for the current hour, active batch jobs with current row count
Hourly aggregation: daily totals, average fill duration for the day, user activity counts
The latency exists because complex metrics (averages, percentiles, rates) require aggregation over a time window — raw events aren't aggregated instantly. For the operational use case of monitoring an active batch job, the per-row completion events appear within 1–2 minutes of each row completing.
Dashboard auto-refresh runs every 60 seconds by default and can be disabled for static report views.