NyvLearn

Deployed

Academic RAG Platform

A hallucination-resistant RAG platform that serves students with verified, citation-backed answers. Built on a 3-agent pipeline that retrieves, re-ranks, and validates every response.

AI Hallucinations in Academic Research

Students increasingly rely on AI tools for research, but generic LLMs hallucinate facts, misstate sources, and invent citations. In academic disciplines where accuracy is paramount, this is not acceptable.

Existing solutions either provide unverified answers or are too rigid to handle nuanced reasoning across domains. The gap between AI capability and academic reliability remains unbridged.

Unverified citations
Hallucinated precedents
Generic unverified answers
No source traceability

The 3-Agent Pipeline

01

Retrieval Agent

Embeds queries with FastEmbed (BAAI/bge-large-en-v1.5) and retrieves the top 80 chunks from the Qdrant vector store. Supports page-level precision and filename boosting.
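The ranking logic behind this step can be illustrated with a minimal cosine-similarity sketch. In production the nearest-neighbor search is handled by Qdrant over FastEmbed embeddings; this pure-NumPy version, with illustrative names, only shows the idea:

```python
import numpy as np

def retrieve_top_k(query_vec: np.ndarray, chunk_vecs: np.ndarray, k: int = 80) -> list[int]:
    """Return indices of the k chunks most similar to the query.

    Illustrative sketch: cosine similarity = dot product of unit vectors.
    The real pipeline delegates this to Qdrant's ANN search over
    BAAI/bge-large-en-v1.5 embeddings.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = c @ q                      # one similarity score per chunk
    return np.argsort(-scores)[:k].tolist()  # indices, best first
```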

02

Re-ranking Agent

Uses a cross-encoder (Xenova/ms-marco-MiniLM-L-6-v2) to re-rank the retrieved chunks down to the 20 most relevant passages, eliminating semantic noise.
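The re-ranking stage narrows the candidate set by scoring each (query, passage) pair. A minimal sketch, with the cross-encoder abstracted behind a callable (the real pipeline uses Xenova/ms-marco-MiniLM-L-6-v2; the scorer here is a stand-in):

```python
from typing import Callable

def rerank(query: str, chunks: list[str],
           score_fn: Callable[[str, str], float], top_n: int = 20) -> list[str]:
    """Score every (query, chunk) pair and keep the top_n best.

    score_fn stands in for the cross-encoder: unlike the bi-encoder used
    at retrieval time, it sees query and passage together, which is what
    filters out semantically noisy matches.
    """
    return sorted(chunks, key=lambda c: score_fn(query, c), reverse=True)[:top_n]
```

Keeping the scorer injectable also makes the stage easy to test in isolation.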

03

Validation Agent

An adversarial validation agent confirms that citations exist, claims are grounded, and answers engage with specific facts. It rejects hallucinated drafts and triggers a rewrite.
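One part of this check, confirming that every cited source actually exists among the retrieved evidence, can be sketched as follows. The signature and (filename, page) tuple shape are illustrative; the real agent also verifies that claims are grounded in the cited text:

```python
def validate_citations(cited: set[tuple[str, int]],
                       retrieved: set[tuple[str, int]]) -> tuple[bool, set]:
    """Adversarial citation check (illustrative sketch).

    Every (filename, page) pair the draft cites must appear in the
    retrieved evidence. Any unsupported citation fails validation,
    which in the pipeline triggers a rewrite of the draft.
    """
    missing = cited - retrieved
    return (len(missing) == 0, missing)
```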

Verified Results for Students

Adversarial Validation

Every response is challenged by a dedicated validation agent that confirms citations, verifies claims, and rejects hallucinated content before delivery.

Citation-Grounded Answers

Sources are cross-referenced with the knowledge base. Students receive verified, page-level citations — never fabricated references.

IRAC-Formatted Legal Analysis

Answers follow Issue-Rule-Application-Conclusion structure tailored for legal education, with syllabus-aligned reasoning.
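The IRAC structure can be sketched as a simple container; the class and field names below are illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass

@dataclass
class IRACAnswer:
    """Illustrative holder for an Issue-Rule-Application-Conclusion answer."""
    issue: str
    rule: str
    application: str
    conclusion: str

    def render(self) -> str:
        """Render the four sections in canonical IRAC order."""
        return "\n\n".join([
            f"Issue: {self.issue}",
            f"Rule: {self.rule}",
            f"Application: {self.application}",
            f"Conclusion: {self.conclusion}",
        ])
```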

Multi-Tenant Architecture

Dedicated environments per university with isolated document stores, chat histories, and module configurations.
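Isolation of the document store can be expressed as a Qdrant-style payload filter applied to every search; the field names below are illustrative, not the platform's actual schema:

```python
def tenant_filter(university: str, module: str) -> dict:
    """Build a Qdrant-style payload filter scoping search to one tenant's module.

    Field names ("university", "module") are assumptions for illustration.
    Attaching such a filter to each vector search keeps one university's
    documents invisible to another's queries.
    """
    return {
        "must": [
            {"key": "university", "match": {"value": university}},
            {"key": "module", "match": {"value": module}},
        ]
    }
```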

Module-Specific Knowledge

Tailored to individual course modules — from Banking & FinTech Law to AI Governance — with curriculum-aligned content ingestion.

Persistent Chat History

Full conversation persistence via MongoDB so students can return to previous queries and build on past answers.

Technical Foundation

FastEmbed
BAAI/bge-large-en-v1.5 embeddings
Cross-Encoder
Xenova/ms-marco-MiniLM re-ranking
Qdrant
Vector database with payload filtering
MongoDB
Chat history & user state persistence
FastAPI
Python backend with async endpoints
Next.js
React frontend with SSR

NyvLearn in Action

Placeholder screenshots showing the platform interface. Actual product screenshots coming soon.

Student Query View
Chat Interface

IRAC-formatted legal answers with verified citations and source references

Course Management
Module Dashboard

Multi-tenant module selection with Banking & FinTech, M&A, AI Governance, and more

Agent Pipeline Monitor
Validation Log

Real-time visibility into the 3-agent pipeline: Retrieval → Re-rank → Validation status

Source Traceability
Citation View

Page-level citations cross-referenced with uploaded academic materials

Live Deployment

Active since February 2026

Queen Mary University of London

NyvLearn is deployed at QMUL across four academic modules, providing students with verified, citation-backed research assistance. The platform handles hundreds of queries per week with zero tolerance for hallucination.

Banking & FinTech
Financial Rescue & Recovery
M&A & Corporate Finance
AI Governance & Technology
4 Academic Modules
3-Agent Pipeline
0 Hallucinations Tolerated
100% Citation Validation

See NyvLearn in Action

We offer live demos for academic institutions, research collaborations, and technical partnerships. Let us show you how NyvLearn makes AI reliable.