Deep Research Engine

v0.2

AI-Powered Research Engine

Ingest documents, search the web, and generate comprehensive, cited research reports using a deterministic AI pipeline. Get answers faster with real-time streaming and automatic quality evaluation.

The Research Pipeline

1. Plan: break the question down into sub-questions
2. Retrieve: search ingested documents
3. Web Search: augment with live web results
4. Write: synthesize evidence into a report
5. Judge & Refine: score quality, refine if needed
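The five steps above can be sketched end to end. This is an illustrative outline only: every helper below is a stand-in for the engine's real LLM and search calls, and all names are hypothetical.

```python
from typing import List, Tuple

# Illustrative sketch of the deterministic 5-step pipeline.
# All helper functions are stand-ins, not the engine's actual API.

def plan(question: str) -> List[str]:
    # 1. Plan: an LLM would break the question into sub-questions.
    return [f"{question} (background)", f"{question} (recent developments)"]

def retrieve_local(sub_q: str) -> List[str]:
    # 2. Retrieve: search over ingested documents.
    return [f"local evidence for: {sub_q}"]

def search_web(sub_q: str) -> List[str]:
    # 3. Web Search: augment with live results.
    return [f"web evidence for: {sub_q}"]

def write_report(question: str, evidence: List[str]) -> str:
    # 4. Write: synthesize the evidence into a cited report.
    return f"Report on {question!r} citing {len(evidence)} sources."

def judge(report: str) -> Tuple[float, str]:
    # 5. Judge: score quality; a low score triggers a refinement pass.
    return 0.9, "looks good"

def refine(report: str, feedback: str) -> str:
    return report + f" (refined: {feedback})"

def run_pipeline(question: str, max_refinements: int = 2) -> str:
    sub_questions = plan(question)
    evidence: List[str] = []
    for sq in sub_questions:
        evidence += retrieve_local(sq)
        evidence += search_web(sq)
    report = write_report(question, evidence)
    for _ in range(max_refinements):
        score, feedback = judge(report)
        if score >= 0.8:   # quality threshold is an assumed value
            break
        report = refine(report, feedback)
    return report

print(run_pipeline("What is retrieval-augmented generation?"))
```

Because each step only consumes the previous step's output, the pipeline is deterministic in structure even when individual LLM calls vary.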

Core Features

• Multi-Source Ingest: PDFs, URLs, GitHub repos
• Dual Search: local retrieval + web augmentation
• Deep Reports: deterministic 5-step pipeline
• Auto Flashcards: study cards generated from reports

Architecture Stack

Frontend

  • Next.js 15 (React 19)
  • Real-time SSE streaming
  • Tailwind CSS + Lucide icons
  • Vercel deployment ready
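The real-time streaming above relies on Server-Sent Events. As a minimal sketch of the wire format (the event names here are illustrative, not the app's actual protocol), pipeline progress can be framed as SSE messages like this:

```python
import json
from typing import Dict, Iterator

# Each SSE frame is an "event:" line plus a "data:" line, terminated by a
# blank line. A FastAPI endpoint would yield these frames over a
# text/event-stream response; event names below are assumptions.

def sse_frame(event: str, data: Dict) -> str:
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def stream_progress() -> Iterator[str]:
    yield sse_frame("step", {"name": "plan", "status": "done"})
    yield sse_frame("token", {"text": "Report section..."})
    yield sse_frame("complete", {"report_id": "demo-123"})

for frame in stream_progress():
    print(frame, end="")
```

On the Next.js side, a standard `EventSource` (or a fetch-based reader) consumes these frames to update the UI as each pipeline step finishes.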

Backend

  • FastAPI + Python 3.11+
  • SQLite + SQLAlchemy async
  • Pydantic v2 validation
  • Railway deployment ready

LLM & Search

  • 7 LLM providers (OpenRouter, Groq, OpenAI, etc.)
  • DuckDuckGo + Tavily web search
  • Circuit breaker + auto-failover
  • Free-tier models available
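The circuit breaker plus auto-failover pattern mentioned above can be sketched as follows. This is an assumed, minimal implementation (the engine's actual thresholds and cooldowns may differ): after a few consecutive failures a provider is skipped for a cooldown period, and calls fall through to the next provider in priority order.

```python
import time
from typing import Callable, List, Tuple

# Minimal circuit-breaker sketch; thresholds are assumed values.

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, cooldown: float = 60.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = 0.0

    def available(self) -> bool:
        if self.failures < self.max_failures:
            return True
        # Half-open after the cooldown: allow one trial call.
        return time.monotonic() - self.opened_at >= self.cooldown

    def record_success(self) -> None:
        self.failures = 0

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()

def call_with_failover(
    providers: List[Tuple[str, Callable[[str], str], CircuitBreaker]],
    prompt: str,
) -> str:
    # Try each (name, callable, breaker) in priority order,
    # skipping providers whose breaker is open.
    for name, call, breaker in providers:
        if not breaker.available():
            continue
        try:
            result = call(prompt)
            breaker.record_success()
            return result
        except Exception:
            breaker.record_failure()
    raise RuntimeError("all providers unavailable")
```

With OpenRouter first and Groq second in the provider list, a rate-limited primary trips its breaker and subsequent calls go straight to the fallback until the cooldown expires.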

LLM Models Used

Primary (OpenRouter)

Meta Llama 3.3 70B Instruct (free tier)

Fallback (Groq)

Llama 3.3 70B Versatile (fast inference)

Embeddings

sentence-transformers → Cohere → OpenAI fallback
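The embedding fallback order above (sentence-transformers, then Cohere, then OpenAI) can be expressed as a simple provider chain. A hedged sketch, with the real client SDKs replaced by generic callables:

```python
from typing import Callable, List, Sequence

# Sketch of an embedding fallback chain; the providers list would hold
# wrappers around sentence-transformers, Cohere, and OpenAI clients,
# in that priority order.

EmbedFn = Callable[[Sequence[str]], List[List[float]]]

def embed_with_fallback(
    texts: Sequence[str],
    providers: List[EmbedFn],
) -> List[List[float]]:
    errors: List[Exception] = []
    for provider in providers:
        try:
            return provider(texts)
        except Exception as exc:
            errors.append(exc)  # remember why this provider failed
    raise RuntimeError(f"all embedding providers failed: {errors}")
```

One caveat this sketch ignores: different providers produce vectors of different dimensions, so falling back mid-corpus would require re-embedding or per-provider indexes.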

Other Providers

OpenAI, Gemini, DeepSeek, Grok (switchable)

Ready to Start?

Ingest your first document, or just ask a question and let the engine search the web. The AI will research the topic and stream its findings in real time.

Learn more on the Architecture & How It Works page