
Query Now

MultiDoc RAG Platform

TypeScript · Agentic RAG · OpenAI Embeddings · Neo4j · Pinecone · Redis

Introduction

Query Now combines AI-powered search with knowledge graphs to change how users manage and interact with their documents. The platform lets users upload documents in formats such as PDF, DOCX, and TXT and hold intelligent conversations with their knowledge base. Advanced embeddings and entity extraction provide deep document understanding, so users can ask natural language questions and receive contextual answers.

Architecture Overview

Document-to-Graph Pipeline

Query Now processes documents through a multi-stage pipeline (a minimal sketch follows the list):

  1. Document Upload - Supports PDF, DOCX, and TXT formats
  2. Text Extraction & Chunking - Intelligent document segmentation
  3. LLM-Powered Ontology Generation - Automatic schema creation
  4. OpenAI Embeddings - 3072-dimensional vector representations
  5. Parallel Storage:
    • Pinecone Vector Store - Semantic search capabilities
    • Neo4j Knowledge Graph - Entity relationship mapping
    • Redis Cache - Query optimization and performance
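
The sketch below is a minimal TypeScript illustration of the ingestion flow under stated assumptions: the chunking is a naive fixed-size split, the helper names are invented for this example, and the writes to Pinecone, Neo4j, and Redis are deferred to the component sketches further down rather than shown here.

```typescript
// Illustrative ingestion pipeline; helper names and wiring are assumptions,
// not the actual Query Now codebase.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

type Chunk = { id: string; text: string; source: string };

// Steps 1-2: text extraction & chunking (naive fixed-size split for brevity)
function chunkText(source: string, text: string, size = 1200): Chunk[] {
  const chunks: Chunk[] = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push({
      id: `${source}#${chunks.length}`,
      text: text.slice(i, i + size),
      source,
    });
  }
  return chunks;
}

// Step 4: embed every chunk with text-embedding-3-large (3072 dimensions)
async function ingestDocument(source: string, text: string) {
  const chunks = chunkText(source, text);
  const { data } = await openai.embeddings.create({
    model: "text-embedding-3-large",
    input: chunks.map((c) => c.text),
  });
  // Steps 3 & 5 (ontology generation, parallel writes to Pinecone/Neo4j/Redis)
  // are sketched separately under Core Components below.
  return chunks.map((chunk, i) => ({ chunk, embedding: data[i].embedding }));
}
```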

Agentic Retrieval System

The system employs autonomous AI agents for intelligent query processing (a routing sketch follows the list):

  • Query Analysis Agent - Understands user intent and context
  • Dynamic Strategy Selection - Chooses optimal retrieval methods:
    • Vector Similarity Search (OpenAI embeddings)
    • Graph Traversal (Neo4j Cypher queries)
    • Logical Filtering (metadata/attributes)
  • Multi-Step Reasoning - Iterative query refinement
  • Response Graph Generation - Structured knowledge extraction
  • Streaming Responses - Real-time reasoning chain visualization
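
A minimal sketch of the strategy-routing idea, assuming a hypothetical classifyIntent helper and a common RetrievalTool interface shared by the three retrieval methods; the platform's actual agent logic is more involved than this.

```typescript
// Illustrative routing between retrieval strategies (names are assumptions).
type Strategy = "vector" | "graph" | "filter";

interface RetrievalTool {
  retrieve(query: string): Promise<string[]>; // returns context passages
}

async function answerQuery(
  query: string,
  tools: Record<Strategy, RetrievalTool>,
  classifyIntent: (q: string) => Promise<Strategy[]>,
): Promise<string[]> {
  // Query Analysis Agent: decide which strategies apply to this question
  const strategies = await classifyIntent(query);

  let context: string[] = [];
  for (const strategy of strategies) {
    // Multi-step reasoning: each pass adds context that can inform the next
    const passages = await tools[strategy].retrieve(query);
    context = context.concat(passages);
  }
  return context;
}
```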

Core Components

Embedding Layer

  • Model: OpenAI text-embedding-3-large
  • Dimensions: 3072
  • Coverage: Documents, entities, relationships, and queries
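
For reference, the query-side embedding call with the official openai Node package looks roughly like this (the example question is made up):

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Embed a user question; the same call covers documents, entities, and relationships
async function embedQuery(question: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-large",
    input: question,
  });
  return res.data[0].embedding; // 3072 numbers
}
```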

Vector Store (Pinecone)

  • Semantic similarity search
  • Hybrid search capabilities
  • Advanced metadata filtering
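
A hedged sketch of a filtered similarity query with the @pinecone-database/pinecone client; the index name and metadata field are assumptions:

```typescript
import { Pinecone } from "@pinecone-database/pinecone";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pc.index("query-now-chunks"); // index name is an assumption

// Semantic search restricted to one source document via a metadata filter
async function searchChunks(queryVector: number[], source: string) {
  return index.query({
    vector: queryVector, // 3072-dim embedding of the user question
    topK: 5,
    filter: { source: { $eq: source } },
    includeMetadata: true,
  });
}
```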

Graph Database (Neo4j)

  • Entity resolution & deduplication
  • Relationship extraction and mapping
  • Dynamic ontology management
  • Automated Cypher query generation
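
A sketch of graph traversal with the neo4j-driver package; the Entity label and property names are assumptions about the generated ontology:

```typescript
import neo4j from "neo4j-driver";

const driver = neo4j.driver(
  process.env.NEO4J_URI!,
  neo4j.auth.basic(process.env.NEO4J_USER!, process.env.NEO4J_PASSWORD!),
);

// List outgoing relationships of an entity (labels/properties are illustrative)
async function neighborsOf(name: string) {
  const session = driver.session();
  try {
    const result = await session.run(
      `MATCH (e:Entity {name: $name})-[r]->(n:Entity)
       RETURN type(r) AS relation, n.name AS neighbor`,
      { name },
    );
    return result.records.map((rec) => ({
      relation: rec.get("relation"),
      neighbor: rec.get("neighbor"),
    }));
  } finally {
    await session.close();
  }
}
```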

Cache Layer (Redis)

  • Query result caching
  • Session management
  • Performance optimization
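
A minimal caching sketch with the node-redis client; the key prefix and one-hour TTL are assumptions:

```typescript
import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

// Cache query results keyed by the question text
async function cachedAnswer(
  question: string,
  compute: () => Promise<string>,
): Promise<string> {
  const key = `qa:${question}`;
  const hit = await redis.get(key);
  if (hit) return hit; // cache hit: skip retrieval and generation entirely

  const answer = await compute();
  await redis.set(key, answer, { EX: 3600 }); // expire after one hour
  return answer;
}
```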

AI Orchestration

  • GPT-4 for ontology generation & entity extraction
  • Autonomous agent routing
  • Multi-tool reasoning capabilities
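
A sketch of LLM-driven entity and relationship extraction with the openai chat API; the prompt, JSON shape, and model name are assumptions rather than the platform's actual configuration:

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Ask the model to extract entities and relationships from a chunk as JSON.
async function extractEntities(chunk: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o", // model name is an assumption; the section only says GPT-4
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "Extract entities and relationships from the text. " +
          'Reply as JSON: {"entities": [], "relationships": [{"from": "", "to": "", "type": ""}]}',
      },
      { role: "user", content: chunk },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```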

Key Features

Intelligent Knowledge Processing

  • Automatic Ontology Generation - LLM extracts entities, relationships, and hierarchies
  • Entity Resolution & Deduplication - Intelligent merging of similar entities (see the sketch after this list)
  • OpenAI Embeddings - 3072-dimensional vectors for all graph elements
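
One common way to realize entity resolution is to merge entities whose embeddings are nearly identical. A minimal greedy sketch, with the similarity threshold chosen arbitrarily for illustration:

```typescript
type Entity = { name: string; embedding: number[] };

// Cosine similarity between two embedding vectors
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Greedy deduplication: keep the first occurrence of each near-duplicate cluster
function dedupeEntities(entities: Entity[], threshold = 0.92): Entity[] {
  const kept: Entity[] = [];
  for (const e of entities) {
    if (!kept.some((k) => cosine(k.embedding, e.embedding) >= threshold)) {
      kept.push(e);
    }
  }
  return kept;
}
```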

Advanced Retrieval

  • Agentic Retrieval - Dynamic tool selection across vector/graph/filter methods
  • Multi-Step Reasoning - Iterative query refinement for complex questions
  • Visual Knowledge Graphs - Interactive graph visualization

Developer Experience

  • Streaming Responses - Real-time reasoning chains
  • Production-Ready - Scalable architecture for enterprise use
  • API-First Design - RESTful APIs for seamless integration (client example below)
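
As an illustration of the API-first, streaming surface, a client could consume a hypothetical /api/query endpoint like this (the route, host, and payload shape are assumptions):

```typescript
// Read a streamed reasoning chain from a hypothetical REST endpoint
async function streamQuery(question: string): Promise<void> {
  const res = await fetch("https://example.com/api/query", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value)); // print chunks as they arrive
  }
}
```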

Use Cases

  • Enterprise Knowledge Management - Centralize and query organizational knowledge
  • Research & Analysis - Extract insights from large document collections
  • Customer Support - Build intelligent FAQ and documentation systems
  • Legal & Compliance - Navigate complex regulatory documents
  • Technical Documentation - Create searchable engineering knowledge bases