
RLAMA

Local assistant for document question answering using RAG.

Introduction

What is RLAMA?

RLAMA is a local assistant tool for document question answering, built on Retrieval-Augmented Generation (RAG). It connects to local Ollama models to index and process documents efficiently, letting users create, manage, and query their document knowledge bases entirely on their own machines.

How to use RLAMA?

To use RLAMA, first index your document folder with 'rlama rag [model] [rag-name] [folder-path]'. Then start an interactive session with 'rlama run [rag-name]' to query your documents, as in the example below.
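For example, a typical session might look like the following sketch. The model name llama3, the RAG name my-docs, and the folder ./docs are placeholders; substitute your own Ollama model, index name, and document directory.

    # Index every document in ./docs into a RAG named "my-docs",
    # using the local Ollama model "llama3"
    rlama rag llama3 my-docs ./docs

    # Start an interactive question-answering session over that index
    rlama run my-docs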

RLAMA use cases