
🧠 Building a Lightweight LLM-Powered Q&A API Using Ollama and Node.js | by Raviya Technical | AdonisJS | Jul, 2025


Modern AI applications often require integrating local or hosted Large Language Models (LLMs) into web backends. In this article, we walk through the structure of a simple yet efficient Node.js API that uses Ollama to run LLMs such as Mistral locally. The API answers questions about a predefined text file by splitting it into overlapping chunks and selecting the most relevant ones with keyword matching, as sketched below.

project-root/
├── src/
│   ├── config/
│   │   └── llms.js                # LLM configuration
│   ├── controllers/
│   │   └── api/
│   │       └── llmsController.js  # Core logic for question answering
│   ├── services/
│   │   └── llms/
│   │       └── ollamaService.js   # Communication with Ollama API
│   ├── utils/
│   │   ├── llms.js                # Text chunking and relevance logic
│   │   └── utils.js               # Utility functions (loadTXT, apiResponse)
│   └── routes/
│       └── api/
│           └── llms.routes.js     # API route definition
└── uploads/
    └── docs/
        └── example.txt            # The source document for Q&A

The config file, src/config/llms.js, defines the model and the chunking parameters used to split documents into manageable pieces:

module.exports = {
  OLLAMA_URL: "http://localhost:11434",
  MODEL_NAME: process.env.MODEL_NAME || "mistral",
  // Larger chunks keep more context per piece, at the cost of longer prompts:
  // CHUNK_SIZE: 1000,
  // OVERLAP: 100,
  CHUNK_SIZE: 300, // characters per chunk
  OVERLAP: 50,     // characters shared between consecutive chunks
};
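
Given those settings, src/services/llms/ollamaService.js only needs to forward a prompt to the Ollama server. A minimal sketch, assuming Node.js 18+ (for the built-in fetch) and Ollama's standard /api/generate endpoint:

// src/services/llms/ollamaService.js (illustrative sketch)
const { OLLAMA_URL, MODEL_NAME } = require("../../config/llms");

// Send a prompt to the local Ollama server and return the generated
// text. stream: false asks Ollama for a single JSON reply instead of
// a stream of partial tokens.
async function generate(prompt) {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: MODEL_NAME, prompt, stream: false }),
  });
  if (!res.ok) {
    throw new Error(`Ollama request failed: ${res.status}`);
  }
  const data = await res.json();
  return data.response;
}

module.exports = { generate };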
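The controller then ties everything together: load the source document, select the most relevant chunks, and ask the model to answer from that context. This sketch assumes an Express app and treats the loadTXT and apiResponse helpers named in the project tree as hypothetical signatures:

// src/controllers/api/llmsController.js (illustrative sketch)
const path = require("path");
const { loadTXT, apiResponse } = require("../../utils/utils");
const { chunkText, findRelevantChunks } = require("../../utils/llms");
const { generate } = require("../../services/llms/ollamaService");

async function ask(req, res) {
  try {
    const { question } = req.body;
    // Read the predefined source document from uploads/docs.
    const text = await loadTXT(
      path.join(__dirname, "../../../uploads/docs/example.txt")
    );
    const chunks = chunkText(text);
    const context = findRelevantChunks(question, chunks).join("\n---\n");
    const prompt = `Answer the question using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
    const answer = await generate(prompt);
    return apiResponse(res, { answer });
  } catch (err) {
    return apiResponse(res, { error: err.message }, 500);
  }
}

module.exports = { ask };

The route file in src/routes/api/llms.routes.js then just exposes the controller; the /ask path here is an assumption for the sketch:

// src/routes/api/llms.routes.js (illustrative sketch)
const express = require("express");
const { ask } = require("../../controllers/api/llmsController");

const router = express.Router();
router.post("/ask", ask);

module.exports = router;

Depending on where the router is mounted, a request would then look like POST /api/llms/ask with a JSON body such as {"question": "What does the document say about pricing?"}.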
