Featured Projects

LLM-Powered Documentation Assistant

Built a RAG-based Q&A system using LangChain, GPT-4, and ChromaDB to answer questions from large document sets.

Read Case Study

Telemetry Prediction System

Designed a FastAPI-based system for real-time predictions with scheduled model training and PostgreSQL integration.

Read Case Study

Cloud MLOps Pipeline

Automated ML model deployment pipelines on Azure Kubernetes Service using Azure DevOps and MLflow tracking.

Read Case Study

LLM-Powered Documentation Assistant

A scalable, intelligent document Q&A system using LangChain, LLMs, and Vector Databases. Designed to extract accurate answers from technical documentation using Retrieval-Augmented Generation (RAG).

Problem

Needed a smart internal assistant to help employees query large volumes of technical product documentation and manuals. Traditional keyword-based search was ineffective, leading to wasted time and inaccurate answers.

Solution

  • Developed a RAG pipeline using LangChain and OpenAI GPT.
  • Indexed document corpus using ChromaDB (VectorDB) for fast similarity search.
  • Deployed as an interactive web app using FastAPI + Streamlit.
  • Chunked and embedded documents using sentence-transformers (see the ingestion sketch below).
  • Added logging and observability to track LLM query effectiveness.
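
The ingestion side of this pipeline might look roughly like the sketch below. It assumes the classic LangChain import paths (newer releases move these into langchain_community and langchain_text_splitters); the document path, chunk sizes, and embedding model name are illustrative, not the project's actual values.

# Ingestion sketch: chunk a manual, embed the chunks, and index them in ChromaDB.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

# Load one technical manual (placeholder path).
documents = TextLoader("docs/product_manual.txt").load()

# Split long documents into overlapping chunks so each embedding stays focused.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(documents)

# sentence-transformers model wrapped by LangChain's HuggingFaceEmbeddings.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Persist the index locally; ChromaDB serves the similarity search at query time.
vectordb = Chroma.from_documents(chunks, embeddings, persist_directory="chroma_index")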

Architecture

RAG System Architecture Diagram

The system uses a VectorDB (ChromaDB or FAISS) for document retrieval, passes the retrieved chunks through LangChain’s RAG chain, and uses OpenAI’s GPT-4 to generate natural-language answers. The final output is rendered in a responsive UI.
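
A minimal sketch of that query path, under the same assumptions as the ingestion sketch above (classic LangChain imports, illustrative index path and model names):

# Query sketch: retrieve relevant chunks from ChromaDB and let GPT-4 compose the answer.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectordb = Chroma(persist_directory="chroma_index", embedding_function=embeddings)

# "stuff" simply concatenates the retrieved chunks into the prompt sent to the LLM.
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-4", temperature=0),
    chain_type="stuff",
    retriever=vectordb.as_retriever(search_kwargs={"k": 4}),
)

print(qa_chain.run("How do I reset the device to factory settings?"))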

Tech Stack

LangChain · OpenAI GPT · ChromaDB · FastAPI · Streamlit · Docker · sentence-transformers

Impact

  • 90% reduction in document search time.
  • Enabled employees to self-serve technical Q&A with high accuracy.
  • Laid the foundation for multilingual support and feedback-loop training.

Real-Time Telemetry & Prediction System

A real-time telemetry system built to predict asset safety using live metrics like setpoint, SOC, and temperature. This FastAPI-powered project handles model training, prediction, and asynchronous database operations, and is designed for future integration with Kafka and Redis for scalable streaming.

Key Features

  • Async FastAPI endpoints for real-time data ingestion
  • RandomForestClassifier for safety prediction
  • Endpoints for submitting data, training the model, and getting predictions (see the sketch after this list)
  • Model serialization with joblib
  • Support for dynamic threshold alerting and inference scheduling
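
A condensed sketch of how these endpoints could fit together. The field names, the in-memory list standing in for the async PostgreSQL layer, and the model filename are illustrative and not the actual predictive_api implementation:

# Simplified sketch of the submit / train / predict endpoints.
from typing import List, Optional

import joblib
import numpy as np
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from sklearn.ensemble import RandomForestClassifier

app = FastAPI()
MODEL_PATH = "safety_model.joblib"      # illustrative artifact name
telemetry_store: List[dict] = []        # stand-in for the async PostgreSQL layer

class TelemetryReading(BaseModel):
    setpoint: float
    soc: float
    temperature: float
    is_safe: Optional[bool] = None      # label is only known for historical readings

@app.post("/telemetry")
async def submit(reading: TelemetryReading):
    telemetry_store.append(reading.dict())
    return {"stored": len(telemetry_store)}

@app.post("/train")
async def train():
    labelled = [r for r in telemetry_store if r["is_safe"] is not None]
    if not labelled:
        raise HTTPException(status_code=400, detail="No labelled telemetry to train on")
    X = np.array([[r["setpoint"], r["soc"], r["temperature"]] for r in labelled])
    y = np.array([r["is_safe"] for r in labelled])
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    joblib.dump(model, MODEL_PATH)      # serialize with joblib, as in the feature list
    return {"trained_on": len(labelled)}

@app.post("/predict")
async def predict(reading: TelemetryReading):
    model = joblib.load(MODEL_PATH)
    features = np.array([[reading.setpoint, reading.soc, reading.temperature]])
    return {"safe": bool(model.predict(features)[0])}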

Tech Stack

FastAPI · scikit-learn · PostgreSQL (async) · joblib · Kafka (planned) · Redis (planned) · Docker / Uvicorn

Scalability & Monitoring

Designed to support horizontal scaling with container orchestration (e.g., ECS, Kubernetes). Plans include Kafka integration for telemetry streaming, TimescaleDB for time-series data, and a background task scheduler for automated retraining and alerting.
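
One possible shape of the planned retraining scheduler, sketched with asyncio; the interval, model filename, and fetch_training_data coroutine are assumptions rather than existing project code:

# Illustrative periodic retraining loop for the planned background scheduler.
import asyncio

import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

RETRAIN_INTERVAL_SECONDS = 3600          # assumed: retrain hourly

async def periodic_retraining(fetch_training_data):
    """Retrain and re-serialize the safety model on a fixed interval."""
    while True:
        X, y = await fetch_training_data()   # e.g. an async query against PostgreSQL/TimescaleDB
        if len(X) > 0:
            model = RandomForestClassifier(n_estimators=100).fit(np.asarray(X), np.asarray(y))
            joblib.dump(model, "safety_model.joblib")
        await asyncio.sleep(RETRAIN_INTERVAL_SECONDS)

# Started from the FastAPI app, e.g.:
# @app.on_event("startup")
# async def start_scheduler():
#     asyncio.create_task(periodic_retraining(load_recent_telemetry))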

Example Setup


# Setup Environment
conda create --name mlops_env --file requirements_challenge.txt
conda activate mlops_env
pip install -r pip-requirements.txt

# Run API locally
uvicorn predictive_api:app --reload

Use Case

Designed for use in energy, IoT, and industrial applications where real-time safety and performance monitoring are critical. Future enhancements include weekly/monthly reporting via unified endpoints and multivariate telemetry statistics.

GitHub

View on GitHub


Cloud MLOps Pipeline Automation

Automated ML lifecycle using Azure DevOps, MLflow, and AKS for continuous deployment of scalable machine learning systems.

Project Overview

This project demonstrates end-to-end MLOps pipeline automation, leveraging Azure DevOps for CI/CD, MLflow for experiment tracking and model registry, and Azure Kubernetes Service (AKS) for scalable model deployment. The pipeline supports automated training, validation, and deployment workflows to streamline machine learning operations in production.
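
The training stage invoked by the CI/CD pipeline could log and register models roughly as sketched below. The experiment name, registered model name, and synthetic dataset are placeholders, and registration assumes an MLflow tracking server with a registry backend:

# Training step run by the Azure DevOps pipeline: log the run and register the model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

mlflow.set_experiment("aks-demo-experiment")        # placeholder experiment name

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    # Registering the model makes it available to the deployment stage targeting AKS.
    mlflow.sklearn.log_model(model, "model", registered_model_name="aks-demo-model")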

Architecture & Technologies

Azure DevOps pipelines drive CI/CD, MLflow handles experiment tracking and the model registry, and Azure Kubernetes Service (AKS) hosts the deployed models.

Results & Demo

The pipeline successfully automates the model lifecycle from development to production with reduced manual intervention, ensuring robust deployment and monitoring.

Demo and screenshots will be added here soon.

GitHub

View on GitHub
