SSM

Sparse State Space Models for World Modeling

Built a sparse sequence-modeling pipeline using Mamba-2 in JAX with vector quantization for efficient latent representations.

  • 4x faster inference through linear-time sequence modeling.
  • 60% lower memory usage with discrete latent quantization.
  • 25% stronger long-horizon prediction versus recurrent baselines.
World Models · Python · JAX · Mamba-2 · Vector Quantization
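The discrete latent quantization highlighted above boils down to a nearest-codebook lookup. A minimal NumPy sketch of that step (the names `vq_quantize` and `codebook` are illustrative, not taken from the project, and a real VQ layer would also carry a straight-through gradient):

```python
import numpy as np

def vq_quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry.

    latents:  (N, D) continuous latents
    codebook: (K, D) learned discrete codes
    Returns the quantized vectors and their code indices.
    """
    # Squared distances via ||x||^2 - 2 x.c + ||c||^2 (avoids an (N, K, D) tensor)
    dists = (
        (latents ** 2).sum(axis=1, keepdims=True)
        - 2.0 * latents @ codebook.T
        + (codebook ** 2).sum(axis=1)
    )
    idx = dists.argmin(axis=1)          # (N,) discrete code index per latent
    return codebook[idx], idx
```

Storing the `idx` integers instead of the continuous latents is what drives the memory savings: one small integer per latent rather than a full float vector.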
NLP

FakeNewsClassifier Research Package

Designed and trained BERT and RoBERTa classification pipelines with robust benchmarking for noisy and synthetic label settings.

  • 95% accuracy on a 50K+ sample training set.
  • 12% performance degradation quantified under increasing noise.
  • Reproducible robustness benchmarks for comparative model evaluation.
Python · PyTorch · Transformers · BERT
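The noisy-label robustness benchmark rests on injecting label noise at a controlled rate and re-measuring accuracy. A small sketch of that injection step, assuming symmetric (uniform) noise; the helper name `inject_label_noise` is illustrative, not from the package:

```python
import numpy as np

def inject_label_noise(labels, noise_rate, num_classes, seed=0):
    """Flip each label to a uniformly random *different* class
    with probability `noise_rate` (symmetric label noise)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip = rng.random(len(labels)) < noise_rate
    # Adding an offset in [1, num_classes) mod num_classes guarantees
    # the new label differs from the original.
    offsets = rng.integers(1, num_classes, size=len(labels))
    labels[flip] = (labels[flip] + offsets[flip]) % num_classes
    return labels
```

Sweeping `noise_rate` (e.g. 0.0 to 0.4) and retraining at each level is one reproducible way to quantify the kind of degradation curve reported above.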
RAG

ChatRAG: RAG-WebUI Chatbot

Created a retrieval-augmented assistant with semantic search and vector similarity retrieval over a document collection.

  • 40% reduction in hallucinations using retrieval-grounded responses.
  • Sub-200ms query latency with optimized embedding retrieval.
  • Semantic search indexing over 1K+ documents.
Python · LangChain · Vector DB · RAG
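At its core, the vector similarity retrieval described above ranks document embeddings by cosine similarity to the query embedding. A minimal NumPy sketch (the function name `top_k` is illustrative; the project uses LangChain with a vector database rather than a raw matrix product):

```python
import numpy as np

def top_k(query_emb, doc_embs, k=3):
    """Return indices and cosine scores of the k documents
    most similar to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    idx = np.argsort(-scores)[:k]        # highest-scoring documents first
    return idx, scores[idx]
```

The retrieved passages are then placed in the prompt so the model answers from sourced text, which is the mechanism behind the hallucination reduction claimed above.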