About the Project

This project focuses on fine-tuning a pre-trained large language model, specifically Google's Gemma 2b instruction-tuned model, to enhance its ability to answer questions related to Python programming.

By fine-tuning on a domain-specific dataset, the aim is to improve the model's accuracy and relevance, particularly when answering questions on sensitive topics such as mental health, where the pre-trained model typically returns only generic answers.
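As a rough illustration of how such a fine-tuning run might be set up (a minimal sketch, not this project's actual code), the example below assumes parameter-efficient LoRA fine-tuning with the Hugging Face transformers, peft, and datasets libraries; the dataset file qa_pairs.json, the prompt format, and all hyperparameters are hypothetical placeholders.

```python
# Sketch: LoRA fine-tuning of google/gemma-2b-it on a domain-specific Q&A dataset.
# Assumes transformers, peft, and datasets are installed; dataset file,
# prompt format, and hyperparameters are illustrative, not the project's own.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Attach low-rank adapters so only a small fraction of the weights are trained.
lora_config = LoraConfig(r=8, lora_alpha=16,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Hypothetical domain-specific dataset with "question" and "answer" columns.
dataset = load_dataset("json", data_files="qa_pairs.json", split="train")

def tokenize(example):
    # Simple prompt template; a real run would match the model's chat format.
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma-2b-it-finetuned",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           logging_steps=10),
    train_dataset=tokenized,
    # Causal-LM collator (mlm=False) pads batches and copies inputs to labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

After training, the LoRA adapter can be saved with `model.save_pretrained(...)` and, if desired, merged back into the base weights for standalone inference.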

Technique

Achievements

Further Development

Potential areas for further development include: