The present research explores a pre-trained transformer's capability to power a conversational chatbot, using the niche approach of fine-tuning the transformer on a custom dataset. Retrieval-augmented generation (RAG), Sentence Transformers, and large language models (LLMs) are all part of the proposed technique's implementation. This experiment examines the versatility of a pre-trained transformer and its ability not only to provide semantic context for highly sensitive conversation topics but also to draw on its pre-training knowledge base to provide appropriate recommendations. The results are promising for increasing the nuance of a conversational chatbot. We offer an original perspective on improving the accuracy and effectiveness of an advanced chatbot beyond existing methodologies, shedding light on the expanding landscape of artificial conversational chatbots through this complete methodology.