The rise of artificial intelligence (AI) has opened new frontiers in many fields, including natural language processing. One of the most significant advances in this area is the development of conversational agents (i.e., chatbots), computer programs designed to interact with humans through messaging interfaces. The emergence of large language models such as ChatGPT has enabled highly sophisticated chatbots that can mimic human conversation with impressive accuracy. However, the use of these chatbots also poses significant cyber risks that must be addressed. This paper investigates the cyber risks associated with the use of ChatGPT and similar AI-based chatbots, including potential vulnerabilities that could be exploited by malicious actors. As part of this research, a survey was conducted to explore the cybersecurity risks associated with AI-based chatbots such as ChatGPT. The paper also proposes mitigation methods to address these risks and vulnerabilities.