BACKGROUND
The regulatory affairs division of a pharmaceutical establishment is the point of contact between regulatory authorities and the company. Its staff are tasked with the crucial and strenuous job of meticulously extracting and summarizing relevant information from various search systems. An AI-based intelligent search system that significantly reduces the manual effort in the regulatory affairs department's existing processes, while maintaining or improving the quality of the final outcomes, is therefore desirable. In this paper, we propose a frequently asked questions (FAQ) component and describe its utility in such an AI-based intelligent search system. The task is further complicated by the lack of publicly available datasets in the regulatory affairs domain for training the machine learning models that underpin cognitive search systems for regulatory authorities.
OBJECTIVE
This paper aims to apply AI-based computational models to automatically recognize semantically similar question pairs in the regulatory affairs domain and to evaluate the resulting recognizing question entailment (RQE)-based system.
METHODS
We used transfer learning techniques and experimented with transformer-based models, such as BERT, Clinical BERT, BioBERT, and BlueBERT, that are pre-trained on corpora collected from different resources. We evaluated model performance on a manually labeled dataset of 150 question pairs from the pharmaceutical regulatory domain.
RESULTS
The Clinical BERT model performed better than the other domain-specific BERT-based models at identifying question similarity in the regulatory affairs domain. With transfer learning, the BERT architecture learns domain-specific knowledge effectively, reaching its best performance when fine-tuned on a sufficient number of clinical-domain question pairs. The top-performing model achieved an accuracy of 90.66% on the test set.
CONCLUSIONS
This work demonstrates the feasibility of using pre-trained language models to recognize question similarity in the pharmaceutical regulatory domain. Transformer-based models pre-trained on clinical notes outperform models pre-trained on biomedical text at recognizing the semantic similarity of questions in this domain. We also discuss the challenges of using data augmentation techniques to address the scarcity of relevant data in this domain. Our experimental results indicate that increasing the number of training samples through back translation and entity replacement did not enhance the model's performance; this lack of improvement may be attributed to the intricate and specialized nature of text in the regulatory domain. Our work lays the foundation for further studies that apply state-of-the-art language models to regulatory documents in the pharmaceutical industry.