Abstract
Objective
ModelDB (https://modeldb.science) is a discovery platform for computational neuroscience, containing over 1800 published model codes with standardized metadata. These codes were mainly supplied through unsolicited submissions from model authors, but this approach is inherently limited: we estimate we have captured only about one-third of NEURON models, and smaller fractions for other simulators. To more completely characterize the state of computational neuroscience modeling, we aim to identify publications containing results derived from computational neuroscience approaches, together with their associated standardized metadata (e.g., cell types, research topics).

Materials and Methods
Known computational neuroscience work from ModelDB and neuroscience work retrieved from PubMed were included in our study. After pre-screening with SPECTER2, GPT-3.5 and GPT-4 were used to identify likely computational neuroscience work and its relevant metadata.

Results
SPECTER2, GPT-4, and GPT-3.5 identified computational neuroscience work with varying but high accuracy. GPT-4 achieved 96.9% accuracy, and GPT-3.5 improved from 54.2% to 85.5% through instruction tuning and chain-of-thought prompting. GPT-4 also showed high potential for identifying relevant metadata annotations.

Discussion
Due to computational limitations, we used only each paper's title and abstract, which partially accounts for the false negatives. Further efforts should be devoted to including more training data and improving current LLMs through fine-tuning.

Conclusion
NLP and LLM techniques can be added to ModelDB to facilitate further model discovery and will contribute to a more standardized and comprehensive framework for establishing domain-specific resources.
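The pipeline described above uses an embedding-based pre-screen before the more expensive LLM classification step. A minimal sketch of such a pre-screen is shown below; the toy 3-d vectors, paper IDs, and the 0.8 threshold are illustrative assumptions, standing in for real SPECTER2 title+abstract embeddings and whatever cutoff the study actually used:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def prescreen(candidates, seed_embeddings, threshold=0.8):
    """Keep candidate papers whose embedding is close to at least one
    known computational-neuroscience seed paper; only survivors would
    be passed on to the LLM classification step."""
    kept = []
    for paper_id, emb in candidates.items():
        score = max(cosine_similarity(emb, s) for s in seed_embeddings)
        if score >= threshold:
            kept.append((paper_id, score))
    return kept

# Toy vectors standing in for SPECTER2 embeddings (hypothetical data).
seeds = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])]
candidates = {
    "pmid:111": np.array([0.95, 0.15, 0.05]),  # similar to the seeds
    "pmid:222": np.array([0.0, 0.1, 1.0]),     # dissimilar
}
print(prescreen(candidates, seeds))
```

In this sketch only "pmid:111" survives the filter; in the actual study, surviving abstracts would then be sent to GPT-3.5 or GPT-4 for classification and metadata extraction.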
Publisher
Cold Spring Harbor Laboratory