https://www.sciencedirect.com/science/article/pii/S2667318524000023

Artificial Intelligence in the Life Sciences, Volume 5, June 2024, 100095

Research Article

Rationalism in the face of GPT hypes: Benchmarking the output of large language models against human expert-curated biomedical knowledge graphs

Negin Sadat Babaiha ^a ^b, Sathvik Guru Rao ^a, Jürgen Klein ^a, Bruce Schultz ^a, Marc Jacobs ^a, Martin Hofmann-Apitius ^a ^b

https://doi.org/10.1016/j.ailsci.2024.100095 (open access under a Creative Commons license)

Abstract

Biomedical knowledge graphs (KGs) hold valuable information regarding biomedical entities such as genes, diseases, biological processes, and drugs. KGs have been successfully employed in challenging biomedical areas such as the identification of pathophysiology mechanisms or drug repurposing. The creation of high-quality KGs typically requires labor-intensive multi-database integration or substantial human expert curation, both of which are time-consuming and add to the workload of data processing and annotation. Therefore, the use of automatic systems for KG building and maintenance is a prerequisite for the wide uptake and utilization of KGs. Technologies supporting the automated generation and updating of KGs typically make use of Natural Language Processing (NLP), optimized for extracting triples implicitly described in relevant biomedical text sources. At the core of this challenge lies the question of how to improve the accuracy and coverage of the information extraction module by utilizing different models and tools.
The emergence of pre-trained large language models (LLMs) such as ChatGPT, whose popularity has grown dramatically, has revolutionized the field of NLP, making these models potential candidates for text-based graph creation as well. So far, no previous work has investigated the power of LLMs for the generation of cause-and-effect networks and KGs encoded in the Biological Expression Language (BEL). In this paper, we present initial studies towards one-shot BEL relation extraction using two different versions of the Generative Pre-trained Transformer (GPT) models and evaluate their performance by comparing the extracted results to a highly accurate BEL KG manually curated by domain experts.

Keywords

Large language models (LLMs)
Natural language processing (NLP)
Biomedical text mining
Biomedical knowledge graphs
Biological expression language (BEL)

© 2024 The Authors. Published by Elsevier B.V. Open access under the terms of the applicable Creative Commons license.
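The one-shot BEL relation-extraction setup described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' pipeline: the prompt wording, the example sentence, and the BEL statements are assumptions made for demonstration, and the parser handles only simple subject-relation-object BEL statements of the kind used when comparing model output against curated triples.

```python
import re

# Illustrative one-shot example pairing a sentence with a BEL statement.
# (Assumed for demonstration; not taken from the paper's prompts or data.)
ONE_SHOT_EXAMPLE = (
    "Sentence: Amyloid beta increases tau phosphorylation.\n"
    "BEL: p(HGNC:APP) increases p(HGNC:MAPT, pmod(Ph))"
)

def build_prompt(sentence: str) -> str:
    """Compose a one-shot prompt: instruction, one worked example, then the query."""
    return (
        "Extract the causal relation from the sentence below "
        "as a single BEL statement.\n\n"
        f"{ONE_SHOT_EXAMPLE}\n\n"
        f"Sentence: {sentence}\nBEL:"
    )

# A BEL function term such as p(HGNC:APP), allowing one level of nesting
# for arguments like pmod(Ph).
TERM = r"\w+\((?:[^()]|\([^()]*\))*\)"

BEL_PATTERN = re.compile(
    rf"(?P<subject>{TERM})\s+"
    r"(?P<relation>directlyIncreases|directlyDecreases|"
    r"increases|decreases|association)\s+"
    rf"(?P<object>{TERM})"
)

def parse_bel(statement: str):
    """Return (subject, relation, object) from a model response, or None."""
    m = BEL_PATTERN.search(statement)
    if m is None:
        return None
    return m.group("subject"), m.group("relation"), m.group("object")
```

Parsed triples can then be compared against the expert-curated KG, e.g. by exact or normalized matching of subject, relation, and object, to score the model's precision and coverage.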