
Predicting Drug-Gene Relations via Analogy Tasks with Word Embeddings

Abstract

Natural language processing (NLP) is utilized in a wide range of fields, where words in text are typically transformed into feature vectors called embeddings. BioConceptVec is a specific example of embeddings tailored for biology, trained on approximately 30 million PubMed abstracts using models such as skip-gram. Generally, word embeddings are known to solve analogy tasks through simple vector arithmetic. For instance, king - man + woman predicts queen. In this study, we demonstrate that BioConceptVec embeddings, along with our own embeddings trained on PubMed abstracts, contain information about drug-gene relations and can predict target genes from a given drug through analogy computations. We also show that categorizing drugs and genes using biological pathways improves performance. Furthermore, we illustrate that vectors derived from known relations in the past can predict unknown future relations in datasets divided by year. Despite the simplicity of implementing analogy tasks as vector additions, our approach demonstrated performance comparable to that of large language models such as GPT-4 in predicting drug-gene relations.
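The analogy computation described above can be sketched in a few lines: form the vector a - b + c and return the vocabulary word most cosine-similar to it, excluding the query words. The toy 2-D embeddings below are hypothetical values chosen purely for illustration; real BioConceptVec vectors are high-dimensional skip-gram embeddings of biomedical concepts.

```python
import math

# Hypothetical toy embeddings (illustration only, not real trained vectors).
emb = {
    "king":     (1.0, 1.0),
    "man":      (1.0, 0.0),
    "woman":    (0.0, 1.0),
    "queen":    (0.0, 2.0),
    "prince":   (1.0, 0.5),
    "princess": (0.2, 1.5),
}

def analogy(a, b, c, embeddings):
    """Return the word closest (by cosine similarity) to a - b + c,
    excluding the three query words themselves."""
    target = [x - y + z for x, y, z in
              zip(embeddings[a], embeddings[b], embeddings[c])]

    def cos(u, v):
        dot = sum(ui * vi for ui, vi in zip(u, v))
        nu = math.sqrt(sum(ui * ui for ui in u))
        nv = math.sqrt(sum(vi * vi for vi in v))
        return dot / (nu * nv)

    return max((w for w in embeddings if w not in (a, b, c)),
               key=lambda w: cos(target, embeddings[w]))

print(analogy("king", "man", "woman", emb))  # -> queen
```

For the drug-gene setting in the paper, the same arithmetic would be applied to drug and gene concept vectors (e.g. a known drug-gene pair defines the offset, and the analogy predicts the target gene of a new drug), but the retrieval step is identical to this sketch.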

@article{yamagiwa2025_2406.00984,
  title={Predicting Drug-Gene Relations via Analogy Tasks with Word Embeddings},
  author={Hiroaki Yamagiwa and Ryoma Hashimoto and Kiwamu Arakane and Ken Murakami and Shou Soeda and Momose Oyama and Yihua Zhu and Mariko Okada and Hidetoshi Shimodaira},
  journal={arXiv preprint arXiv:2406.00984},
  year={2025}
}