A Combined Semantic Dependency and Lexical Embedding RoBERTa Model for Grid Field Relational Extraction
by: Qi Meng, Xixiang Zhang, Yun Dong, Yan Chen, Dezhao Lin
| Format: | Article |
|---|---|
| Published: | MDPI AG, 2023-10-01 |
Description
Relation extraction is a crucial step in the construction of a knowledge graph. In this research, entity relation extraction in the grid field was performed with a labeling approach based on span representation, in which the subject entity and the object entity were used as training instances to strengthen the linkage between them. The embedding layer of the RoBERTa pre-trained model combined word embedding, position embedding, and paragraph embedding information. In addition, semantic dependency was introduced to establish an effective linkage between different entities, and an additional lexical label embedding was introduced to enable the model to acquire deeper semantic insight. On top of this embedding layer, the RoBERTa model was used for multi-task learning of entities and relations, and the multi-task information was fused through a hard parameter sharing mechanism. Finally, a fully connected layer produced the predicted entity relations. The approach was tested on a grid field dataset created for this study, and the results demonstrate that the proposed model achieves high performance.
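The sketch below illustrates the architecture the abstract describes, assuming a PyTorch and Hugging Face `transformers` setup; it is not the authors' code. The lexical-tag and dependency-tag vocabulary sizes, the use of span start tokens as span representations, and the two classification heads are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the described model:
# RoBERTa embeddings fused with hypothetical lexical-label and semantic-dependency
# embeddings, a shared encoder for multi-task learning (hard parameter sharing),
# and fully connected heads for entity labeling and subject-object relation prediction.
import torch
import torch.nn as nn
from transformers import RobertaConfig, RobertaModel


class GridRelationExtractor(nn.Module):
    def __init__(self, num_entity_labels, num_relation_labels,
                 num_lexical_tags=50, num_dependency_tags=40):
        super().__init__()
        config = RobertaConfig()              # untrained config; a pre-trained checkpoint would be used in practice
        self.encoder = RobertaModel(config)   # single shared encoder = hard parameter sharing
        hidden = config.hidden_size

        # Assumed extra embeddings added on top of RoBERTa's word/position/paragraph embeddings.
        self.lexical_embed = nn.Embedding(num_lexical_tags, hidden)
        self.dependency_embed = nn.Embedding(num_dependency_tags, hidden)

        # Task-specific fully connected heads on the shared representation.
        self.entity_head = nn.Linear(hidden, num_entity_labels)          # token-level entity labels
        self.relation_head = nn.Linear(2 * hidden, num_relation_labels)  # (subject, object) pair -> relation

    def forward(self, input_ids, attention_mask, lexical_tags, dependency_tags,
                subject_spans, object_spans):
        # Fuse lexical-label and semantic-dependency information into the input embeddings.
        word_embeds = self.encoder.embeddings.word_embeddings(input_ids)
        fused = word_embeds + self.lexical_embed(lexical_tags) + self.dependency_embed(dependency_tags)

        hidden_states = self.encoder(inputs_embeds=fused,
                                     attention_mask=attention_mask).last_hidden_state

        # Entity task: classify every token.
        entity_logits = self.entity_head(hidden_states)

        # Relation task: represent each span by its start token and pair subject with object.
        batch_idx = torch.arange(input_ids.size(0))
        subj_repr = hidden_states[batch_idx, subject_spans[:, 0]]
        obj_repr = hidden_states[batch_idx, object_spans[:, 0]]
        relation_logits = self.relation_head(torch.cat([subj_repr, obj_repr], dim=-1))

        return entity_logits, relation_logits
```

In practice, `RobertaModel.from_pretrained` would replace the untrained configuration, and the lexical and dependency tag sequences would come from a part-of-speech tagger and a semantic dependency parser, which the abstract does not specify.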