A Novel Optimization Scheme for Named Entity Recognition with Pre-trained Language Models

Abstract

Named Entity Recognition (NER) is crucial for extracting structured information from text. While traditional methods rely on hand-crafted rules, Conditional Random Fields (CRFs), or deep learning, the advent of large-scale Pre-trained Language Models (PLMs) offers new possibilities. PLMs excel at in-context learning, potentially simplifying many natural language processing tasks. However, their application to NER remains underexplored. This paper investigates leveraging the GPT-3 PLM for NER without fine-tuning. We propose a novel scheme that combines carefully crafted templates with in-context examples selected by semantic similarity to the input. Our experimental results demonstrate the feasibility of this approach, suggesting a promising direction for harnessing PLMs in NER.

Type: Publication
Published in: Journal of Electronic Research and Application