Using EasyTokenTagger to quickly perform token tagging
 

We'll import the adaptnlp EasyTokenTagger class:

from adaptnlp import EasyTokenTagger
from pprint import pprint

Let's write some simple example text, and instantiate an EasyTokenTagger:

example_text = '''Novetta Solutions is the best. Albert Einstein used to be employed at Novetta Solutions. 
The Wright brothers loved to visit the JBF headquarters, and they would have a chat with Albert.'''
tagger = EasyTokenTagger()

With Transformers

First we'll use a Transformers model, specifically BERT.

We'll search HuggingFace for the model we want, in this case we want to use sshleifer's tiny-dbmdz-bert model:

from adaptnlp.model_hub import HFModelHub
hub = HFModelHub()
model = hub.search_model_by_name('sshleifer/tiny-dbmdz-bert', user_uploaded=True)[0]; model
Model Name: sshleifer/tiny-dbmdz-bert-large-cased-finetuned-conll03-english, Tasks: [token-classification]

Next we'll use our tagger to tag our example text:

sentences = tagger.tag_text(text=example_text, model_name_or_path = model)
2021-04-20 16:39:49,456 loading file /root/.flair/models/tiny-dbmdz-bert-large-cased-finetuned-conll03-english/1e2c09da4ad5b3257008353a87852a7148389cc8308b91cf837f066b95650a0d.595173de82e795b5e4022dca79d10d885137a50ed2ee3974f15a75d328c0cd0a
Note that what we passed in as model_name_or_path was not a string but the HFModelResult returned by our search; tag_text accepts these directly:

type(model)
adaptnlp.model_hub.HFModelResult

And then look at some of our results:

print("List string outputs of tags:\n")
for sen in sentences:
    pprint(sen)
List string outputs of tags:

[{'entity_group': 'I-LOC',
  'offsets': (-1, 2),
  'score': 0.11716679483652115,
  'word': '[CLS] Novetta'},
 {'entity_group': 'B-ORG',
  'offsets': (2, 3),
  'score': 0.11758644878864288,
  'word': 'Solutions'},
 {'entity_group': 'I-LOC',
  'offsets': (3, 5),
  'score': 0.11716679483652115,
  'word': 'is the'},
 {'entity_group': 'B-ORG',
  'offsets': (5, 6),
  'score': 0.11758644878864288,
  'word': 'best'},
 {'entity_group': 'I-LOC',
  'offsets': (6, 13),
  'score': 0.11716679483652115,
  'word': '. Albert Einstein used to be employed'},
 {'entity_group': 'B-ORG',
  'offsets': (13, 15),
  'score': 0.11758644878864288,
  'word': 'at Nov'},
 {'entity_group': 'I-LOC',
  'offsets': (15, 24),
  'score': 0.11716679483652115,
  'word': '##etta Solutions. The Wright brothers loved to visit'},
 {'entity_group': 'B-ORG',
  'offsets': (24, 25),
  'score': 0.11758644878864288,
  'word': 'the'},
 {'entity_group': 'I-LOC',
  'offsets': (25, 27),
  'score': 0.11716679483652115,
  'word': 'JBF'},
 {'entity_group': 'B-ORG',
  'offsets': (27, 28),
  'score': 0.11758644878864288,
  'word': 'headquarters'},
 {'entity_group': 'I-LOC',
  'offsets': (28, 31),
  'score': 0.11716679483652115,
  'word': ', and they'},
 {'entity_group': 'B-ORG',
  'offsets': (31, 32),
  'score': 0.11758644878864288,
  'word': 'would'},
 {'entity_group': 'I-LOC',
  'offsets': (32, 39),
  'score': 0.11716679483652115,
  'word': 'have a chat with Albert. [SEP]'}]

With Flair

Named Entity Recognition (NER)

With Flair we can follow a similar setup to earlier, searching HuggingFace for valid NER models. In our case we'll use Flair's ner-english-ontonotes-fast model:

from adaptnlp.model_hub import FlairModelHub
hub = FlairModelHub()
model = hub.search_model_by_name('ontonotes-fast')[0]; model
Model Name: flair/ner-english-ontonotes-fast, Tasks: [token-classification], Source: HuggingFace Model Hub

Then we'll tag the string:

sentences = tagger.tag_text(text = example_text, model_name_or_path = model)
2021-04-20 16:39:53,487 loading file /root/.flair/models/ner-english-ontonotes-fast/0d55dd3b912da9cf26e003035a0c269a0e9ab222f0be1e48a3bbba3a58c0fed0.c9907cd5fde3ce84b71a4172e7ca03841cd81ab71d13eb68aa08b259f57c00b6

And look at our results, either the tagged strings themselves:

print("List string outputs of tags:\n")
for sen in sentences:
    print(sen.to_tagged_string())
List string outputs of tags:

Novetta <B-ORG> Solutions <E-ORG> is the best . Albert <B-PERSON> Einstein <E-PERSON> used to be employed at Novetta <B-ORG> Solutions <E-ORG> . The Wright <S-PERSON> brothers loved to visit the JBF <S-ORG> headquarters , and they would have a chat with Albert <S-PERSON> .

Or the entities and their spans:

print("List entities tagged:\n")
for sen in sentences:
    for entity in sen.get_spans("ner"):
        print(entity)
List entities tagged:

Span [1,2]: "Novetta Solutions"   [− Labels: ORG (0.7751)]
Span [7,8]: "Albert Einstein"   [− Labels: PERSON (0.9917)]
Span [14,15]: "Novetta Solutions"   [− Labels: ORG (0.7489)]
Span [18]: "Wright"   [− Labels: PERSON (0.9993)]
Span [24]: "JBF"   [− Labels: ORG (0.967)]
Span [34]: "Albert"   [− Labels: PERSON (0.9979)]

Or all of the raw tagged information:

print("Get json of tagged information:\n")
for sen in sentences:
    pprint(sen.to_dict(tag_type="ner"))
Get json of tagged information:

{'entities': [{'end_pos': 17,
               'labels': [ORG (0.7751)],
               'start_pos': 0,
               'text': 'Novetta Solutions'},
              {'end_pos': 46,
               'labels': [PERSON (0.9917)],
               'start_pos': 31,
               'text': 'Albert Einstein'},
              {'end_pos': 87,
               'labels': [ORG (0.7489)],
               'start_pos': 70,
               'text': 'Novetta Solutions'},
              {'end_pos': 100,
               'labels': [PERSON (0.9993)],
               'start_pos': 94,
               'text': 'Wright'},
              {'end_pos': 132,
               'labels': [ORG (0.967)],
               'start_pos': 129,
               'text': 'JBF'},
              {'end_pos': 185,
               'labels': [PERSON (0.9979)],
               'start_pos': 179,
               'text': 'Albert'}],
 'labels': [],
 'text': 'Novetta Solutions is the best. Albert Einstein used to be employed '
         'at Novetta Solutions.  The Wright brothers loved to visit the JBF '
         'headquarters, and they would have a chat with Albert.'}
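
Because to_dict returns plain Python data, extracting entities for downstream use is straightforward. A minimal sketch (the sample dict below abbreviates the output above, dropping the labels since those print as Flair Label objects):

```python
def entity_spans(tagged):
    """Extract (text, start_pos, end_pos) triples from a Sentence.to_dict() result."""
    return [(e['text'], e['start_pos'], e['end_pos']) for e in tagged['entities']]

# Abbreviated stand-in for the to_dict() output shown above
tagged = {
    'entities': [
        {'text': 'Novetta Solutions', 'start_pos': 0, 'end_pos': 17},
        {'text': 'Albert Einstein', 'start_pos': 31, 'end_pos': 46},
    ],
    'labels': [],
    'text': 'Novetta Solutions is the best. Albert Einstein used to be employed ...',
}

spans = entity_spans(tagged)
```

The character offsets index directly into the original text, so slicing tagged['text'] with a span's start_pos and end_pos recovers the entity string.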

Parts of Speech

Next we'll look at a parts-of-speech tagger.

We can simply pass in "pos", but let's use our search API to find an English POS tagger:

hub.search_model_by_task('pos')
[Model Name: flair/pos-english-fast, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/pos-english, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/upos-english-fast, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/upos-english, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/upos-multi-fast, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/upos-multi, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/upos, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/upos-fast, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/pos, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/pos-fast, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/pos-multi, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/multi-pos, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/pos-multi-fast, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/multi-pos-fast, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/da-pos, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/de-pos, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/de-pos-tweets, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/ml-pos, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/ml-upos, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/pt-pos-clinical, Tasks: [token-classification], Source: Flair's Private Model Hub]

We'll use the pos-english-fast model:

model = hub.search_model_by_name('pos-english-fast')[0]; model
Model Name: flair/pos-english-fast, Tasks: [token-classification], Source: HuggingFace Model Hub
sentences = tagger.tag_text(text = example_text, model_name_or_path = model)
2021-04-20 16:39:58,139 loading file /root/.flair/models/pos-english-fast/36f7923039eed4c66e4275927daaff6cd275997d61d238355fb1fe0338fe10a1.ff87e5b4e47fdb42a0c00237d9506c671db773e0a7932179ace82e584383a1b8

Then just as before, we can look at our results. Either as a sentence with tags:

print("List string outputs of tags:\n")
for sen in sentences:
    print(sen.to_tagged_string())
List string outputs of tags:

Novetta <NNP> Solutions <NNPS> is <VBZ> the <DT> best <JJS> . <.> Albert <NNP> Einstein <NNP> used <VBD> to <TO> be <VB> employed <VBN> at <IN> Novetta <NNP> Solutions <NNP> . <,> The <DT> Wright <NNP> brothers <NNS> loved <VBD> to <TO> visit <VB> the <DT> JBF <NNP> headquarters <NN> , <,> and <CC> they <PRP> would <MD> have <VB> a <DT> chat <NN> with <IN> Albert <NNP> . <.>

Or with a list of tagged entities:

print("List text/entities tagged:\n")
for sen in sentences:
    for entity in sen.get_spans("pos"):
        print(entity)
List text/entities tagged:

Span [1]: "Novetta"   [− Labels: NNP (0.9991)]
Span [2]: "Solutions"   [− Labels: NNPS (0.8552)]
Span [3]: "is"   [− Labels: VBZ (1.0)]
Span [4]: "the"   [− Labels: DT (1.0)]
Span [5]: "best"   [− Labels: JJS (0.8458)]
Span [6]: "."   [− Labels: . (0.943)]
Span [7]: "Albert"   [− Labels: NNP (1.0)]
Span [8]: "Einstein"   [− Labels: NNP (1.0)]
Span [9]: "used"   [− Labels: VBD (0.9755)]
Span [10]: "to"   [− Labels: TO (0.9995)]
Span [11]: "be"   [− Labels: VB (1.0)]
Span [12]: "employed"   [− Labels: VBN (0.9998)]
Span [13]: "at"   [− Labels: IN (1.0)]
Span [14]: "Novetta"   [− Labels: NNP (1.0)]
Span [15]: "Solutions"   [− Labels: NNP (0.5506)]
Span [16]: "."   [− Labels: , (0.5253)]
Span [17]: "The"   [− Labels: DT (1.0)]
Span [18]: "Wright"   [− Labels: NNP (0.9982)]
Span [19]: "brothers"   [− Labels: NNS (0.9998)]
Span [20]: "loved"   [− Labels: VBD (0.9999)]
Span [21]: "to"   [− Labels: TO (0.9997)]
Span [22]: "visit"   [− Labels: VB (1.0)]
Span [23]: "the"   [− Labels: DT (1.0)]
Span [24]: "JBF"   [− Labels: NNP (1.0)]
Span [25]: "headquarters"   [− Labels: NN (0.9047)]
Span [26]: ","   [− Labels: , (1.0)]
Span [27]: "and"   [− Labels: CC (1.0)]
Span [28]: "they"   [− Labels: PRP (1.0)]
Span [29]: "would"   [− Labels: MD (1.0)]
Span [30]: "have"   [− Labels: VB (0.9999)]
Span [31]: "a"   [− Labels: DT (0.9999)]
Span [32]: "chat"   [− Labels: NN (0.9985)]
Span [33]: "with"   [− Labels: IN (1.0)]
Span [34]: "Albert"   [− Labels: NNP (0.9999)]
Span [35]: "."   [− Labels: . (0.9999)]

Or the raw JSON of information:

print("Get json of tagged information:\n")
for sen in sentences:
    pprint(sen.to_dict(tag_type="pos"))
Get json of tagged information:

{'entities': [{'end_pos': 7,
               'labels': [NNP (0.9991)],
               'start_pos': 0,
               'text': 'Novetta'},
              {'end_pos': 17,
               'labels': [NNPS (0.8552)],
               'start_pos': 8,
               'text': 'Solutions'},
              {'end_pos': 20,
               'labels': [VBZ (1.0)],
               'start_pos': 18,
               'text': 'is'},
              {'end_pos': 24,
               'labels': [DT (1.0)],
               'start_pos': 21,
               'text': 'the'},
              {'end_pos': 29,
               'labels': [JJS (0.8458)],
               'start_pos': 25,
               'text': 'best'},
              {'end_pos': 30,
               'labels': [. (0.943)],
               'start_pos': 29,
               'text': '.'},
              {'end_pos': 37,
               'labels': [NNP (1.0)],
               'start_pos': 31,
               'text': 'Albert'},
              {'end_pos': 46,
               'labels': [NNP (1.0)],
               'start_pos': 38,
               'text': 'Einstein'},
              {'end_pos': 51,
               'labels': [VBD (0.9755)],
               'start_pos': 47,
               'text': 'used'},
              {'end_pos': 54,
               'labels': [TO (0.9995)],
               'start_pos': 52,
               'text': 'to'},
              {'end_pos': 57,
               'labels': [VB (1.0)],
               'start_pos': 55,
               'text': 'be'},
              {'end_pos': 66,
               'labels': [VBN (0.9998)],
               'start_pos': 58,
               'text': 'employed'},
              {'end_pos': 69,
               'labels': [IN (1.0)],
               'start_pos': 67,
               'text': 'at'},
              {'end_pos': 77,
               'labels': [NNP (1.0)],
               'start_pos': 70,
               'text': 'Novetta'},
              {'end_pos': 87,
               'labels': [NNP (0.5506)],
               'start_pos': 78,
               'text': 'Solutions'},
              {'end_pos': 88,
               'labels': [, (0.5253)],
               'start_pos': 87,
               'text': '.'},
              {'end_pos': 93,
               'labels': [DT (1.0)],
               'start_pos': 90,
               'text': 'The'},
              {'end_pos': 100,
               'labels': [NNP (0.9982)],
               'start_pos': 94,
               'text': 'Wright'},
              {'end_pos': 109,
               'labels': [NNS (0.9998)],
               'start_pos': 101,
               'text': 'brothers'},
              {'end_pos': 115,
               'labels': [VBD (0.9999)],
               'start_pos': 110,
               'text': 'loved'},
              {'end_pos': 118,
               'labels': [TO (0.9997)],
               'start_pos': 116,
               'text': 'to'},
              {'end_pos': 124,
               'labels': [VB (1.0)],
               'start_pos': 119,
               'text': 'visit'},
              {'end_pos': 128,
               'labels': [DT (1.0)],
               'start_pos': 125,
               'text': 'the'},
              {'end_pos': 132,
               'labels': [NNP (1.0)],
               'start_pos': 129,
               'text': 'JBF'},
              {'end_pos': 145,
               'labels': [NN (0.9047)],
               'start_pos': 133,
               'text': 'headquarters'},
              {'end_pos': 146,
               'labels': [, (1.0)],
               'start_pos': 145,
               'text': ','},
              {'end_pos': 150,
               'labels': [CC (1.0)],
               'start_pos': 147,
               'text': 'and'},
              {'end_pos': 155,
               'labels': [PRP (1.0)],
               'start_pos': 151,
               'text': 'they'},
              {'end_pos': 161,
               'labels': [MD (1.0)],
               'start_pos': 156,
               'text': 'would'},
              {'end_pos': 166,
               'labels': [VB (0.9999)],
               'start_pos': 162,
               'text': 'have'},
              {'end_pos': 168,
               'labels': [DT (0.9999)],
               'start_pos': 167,
               'text': 'a'},
              {'end_pos': 173,
               'labels': [NN (0.9985)],
               'start_pos': 169,
               'text': 'chat'},
              {'end_pos': 178,
               'labels': [IN (1.0)],
               'start_pos': 174,
               'text': 'with'},
              {'end_pos': 185,
               'labels': [NNP (0.9999)],
               'start_pos': 179,
               'text': 'Albert'},
              {'end_pos': 186,
               'labels': [. (0.9999)],
               'start_pos': 185,
               'text': '.'}],
 'labels': [],
 'text': 'Novetta Solutions is the best. Albert Einstein used to be employed '
         'at Novetta Solutions.  The Wright brothers loved to visit the JBF '
         'headquarters, and they would have a chat with Albert.'}
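
The word <TAG> format from to_tagged_string is easy to post-process as well. As a sketch, we can count tag frequencies with a regular expression (assuming tags never contain a > character):

```python
import re
from collections import Counter

# A slice of the to_tagged_string() output shown earlier
tagged = 'Novetta <NNP> Solutions <NNPS> is <VBZ> the <DT> best <JJS> . <.>'

# Pull out every <...> tag and tally them
tag_counts = Counter(re.findall(r'<([^>]+)>', tagged))
```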

Chunk

As with everything before, chunk tasks operate the same way. We can either pass in "chunk" to get the default en-chunk model, or search the model hub:

models = hub.search_model_by_task('chunk'); models
[Model Name: flair/chunk-english-fast, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/chunk-english, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/chunk, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/chunk-fast, Tasks: [token-classification], Source: Flair's Private Model Hub]

We'll use the fast model again:

model = models[0]; model
Model Name: flair/chunk-english-fast, Tasks: [token-classification], Source: HuggingFace Model Hub
sentences = tagger.tag_text(text = example_text, model_name_or_path = model)
2021-04-20 16:39:58,442 loading file /root/.flair/models/chunk-english-fast/be3a207f4993dd6d174d5083341a717d371ec16f721358e7a4d72158ebab28a6.a7f897d05c83e618a8235bbb7ddfca5a79d2daefb8a97c776eb73f97dbaea508

Let's view our results.

Tagged string:

print("List string outputs of tags:\n")
for sen in sentences:
    print(sen.to_tagged_string())
List string outputs of tags:

Novetta <B-NP> Solutions <E-NP> is <S-VP> the <B-NP> best <I-NP> . <I-NP> Albert <I-NP> Einstein <E-NP> used <B-VP> to <I-VP> be <I-VP> employed <E-VP> at <S-PP> Novetta <B-NP> Solutions <E-NP> . The <B-NP> Wright <I-NP> brothers <E-NP> loved <B-VP> to <I-VP> visit <E-VP> the <B-NP> JBF <I-NP> headquarters <E-NP> , and they <S-NP> would <B-VP> have <E-VP> a <B-NP> chat <E-NP> with <S-PP> Albert <S-NP> .

Tagged entities:

print("List text/entities tagged:\n")
for sen in sentences:
    for entity in sen.get_spans("np"):
        print(entity)
List text/entities tagged:

Span [1,2]: "Novetta Solutions"   [− Labels: NP (0.9865)]
Span [3]: "is"   [− Labels: VP (1.0)]
Span [4,5,6,7,8]: "the best . Albert Einstein"   [− Labels: NP (0.8215)]
Span [9,10,11,12]: "used to be employed"   [− Labels: VP (0.9314)]
Span [13]: "at"   [− Labels: PP (1.0)]
Span [14,15]: "Novetta Solutions"   [− Labels: NP (0.9916)]
Span [17,18,19]: "The Wright brothers"   [− Labels: NP (0.8962)]
Span [20,21,22]: "loved to visit"   [− Labels: VP (0.9181)]
Span [23,24,25]: "the JBF headquarters"   [− Labels: NP (0.986)]
Span [28]: "they"   [− Labels: NP (1.0)]
Span [29,30]: "would have"   [− Labels: VP (0.9376)]
Span [31,32]: "a chat"   [− Labels: NP (0.9899)]
Span [33]: "with"   [− Labels: PP (1.0)]
Span [34]: "Albert"   [− Labels: NP (0.9995)]
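
get_spans already groups tokens into chunks for us, but to make the B/I/E/S markers in the tagged string concrete, here is a sketch that decodes them by hand:

```python
import re

def decode_chunks(tagged):
    """Group a BIOES-tagged string like 'Novetta <B-NP> Solutions <E-NP>'
    into (phrase, chunk_type) pairs."""
    chunks, current = [], None
    for word, marker, ctype in re.findall(r'(\S+) <([BIES])-(\w+)>', tagged):
        if marker == 'S':                  # single-token chunk
            chunks.append((word, ctype))
        elif marker == 'B':                # chunk begins
            current = [word]
        elif current is not None:          # 'I' continues the chunk, 'E' ends it
            current.append(word)
            if marker == 'E':
                chunks.append((' '.join(current), ctype))
                current = None
    return chunks

# A slice of the chunk-tagged string above
sample = ('Novetta <B-NP> Solutions <E-NP> is <S-VP> at <S-PP> '
          'the <B-NP> JBF <I-NP> headquarters <E-NP>')
decode_chunks(sample)
```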

Frame

We can either pass in "frame" to use the default en-frame-ontonotes model, or search the hub for usable models:

models = hub.search_model_by_task("frame"); models
[Model Name: flair/frame-english-fast, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/frame-english, Tasks: [token-classification], Source: HuggingFace Model Hub,
 Model Name: flair/frame, Tasks: [token-classification], Source: Flair's Private Model Hub,
 Model Name: flair/frame-fast, Tasks: [token-classification], Source: Flair's Private Model Hub]

Again we will use the "fast" model:

model = models[0]; model
Model Name: flair/frame-english-fast, Tasks: [token-classification], Source: HuggingFace Model Hub
sentences = tagger.tag_text(text = example_text, model_name_or_path = model)
2021-04-20 16:39:58,687 loading file /root/.flair/models/frame-english-fast/b2f10f9bc52898d86d8e6f3bf20369d681cc1e9badcb71650aa274ac696433c7.643ca10453770684aca3f2e886a7243adb2979c67a68de6379e50ccf5dc248da

And look at our tagged string:

print("List string outputs of tags:\n")
for sen in sentences:
    print(sen.to_tagged_string())
List string outputs of tags:

Novetta Solutions is <be.01> the best . Albert Einstein used <use.03> to be <be.03> employed <employ.01> at Novetta Solutions . The Wright brothers loved <love.01> to visit <visit.01> the JBF headquarters , and they would have <have.03> a chat <chat.01> with Albert .
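
The frame labels follow a lemma.NN sense pattern, so pairing each predicate with its frame sense is a quick regex away (a sketch, assuming the labels always match that pattern):

```python
import re

# A slice of the frame-tagged string above
tagged = 'is <be.01> used <use.03> employed <employ.01> loved <love.01>'

# Pair each predicate token with its frame sense
frames = re.findall(r'(\S+) <(\w+\.\d+)>', tagged)
```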

Note: pay attention to the "fast" versus regular naming: "fast" models are designed to be extremely efficient on the CPU, and are worth checking out.

Tag Tokens with All Loaded Models At Once

As different taggers are loaded into memory, we can tag with all of them at once. For example, we'll make a new EasyTokenTagger and load in an NER and a POS tagger:

tagger = EasyTokenTagger()
_ = tagger.tag_text(text=example_text, model_name_or_path="ner-ontonotes")
_ = tagger.tag_text(text=example_text, model_name_or_path="pos")
2021-04-20 16:39:59,530 --------------------------------------------------------------------------------
2021-04-20 16:39:59,531 The model key 'ner-ontonotes' now maps to 'https://huggingface.co/flair/ner-english-ontonotes' on the HuggingFace ModelHub
2021-04-20 16:39:59,531  - The most current version of the model is automatically downloaded from there.
2021-04-20 16:39:59,532  - (you can alternatively manually download the original model at https://nlp.informatik.hu-berlin.de/resources/models/ner-ontonotes/en-ner-ontonotes-v0.4.pt)
2021-04-20 16:39:59,532 --------------------------------------------------------------------------------
2021-04-20 16:39:59,551 loading file /root/.flair/models/ner-english-ontonotes/f46dcd14689a594a7dd2a8c9c001a34fd55b02fded2528410913c7e88dbe43d4.1207747bf5ae24291205b6f3e7417c8bedd5c32cacfb5a439f3eff38afda66f7
2021-04-20 16:40:04,606 loading file /root/.flair/models/pos-english-fast/36f7923039eed4c66e4275927daaff6cd275997d61d238355fb1fe0338fe10a1.ff87e5b4e47fdb42a0c00237d9506c671db773e0a7932179ace82e584383a1b8

Then we can use both at once with tag_all:

sentences = tagger.tag_all(text=example_text)
2021-04-20 16:40:04,886 loading file /root/.flair/models/pos-english-fast/36f7923039eed4c66e4275927daaff6cd275997d61d238355fb1fe0338fe10a1.ff87e5b4e47fdb42a0c00237d9506c671db773e0a7932179ace82e584383a1b8

And now we can look at the tagged entities of each kind:

print("List entities tagged:\n")
for sen in sentences:
    for entity in sen.get_spans("ner"):
        print(entity)
List entities tagged:

Span [1,2]: "Novetta Solutions"   [− Labels: ORG (0.9644)]
Span [7,8]: "Albert Einstein"   [− Labels: PERSON (0.9969)]
Span [14,15]: "Novetta Solutions"   [− Labels: ORG (0.9796)]
Span [18]: "Wright"   [− Labels: PERSON (0.9995)]
Span [24]: "JBF"   [− Labels: ORG (0.9898)]
Span [34]: "Albert"   [− Labels: PERSON (0.9999)]
print("List entities tagged:\n")
for sen in sentences:
    for entity in sen.get_spans("pos"):
        print(entity)
List entities tagged:

Span [1]: "Novetta"   [− Labels: NNP (0.9991)]
Span [2]: "Solutions"   [− Labels: NNPS (0.8552)]
Span [3]: "is"   [− Labels: VBZ (1.0)]
Span [4]: "the"   [− Labels: DT (1.0)]
Span [5]: "best"   [− Labels: JJS (0.8458)]
Span [6]: "."   [− Labels: . (0.943)]
Span [7]: "Albert"   [− Labels: NNP (1.0)]
Span [8]: "Einstein"   [− Labels: NNP (1.0)]
Span [9]: "used"   [− Labels: VBD (0.9755)]
Span [10]: "to"   [− Labels: TO (0.9995)]
Span [11]: "be"   [− Labels: VB (1.0)]
Span [12]: "employed"   [− Labels: VBN (0.9998)]
Span [13]: "at"   [− Labels: IN (1.0)]
Span [14]: "Novetta"   [− Labels: NNP (1.0)]
Span [15]: "Solutions"   [− Labels: NNP (0.5506)]
Span [16]: "."   [− Labels: , (0.5253)]
Span [17]: "The"   [− Labels: DT (1.0)]
Span [18]: "Wright"   [− Labels: NNP (0.9982)]
Span [19]: "brothers"   [− Labels: NNS (0.9998)]
Span [20]: "loved"   [− Labels: VBD (0.9999)]
Span [21]: "to"   [− Labels: TO (0.9997)]
Span [22]: "visit"   [− Labels: VB (1.0)]
Span [23]: "the"   [− Labels: DT (1.0)]
Span [24]: "JBF"   [− Labels: NNP (1.0)]
Span [25]: "headquarters"   [− Labels: NN (0.9047)]
Span [26]: ","   [− Labels: , (1.0)]
Span [27]: "and"   [− Labels: CC (1.0)]
Span [28]: "they"   [− Labels: PRP (1.0)]
Span [29]: "would"   [− Labels: MD (1.0)]
Span [30]: "have"   [− Labels: VB (0.9999)]
Span [31]: "a"   [− Labels: DT (0.9999)]
Span [32]: "chat"   [− Labels: NN (0.9985)]
Span [33]: "with"   [− Labels: IN (1.0)]
Span [34]: "Albert"   [− Labels: NNP (0.9999)]
Span [35]: "."   [− Labels: . (0.9999)]
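
With both kinds of tags in hand, we can combine them. As a sketch (with the span data transcribed into plain tuples and dicts rather than going through the Flair Span API), we could keep only the entities whose every token the POS model considers a proper noun:

```python
# Hypothetical span summaries mirroring the two outputs above
ner_spans = [('Novetta Solutions', 'ORG'), ('Albert Einstein', 'PERSON'), ('JBF', 'ORG')]
pos_tags = {'Novetta': 'NNP', 'Solutions': 'NNPS', 'Albert': 'NNP',
            'Einstein': 'NNP', 'JBF': 'NNP'}

def proper_noun_entities(ner_spans, pos_tags):
    """Keep entities in which every token was tagged as a proper noun (NNP/NNPS)."""
    return [(text, label) for text, label in ner_spans
            if all(pos_tags.get(tok, '').startswith('NNP') for tok in text.split())]

proper_noun_entities(ner_spans, pos_tags)
```

Cross-checks like this are one reason to keep several taggers loaded in the same EasyTokenTagger.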