{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Initializing & Training with SpanMarker\n", "[SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) is an accessible yet powerful Python module for training Named Entity Recognition models.\n", "\n", "In this short notebook, we'll have a look at how to initialize and train an NER model using SpanMarker. For a larger and more general tutorial on how to use SpanMarker, please have a look at the [Getting Started](getting_started.ipynb) notebook." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Setup\n", "First of all, the `span_marker` Python module needs to be installed. If we want to use [Weights and Biases](https://wandb.ai/) for logging, we can install `span_marker` using the `[wandb]` extra." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pip install span_marker\n", "# %pip install span_marker[wandb]" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Loading the dataset\n", "For this example, we'll load the commonly used [CoNLL2003 dataset](https://huggingface.co/datasets/conll2003) from the Hugging Face hub using 🤗 Datasets." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "DatasetDict({\n", " train: Dataset({\n", " features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\n", " num_rows: 14041\n", " })\n", " validation: Dataset({\n", " features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\n", " num_rows: 3250\n", " })\n", " test: Dataset({\n", " features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\n", " num_rows: 3453\n", " })\n", "})" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from datasets import load_dataset\n", "\n", "dataset_id = \"conll2003\"\n", "dataset = load_dataset(dataset_id)\n", "dataset" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "labels = dataset[\"train\"].features[\"ner_tags\"].feature.names\n", "labels" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "SpanMarker accepts any dataset as long as it has `tokens` and `ner_tags` columns. The `ner_tags` can be annotated using the IOB, IOB2, BIOES or BILOU labeling scheme, but also regular unschemed labels. This CoNLL dataset uses the common IOB or IOB2 labeling scheme, with PER, ORG, LOC and MISC labels." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Initializing a `SpanMarkerModel`\n", "A SpanMarker model is initialized via [SpanMarkerModel.from_pretrained](https://tomaarsen.github.io/SpanMarkerNER/api/span_marker.modeling.html#span_marker.modeling.SpanMarkerModel.from_pretrained). This method will be familiar to those who know 🤗 Transformers. It accepts either a path to a local model or the name of a model on the [Hugging Face Hub](https://huggingface.co/models).\n", "\n", "Importantly, the model can *either* be an encoder or an already trained and saved SpanMarker model. As we haven't trained anything yet, we will use an encoder. 
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Initializing a `SpanMarkerModel`\n", "A SpanMarker model is initialized via [SpanMarkerModel.from_pretrained](https://tomaarsen.github.io/SpanMarkerNER/api/span_marker.modeling.html#span_marker.modeling.SpanMarkerModel.from_pretrained). This method will be familiar to those who know 🤗 Transformers. It accepts either a path to a local model or the name of a model on the [Hugging Face Hub](https://huggingface.co/models).\n", "\n", "Importantly, the model can *either* be an encoder or an already trained and saved SpanMarker model. As we haven't trained anything yet, we will use an encoder. To learn how to load and use a saved SpanMarker model, please have a look at the [Loading & Inferencing](model_loading.ipynb) notebook.\n", "\n", "Reasonable options for encoders include BERT and RoBERTa, which means that the following are all good choices:\n", "\n", "* [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny)\n", "* [prajjwal1/bert-mini](https://huggingface.co/prajjwal1/bert-mini)\n", "* [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small)\n", "* [prajjwal1/bert-medium](https://huggingface.co/prajjwal1/bert-medium)\n", "* [bert-base-cased](https://huggingface.co/bert-base-cased)\n", "* [bert-large-cased](https://huggingface.co/bert-large-cased)\n", "* [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)\n", "* [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased)\n", "* [roberta-base](https://huggingface.co/roberta-base)\n", "* [roberta-large](https://huggingface.co/roberta-large)\n", "* [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)\n", "* [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)\n", "\n", "Not all encoders work, though: they **must** allow for `position_ids` as an input argument, which disqualifies DistilBERT, T5, DistilRoBERTa, ALBERT & BART.\n", "\n", "Additionally, it's important to consider that cased models typically expect consistent capitalization in the inference data, matching how the training data was formatted. In simpler terms, if your training data is consistently capitalized correctly but your inference data is not, performance may suffer. In such cases, an uncased model may be more suitable: although it may exhibit a slightly lower F1 score on the test set, it remains functional regardless of capitalization, making it potentially more effective in real-world scenarios.\n", "\n", "We'll use `\"roberta-base\"` for this notebook. If you're running this on Google Colab, be sure to set the hardware accelerator to \"GPU\" in `Runtime` > `Change runtime type`." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Some weights of RobertaModel were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']\n", "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n" ] } ], "source": [ "from span_marker import SpanMarkerModel, SpanMarkerModelCardData\n", "\n", "encoder_id = \"roberta-base\"\n", "model = SpanMarkerModel.from_pretrained(\n", "    # Required arguments\n", "    encoder_id,\n", "    labels=labels,\n", "    # Optional arguments\n", "    model_max_length=256,\n", "    entity_max_length=6,\n", "    # To improve the generated model card\n", "    model_card_data=SpanMarkerModelCardData(\n", "        language=[\"en\"],\n", "        license=\"apache-2.0\",\n", "        encoder_id=encoder_id,\n", "        dataset_id=dataset_id,\n", "    )\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "For us, these warnings are expected, as we are initializing `RobertaModel` for a new task.\n", "\n", "Note that we provided `SpanMarkerModel.from_pretrained` with a list of our labels. This is required when training a new model using an encoder." ] },
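{ "cell_type": "markdown", "metadata": {}, "source": [ "These labels, together with the optional arguments we just passed, end up in the model's `SpanMarkerConfig`. If you're curious, you can inspect it directly; note that this is just a quick peek, and the exact fields may differ across `span_marker` versions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Peek at the resulting configuration (fields may vary by version)\n", "model.config" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "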
Furthermore, we can specify some useful configuration parameters from `SpanMarkerConfig`, such as:\n", "\n", "* `model_max_length`: The maximum number of tokens that the model will process. If you only use short sentences for your model, reducing this number may help training and inference speeds with no loss in performance. Defaults to the encoder maximum, or 512 if the encoder doesn't have a maximum.\n", "* `entity_max_length`: The maximum number of words that one entity can span. Defaults to 8.\n", "* `model_card_data`: A [SpanMarkerModelCardData](https://tomaarsen.github.io/SpanMarkerNER/api/span_marker.model_card.html#span_marker.model_card.SpanMarkerModelCardData) instance where you can provide a lot of useful data about your model. This data will be automatically included in a generated model card whenever a model is saved or pushed to the Hugging Face Hub.\n", "  * Consider adding `language`, `license`, `model_id`, `encoder_id` and `dataset_id` to improve the generated model card README.md file." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Training\n", "At this point, our model is ready for training! We can import [TrainingArguments](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments) directly from 🤗 Transformers as well as the SpanMarker `Trainer`. The `Trainer` is a subclass of the 🤗 Transformers [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) that simplifies some tasks for you, but otherwise it works just like the regular `Trainer`.\n", "\n", "This next snippet shows some reasonable defaults. Feel free to adjust the batch size to a lower value if you experience out of memory exceptions." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from transformers import TrainingArguments\n", "\n", "args = TrainingArguments(\n", "    output_dir=\"models/span-marker-roberta-base-conll03\",\n", "    learning_rate=1e-5,\n", "    gradient_accumulation_steps=2,\n", "    per_device_train_batch_size=4,\n", "    per_device_eval_batch_size=4,\n", "    num_train_epochs=1,\n", "    evaluation_strategy=\"steps\",\n", "    save_strategy=\"steps\",\n", "    eval_steps=500,\n", "    push_to_hub=False,\n", "    logging_steps=50,\n", "    fp16=True,\n", "    warmup_ratio=0.1,\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now we can create a SpanMarker [Trainer](https://tomaarsen.github.io/SpanMarkerNER/api/span_marker.trainer.html#span_marker.trainer.Trainer) in the same way that you would initialize a 🤗 Transformers `Trainer`. To save us some time, we'll evaluate on just a subsection of the validation set during training. Amazingly, this `Trainer` will automatically create logs using exactly the logging tools that you have installed. In other words, if you prefer logging with [Tensorboard](https://www.tensorflow.org/tensorboard), all that you have to do is install it." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "This SpanMarker model will ignore 0.097877% of all annotated entities in the train dataset.
This is caused by the SpanMarkerModel maximum entity length of 6 words.\n", "These are the frequencies of the missed entities due to maximum entity length out of 23499 total entities:\n", "- 18 missed entities with 7 words (0.076599%)\n", "- 2 missed entities with 8 words (0.008511%)\n", "- 3 missed entities with 10 words (0.012767%)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'loss': 1.1135, 'learning_rate': 2.707182320441989e-06, 'epoch': 0.03}\n", "{'loss': 0.245, 'learning_rate': 5.469613259668509e-06, 'epoch': 0.06}\n", "{'loss': 0.1466, 'learning_rate': 8.232044198895029e-06, 'epoch': 0.08}\n", "{'loss': 0.1077, 'learning_rate': 9.888957433682912e-06, 'epoch': 0.11}\n", "{'loss': 0.0839, 'learning_rate': 9.58050586057989e-06, 'epoch': 0.14}\n", "{'loss': 0.0702, 'learning_rate': 9.272054287476866e-06, 'epoch': 0.17}\n", "{'loss': 0.0614, 'learning_rate': 8.963602714373844e-06, 'epoch': 0.19}\n", "{'loss': 0.0476, 'learning_rate': 8.65515114127082e-06, 'epoch': 0.22}\n", "{'loss': 0.0446, 'learning_rate': 8.346699568167798e-06, 'epoch': 0.25}\n", "{'loss': 0.0327, 'learning_rate': 8.038247995064774e-06, 'epoch': 0.28}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "This SpanMarker model won't be able to predict 0.172563% of all annotated entities in the evaluation dataset. This is caused by the SpanMarkerModel maximum entity length of 6 words.\n", "These are the frequencies of the missed entities due to maximum entity length out of 3477 total entities:\n", "- 5 missed entities with 7 words (0.143802%)\n", "- 1 missed entities with 10 words (0.028760%)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'eval_loss': 0.02650175243616104, 'eval_overall_precision': 0.8974691758598313, 'eval_overall_recall': 0.7968885047536733, 'eval_overall_f1': 0.8441934991606898, 'eval_overall_accuracy': 0.9632217370208637, 'eval_runtime': 20.1351, 'eval_samples_per_second': 102.656, 'eval_steps_per_second': 25.676, 'epoch': 0.28}\n", "{'loss': 0.0348, 'learning_rate': 7.729796421961752e-06, 'epoch': 0.31}\n", "{'loss': 0.0378, 'learning_rate': 7.42134484885873e-06, 'epoch': 0.33}\n", "{'loss': 0.0275, 'learning_rate': 7.112893275755707e-06, 'epoch': 0.36}\n", "{'loss': 0.0242, 'learning_rate': 6.804441702652684e-06, 'epoch': 0.39}\n", "{'loss': 0.0255, 'learning_rate': 6.495990129549661e-06, 'epoch': 0.42}\n", "{'loss': 0.0235, 'learning_rate': 6.187538556446638e-06, 'epoch': 0.44}\n", "{'loss': 0.0223, 'learning_rate': 5.879086983343616e-06, 'epoch': 0.47}\n", "{'loss': 0.0183, 'learning_rate': 5.570635410240592e-06, 'epoch': 0.5}\n", "{'loss': 0.0194, 'learning_rate': 5.26218383713757e-06, 'epoch': 0.53}\n", "{'loss': 0.0191, 'learning_rate': 4.953732264034547e-06, 'epoch': 0.55}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "This SpanMarker model won't be able to predict 0.172563% of all annotated entities in the evaluation dataset.
This is caused by the SpanMarkerModel maximum entity length of 6 words.\n", "These are the frequencies of the missed entities due to maximum entity length out of 3477 total entities:\n", "- 5 missed entities with 7 words (0.143802%)\n", "- 1 missed entities with 10 words (0.028760%)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'eval_loss': 0.016905048862099648, 'eval_overall_precision': 0.9247838616714698, 'eval_overall_recall': 0.9245174301354077, 'eval_overall_f1': 0.9246506267108485, 'eval_overall_accuracy': 0.9844412097687207, 'eval_runtime': 20.2213, 'eval_samples_per_second': 102.219, 'eval_steps_per_second': 25.567, 'epoch': 0.55}\n", "{'loss': 0.0206, 'learning_rate': 4.645280690931524e-06, 'epoch': 0.58}\n", "{'loss': 0.0198, 'learning_rate': 4.336829117828501e-06, 'epoch': 0.61}\n", "{'loss': 0.0184, 'learning_rate': 4.028377544725479e-06, 'epoch': 0.64}\n", "{'loss': 0.0203, 'learning_rate': 3.7199259716224557e-06, 'epoch': 0.67}\n", "{'loss': 0.0206, 'learning_rate': 3.4114743985194327e-06, 'epoch': 0.69}\n", "{'loss': 0.0187, 'learning_rate': 3.1030228254164097e-06, 'epoch': 0.72}\n", "{'loss': 0.015, 'learning_rate': 2.794571252313387e-06, 'epoch': 0.75}\n", "{'loss': 0.0221, 'learning_rate': 2.486119679210364e-06, 'epoch': 0.78}\n", "{'loss': 0.0189, 'learning_rate': 2.177668106107341e-06, 'epoch': 0.8}\n", "{'loss': 0.0158, 'learning_rate': 1.8692165330043186e-06, 'epoch': 0.83}\n", "{'eval_loss': 0.01296199019998312, 'eval_overall_precision': 0.9394202898550724, 'eval_overall_recall': 0.933736675309709, 'eval_overall_f1': 0.9365698598468429, 'eval_overall_accuracy': 0.9868348698043021, 'eval_runtime': 20.2701, 'eval_samples_per_second': 101.973, 'eval_steps_per_second': 25.506, 'epoch': 0.83}\n", "{'loss': 0.0165, 'learning_rate': 1.5607649599012956e-06, 'epoch': 0.86}\n", "{'loss': 0.017, 'learning_rate': 1.2523133867982728e-06, 'epoch': 0.89}\n", "{'loss': 0.0183, 'learning_rate': 9.438618136952499e-07, 'epoch': 0.92}\n", "{'loss': 0.0164, 'learning_rate': 6.35410240592227e-07, 'epoch': 0.94}\n", "{'loss': 0.0162, 'learning_rate': 3.2695866748920424e-07, 'epoch': 0.97}\n", "{'loss': 0.021, 'learning_rate': 1.850709438618137e-08, 'epoch': 1.0}\n", "{'train_runtime': 479.9392, 'train_samples_per_second': 30.033, 'train_steps_per_second': 3.755, 'train_loss': 0.06940532092560087, 'epoch': 1.0}\n" ] }, { "data": { "text/plain": [ "TrainOutput(global_step=1802, training_loss=0.06940532092560087, metrics={'train_runtime': 479.9392, 'train_samples_per_second': 30.033, 'train_steps_per_second': 3.755, 'train_loss': 0.06940532092560087, 'epoch': 1.0})" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from span_marker import Trainer\n", "\n", "trainer = Trainer(\n", "    model=model,\n", "    args=args,\n", "    train_dataset=dataset[\"train\"],\n", "    eval_dataset=dataset[\"validation\"].select(range(2000)),\n", ")\n", "trainer.train()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "And now the final step is to compute the model's performance."
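] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But before that, as a quick sanity check, we can already try the freshly trained model on an example sentence using `SpanMarkerModel.predict`. This is a minimal sketch: the sentence is arbitrary, and `predict` returns a list of entity dictionaries with the span text, label, score and character offsets." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Quick sanity check on an arbitrary sentence; returns a list of entity dicts\n", "model.predict(\"Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris.\")"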
] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'eval_loss': 0.012707239016890526,\n", " 'eval_LOC': {'precision': 0.9642857142857143,\n", " 'recall': 0.9503610108303249,\n", " 'f1': 0.9572727272727273,\n", " 'number': 1108},\n", " 'eval_MISC': {'precision': 0.8805309734513275,\n", " 'recall': 0.8378947368421052,\n", " 'f1': 0.8586839266450916,\n", " 'number': 475},\n", " 'eval_ORG': {'precision': 0.8736842105263158,\n", " 'recall': 0.9021739130434783,\n", " 'f1': 0.8877005347593583,\n", " 'number': 736},\n", " 'eval_PER': {'precision': 0.9776247848537005,\n", " 'recall': 0.9861111111111112,\n", " 'f1': 0.9818496110630942,\n", " 'number': 1152},\n", " 'eval_overall_precision': 0.9379688401615696,\n", " 'eval_overall_recall': 0.9366176894266782,\n", " 'eval_overall_f1': 0.9372927778578637,\n", " 'eval_overall_accuracy': 0.9872553776483908,\n", " 'eval_runtime': 19.9052,\n", " 'eval_samples_per_second': 103.842,\n", " 'eval_steps_per_second': 25.973,\n", " 'epoch': 1.0}" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "metrics = trainer.evaluate()\n", "metrics" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Additionally, we should evaluate using the test set." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'test_loss': 0.029485255479812622,\n", " 'test_LOC': {'precision': 0.9335384615384615,\n", " 'recall': 0.9094724220623501,\n", " 'f1': 0.9213483146067416,\n", " 'number': 1668},\n", " 'test_MISC': {'precision': 0.7503429355281207,\n", " 'recall': 0.7792022792022792,\n", " 'f1': 0.76450034940601,\n", " 'number': 702},\n", " 'test_ORG': {'precision': 0.8538243626062323,\n", " 'recall': 0.9072847682119205,\n", " 'f1': 0.87974314068885,\n", " 'number': 1661},\n", " 'test_PER': {'precision': 0.9658808933002482,\n", " 'recall': 0.9628942486085343,\n", " 'f1': 0.964385258593992,\n", " 'number': 1617},\n", " 'test_overall_precision': 0.8947827604257547,\n", " 'test_overall_recall': 0.9079320113314447,\n", " 'test_overall_f1': 0.9013094296511117,\n", " 'test_overall_accuracy': 0.9782276300204588,\n", " 'test_runtime': 33.9555,\n", " 'test_samples_per_second': 104.401,\n", " 'test_steps_per_second': 26.122,\n", " 'epoch': 1.0}" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "trainer.evaluate(dataset[\"test\"], metric_key_prefix=\"test\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great performance for 8 minutes trained! 🎉" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Once trained, we can save our new model locally." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "trainer.save_model(\"models/span-marker-roberta-base-conll03/checkpoint-final\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Or we can push it to the 🤗 Hub like so." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "trainer.push_to_hub(repo_id=\"span-marker-roberta-base-conll03\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "If we want to use it again, we can just load it using the checkpoint or using the model name on the Hub. This is how it would be done using a local checkpoint. See the [Loading & Inferencing](model_loading.ipynb) notebook for more details." 
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "# model = SpanMarkerModel.from_pretrained(\"models/span-marker-roberta-base-conll03/checkpoint-final\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "That was all! As simple as that. If we put it all together into a single script, it looks something like this:\n", "```python\n", "from datasets import load_dataset\n", "from span_marker import SpanMarkerModel, Trainer\n", "from transformers import TrainingArguments\n", "\n", "def main() -> None:\n", " dataset_id = \"conll2003\"\n", " dataset = load_dataset(dataset_id)\n", " labels = dataset[\"train\"].features[\"ner_tags\"].feature.names\n", "\n", " encoder_id = \"roberta-base\"\n", " model = SpanMarkerModel.from_pretrained(\n", " # Required arguments\n", " encoder_id,\n", " labels=labels,\n", " # Optional arguments\n", " model_max_length=256,\n", " entity_max_length=6,\n", " # To improve the generated model card\n", " model_card_data=SpanMarkerModelCardData(\n", " language=[\"en\"],\n", " license=\"apache-2.0\",\n", " encoder_id=encoder_id,\n", " dataset_id=dataset_id,\n", " )\n", " )\n", "\n", " args = TrainingArguments(\n", " output_dir=\"models/span-marker-roberta-base-conll03\",\n", " learning_rate=1e-5,\n", " gradient_accumulation_steps=2,\n", " per_device_train_batch_size=4,\n", " per_device_eval_batch_size=4,\n", " num_train_epochs=1,\n", " evaluation_strategy=\"steps\",\n", " save_strategy=\"steps\",\n", " eval_steps=500,\n", " push_to_hub=False,\n", " logging_steps=50,\n", " fp16=True,\n", " warmup_ratio=0.1,\n", " )\n", "\n", " trainer = Trainer(\n", " model=model,\n", " args=args,\n", " train_dataset=dataset[\"train\"].select(range(8000)),\n", " eval_dataset=dataset[\"validation\"].select(range(2000)),\n", " )\n", " trainer.train()\n", "\n", " metrics = trainer.evaluate()\n", " print(metrics)\n", "\n", " trainer.save_model(\"models/span-marker-roberta-base-conll03/checkpoint-final\")\n", "\n", "if __name__ == \"__main__\":\n", " main()\n", "```" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "With `wandb` initialized, you can enjoy their very useful training graphs straight in your browser. It ends up looking something like this.\n", "![image](https://user-images.githubusercontent.com/37621491/235172501-a3cdae91-faf0-42b7-ac60-e6738b78e67e.png)\n", "![image](https://user-images.githubusercontent.com/37621491/235172726-795ded55-4b1c-40fa-ab91-476762f7dd57.png)\n", "\n", "Furthermore, you can use the `wandb` hyperparameter search functionality using the tutorial from the Hugging Face documentation [here](https://huggingface.co/docs/transformers/hpo_train). This transfers very well to the SpanMarker `Trainer`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "kernelspec": { "display_name": "span-marker-ner", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" }, "orig_nbformat": 4, "vscode": { "interpreter": { "hash": "c231fc6d0de0df4a232423539031d78e3a72f0f8d848d7b948e520fe3bfbe8ca" } } }, "nbformat": 4, "nbformat_minor": 2 }