{"name":"napari-annotatorj","display_name":"napari-annotatorj","visibility":"public","icon":"","categories":[],"schema_version":"0.1.0","on_activate":null,"on_deactivate":null,"contributions":{"commands":[{"id":"napari-annotatorj.AnnotatorJ","title":"Create AnnotatorJ","python_name":"napari_annotatorj._dock_widget:AnnotatorJ","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"napari-annotatorj.get_reader","title":"Get Reader","python_name":"napari_annotatorj._reader:napari_get_reader","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"napari-annotatorj.write_labels","title":"Write Labels","python_name":"napari_annotatorj._writer:napari_write_labels","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"napari-annotatorj.ExportFrame","title":"Create Export widget","python_name":"napari_annotatorj._dock_widget:ExportFrame","short_title":null,"category":null,"icon":null,"enablement":null}],"readers":[{"command":"napari-annotatorj.get_reader","filename_patterns":[""],"accepts_directories":true}],"writers":[{"command":"napari-annotatorj.write_labels","layer_types":["labels"],"filename_extensions":[".tiff"],"display_name":"labels"}],"widgets":[{"command":"napari-annotatorj.AnnotatorJ","display_name":"AnnotatorJ","autogenerate":false},{"command":"napari-annotatorj.ExportFrame","display_name":"AnnotatorJExport","autogenerate":false}],"sample_data":null,"themes":null,"menus":{},"submenus":null,"keybindings":null,"configuration":[]},"package_metadata":{"metadata_version":"2.1","name":"napari-annotatorj","version":"0.0.8","dynamic":null,"platform":null,"supported_platform":null,"summary":"The napari adaptation of the ImageJ/Fiji plugin AnnotatorJ for easy image annotation.","description":"# napari-annotatorj\n\n[![License](https://img.shields.io/pypi/l/napari-annotatorj.svg?color=green)](https://github.com/spreka/napari-annotatorj/raw/main/LICENSE)\n[![PyPI](https://img.shields.io/pypi/v/napari-annotatorj.svg?color=green)](https://pypi.org/project/napari-annotatorj)\n[![Python Version](https://img.shields.io/pypi/pyversions/napari-annotatorj.svg?color=green)](https://python.org)\n[![tests](https://github.com/spreka/napari-annotatorj/workflows/tests/badge.svg)](https://github.com/spreka/napari-annotatorj/actions)\n[![codecov](https://codecov.io/gh/spreka/napari-annotatorj/branch/main/graph/badge.svg)](https://codecov.io/gh/spreka/napari-annotatorj)\n[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/napari-annotatorj)](https://napari-hub.org/plugins/napari-annotatorj)\n\nThe napari adaptation of the ImageJ/Fiji plugin [AnnotatorJ](https://github.com/spreka/annotatorj) for easy image annotation.\n\n![image](https://drive.google.com/uc?export=view&id=1fVfvanffTdrXvLE0m1Yo6FV5TAjh6sb2)\n\n----------------------------------\n\nThis [napari] plugin was generated with [Cookiecutter] using with [@napari]'s [cookiecutter-napari-plugin] template.\n\n\n\n## Installation\n\nInstallation is possible with [pip](#pip), [napari](#bundled-napari-app) or [scripts](#script).\n### Pip\nYou can install `napari-annotatorj` via [pip]:\n\n pip install napari[all]\n\tpip install napari-annotatorj\n\n\n\nTo install latest development version :\n\n pip install git+https://github.com/spreka/napari-annotatorj.git\n\n\nOn Linux distributions, the following error may arise upon napari startup after the installation of the plugin: `Could not load the Qt platform plugin “xcb” in “” even though it was found`. 
In this case, the manual install of `libxcb-xinerama0` for Qt is required:\n\n\tsudo apt install libxcb-xinerama0\n\n### Bundled napari app\nThe bundled application version of [napari](https://github.com/napari/napari/releases) allows the pip install of plugins in the .zip distribution. After installation of this release, napari-annotatorj can be installed from the `Plugins --> Install/Uninstall plugins...` menu by searching for its name and clicking on the `Install` button next to it.\n\n### Script\nSingle-file install is currently supported on [**Windows**](#windows) and [Linux](#linux). It will create a virtual environment named `napariAnnotatorjEnv` in the parent folder of the cloned repository, install the package via pip and start napari. It requires a valid Python install.\n\n#### Windows\nTo start it, run in the Command prompt\n\n\tgit clone https://github.com/spreka/napari-annotatorj.git\n\tcd napari-annotatorj\n\tinstall.bat\n\nOr download [install.bat](https://github.com/spreka/napari-annotatorj/blob/main/install.bat) and run it from the Command prompt.\n\nAfter install, you can use [startup_napari.bat](https://github.com/spreka/napari-annotatorj/blob/main/startup_napari.bat) to activate your installed virtual environment and run napari. Run it from the Command prompt with:\n\n\tstartup_napari.bat\n\n\n#### Linux\nTo start it, run in the Terminal\n\n\tgit clone https://github.com/spreka/napari-annotatorj.git\n\tcd napari-annotatorj\n\tbash install.sh\n\nOr download [install.sh](https://github.com/spreka/napari-annotatorj/blob/main/install.sh) and run it from the Terminal.\n\nAfter install, you can use [startup_napari.sh](https://github.com/spreka/napari-annotatorj/blob/main/startup_napari.sh) to activate your installed virtual environment and run napari. Run it from the Terminal with:\n\n\tbash startup_napari.sh\n\n***\n## Intro\n\nnapari-annotatorj has several convenient functions to speed up the annotation process and make it easier and more fun. These *modes* can be activated by their corresponding checkboxes on the left side of the main AnnotatorJ widget.\n\n- [Contour assist mode](#contour-assist-mode)\n- [Edit mode](#edit-mode)\n- [Class mode](#class-mode)\n- [Overlay](#overlay)\n\nFreehand drawing is enabled in the plugin. The \"Add polygon\" tool is selected by default upon startup. To draw a freehand object (shape), simply hold the left mouse button and drag it around the object. The contour is visualized when the mouse button is released.\n\nSee the [guide](#how-to-annotate) below for a quick start or a [demo](#demo). See [shortcuts](#shortcuts) for easy operation.\n\n***\n## How to annotate\n\n1. Open --> opens an image\n2. (Optionally)\n\t- ... --> Select annotation type --> Ok --> a default tool is selected from the toolbar that fits the selected annotation type\n\t- The default annotation type is instance\n\t- Selected annotation type is saved to a config file\n3. Start annotating objects\n\t- [instance](#instance-annotation): draw contours around objects\n\t- [semantic](#semantic-annotation): paint the objects' area\n\t- [bounding box](#bounding-box-annotation): draw rectangles around the objects\n4. Save --> Select class --> saves the annotation to a file in a sub-folder of the original image folder with the name of the selected class\n\n5. (Optionally)\n\t- Load --> continue a previous annotation\n\t- Overlay --> display a different annotation as overlay (semi-transparent) on the currently opened image\n\t- Colours --> select annotation and overlay colours\n\t- ... 
(coming soon) --> set options for semantic segmentation and *Contour assist* mode\n\t- checkboxes --> Various options\n\t\t- (default) Add automatically --> adds the most recent annotation to the ROI list automatically when releasing the left mouse button\n\t\t- Smooth (coming soon) --> smooths the contour (in instance annotation type only)\n\t\t- Show contours --> displays all the contours in the ROI list\n\t\t- Contour assist --> suggests a contour in the region of an initial, lazily drawn contour using the deep learning method U-Net\n\t\t- Show overlay --> displays the overlaid annotation if loaded with the Overlay button\n\t\t- Edit mode --> edits a selected, already saved contour in the ROI list by clicking on it on the image\n\t\t- Class mode --> assigns the selected class to the selected contour in the ROI list by clicking on it on the image and displays its contour in the class's colour (can be set in the Class window); clicking on the object a second time unclassifies it\n\t- [^] --> quick export in 16-bit multi-labelled .tiff format; if classified, also exports by classes\n\n***\n## Instance annotation\nAllows freehand drawing of object contours (shapes) with the mouse as in ImageJ.\n\nShape contour points are tracked automatically when the left mouse button is held and dragged to draw a shape. The shape is closed automatically when the mouse button is released and added to the default shapes layer (named \"ROI\"). In direct selection mode (from the layer controls panel), you can see the saved contour points. The slower you drag the mouse, the more contour points are saved, i.e. the more refined your contour will be.\n\nClick to watch the demo video below.\n\n[![instance-annot-demo](https://drive.google.com/uc?export=view&id=1sBg19d_hqGH-UI8irkrwame7ZjrldwHr)](https://drive.google.com/uc?export=view&id=1wELreE9MdCZq4Kf4oCWdxIw4e5o05XzK \"Click to watch instance annotation demo\")\n\n***\n## Semantic annotation\nAllows painting with the brush tool (labels).\n\nUseful for semantic (e.g. scene) annotation. Currently all labels are saved to a binary (foreground-background) mask only.\n\n***\n## Bounding box annotation\nAllows drawing bounding boxes (shapes, rectangles) around objects with the mouse.\n\nUseful for object detection annotation.\n\n***\n## Contour assist mode\nAssisted annotation via a pre-trained deep learning model's suggested contour.\n\n1. initialize a contour with a mouse drag around an object\n2. the suggested contour is displayed automatically\n3. modify the contour:\n\t- edit with mouse drag or\n\t- erase by holding \"Alt\" or\n\t- invert by pressing \"u\"\n4. finalize it\n\t- accept by pressing \"q\" or\n\t- reject by pressing \"Ctrl\" + \"Del\"\n\n- if the suggested contour is a merge of multiple objects, you can erase the dividing line around the object you wish to keep, and keep erasing (or splitting with the eraser) until the object you wish to keep is the largest, then press \"q\" to accept it\n- this mode requires a Keras model to be present in the [model folder](#configure-model-folder)\n\nClick to watch the demo video below\n\n[![contour-assist-demo](https://drive.google.com/uc?export=view&id=1Mw2fCPdm5WHBVRgNnp8fGNmqxI84F_9I)](https://drive.google.com/uc?export=view&id=1VTd6RScjNfAwi3vMk-bU87U4ucPmOO_M \"Click to watch contour assist demo\")\n\n***\n## Edit mode\nAllows modifying created objects with a brush tool.\n\n1. select an object (shape) to modify by clicking on it\n2. an editing layer (labels layer) is created for painting automatically\n3. 
modify the contour:\n\t- edit with mouse drag or\n\t- erase by holding \"Alt\"\n4. finalize it\n\t- accept by pressing \"q\" or\n\t- delete by pressing \"Ctrl\" + \"Del\" or\n\t- revert changes by pressing \"Esc\" (to the state before editing)\n\n- if the edited contour is a merge of multiple objects, you can erase the dividing line around the object you wish to keep, and keep erasing (or splitting with the eraser) until the object you wish to keep is the largest, then press \"q\" to accept it\n\nClick to watch the demo video below\n\n[![edit-mode-demo](https://drive.google.com/uc?export=view&id=1M-XdEWPXMsIOtO0ncyUtvGACS0SRX-3K)](https://drive.google.com/uc?export=view&id=10MQm53hblLKQlfBNrfUsi1vxvIdTbzCZ \"Click to watch edit mode demo\")\n\n***\n## Class mode\nAllows assigning class labels to objects by clicking on shapes.\n\n1. select a class from the class list to assign\n2. click on an object (shape) to assign the selected class label to it\n3. the contour colour of the clicked object will be updated to the selected class colour, plus the class label is updated in the text properties of the object (turn on \"display text\" on the layer controls panel to see the text properties as `objectID:(classLabel)` e.g. 1:(0) for the first object)\n\n- optionally, you can set a default class for all currently unlabelled objects on the ROI (shapes) layer by selecting a class from the drop-down menu to the right of the text label \"Default class\"\n- class colours can be changed with the drop-down menu to the right of the class list; upon selection, all objects whose class label is the currently selected class will have their contour colour updated to the selected colour\n- clicking on an object that has already been assigned a class label will unclassify it: assign the label *0* to it\n\nClick to watch the demo video below\n\n[![class-mode-demo](https://drive.google.com/uc?export=view&id=1EV1cn_mySO11S_ZDFv6Dl1laAk30jGJk)](https://drive.google.com/uc?export=view&id=1uOmznUvfHEFvviWTtOnUHty8rkKyWR7Q \"Click to watch class mode demo\")\n\n***\n## Export\nSee also: [Quick export](#quick-export)\n\nThe exporter plugin AnnotatorJExport can be invoked from the Plugins menu under the plugin name `napari-annotatorj`. It is used for batch export of annotations to various formats directly suitable for training different types of deep learning models. See a [demonstrative figure](https://raw.githubusercontent.com/spreka/annotatorj/master/demos/annotation_and_export_types.png) in the [AnnotatorJ repository](https://github.com/spreka/annotatorj) and further description in its [README](https://github.com/spreka/annotatorj#export) or [documentation](https://github.com/spreka/annotatorj/blob/master/AnnotatorJ_documentation.pdf).\n\n1. browse the original image folder with either the\n\t- \"Browse original...\" button or\n\t- text input field next to it\n2. browse the annotation folder with either the\n\t- \"Browse annot...\" button or\n\t- text input field next to it\n3. select the export options you wish to export the annotations to (see tooltips on hover for help)\n\t- at least one export option must be selected to start the export\n\t- (optional) right-click the checkbox \"Coordinates\" to switch between the default COCO format and YOLO format; see the [explanation](#coordinate-formats)\n4. 
click on \"Export masks\" to start the export\n - this will open a progress bar in the napari window and close it upon finish\n\nThe folder structure required by the exporter is as follows:\n\n```\nimage_folder\n\t|--- image1.png\n\t|--- another_image.png\n\t|--- something.png\n\t|--- ...\n\nannotation_folder\n\t|--- image1_ROIs.zip\n\t|--- another_image_ROIs.zip\n\t|--- something_ROIs.zip\n\t|--- ...\n```\n\nMultiple export options can be selected at once, any selected will create a subfolder in the folder where the annotations are saved.\n\n\nClick to watch demo video below\n\n[![exporter-demo](https://drive.google.com/uc?export=view&id=1QoaJrI9pKziUzYwiZNdWlfRD7PcvJB9U)](https://drive.google.com/uc?export=view&id=1uJz-x_ypEOjc7SYPUTqrEt0ieyNLFy6u \"Click to watch exporter demo\")\n\n***\n## Quick export\nClick on the \"[^]\" button to quickly save annotations and export to mask image. It saves the current annotations (shapes) to an ImageJ-compatible roi.zip file and a generated a 16-bit multi-labelled mask image to the subfolder \"masks\" under the current original image's folder.\n\n\n***\n## Coordinate formats\nIn the AnnotatorJExport plugin 2 coordinates formats can be selecting by right clicking on the Coordinates checkbox: COCO or YOLO. The default is COCO.\n\n*COCO format*:\n- `[x, y, width, height]` based on the top-left corner of the bounding box around the object\n- coordinates are not normalized\n- annotations are saved with header to \n - .csv file\n - tab delimeted\n\n*YOLO format*:\n- `[class, x, y, width, height]` based on the center point of the bounding box around the object\n- coordinates are normalized to the image size as floating point values between 0 and 1\n- annotations are saved with header to\n - .txt file\n - whitespace delimeted\n - class is saved as the 1st column\n\n***\n## Overlay\nA separate annotation file can be loaded as overlay for convenience, e.g. to compare annotations.\n\n1. load another annotation file with the \"Overlay\" button\n\n- (optional) switch its visibility with the \"Show overlay\" checkbox\n- (optional) change the contour colour of the overlay shapes with the [\"Colours\" button](#change-colours)\n\n***\n## Change colours\nClicking on the \"Colours\" button opens the Colours widget where you can set the annotation and overlay colours.\n\n1. select a colour from the drop-down list either next to the text label \"overlay\" or \"annotation\"\n2. click the \"Ok\" button to apply changes\n\n- contour colour of shapes on the annotation shapes layer (named \"ROI\") that already have a class label assigned to them will **not** be updated to the new annotation colour, only those not having a class label (the class label can be displayed with the \"display text\" checkbox on the layer controls panel as `objectID:(classLabel)` e.g. 1:(0) for the first object)\n- contour colour of shapes on the overlay shapes layer (named \"overlay\") will all have the overlay colour set, regardless of any existing class information saved to the annotation file loaded as overlay\n\n***\n## Configure model folder\nThe Contour assist mode imports a pre-trained Keras model from a folder named *models* under exactly the path *napari_annotatorj*. 
\n***\n## Overlay\nA separate annotation file can be loaded as an overlay for convenience, e.g. to compare annotations.\n\n1. load another annotation file with the \"Overlay\" button\n\n- (optional) switch its visibility with the \"Show overlay\" checkbox\n- (optional) change the contour colour of the overlay shapes with the [\"Colours\" button](#change-colours)\n\n***\n## Change colours\nClicking on the \"Colours\" button opens the Colours widget where you can set the annotation and overlay colours.\n\n1. select a colour from the drop-down list either next to the text label \"overlay\" or \"annotation\"\n2. click the \"Ok\" button to apply the changes\n\n- the contour colour of shapes on the annotation shapes layer (named \"ROI\") that already have a class label assigned to them will **not** be updated to the new annotation colour, only those without a class label (the class label can be displayed with the \"display text\" checkbox on the layer controls panel as `objectID:(classLabel)` e.g. 1:(0) for the first object)\n- the contour colour of shapes on the overlay shapes layer (named \"overlay\") will all be set to the overlay colour, regardless of any existing class information saved to the annotation file loaded as overlay\n\n***\n## Configure model folder\nThe Contour assist mode imports a pre-trained Keras model from a folder named *models* under the *.napari_annotatorj* path. This path is automatically created on the first startup in your user folder:\n- `C:\\Users\\Username\\.napari_annotatorj` on Windows\n- `/home/username/.napari_annotatorj` on Linux\n\nA pre-trained model for nucleus segmentation is automatically downloaded from the GitHub repository of the [ImageJ version of AnnotatorJ](https://github.com/spreka/annotatorj/releases/tag/v0.0.2-model). The model will be saved to `[your user folder]\\.napari_annotatorj\\models\\model_real.h5`. This location is printed to the console (Command prompt or PowerShell on Windows, terminal on Linux).\n\n(deprecated) When building from source, the model folder is located at *path\\to\\napari-annotatorj\\src\\napari_annotatorj\\models*, whereas when installing from PyPI it is located at *path\\to\\virtualenv\\Lib\\site-packages\\napari_annotatorj\\models*.\n\nThe model must be in either of these file formats:\n- config .json file + weights file: *model_real.json* and *model_real_weights.h5*\n- combined weights file: *model_real.hdf5*\n\nYou can also train a new model on your own data, e.g. in Python, and save it with this code block:\n\n```python\n# save the model architecture as json\nmodel_json = model.to_json()\nwith open('model_real.json', 'w') as f:\n    f.write(model_json)\n\n# save the weights too\nmodel.save_weights('model_real_weights.h5')\n```\n
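\nFor reference, a model saved in either of the formats listed above can be loaded back in Keras roughly as follows. This is only a sketch using the default file names mentioned above; the plugin performs this loading itself in Contour assist mode.\n\n```python\nfrom keras.models import load_model, model_from_json\n\n# config .json file + separate weights file\nwith open('model_real.json') as f:\n    model = model_from_json(f.read())\nmodel.load_weights('model_real_weights.h5')\n\n# or the combined config+weights file\nmodel = load_model('model_real.hdf5')\n```\n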
\nYou can also train in the [train widget](#Training).\n\nThis configuration will change in the next release to allow model browse and a custom model name in an [options widget](#options).\n\n***\n## Training\nTo start training a new model or refine an existing one, click the **Train** button on the right of the napari-annotatorj widget. This will open the training widget where you can set input paths and training options. During training, a progress bar will show the epochs passed and plot the loss on a graph. See the [guide](#how-to-train) below.\n\nThe trained model will be saved to the `model` folder under the located training data folder, which is named `training` by default when preparing data. Each new training will be saved to a new training folder with increasing numbering, e.g. `training_1`, `training_2` etc.\n\nWhen an existing training data folder is browsed with the \"Browse train ...\" button, the `model` folder will be created under it without an additional `training` folder.\n\nAfter training is finished, a message is shown to indicate that the newly trained model can be tested by drawing bounding boxes (rectangles) to initiate [contour assist](#contour-assist-mode) prediction. The presented region on the editing layer (labels layer) can be modified with the paint brush tool (automatically selected) as in [contour assist](#contour-assist-mode).\n\nThe trained model can be further refined by selecting the \"Retrain latest\" checkbox in the [training parameters](#training-parameters) (⚙ button on the right).\n\nTo use this new model for annotation in [contour assist mode](#contour-assist-mode), you must set the model path in the [Options widget](#options) or in the configuration file (see how to [here](#configure-model-folder)), then restart napari.\n\n### How to train\n1. On current annotation\n\t1. \"Use current annotation\" checkbox --> use this image and its current annotation for training\n\t2. Prep data --> prepare data in a [suitable format](#training-data-format)\n\t3. (optional) ⚙ --> [set parameters](#training-parameters)\n\t4. Start --> start training\n2. Additional data\n\t1. Select images and annotations to use for training\n\t\t- Browse original ... --> locate the folder of original images\n\t\t- Browse annot ... --> locate the folder of annotations\n\t\t- Prep data --> prepare data in a [suitable format](#training-data-format)\n\t2. Browse train ... --> select already prepared training data\n\t3. (optional) ⚙ --> [set parameters](#training-parameters)\n\t4. Start --> start training\n\n### Training data format\nThe data format expected by the training widget is as follows.\n\n```\nimages\n\t|--- image1.png\n\t|--- another_image.png\n\t|--- something.png\n\t|--- ...\n\nunet_masks\n\t|--- image1.tiff\n\t|--- another_image.tiff\n\t|--- something.tiff\n\t|--- ...\n```\n\nMasks are 8-bit binary (black and white) .tiff images that can be exported from the [Exporter widget](#export) by selecting the Semantic (binary) export option. When the \"Prep data\" button is clicked in the Training widget, these folders are automatically created from the located annotation files and original images.\n
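\nAssuming the default `training` folder layout described above, a tiny hypothetical check like the one below can verify that every image has a matching mask before starting a training run; it is not part of the plugin, just an illustration of the expected pairing.\n\n```python\nfrom pathlib import Path\n\nimages = Path('training/images')\nmasks = Path('training/unet_masks')\n\n# every image1.png is expected to have a matching image1.tiff mask\nfor img in sorted(images.glob('*.png')):\n    mask = masks / (img.stem + '.tiff')\n    print(img.name, '->', mask.name if mask.exists() else 'missing mask')\n```\n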
\n### Training parameters\nThe following configurable parameters can be set after clicking on the ⚙ icon:\n| Parameter | Description | Default value |\n| --------- | ----------- | ------------- |\n| Epochs | number of epochs to train | 5 |\n| Steps | number of steps in each epoch | 1 |\n| Batch size | number of samples in an iteration | 1 |\n| Image size | size of training images | 256 |\n| Start from scratch | train a new model from scratch | `False` |\n| Retrain latest | refine the latest training | `False` |\n| Write pred | write test image prediction to file | `False` |\n| Test image | path to test image | `None` |\n\nNote: by default, the CPU will be used for training. This can be changed to GPU in the [Options widget](#options) if your computer has a CUDA-capable device.\n\n***\n## Options\nSettings found in the configuration file can be set in the Options widget, opened with the \"...\" button on the right of the main plugin. For changes to take effect, save them with the \"Ok\" button at the bottom of the Options widget.\n\nThe following options can be configured:\n|Group|Option|Description|Default|Valid values|\n|-----|------|-----------|-------|------------|\n|General|Annotation type | see [instance](#instance-annotation), [bbox](#bounding-box-annotation), [semantic](#semantic-annotation) | instance |instance, bbox, semantic |\n| |Remember annotation type|use the same annotation type on next startup|`True`|`True`, `False`|\n| |Colours|select annotation and overlay colours; see [here](#change-colours)|white|white, red, green, blue, cyan, magenta, yellow, orange, black|\n| |Classes|names of folders to save annotations when not classified*|normal|(any string)**|\n|Semantic segmentation|Brush size|size of the brush|50|`int`|\n|*Advanced settings*| | | | |\n|Contour assist|Max distance|number of pixels to extend the initial contour with|17|`int`|\n||Threshold|intensity threshold after prediction|||\n||- gray||0.1|`float` in [0,1]|\n||- R (red)||0.2|`float` in [0,1]|\n||- G (green)||0.4|`float` in [0,1]|\n||- B (blue)||0.2|`float` in [0,1]|\n||Brush size|correction brush size|5|`int`|\n||Method|contour assist prediction method|U-Net|U-Net, Classic***|\n||Model|U-Net model to use for Contour assist prediction|||\n||folder|path to the model folder|`user/.napari_annotatorj/models`|existing `models` folder path|\n||.json file|name of the model .json file **without** extension|model_real|any string**|\n||weights file|name of the model weights file|model_real_weights.h5|any string**|\n||full file|name of the combined config+weights file|model_real.hdf5|any string**|\n||Device|computation device to perform prediction on|cpu|cpu, `int`****|\n|Mask/text import| | | | |\n||Auto mask load|load annotation files automatically when a new image is opened|`False`|`True`, `False`|\n||Enable mask load|load instance annotation mask image|`False`|`True`, `False`|\n||Enable text load|load object detection bounding box coordinate text file|`False`|`True`, `False`|\n||Method|load as editable or overlay|load|load, overlay|\n|Others| | | | |\n||Save outlines|save image with annotations outlined|`False`|`True`, `False`|\n||Show help on startup|show the help window upon every startup|`False`|`True`, `False`|\n||Save annot times|save annotation times to text file*****|`False`|`True`, `False`|\n\n*: right-click the last element (other...) to add a new item to the list. When annotations are assigned class labels in [class mode](#class-mode), they will be saved to the folder `masks` by default.\n\n**: do not use whitespace (' ') if possible\n\n***: classical region-growing algorithm\n\n****: valid id of a GPU device, e.g. 
`0` or `3`; if your computer has only one GPU, the id is `0`\n\n*****: currently disabled, used for development\n\n***\n## Demo\nRun a demo of napari-annotatorj with sample data: a small 3-channel RGB image as the original image and an ImageJ roi.zip file loaded as annotations.\n\n```shell\n# from the napari-annotatorj folder\npython src/napari_annotatorj/load_imagej_roi.py\n```\nAlternatively, you can start up the napari-annotatorj plugin by running\n\n```shell\n# from the napari-annotatorj folder\npython src/napari_annotatorj/startup_annotatorj.py\n```\n\n***\n## Shortcuts\n\n| Function | Shortcut |\n| -------- | -------- |\n| Contour assist | `a` |\n| Class mode | `c` |\n| Edit mode | `Shift` + `e` |\n| Show contours | `Shift` + `v` |\n| Accept Contour assist | `q` |\n| Reject Contour assist | `Ctrl` + `del` |\n| Invert Contour assist | `u` |\n| Erase in Edit/Contour assist mode | `Alt` (hold) |\n| Revert changes in Edit mode | `Esc` |\n\n\n***\n## Setting device for deep learning model prediction\nThe [Contour assist](#contour-assist-mode) mode uses a pre-trained U-Net model for suggesting contours based on a lazily initialized contour drawn by the user. The default configuration loads and runs the model on the CPU so that all users can run it. It is possible to switch to GPU if you have:\n- a CUDA-capable GPU in your computer\n- NVIDIA's CUDA toolkit + cuDNN installed\n\nSee the installation guide on [NVIDIA's website](https://developer.nvidia.com/cuda-downloads) for your system.\n\nTo switch to GPU utilization, edit [_dock_widget.py](https://github.com/spreka/napari-annotatorj/blob/main/src/napari_annotatorj/_dock_widget.py#L112) and set the device you would like to use. Valid values are `'cpu','0','1','2',...`. The default value is `cpu`. The default GPU device is `0` if your system has any CUDA-capable GPU. If the device you set cannot be found or utilized by the code, it will fall back to `cpu`. An informative message is printed to the console upon plugin startup.\n
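\nAs a rough illustration of what such a device setting amounts to, the generic TensorFlow sketch below hides or exposes GPUs before prediction; it is not the plugin's own code, and the helper name is made up for the example.\n\n```python\nimport tensorflow as tf\n\ndef select_device(device='cpu'):\n    # device is 'cpu' or a GPU id string such as '0'\n    gpus = tf.config.list_physical_devices('GPU')\n    if device == 'cpu' or not gpus or int(device) >= len(gpus):\n        # hide all GPUs so prediction runs on the CPU (the fallback behaviour)\n        tf.config.set_visible_devices([], 'GPU')\n    else:\n        tf.config.set_visible_devices(gpus[int(device)], 'GPU')\n\nselect_device('cpu')\n```\n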
\n***\n## Contributing\n\nContributions are very welcome. Tests can be run with [tox]; please ensure the coverage at least stays the same before you submit a pull request.\n\n## License\n\nDistributed under the terms of the [BSD-3] license,\n\"napari-annotatorj\" is free and open source software.\n\n## Issues\n\nIf you encounter any problems, please [file an issue] along with a detailed description.\n\n[napari]: https://github.com/napari/napari\n[Cookiecutter]: https://github.com/audreyr/cookiecutter\n[@napari]: https://github.com/napari\n[BSD-3]: http://opensource.org/licenses/BSD-3-Clause\n[cookiecutter-napari-plugin]: https://github.com/napari/cookiecutter-napari-plugin\n\n[file an issue]: https://github.com/spreka/napari-annotatorj/issues\n\n[tox]: https://tox.readthedocs.io/en/latest/\n[pip]: https://pypi.org/project/pip/\n[PyPI]: https://pypi.org/\n","description_content_type":"text/markdown","keywords":null,"home_page":"https://github.com/spreka/napari-annotatorj","download_url":null,"author":"Reka Hollandi","author_email":"reka.hollandi@gmail.com","maintainer":null,"maintainer_email":null,"license":"BSD-3-Clause","classifier":["Development Status :: 3 - Alpha","Intended Audience :: Developers","Framework :: napari","Topic :: Software Development :: Testing","Programming Language :: Python","Programming Language :: Python :: 3","Programming Language :: Python :: 3.8","Programming Language :: Python :: 3.9","Programming Language :: Python :: 3.10","Operating System :: OS Independent","License :: OSI Approved :: BSD License"],"requires_dist":["napari","napari-plugin-engine >=0.1.4","numpy","roifile","scikit-image","opencv-python >=4.5.5","keras","tensorflow >=2.5.0","tifffile","imagecodecs","tqdm","pyqtgraph"],"requires_python":">=3.7","requires_external":null,"project_url":["Bug Tracker, https://github.com/spreka/napari-annotatorj/issues","Documentation, https://github.com/spreka/napari-annotatorj#README.md","Source Code, https://github.com/spreka/napari-annotatorj","User Support, https://github.com/spreka/napari-annotatorj/issues"],"provides_extra":null,"provides_dist":null,"obsoletes_dist":null},"npe1_shim":false}