{"name":"yt-napari","display_name":"yt-napari","visibility":"public","icon":"","categories":[],"schema_version":"0.2.0","on_activate":null,"on_deactivate":null,"contributions":{"commands":[{"id":"yt-napari.get_reader","title":"Open data with yt-napari","python_name":"yt_napari._reader:napari_get_reader","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"yt-napari.reader_widget","title":"Read in a selection of data from yt","python_name":"yt_napari._widget_reader:ReaderWidget","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"yt-napari.timeseries_widget","title":"Read 2D selections from yt timeseries","python_name":"yt_napari._widget_reader:TimeSeriesReader","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"yt-napari.metadata_widget","title":"Inspect the metadata for a yt dataset","python_name":"yt_napari._widget_matadata:MetadataWidget","short_title":null,"category":null,"icon":null,"enablement":null}],"readers":[{"command":"yt-napari.get_reader","filename_patterns":["*.json"],"accepts_directories":false}],"writers":null,"widgets":[{"command":"yt-napari.reader_widget","display_name":"yt Reader","autogenerate":false},{"command":"yt-napari.timeseries_widget","display_name":"yt Time Series Reader","autogenerate":false},{"command":"yt-napari.metadata_widget","display_name":"yt Metadata Explorer","autogenerate":false}],"sample_data":null,"themes":null,"menus":{},"submenus":null,"keybindings":null,"configuration":[]},"package_metadata":{"metadata_version":"2.1","name":"yt-napari","version":"0.5.0","dynamic":null,"platform":null,"supported_platform":null,"summary":"A napari plugin for loading data from yt","description":"# yt-napari\n\n[![License](https://img.shields.io/pypi/l/yt-napari.svg?color=green)](https://github.com/data-exp-lab/yt-napari/raw/main/LICENSE)\n[![PyPI](https://img.shields.io/pypi/v/yt-napari.svg?color=green)](https://pypi.org/project/yt-napari)\n[![Python Version](https://img.shields.io/pypi/pyversions/yt-napari.svg?color=green)](https://python.org)\n[![tests](https://github.com/data-exp-lab/yt-napari/workflows/tests/badge.svg)](https://github.com/data-exp-lab/yt-napari/actions)\n[![codecov](https://codecov.io/gh/data-exp-lab/yt-napari/branch/main/graph/badge.svg)](https://codecov.io/gh/data-exp-lab/yt-napari)\n[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/yt-napari)](https://napari-hub.org/plugins/yt-napari)\n[![Documentation Status](https://readthedocs.org/projects/yt-napari/badge/?version=latest)](https://yt-napari.readthedocs.io/en/latest/?badge=latest)\n\nA [napari] plugin for loading data from [yt].\n\nThis readme provides a brief overview including:\n\n1. [Installation](#Installation)\n2. [Quick Start](#Quick-Start)\n3. [Contributing](#Contributing)\n\nFull documentation is available at [yt-napari.readthedocs.io].\n\n## Installation\n\n### 1. (optional) install `yt` and `napari`\n\nIf you skip this step, the installation in the following section will only install the minimal package requirements for `yt` or `napari`, in which case you will likely need to manually install some packages. 
If you are new to either package, or if you are installing in a clean environment, it may be simpler to install these packages first.\n\nFor `napari`,\n\n    pip install napari[all]\n\nwill install `napari` with the default `Qt` backend (see [here](https://napari.org/tutorials/fundamentals/installation#choosing-a-different-qt-backend) for how to choose between `PyQt5` and `PySide2`).\n\nFor `yt`, you will need `yt>=4.0.1` and any of the optional dependencies for your particular workflow. If you know that you'll need more than the base `yt` install, you can install the full suite of dependent packages with\n\n    pip install yt[full]\n\nSee the [`yt` documentation](https://yt-project.org/doc/installing.html#leveraging-optional-yt-runtime-dependencies) for more information. If you're not sure which packages you'll need but don't want the full yt installation, you can proceed to the next step and then install any packages as needed (you will receive error messages when a required package is missing).\n\n### 2. install `yt-napari`\n\nYou can install the `yt-napari` plugin with:\n\n    pip install yt-napari\n\nIf you are missing either `yt` or `napari` (or they need to be updated), the above installation will fetch and install a minimal version of each.\n\nTo install the latest development version of the plugin instead, use:\n\n    pip install git+https://github.com/data-exp-lab/yt-napari.git\n\nNote that if you are working off the development version, be sure to use the latest documentation for reference: https://yt-napari.readthedocs.io/en/latest/\n\n## Quick Start\n\nAfter [installation](#Installation), there are three modes of using `yt-napari`:\n\n1. jupyter notebook interaction ([jump down](#jupyter-notebook-interaction))\n2. loading a json file from the napari gui ([jump down](#using-a-json-file-and-schema))\n3. napari widget plugins ([jump down](#using-the-yt-reader-plugin))\n\n### jupyter notebook interaction\n\n`yt-napari` provides a helper class, `yt_napari.viewer.Scene`, that assists in properly aligning new yt selections in the napari viewer when working in a Jupyter notebook.\n\n```python\nimport napari\nimport yt\nfrom yt_napari.viewer import Scene\nfrom napari.utils import nbscreenshot\n\nviewer = napari.Viewer()\nds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\nyt_scene = Scene()\n\nleft_edge = ds.domain_center - ds.arr([40, 40, 40], 'kpc')\nright_edge = ds.domain_center + ds.arr([40, 40, 40], 'kpc')\nres = (600, 600, 600)\n\nyt_scene.add_region(viewer,\n    ds,\n    (\"enzo\", \"Temperature\"),\n    left_edge=left_edge,\n    right_edge=right_edge,\n    resolution=res)\n\nyt_scene.add_region(viewer,\n    ds,\n    (\"enzo\", \"Density\"),\n    left_edge=left_edge,\n    right_edge=right_edge,\n    resolution=res)\n\nnbscreenshot(viewer)\n```\n\n![Loading a subset of a yt dataset in napari from a Jupyter notebook](./assets/images/readme_ex_001.png)\n\n`yt_scene.add_region` accepts any of the keyword arguments allowed by `viewer.add_image`. 
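For example, appearance-related keywords are passed along to the new image layer (a minimal sketch reusing the objects defined above; the `colormap` and `opacity` values are arbitrary choices):\n\n```python\n# forward napari image-layer keywords (e.g. colormap, opacity) through the\n# yt-napari Scene helper; the specific values here are illustrative only\nyt_scene.add_region(viewer,\n    ds,\n    (\"enzo\", \"Density\"),\n    left_edge=left_edge,\n    right_edge=right_edge,\n    resolution=res,\n    colormap=\"magma\",\n    opacity=0.6)\n```\n\n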
See the full documentation ([yt-napari.readthedocs.io]) for more examples, including additional helper methods for linking layer appearance.\n\nAdditionally, with `yt_napari` >= v0.2.0, you can use the `yt_napari.timeseries` module to help sample and load selections from across the datasets of a time series.\n\n### loading a selection from a yt dataset interactively\n\n`yt-napari` provides two ways to sample a yt dataset and load the result as an image layer in the napari viewer: the yt Reader plugin and a json file specification.\n\n#### using the yt Reader plugin\n\nTo use the yt Reader plugin, click on `Plugins -> yt-napari: yt Reader`. From there, add a region or slice selector, specify a field type, a field, and the bounds to sample between, and then click \"Load\":\n\n![Loading a subset of a yt dataset from the napari viewer](./assets/images/readme_ex_003_gui_reader.gif)\n\nYou can add multiple selections and load them all at once, or adjust values and click \"Load\" again.\n\n#### using the yt Time Series Reader plugin\n\nTo use the yt Time Series Reader plugin, click on `Plugins -> yt-napari: yt Time Series Reader`. Specify your file matching: use `file_pattern` to enter glob expressions or use `file_list` to enter a list of specific files.\nThen add a slice or region to sample for each matched dataset file (note: be careful of memory here!):\n\n![Loading timeseries selections from the napari viewer](./assets/images/readme_ex_004_gui_timeseries.gif)\n\n#### using a json file and schema\n\n`yt-napari` also provides the ability to load json files that contain specifications for loading data from a file. Properly formatted files can be loaded from the napari GUI as you would load any image file (`File->Open`). The json file describes the selection process for a dataset, following a json-schema. The following json file results in layers similar to the above examples:\n\n```json\n{\"$schema\": \"https://raw.githubusercontent.com/data-exp-lab/yt-napari/main/src/yt_napari/schemas/yt-napari_0.0.1.json\",\n \"datasets\": [{\"filename\": \"IsolatedGalaxy/galaxy0030/galaxy0030\",\n \"selections\": {\"regions\": [{\n \"fields\": [{\"field_name\": \"Temperature\", \"field_type\": \"enzo\", \"take_log\": true},\n {\"field_name\": \"Density\", \"field_type\": \"enzo\", \"take_log\": true}],\n \"left_edge\": [460.0, 460.0, 460.0],\n \"right_edge\": [560.0, 560.0, 560.0],\n \"resolution\": [600, 600, 600]\n }]},\n \"edge_units\": \"kpc\"\n }]\n}\n```\n\nTo help in filling out a json file, it is recommended that you use an editor capable of parsing a json schema and displaying hints. For example, in VS Code, you will see field suggestions after specifying the `yt-napari` schema:\n\n![interactive json completion for yt-napari](./assets/images/readme_ex_002_json.png)\n\nSee the full documentation at [yt-napari.readthedocs.io] for a complete specification.\n\n## Contributing\n\nContributions are very welcome! Development follows a fork and pull request workflow. To get started, you'll need a development installation and a testing environment.\n\n### development environment\n\nTo start developing, fork the repository and clone your fork to get a local copy. You can then install in development mode with\n\n    pip install -e .\n\n### tests and style checks\n\nBoth bug fixes and new features will need to pass the existing test suite and style checks. While both will be run automatically when you submit a pull request, it is helpful to run the test suite locally and run style checks throughout development. 
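When adding new functionality, a corresponding test should accompany it; as a rough illustration, a widget smoke test might look like the following sketch (this assumes napari's `make_napari_viewer` pytest fixture and that the widget constructor accepts a `napari_viewer` argument, which may differ from the actual signature):\n\n```python\n# hypothetical smoke test: construct the reader widget and dock it in a\n# headless viewer (make_napari_viewer is provided by napari's pytest plugin)\nfrom yt_napari._widget_reader import ReaderWidget\n\n\ndef test_reader_widget_docks(make_napari_viewer):\n    viewer = make_napari_viewer()\n    widget = ReaderWidget(napari_viewer=viewer)\n    viewer.window.add_dock_widget(widget)\n```\n\n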
For testing, you can use [tox] to test different Python versions on your platform.\n\n    pip install tox\n\nAnd then from the top level of the `yt-napari` directory, run\n\n    tox\n\nTox will then run a series of tests in isolated environments. In addition to checking the terminal output for test results, the tox run will generate a test coverage report: a `coverage.xml` file and an `htmlcov` folder -- to view the results, open `htmlcov/index.html` in a browser.\n\nIf you prefer a lighter-weight test, you can also use `pytest` directly and rely on the GitHub CI to test different Python versions and systems. To do so, first install `pytest` and some related plugins:\n\n    pip install pytest pytest-qt pytest-cov\n\nNow, to run the tests:\n\n    pytest -v --cov=yt_napari --cov-report=html\n\nIn addition to telling you whether or not the tests pass, the above command will write out a code coverage report to the `htmlcov` directory. You can open `htmlcov/index.html` in a browser and check out the lines of code that were missed by existing tests.\n\nFor style checks, you can use [pre-commit](https://pre-commit.com/) to run checks as you develop. To set up `pre-commit`:\n\n    pip install pre-commit\n    pre-commit install\n\nafter which, every time you run `git commit`, some automatic style adjustments and checks will run. The same style checks will run when you submit a pull request, but it's often easier to catch issues early.\n\nAfter submitting a pull request, the `pre-commit.ci` bot will run the style checks. If style checks fail, you can have the bot attempt to auto-fix the failures by adding the following in a comment on its own:\n\n    pre-commit.ci autofix\n\nThe bot will then commit changes to your pull request, after which you should run `git pull` locally to update your local copy of the branch before making further changes.\n\n### building documentation locally\n\nDocumentation can be built using `sphinx` in two steps. First, update the api mapping with\n\n    sphinx-apidoc -f -o docs/source src/yt_napari/\n\nThis will update the `rst` files in `docs/source/` with the latest docstrings in `yt_napari`. Next, build the html documentation with\n\n    make html\n\n### updating the pydantic models and schema\n\nThe schema versioning follows a `major.minor.micro` versioning pattern that matches the yt-napari versioning. Each yt-napari release should have an accompanying updated schema file, even if the contents of the schema file have not changed. On-disk schema files are stored in `src/yt_napari/schemas/`, with copies in the documentation at `docs/_static`.\n\nThere are a number of utilities in `repo_utilities/` to help automate the management of the schema files. The easiest way to use these utilities is with `taskipy` from the command line. To list available scripts:\n\n```commandline\ntask --list\n```\n\nBefore a release, run\n\n```commandline\ntask validate_release vX.X.X\n```\n\nwhere `vX.X.X` is the version of the upcoming release. This script will run through some checks that ensure:\n* the on-disk schema matches the schema generated by the pydantic model\n* the schema files in the documentation match the schema files in the package\n\nIf any of the checks fail, you will be advised to update the schema using `update_schema_docs`. If you run without providing a version:\n\n```commandline\ntask update_schema_docs\n```\n\nIt will simply copy over the existing on-disk schema files to the documentation. 
If you run with a version:\n\n```commandline\ntask update_schema_docs -v vX.X.X\n```\n\nIt will write a schema file for the current pydantic model, overwriting any on-disk schema files for the provided version.\n\n## License\n\nDistributed under the terms of the [BSD-3] license,\n\"yt-napari\" is free and open source software.\n\n## Issues\n\nIf you encounter any problems, please [file an issue] along with a detailed description.\n\n## Funding\n\nThe yt-napari plugin project was funded with support from the Chan Zuckerberg Initiative through the napari Plugin Accelerator Grants project, [Enabling Access To Multi-resolution Data](https://chanzuckerberg.com/science/programs-resources/imaging/napari/enabling-access-to-multi-resolution-data/).\n\n----------------------------------\n\nThis [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template.\n\n[napari]: https://github.com/napari/napari\n[Cookiecutter]: https://github.com/audreyr/cookiecutter\n[@napari]: https://github.com/napari\n[BSD-3]: http://opensource.org/licenses/BSD-3-Clause\n[cookiecutter-napari-plugin]: https://github.com/napari/cookiecutter-napari-plugin\n[yt-napari.readthedocs.io]: https://yt-napari.readthedocs.io/en/stable/\n[file an issue]: https://github.com/data-exp-lab/yt-napari/issues\n[tox]: https://tox.readthedocs.io/en/latest/\n[yt]: https://yt-project.org/\n","description_content_type":"text/markdown","keywords":null,"home_page":"https://github.com/data-exp-lab/yt-napari","download_url":null,"author":"Chris Havlin","author_email":"chris.havlin@gmail.com","maintainer":null,"maintainer_email":null,"license":"BSD-3-Clause","classifier":["Development Status :: 2 - Pre-Alpha","Framework :: napari","Intended Audience :: Developers","License :: OSI Approved :: BSD License","Operating System :: OS Independent","Programming Language :: Python","Programming Language :: Python :: 3","Programming Language :: Python :: 3 :: Only","Topic :: Software Development :: Testing"],"requires_dist":["magicgui >=0.6.1","napari >=0.4.19","numpy","packaging","pydantic >2.0","qtpy","unyt","yt >=4.0.1","pytest ; extra == 'dev'","pytest-qt ; extra == 'dev'","taskipy ; extra == 'dev'","sphinx ; extra == 'docs'","nbsphinx <0.8.8 ; extra == 'docs'","sphinx-jsonschema <1.19.0 ; extra == 'docs'","Jinja2 <3.1.0 ; extra == 'docs'","dask[array,distributed] ; extra == 'full'"],"requires_python":">=3.8","requires_external":null,"project_url":["Bug Tracker, https://github.com/data-exp-lab/yt-napari/issues","Documentation, https://github.com/data-exp-lab/yt-napari#README.md","Source Code, https://github.com/data-exp-lab/yt-napari","User Support, https://github.com/data-exp-lab/yt-napari/issues"],"provides_extra":["dev","docs","full"],"provides_dist":null,"obsoletes_dist":null},"npe1_shim":false}