commit 65e2394602
Author: carl
Date: 2023-09-12 11:58:36 -03:00
163 changed files with 126039 additions and 77 deletions

Binary file not shown.

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -0,0 +1,123 @@
Metadata-Version: 2.1
Name: openvino
Version: 2023.0.1
Summary: OpenVINO(TM) Runtime
Home-page: https://docs.openvino.ai/2023.0/index.html
Download-URL: https://github.com/openvinotoolkit/openvino/tags
Author: Intel(R) Corporation
Author-email: openvino_pushbot@intel.com
License: OSI Approved :: Apache Software License
Description-Content-Type: text/markdown
License-File: readme.txt
License-File: LICENSE
Requires-Dist: numpy (>=1.16.6)
Requires-Dist: singledispatchmethod ; python_version < "3.8"
# OpenVINO™ Runtime
Intel® Distribution of OpenVINO™ toolkit is an open-source toolkit for optimizing and deploying AI inference. It can be used to develop applications and solutions based on deep learning tasks such as emulation of human vision, automatic speech recognition, natural language processing, and recommendation systems. It provides high performance and a rich set of deployment options, from edge to cloud.
If you have already finished developing your models and converting them to the OpenVINO model format, you can install OpenVINO Runtime to deploy your applications on various devices. The [OpenVINO™ Runtime](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_OV_Runtime_User_Guide.html) Python package includes a set of libraries for an easy inference integration with your products.
## System Requirements
Before you start the installation, check the supported operating systems and required Python* versions. The complete list of supported hardware is available in the [System Requirements](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html).
**C++ libraries** are also required for installation on Windows*. To install them, you can [download the Visual Studio Redistributable file (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe).
> **NOTE**: This package can be installed on other versions of Linux and Windows, but only the versions listed in the System Requirements are fully validated.
## Install the OpenVINO™ Runtime Package
### Step 1. Set Up Python Virtual Environment
Use a virtual environment to avoid dependency conflicts.
To create a virtual environment, use the following commands:
On Windows:
```sh
python -m venv openvino_env
```
On Linux and macOS:
```sh
python3 -m venv openvino_env
```
> **NOTE**: On Linux and macOS, you may need to [install pip](https://pip.pypa.io/en/stable/installation/). For example, on Ubuntu execute the following command to get pip installed: `sudo apt install python3-venv python3-pip`.
### Step 2. Activate Virtual Environment
On Linux and macOS:
```sh
source openvino_env/bin/activate
```
On Windows:
```sh
openvino_env\Scripts\activate
```
### Step 3. Update pip to the Latest Version
Run the command below:
```sh
python -m pip install --upgrade pip
```
### Step 4. Install the Package
Run the command below:
```sh
pip install openvino
```
### Step 5. Verify that the Package Is Installed
Run the command below:
```sh
python -c "from openvino.runtime import Core; print(Core().available_devices)"
```
If installation was successful, you will see the list of available devices.
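As a next step, the sketch below shows a minimal inference flow with the verified package. It is illustrative only: `model.xml` stands in for your own model converted to the OpenVINO IR format, and the first model input is assumed to have a static shape.
```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")              # placeholder path to your OpenVINO IR model
compiled_model = core.compile_model(model, "CPU")

# Build a dummy input matching the first input's (static) shape.
input_port = compiled_model.inputs[0]
dummy_input = np.zeros(list(input_port.shape), dtype=np.float32)

results = compiled_model([dummy_input])           # returns a dict keyed by output ports
print(results[compiled_model.outputs[0]].shape)
```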
## Troubleshooting
For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2023.0/openvino_docs_get_started_guide_troubleshooting.html). The following sections also provide explanations to several error messages.
### Errors with Installing via PIP for Users in China
Users in China might encounter errors while downloading sources via PIP during OpenVINO™ installation. To resolve the issues, try the following solution:
* Add the download source using the ``-i`` parameter with the Python ``pip`` command. For example:
``` sh
pip install openvino -i https://mirrors.aliyun.com/pypi/simple/
```
Use the ``--trusted-host`` parameter if the URL above is ``http`` instead of ``https``.
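For example, assuming the mirror above were only reachable over ``http`` (shown here purely for illustration):
```sh
pip install openvino -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
```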
### ERROR:root:Could not find the Inference Engine or nGraph Python API.
On Windows*, some libraries are necessary to run OpenVINO. To resolve this issue, install the [C++ redistributable (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe). You can also view a full download list on the [official support page](https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist).
### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
To resolve missing external dependency on Ubuntu*, execute the following command:
```sh
sudo apt-get install libpython3.7
```
## Additional Resources
- [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit)
- [OpenVINO™ Documentation](https://docs.openvino.ai/)
- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
- [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)
Copyright © 2018-2023 Intel Corporation
> **LEGAL NOTICE**: Your use of this software and any required dependent software (the
“Software Package”) is subject to the terms and conditions of the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html) for the Software Package, which may also include notices, disclaimers, or
license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the “third-party-programs.txt” or other similarly-named text file included with the Software Package for additional details.
> Intel is committed to respecting human rights and avoiding complicity in human rights abuses, a policy reflected in the [Intel Global Human Rights Principles](https://www.intel.com/content/www/us/en/policy/policy-human-rights.html). Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right.

@@ -0,0 +1,233 @@
_pyngraph.cpython-310-x86_64-linux-gnu.so,sha256=Vez9GPQTUfYAdHNhuPC0UYP-cTTt3FmPw4MTf0bS7CU,1050745
ngraph/__init__.py,sha256=WbKJf0Be7i4rs0ak4u1MnjF3KFIkEI6Ks5zVNwsHEH4,7686
ngraph/__pycache__/__init__.cpython-310.pyc,,
ngraph/__pycache__/exceptions.cpython-310.pyc,,
ngraph/__pycache__/helpers.cpython-310.pyc,,
ngraph/__pycache__/opset_utils.cpython-310.pyc,,
ngraph/exceptions.py,sha256=jcAXSagXiBd00SFD2iIyTMNVi1K9W7wRcbmovhN1k4E,400
ngraph/helpers.py,sha256=aJ6PFl24x9a4x0H8P8W1JbUOu8JcywFG47D0eHOCg6Q,686
ngraph/impl/__init__.py,sha256=TnP1_Vn1DH7d4RQEHx2IUcqWUQRPaHX4VoCQp5ZnRvs,1890
ngraph/impl/__pycache__/__init__.cpython-310.pyc,,
ngraph/impl/op/__init__.py,sha256=50FkixgFBjwH7nq-ZGotzPtKgIEw1ubQf4hYW5Ls0YU,495
ngraph/impl/op/__pycache__/__init__.cpython-310.pyc,,
ngraph/impl/op/util/__init__.py,sha256=22vHVZ4asAtfHWMMirP8NmlSL6YwFp4ZDSIyPPt5DYE,561
ngraph/impl/op/util/__pycache__/__init__.cpython-310.pyc,,
ngraph/impl/passes/__init__.py,sha256=IsvfSSvbl7pxVKeuDsjLZjKt83ORF8_Co_0Nc4sbWd8,136
ngraph/impl/passes/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset1/__init__.py,sha256=HYBk3KkasXg7PCmKYjJKt7LL45gDmxkAD0wBE_DV4Sg,4439
ngraph/opset1/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset1/__pycache__/ops.cpython-310.pyc,,
ngraph/opset1/ops.py,sha256=TDtoIa_-XWOaWXR4-oASIYWi69RQkhTe5dcXbMqQ51w,111820
ngraph/opset10/__init__.py,sha256=f66xm4MCHbVhoMVhmB7YR8KYOyl-5xjLCcXvQsbQ08g,7158
ngraph/opset10/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset10/__pycache__/ops.cpython-310.pyc,,
ngraph/opset10/ops.py,sha256=pXDV-VPASTr_BPfI9VeY2tBOLUU3WBiQN2lU69lXzf8,7261
ngraph/opset11/__init__.py,sha256=BUr3b_OtjLsO3mGJtInB6JJyPf6MC-S4Fb_3cXoovJU,7159
ngraph/opset11/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset11/__pycache__/ops.cpython-310.pyc,,
ngraph/opset11/ops.py,sha256=PFlD8D72cKqPfVbl9N4stUDnL0R0fSiJIftgPqE8C5k,4452
ngraph/opset2/__init__.py,sha256=ORS4p77ULYck5TAkXC0VuRSNwVkDPNDtOgdGw2XRutw,4681
ngraph/opset2/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset2/__pycache__/ops.cpython-310.pyc,,
ngraph/opset2/ops.py,sha256=XM7PhxxgzllSt8D3YINX0uhi7WEOFETn9nr2Ocua0Jo,6107
ngraph/opset3/__init__.py,sha256=K5RVdBxezDF9JTJkPxRVdRFvCYR0QpthDB7wSEamzbo,5404
ngraph/opset3/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset3/__pycache__/ops.cpython-310.pyc,,
ngraph/opset3/ops.py,sha256=4LDkdNu_X6t6sbNsHPogWYhYlhr7MFRrcrD3qqJgpsQ,23926
ngraph/opset4/__init__.py,sha256=G0R2vRN2p6UQ0NAmayOq7elPROgNBm14OBJkk1uH11Y,5778
ngraph/opset4/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset4/__pycache__/ops.cpython-310.pyc,,
ngraph/opset4/ops.py,sha256=rasP0qDMIZu0vr__U8kbQ1JoS_iYIExjT35_b_EQr9Y,16737
ngraph/opset5/__init__.py,sha256=jIvXmCUTfX3oFJAbjtZMc96c70elr151dAdttE322_c,6056
ngraph/opset5/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset5/__pycache__/ops.cpython-310.pyc,,
ngraph/opset5/ops.py,sha256=WsO6cT2wUH1nJget15NgDCBLnPOHIbZRNsOqVQiAFk4,18513
ngraph/opset6/__init__.py,sha256=gEJvJroOgDBog4OG7e6PhplMNaHIPy-gl8pkQxpnKRI,6159
ngraph/opset6/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset6/__pycache__/ops.cpython-310.pyc,,
ngraph/opset6/ops.py,sha256=QNJagb6zsbWsv5W_Rw8dtUst2ZpwOPyw3nljDSPEDp4,5065
ngraph/opset7/__init__.py,sha256=RUAuNYna3ouANgaQjWYDUMeGUpk5cM2-kttYlLl_3yo,6300
ngraph/opset7/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset7/__pycache__/ops.cpython-310.pyc,,
ngraph/opset7/ops.py,sha256=4VMd26U0q9a4bI7-Ub3n0Ftsk5wl-4Em_dLAxVIqNv4,4799
ngraph/opset8/__init__.py,sha256=zDQYEbih4CeeIn4DfyLKv_PtUWqUMZv2NqVk_eVl970,6767
ngraph/opset8/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset8/__pycache__/ops.cpython-310.pyc,,
ngraph/opset8/ops.py,sha256=oKXMqna5vypAffCpDz1kF1bl55p1CQ1o8hplUbmWeVY,34022
ngraph/opset9/__init__.py,sha256=OdiWi56rjGbtBfsKQXXUtMpU1GuWfiNVtuGGV29XebU,7002
ngraph/opset9/__pycache__/__init__.cpython-310.pyc,,
ngraph/opset9/__pycache__/ops.cpython-310.pyc,,
ngraph/opset9/ops.py,sha256=_7m2y44MW4vCFGDMLC7nikfawl_6L5OtpRm-BTGBJHw,13285
ngraph/opset_utils.py,sha256=kchvzITh5zkyPhrsKONF3-nh_3KTmCgHxG8M4o4U2EE,591
ngraph/utils/__init__.py,sha256=NrJBbV0hoR59oxcdGuqv5lYJyndWM61P8UDB5PHYCHs,156
ngraph/utils/__pycache__/__init__.cpython-310.pyc,,
ngraph/utils/__pycache__/broadcasting.cpython-310.pyc,,
ngraph/utils/__pycache__/decorators.cpython-310.pyc,,
ngraph/utils/__pycache__/input_validation.cpython-310.pyc,,
ngraph/utils/__pycache__/node_factory.cpython-310.pyc,,
ngraph/utils/__pycache__/reduction.cpython-310.pyc,,
ngraph/utils/__pycache__/tensor_iterator_types.cpython-310.pyc,,
ngraph/utils/__pycache__/types.cpython-310.pyc,,
ngraph/utils/broadcasting.py,sha256=Hdl4aAZgZ9Yvl5orBXogqxSTqxT4m8DhYp6c3Q2AWrc,1316
ngraph/utils/decorators.py,sha256=C-sEVK6KEzosZPF8OFP44rcmZ4ZRBGOkjapAhpzto4s,1716
ngraph/utils/input_validation.py,sha256=LZF68OK8reszCIy4B3DV8w0WQfrR6DRJNGV9U5ERvvc,4710
ngraph/utils/node_factory.py,sha256=KVeWIW0oGmk8-18YiFhSPpqidPjzTeoFs7ysWyjs7Rk,6019
ngraph/utils/reduction.py,sha256=JdHPa91uE3ybopx67egudny3jEasmo0IeOCh2GYJ0oA,805
ngraph/utils/tensor_iterator_types.py,sha256=GX-EgRzNpqnxBo3QIVzBtKq5cGG9mIRHrjMxBgHhe1w,5161
ngraph/utils/types.py,sha256=2ZldM9gl7sb_SCVYp40iBq5BlUJGw3MgXoEuJGb39Y4,4456
openvino-2023.0.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
openvino-2023.0.1.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
openvino-2023.0.1.dist-info/METADATA,sha256=yALX6yRr-Lm7VhRbZtHbYiPX2lDbGF7yKFAQ5APn-Ds,6039
openvino-2023.0.1.dist-info/RECORD,,
openvino-2023.0.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
openvino-2023.0.1.dist-info/WHEEL,sha256=AKhSqkxx2Oi4GGawyqk_GF_GG6v8PuFUH15pp1ChJY4,126
openvino-2023.0.1.dist-info/readme.txt,sha256=2DarVokzTjtGv_12UIrjfF1qt7uUHGNrIYCdnm972wY,980
openvino-2023.0.1.dist-info/top_level.txt,sha256=Y0XOVLMmbcKo3oOze1-BhHO1lyFGSNG9gYea-XpyMRA,9
openvino/__init__.py,sha256=wj4rmBSCX2ezHYmXjfsHFt9i9h6jcEH_EMGPlnw5KCM,207
openvino/__pycache__/__init__.cpython-310.pyc,,
openvino/__pycache__/utils.cpython-310.pyc,,
openvino/_offline_transformations/__init__.py,sha256=mG2P3iEn2T1GVLOlvmyUxHJWmMsZDcbNWlcrtHHjC9M,1207
openvino/_offline_transformations/__pycache__/__init__.cpython-310.pyc,,
openvino/_pyopenvino.cpython-310-x86_64-linux-gnu.so,sha256=8oDkZhgI1Yzi4qUK8xgSKKSsdTqLFpxGH1jtsxU8jBs,3637905
openvino/frontend/__init__.py,sha256=wdgMRh2lZXcM2W7CxCfum1ZbGWO1Es8QFVF5QvScFr8,1144
openvino/frontend/__pycache__/__init__.cpython-310.pyc,,
openvino/frontend/onnx/__init__.py,sha256=EGj4Dn6XobRJKqTivesSkIjKRP3Auf6uEhxShhXo_G8,616
openvino/frontend/onnx/__pycache__/__init__.cpython-310.pyc,,
openvino/frontend/onnx/py_onnx_frontend.cpython-310-x86_64-linux-gnu.so,sha256=2gzgDftJlnDrrfFvRw6Ib4eRCcP3HQcMbLrElLONA1s,389545
openvino/frontend/paddle/__init__.py,sha256=RbkprOh9r9sSvFEt-N7PyKNyx1XcHw3nurwpEBOaScA,630
openvino/frontend/paddle/__pycache__/__init__.cpython-310.pyc,,
openvino/frontend/paddle/py_paddle_frontend.cpython-310-x86_64-linux-gnu.so,sha256=Nvf07hdtYUUiR6tKY8uRFEw213jdrL-pMLGfpKs9Y1w,385489
openvino/frontend/pytorch/__init__.py,sha256=4Pj9jfkjoBIxEcpqTdREQ4HILnZYUkbFeFfLAwqDQM8,839
openvino/frontend/pytorch/__pycache__/__init__.cpython-310.pyc,,
openvino/frontend/pytorch/__pycache__/decoder.cpython-310.pyc,,
openvino/frontend/pytorch/decoder.py,sha256=I5dgdLJ_Y4_CdgXuhe4-YRcx-pfEFY718hVMbgNcIeQ,18260
openvino/frontend/pytorch/py_pytorch_frontend.cpython-310-x86_64-linux-gnu.so,sha256=Bir5bdU0C5zroPBF97M6CqSQZ6DJxcWiPRbD9N6YLPs,426585
openvino/frontend/pytorch/torchdynamo/__pycache__/backend.cpython-310.pyc,,
openvino/frontend/pytorch/torchdynamo/backend.py,sha256=5TT6CqxlzAW3PP6oW1hKN-q1ouuNp0Gw0SrFzGGSxxI,3624
openvino/frontend/tensorflow/__init__.py,sha256=prXzfOorn1DVno1GJkAUi22E588GdoQ458nYfx1IPJs,658
openvino/frontend/tensorflow/__pycache__/__init__.cpython-310.pyc,,
openvino/helpers/__init__.py,sha256=ion2PLkn_4HvJzFy_ejU99H3wS8sV8-tRCQe60AUWd4,154
openvino/helpers/__pycache__/__init__.cpython-310.pyc,,
openvino/helpers/__pycache__/packing.cpython-310.pyc,,
openvino/helpers/packing.py,sha256=ev1gIvPuxXJ0U_qUugVEZy5YmV-TKsOmZnIFG_Dwc-I,4002
openvino/inference_engine/__init__.py,sha256=HTNYGrmir9bBHHIiP0-f-3GUjrLsth_dSuJU1F68bzE,1432
openvino/inference_engine/__pycache__/__init__.cpython-310.pyc,,
openvino/inference_engine/constants.cpython-310-x86_64-linux-gnu.so,sha256=N1jp1qSsxgeTy2jswouQlWgHH4QehGLovWtiPEGOWAY,121449
openvino/inference_engine/ie_api.cpython-310-x86_64-linux-gnu.so,sha256=44ut6GOp9-Nb_7wo8lVOHjmfvCsNhqNqhXiS2VmjAV0,901729
openvino/libs/cache.json,sha256=G1D17VUxd1mczt2nGk7Rvt-Up1x4pUvf8M6gMnaNR6w,8872422
openvino/libs/libhwloc.so.15,sha256=qbhG1Crmthxk_lXX4IQRY_gA8qnSwKLGQMtyFT-Nkec,483985
openvino/libs/libopenvino.so.2301,sha256=3Td1pBIJ1b5DZjynxHu1Dst6sGO8SMhpM0fGFHk2C50,16267489
openvino/libs/libopenvino_auto_batch_plugin.so,sha256=t9GHSMvk4Jq-vIB1eO5V4QRfPKoQ2VRAtCHaY46I0kY,365369
openvino/libs/libopenvino_auto_plugin.so,sha256=xZx8x_obPSe3SpnkUgDpVZuIvyBGh5DyzBrJsAe-xFs,623025
openvino/libs/libopenvino_gapi_preproc.so,sha256=sbocCD0l_T_61_lqW21Xy-UoN3cNt1XYRTtIkW6UVxA,1292745
openvino/libs/libopenvino_hetero_plugin.so,sha256=7XVrID3fuQZKx9wG2WatJG7mxUAfc-V2y_BVHpIgat4,431145
openvino/libs/libopenvino_intel_cpu_plugin.so,sha256=4hR6XD_YZE32Yc47vEHTGTqPJvGzHIuGYpsZFM--Ow0,40008089
openvino/libs/libopenvino_intel_gpu_plugin.so,sha256=EOdYzvvV--5q0IYQGAhuYnEfh2YYXqrCSRJXDJM8-BI,19276801
openvino/libs/libopenvino_ir_frontend.so.2301,sha256=VtScy9Rmvnr7xPPTQDcldfS98AA8sO8_clnZjWFbwT4,473449
openvino/libs/libopenvino_onnx_frontend.so.2301,sha256=5uiBPfGffco_Cr5uIDceIBUKkoKKeMEKSJCVFteCQ24,4160873
openvino/libs/libopenvino_paddle_frontend.so.2301,sha256=nX9tVXn106z6Ai4u8RyZCdcUJJtwQLJaIxefO9MRTh0,1440729
openvino/libs/libopenvino_pytorch_frontend.so.2301,sha256=eRD6XFx--yYu-wxPT1eN7gfgYQocQtB0xobyPd-VkgA,1590665
openvino/libs/libopenvino_tensorflow_frontend.so.2301,sha256=kdeYzVMfNMFsqWKtGda0uDA2fSDQcEbQ_mHiX5EEGXw,3990049
openvino/libs/libopenvino_tensorflow_lite_frontend.so.2301,sha256=r8VOLzGlB_BObr69X4ujaJEbj-95Z-rREm2DW3-fdus,1053785
openvino/libs/libprotobuf-lite.a,sha256=3Rd41ZgImeTGvSmIf2QnEJA8XXI-cpsnFiPCzSPk1Gg,4163006
openvino/libs/libpugixml.so.1,sha256=ulRv5OQusHoUQh5Oefx0rlC2G-MKhuezRRosfu5jOcc,249128
openvino/libs/libtbb.so.12,sha256=f1qupKPP-2ByAeJLsXZgGCEfwU_2IlagBKDuRW4iI-8,378481
openvino/libs/libtbbbind_2_5.so.3,sha256=-ogtVtShfuJYpAHtIwFqK2grtC3J00Zlhdu62PmPGBE,35697
openvino/libs/libtbbmalloc.so.2,sha256=HrCxvUUjo7OLWR5isxWTXzMm0cG9s3_GGGOzFppGUcg,182873
openvino/libs/libtbbmalloc_proxy.so.2,sha256=8SgZ_QhqxJrbfFUYFHpPcNvCuFXRJepqpZicj4nJnOI,22609
openvino/offline_transformations/__init__.py,sha256=XONfuZ7PTS1tvYRlGu4EJi3tfmjhm9850skpHgFGIkU,3791
openvino/offline_transformations/__pycache__/__init__.cpython-310.pyc,,
openvino/preprocess/__init__.py,sha256=rET7n9sVqWZpQw0GtK9DESozFZdUw56Dlk5tF4AnrQA,1016
openvino/preprocess/__pycache__/__init__.cpython-310.pyc,,
openvino/pyopenvino/__init__.py,sha256=CqWsw8a2B5fltCl1VwjZ11j7o0b9BpxqhlQJwQP_ONM,332
openvino/pyopenvino/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/__init__.py,sha256=7P_p47VywlRv5MFFGPckVG2OUil-ZGh3k2wyAdnRInw,3115
openvino/runtime/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/__pycache__/exceptions.cpython-310.pyc,,
openvino/runtime/__pycache__/ie_api.cpython-310.pyc,,
openvino/runtime/__pycache__/opset_utils.cpython-310.pyc,,
openvino/runtime/exceptions.py,sha256=5NYrnM8EVko8nQra0H6LmJR1Nzf5NTzu18oovPXCoSc,402
openvino/runtime/ie_api.py,sha256=addsMPHQt3slsPqptAEKpkD0wZgxOTjfDE-KTk3xg6g,19394
openvino/runtime/op/__init__.py,sha256=TiHGHHZziXGjQ1FMO_asiNk5ilZtzjd_N3750n4AgEI,643
openvino/runtime/op/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/op/util/__init__.py,sha256=SHzC-2BKRkj57mUMNRD-ywU6zx1iQQDnBWbOH0TJjEU,1000
openvino/runtime/op/util/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset1/__init__.py,sha256=wJlz-v0KoUz5V_461Djrpj4dWj7xwA4LVZn8e7RmbPQ,5543
openvino/runtime/opset1/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset1/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset1/ops.py,sha256=aBCvOFqbgO4RSbNCrdR8vRHa7NVza6it149cmYoG02Q,113123
openvino/runtime/opset10/__init__.py,sha256=afng6tQui6G6667xa46yEAkTdzuN172rZF0_pyJy03A,8922
openvino/runtime/opset10/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset10/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset10/ops.py,sha256=EFNlhF2nLTDKLWaeb565o1R6GoL5BavGnHSFVXsyaR8,7296
openvino/runtime/opset11/__init__.py,sha256=6sw3J2N2TTjAbUmi02M38a6KR48dKv8kAbilfteEPWY,8923
openvino/runtime/opset11/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset11/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset11/ops.py,sha256=z3evhJCuD14LP9SgZxTcyYonYa9EsL60iXN_ccDd6d0,4486
openvino/runtime/opset2/__init__.py,sha256=FN2hfp8xfd9SonP_dPwT-iSWakd9DkFjvuHD8pnpDRI,5845
openvino/runtime/opset2/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset2/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset2/ops.py,sha256=9YuHJ98w8MdQzukuQYWq5pSImLxfG9waYaSqn-aBD3Y,6099
openvino/runtime/opset3/__init__.py,sha256=6Rcj9SoaiYVbXln4xjpcIWWIYo0GhDBMRu0QUbQUhS0,6728
openvino/runtime/opset3/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset3/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset3/ops.py,sha256=K_l72znOzv7gHiSLtGA4HwQBrZdJ2mCiIYpKJuRg-R0,24040
openvino/runtime/opset4/__init__.py,sha256=TTuENynmWP2JdFTTjNek6tZHtbhsSkCSumvbpPC-xzY,7202
openvino/runtime/opset4/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset4/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset4/ops.py,sha256=MNit2_7DhYbjhrhd5AFEBR00Y5JQhyxQ3U9ukm_ZdCA,16691
openvino/runtime/opset5/__init__.py,sha256=Lv81yAqxkC6x_5pUjY4I47ucRAQHw_pTUi4k9xnWsmI,7550
openvino/runtime/opset5/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset5/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset5/ops.py,sha256=AFZZrNUKk1SPzop3HIzx_IyM0X_Qxa2uFq68JHfeX9I,14828
openvino/runtime/opset6/__init__.py,sha256=tu1jsgdtCzdqkcrOLE49II9govnuhBPOycYbIuRGbbY,7673
openvino/runtime/opset6/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset6/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset6/ops.py,sha256=gXq_GVBfjTHtlDkJL6JVmoGw_ywyTXJjomSWCEZnzv8,5047
openvino/runtime/opset7/__init__.py,sha256=059bdEzlJjyQY29q9JhsNb7_gRdDjqql7TdpVotUL2Q,7854
openvino/runtime/opset7/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset7/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset7/ops.py,sha256=ww_oC4BE4Ci1D6w8IW_9lAidVGoDtoOBV-k16IYwUbs,4698
openvino/runtime/opset8/__init__.py,sha256=tozYXryBGcJKOAnpTzaKzYoteuJZBZW8Au8sokTQWk8,8431
openvino/runtime/opset8/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset8/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset8/ops.py,sha256=qu3RhgEo_o5b7LZm8WxGc4c7dW6ny4BiIBBnZKw-x_Y,32240
openvino/runtime/opset9/__init__.py,sha256=m6A-jBlvQQ4ipJW0KIeJSYPBUgH3rVskai3C0X23Hw8,8726
openvino/runtime/opset9/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/opset9/__pycache__/ops.cpython-310.pyc,,
openvino/runtime/opset9/ops.py,sha256=PSUEUrf_f4L6MAdVTlGMOTX_XctWXbO0AtRHdzMe8Y4,13380
openvino/runtime/opset_utils.py,sha256=8BxAg0sSGoY_WIuBf0z6iIhckvJYE13Ah1MJbT6SJak,650
openvino/runtime/passes/__init__.py,sha256=g54HC_ioch9cXs3Tm9pBGVo1gYqAUMiNaOPV1CSP2_4,699
openvino/runtime/passes/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/passes/__pycache__/graph_rewrite.cpython-310.pyc,,
openvino/runtime/passes/__pycache__/manager.cpython-310.pyc,,
openvino/runtime/passes/graph_rewrite.py,sha256=CzA_ldPL0oFHG7jM9OqOQ0DNAIVNjQjVMA4C_Afh-LA,1340
openvino/runtime/passes/manager.py,sha256=s_t_3BCd44JNJ3sr84X7jqTn5pQTdln48gikUVP5ELM,865
openvino/runtime/properties/__init__.py,sha256=wymwuMmR-X_Ww6XQo7dOPqcwxKd8hVxx5ZZA7k8_PS4,1614
openvino/runtime/properties/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/properties/hint/__init__.py,sha256=nv8HPhwfU6S_whesUlYBMknxUilTqSexglwoXbEjv3c,1048
openvino/runtime/properties/hint/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/properties/hint/__pycache__/overloads.cpython-310.pyc,,
openvino/runtime/properties/hint/overloads.py,sha256=UT5zedbbOr3HbCr34OPgwFrFCkXv1QxWME5olpSVV-c,592
openvino/runtime/utils/__init__.py,sha256=9_UFA71-b9EDRk1pdU30U432rZWAZ12vfLu3JgLcYQc,385
openvino/runtime/utils/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/utils/__pycache__/broadcasting.cpython-310.pyc,,
openvino/runtime/utils/__pycache__/decorators.cpython-310.pyc,,
openvino/runtime/utils/__pycache__/input_validation.cpython-310.pyc,,
openvino/runtime/utils/__pycache__/node_factory.cpython-310.pyc,,
openvino/runtime/utils/__pycache__/reduction.cpython-310.pyc,,
openvino/runtime/utils/__pycache__/types.cpython-310.pyc,,
openvino/runtime/utils/broadcasting.py,sha256=dQrkBjFo7yivycUI1L46xC8Os6J9UdsFqUiRbAojvtw,1372
openvino/runtime/utils/data_helpers/__init__.py,sha256=hbS2zLhXXPbgYQad7YrNnOt5-TE3bhkrZL1m5YSOiV0,402
openvino/runtime/utils/data_helpers/__pycache__/__init__.cpython-310.pyc,,
openvino/runtime/utils/data_helpers/__pycache__/data_dispatcher.cpython-310.pyc,,
openvino/runtime/utils/data_helpers/__pycache__/wrappers.cpython-310.pyc,,
openvino/runtime/utils/data_helpers/data_dispatcher.py,sha256=JT1ZyCYe6D5V33cTJn9_kltHh_GhKEh72lIkzAhFEdE,11036
openvino/runtime/utils/data_helpers/wrappers.py,sha256=pzyoxEMyTulDtvsKKDHcQJ9aVi38A5D8sws017R67Ck,4877
openvino/runtime/utils/decorators.py,sha256=znuoUS2_Gyh8hkGLtysQyc854uW8aN8fdLAExw4oj2M,2046
openvino/runtime/utils/input_validation.py,sha256=1OeAf6vbmfTaiU3Et6eMFWCXJ5Q0OM6xPP4sXPHM89w,4809
openvino/runtime/utils/node_factory.py,sha256=tIBnXvQ2YwI0zcSkllJZkKH6mC7wCfsvMOLK8HpyYbg,5785
openvino/runtime/utils/reduction.py,sha256=M5FxoM7XDNmguyRoEjx_7Iwsl59LRWL5GRXvVAT_9M8,835
openvino/runtime/utils/types.py,sha256=6t74_PGRokrWKJ0PWZhtP6epRvgGG7XEg8HnGV5wjs0,4820
openvino/utils.py,sha256=vbGQ-0h0ff9PUj5jXRnaFABT97HcmcagvSdqDv-mjAs,4482
requirements.txt,sha256=4PoHh5l_sIAH_Y4_zZcRFau7raAs3UphQUaO7qnN8og,57

@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.40.0)
Root-Is-Purelib: false
Build: 11005
Tag: cp310-cp310-manylinux2014_x86_64

@@ -0,0 +1,10 @@
“LEGAL NOTICE: Your use of this software and any required dependent software (the “Software Package”) is subject to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the “third-party-programs.txt” or other similarly-named text file included with the Software Package for additional details.
------------------------------------------------------------------------
Components and their third party programs:
* OpenVINO(TM) Runtime (Apache 2.0): <install_root>/runtime-third-party-programs.txt, <install_root>/onednn_third-party-programs.txt, <install_root>/tbb_third-party-programs.txt
------------------------------------------------------------------------
Licenses:
* Apache 2.0 <install_root>/LICENSE

@@ -0,0 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore # mypy issue #1422

@@ -0,0 +1,24 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# flake8: noqa
from openvino.utils import _add_openvino_libs_to_search_path
_add_openvino_libs_to_search_path()
from openvino._pyopenvino import get_version
__version__ = get_version()
from openvino._pyopenvino._offline_transformations import apply_fused_names_cleanup
from openvino._pyopenvino._offline_transformations import apply_moc_transformations
from openvino._pyopenvino._offline_transformations import apply_moc_legacy_transformations
from openvino._pyopenvino._offline_transformations import apply_pot_transformations
from openvino._pyopenvino._offline_transformations import apply_low_latency_transformation
from openvino._pyopenvino._offline_transformations import apply_pruning_transformation
from openvino._pyopenvino._offline_transformations import apply_make_stateful_transformation
from openvino._pyopenvino._offline_transformations import compress_model_transformation
from openvino._pyopenvino._offline_transformations import compress_quantize_weights_transformation
from openvino._pyopenvino._offline_transformations import convert_sequence_to_tensor_iterator_transformation

@@ -0,0 +1,38 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Package: openvino
Low level wrappers for the FrontEnd C++ API.
"""
# flake8: noqa
from openvino.utils import _add_openvino_libs_to_search_path
_add_openvino_libs_to_search_path()
from openvino._pyopenvino import get_version
__version__ = get_version()
# main classes
from openvino._pyopenvino import FrontEndManager
from openvino._pyopenvino import FrontEnd
from openvino._pyopenvino import InputModel
from openvino._pyopenvino import NodeContext
from openvino._pyopenvino import Place
# extensions
from openvino._pyopenvino import DecoderTransformationExtension
from openvino._pyopenvino import ConversionExtension
from openvino._pyopenvino import OpExtension
from openvino._pyopenvino import ProgressReporterExtension
from openvino._pyopenvino import TelemetryExtension
# exceptions
from openvino._pyopenvino import NotImplementedFailure
from openvino._pyopenvino import InitializationFailure
from openvino._pyopenvino import OpConversionFailure
from openvino._pyopenvino import OpValidationFailure
from openvino._pyopenvino import GeneralFailure
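# Illustrative sketch of how the classes imported above compose (the framework
# name and model source are assumptions; see the PyTorch torchdynamo backend
# later in this package for a concrete use):
#   fem = FrontEndManager()
#   fe = fem.load_by_framework("pytorch")
#   input_model = fe.load(model_or_decoder)   # a path or a framework-specific decoder
#   ov_model = fe.convert(input_model)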

@@ -0,0 +1,19 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Package: openvino
Low level wrappers for the FrontEnd C++ API.
"""
# flake8: noqa
from openvino.utils import _add_openvino_libs_to_search_path
_add_openvino_libs_to_search_path()
try:
from openvino.frontend.onnx.py_onnx_frontend import ConversionExtensionONNX as ConversionExtension
from openvino.frontend.onnx.py_onnx_frontend import OpExtensionONNX as OpExtension
except ImportError as err:
raise ImportError("OpenVINO ONNX frontend is not available, please make sure the frontend is built. " "{}".format(err))

@@ -0,0 +1,20 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Package: openvino
Low level wrappers for the FrontEnd C++ API.
"""
# flake8: noqa
from openvino.utils import _add_openvino_libs_to_search_path
_add_openvino_libs_to_search_path()
try:
from openvino.frontend.paddle.py_paddle_frontend import ConversionExtensionPaddle as ConversionExtension
from openvino.frontend.paddle.py_paddle_frontend import OpExtensionPaddle as OpExtension
except ImportError as err:
raise ImportError("OpenVINO Paddle frontend is not available, please make sure the frontend is built." "{}".format(err))

@@ -0,0 +1,23 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Package: openvino
Low level wrappers for the FrontEnd C++ API.
"""
# flake8: noqa
from openvino.utils import _add_openvino_libs_to_search_path
_add_openvino_libs_to_search_path()
try:
from openvino.frontend.pytorch.py_pytorch_frontend import _FrontEndPytorchDecoder as Decoder
from openvino.frontend.pytorch.py_pytorch_frontend import _Type as DecoderType
from openvino.frontend.pytorch.py_pytorch_frontend import ConversionExtensionPytorch as ConversionExtension
from openvino.frontend.pytorch.py_pytorch_frontend import OpExtensionPytorch as OpExtension
except ImportError as err:
raise ImportError("OpenVINO PyTorch frontend is not available, please make sure the frontend is built."
"{}".format(err))

@@ -0,0 +1,435 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# flake8: noqa
# mypy: ignore-errors
from openvino.frontend.pytorch.py_pytorch_frontend import _FrontEndPytorchDecoder as Decoder
from openvino.frontend.pytorch.py_pytorch_frontend import _Type as DecoderType
from openvino.runtime import op, PartialShape, Type as OVType, OVAny, Shape
import typing
import torch
import numpy as np
def get_type_from_py_type(value):
if isinstance(value, float):
return OVType.f32
if isinstance(value, bool):
return OVType.boolean
if isinstance(value, int):
# Python int is 64-bit, but we convert it to int32 unless the value does not fit into 32 bits
if torch.iinfo(torch.int).min <= value <= torch.iinfo(torch.int).max:
return OVType.i32
return OVType.i64
return OVType.dynamic
def ivalue_to_constant(ivalue):
ov_type = get_type_from_py_type(ivalue)
if ov_type.is_static():
return op.Constant(ov_type, Shape([]), [ivalue]).outputs()
if isinstance(ivalue, (list, tuple)):
assert len(ivalue) > 0, "Can't deduce type for empty list"
ov_type = get_type_from_py_type(ivalue[0])
assert ov_type.is_static(), "Can't deduce type for list"
return op.Constant(ov_type, Shape([len(ivalue)]), ivalue).outputs()
if isinstance(ivalue, torch.Tensor):
if ivalue.dim() == 0:
assert str(ivalue.dtype) in pt_to_ov_type_map, f"Type is not known {ivalue.dtype}"
ov_type = pt_to_ov_type_map[str(ivalue.dtype)]
ov_const = op.Constant(ov_type, Shape([]), [ivalue.item()])
else:
ivalue = ivalue.to(memory_format=torch.contiguous_format)
narr = ivalue.numpy(force=True)
if not narr.flags['C_CONTIGUOUS']:
narr = np.ascontiguousarray(narr)
ov_const = op.Constant(narr, shared_memory=True)
return ov_const.outputs()
return None
def get_value_from_getattr(getattr_node, self_module):
assert getattr_node.kind() == "prim::GetAttr", "Got node of kind not equal to prim::GetAttr"
# GetAttr nodes can be nested
stack = []
while getattr_node.kind() == "prim::GetAttr":
stack.append(getattr_node)
inputs = list(getattr_node.inputs())
if len(inputs) == 0:
break
getattr_node = inputs[0].node()
module = self_module
while len(stack) > 0:
node = stack.pop()
assert (hasattr(module, node.s("name")))
module = getattr(module, node.s("name"))
return module
pt_to_ov_type_map = {
"float": OVType.f32,
"int": OVType.i32,
"bool": OVType.boolean,
"torch.float16": OVType.f16,
"torch.float32": OVType.f32,
"torch.float64": OVType.f64,
"torch.uint8": OVType.u8,
"torch.int8": OVType.i8,
"torch.int32": OVType.i32,
"torch.int64": OVType.i64,
"torch.bool": OVType.boolean,
"torch.DoubleTensor": OVType.f64,
"torch.FloatTensor": OVType.f32,
"torch.IntTensor": OVType.i32,
"torch.LongTensor": OVType.i64,
"torch.BoolTensor": OVType.boolean,
}
class TorchScriptPythonDecoder (Decoder):
def __init__(self, pt_module, graph_element=None, example_input=None, freeze=True):
Decoder.__init__(self)
# We store every decoder created by this decoder so that none of them is deleted until the first decoder is deleted
self.m_decoders = []
self._input_signature = None
if graph_element is None:
try:
pt_module = self._get_scripted_model(pt_module, example_input, freeze)
except Exception as e:
if example_input is not None:
msg = "tracing or scripting"
help_msg = ""
else:
msg = "scripting"
help_msg = "Tracing sometimes provides better results, please provide a valid 'example_input' argument. "
raise RuntimeError(
f"Couldn't get TorchScript module by {msg}. {help_msg}"
"You can also provide TorchScript module that you obtained"
" yourself, please refer to PyTorch documentation: "
"https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html.")
self.graph_element = pt_module.inlined_graph
else:
self.graph_element = graph_element
self.pt_module = pt_module
self.raw_inputs = list(self.graph_element.inputs())
self.raw_outputs = list(self.graph_element.outputs())
if self._input_signature is not None and "self" in self.raw_inputs[0].debugName():
self._input_signature.insert(0, "self")
if isinstance(self.graph_element, torch.Graph):
self._transform_tensor_list_constants_to_listconstruct(self.graph_element)
self._transform_optional_constants(self.graph_element)
def _get_scripted_model(self, pt_module, example_inputs=None, freeze=True):
import torch
import inspect
def prepare_example_inputs(inputs, input_signature):
if inputs is not None:
if isinstance(inputs, dict):
if input_signature is not None:
ordered_inputs = []
used_sign = []
for key in input_signature:
if key not in inputs:
continue
ordered_inputs.append(inputs[key])
used_sign.append(key)
inputs = ordered_inputs
input_signature = used_sign
else:
inputs = list(inputs.values())
input_signature = input_signature[:len(inputs)]
if isinstance(inputs, torch.Tensor):
inputs = [inputs]
return inputs, input_signature
if isinstance(pt_module, torch.nn.Module):
pt_module.eval()
input_signature = None
if isinstance(pt_module, torch.nn.Module) and not isinstance(pt_module, (torch.jit._trace.TopLevelTracedModule, torch.jit._script.RecursiveScriptModule)):
input_signature = list(inspect.signature(pt_module.forward).parameters.keys())
if example_inputs is None:
scripted = torch.jit.script(pt_module)
else:
inputs, input_signature = prepare_example_inputs(example_inputs, input_signature)
try:
scripted = torch.jit.trace(pt_module, inputs)
except Exception:
try:
scripted = torch.jit.script(pt_module)
except Exception:
scripted = torch.jit.trace(pt_module, inputs, strict=False)
else:
scripted = pt_module
if freeze:
try:
f_model = torch.jit.freeze(scripted)
except Exception:
# usually freezing failed when model already frozen for inference
f_model = scripted
else:
f_model = scripted
self._input_signature = input_signature
return f_model
def inputs(self) -> list:
return [x.unique() for x in self.raw_inputs]
def get_input(self, index: int):
return self.inputs()[index]
def get_input_debug_name(self, index: int) -> str:
return self._raw_input(index).debugName()
def get_input_signature_name(self, index: int) -> str:
if self._input_signature is not None and index < len(self._input_signature):
return self._input_signature[index]
return self.get_input_debug_name(index)
def get_input_shape(self, index: int):
raw_input = self._raw_input(index)
return self.get_shape_for_value(raw_input)
def get_input_type(self, index: int):
raw_input = self._raw_input(index)
return self.get_type_for_value(raw_input)
def get_output_debug_name(self, index: int) -> str:
return self._raw_output(index).debugName()
def get_output_shape(self, index: int):
output = self._raw_output(index)
return self.get_shape_for_value(output)
def get_output_type(self, index: int):
output = self._raw_output(index)
return self.get_type_for_value(output)
def _get_known_type_for_value(self, pt_type):
"""Returns known/unknown types wrapped as OVAny."""
# Check for simple scalar types first
if pt_type is None:
return OVAny(OVType.dynamic)
# TODO: Don't use str, use native types
if str(pt_type) in pt_to_ov_type_map:
return OVAny(pt_to_ov_type_map[str(pt_type)])
elif isinstance(pt_type, torch.TensorType):
# Tensor type, parse element type
return OVAny(DecoderType.Tensor(self._get_known_type_for_value(pt_type.dtype())))
elif isinstance(pt_type, torch.ListType):
element_type = pt_type.getElementType()
return OVAny(DecoderType.List(self._get_known_type_for_value(element_type)))
elif isinstance(pt_type, (torch.StringType, torch.DeviceObjType)):
return OVAny(DecoderType.Str())
elif isinstance(pt_type, torch.NoneType):
return OVAny(DecoderType.PyNone())
else:
# Not yet recognized
return OVAny(OVType.dynamic)
def get_shape_for_value(self, value: torch.Value):
if value.isCompleteTensor():
ps = PartialShape(value.type().sizes())
return ps
else:
# TODO: Recognize types that we can represent as a nested constructs with objects from DecoderType
# If recognized, return scalar instead of dynamic. Scalar means a single value of that custom type.
# See get_type_for_value for reference
pass
return PartialShape.dynamic()
def get_type_for_value(self, value: torch.Value):
full_type = self._get_known_type_for_value(value.type())
return full_type
def get_input_transpose_order(self, index: int) -> list:
raw_input = self._raw_input(index)
if raw_input.type() is not None and raw_input.type().kind() == "TensorType":
strides = raw_input.type().strides()
if strides is not None:
return [s[0] for s in sorted(enumerate(strides), key=lambda x:x[1], reverse=True)]
return []
def get_output_transpose_order(self, index: int) -> list:
output = self._raw_output(index)
if output.type() is not None and output.type().kind() == "TensorType":
strides = output.type().strides()
if strides is not None:
return [s[0] for s in sorted(enumerate(strides), key=lambda x:x[1], reverse=True)]
return []
def get_subgraph_size(self) -> int:
if isinstance(self.graph_element, torch.Node):
return len(self.get_subgraphs())
else:
return 1
def visit_subgraph(self, node_visitor) -> None:
# make sure topological order is satisfied
for node in self.graph_element.nodes():
decoder = TorchScriptPythonDecoder(self.pt_module, node)
self.m_decoders.append(decoder)
node_visitor(decoder)
def get_subgraphs(self) -> list:
if self.graph_element.kind() == "prim::PythonOp":
if "Subgraph" in self.graph_element.attributeNames():
assert isinstance(self.graph_element, torch.Node), "Graph element must be of type torch.Node."
return [getattr(self.graph_element, self.graph_element.kindOf("Subgraph"))("Subgraph")]
else:
# Attribute "Subgraph" is only available if Graph was created using tracing.
# TODO Find way to extract subgraph for scripted Graph.
return []
return list(self.graph_element.blocks())
def get_subgraph_decoder(self, index: int):
decoder = TorchScriptPythonDecoder(self.pt_module, self.get_subgraphs()[index])
self.m_decoders.append(decoder)
return decoder
def get_op_type(self) -> str:
assert isinstance(self.graph_element, torch.Node), "Function can be called only when self.graph_element is of type torch.Node"
return self.graph_element.kind()
def get_schema(self) -> str:
return self.graph_element.schema()
def outputs(self) -> list:
return [x.unique() for x in self.raw_outputs]
def _raw_output(self, index: int):
return self.raw_outputs[index]
def _raw_input(self, index: int):
return self.raw_inputs[index]
def num_of_outputs(self):
return len(self.raw_outputs)
def output(self, index: int):
return self.outputs()[index]
def mark_node(self, node):
return node
def try_decode_get_attr(self):
pt_value = get_value_from_getattr(self.graph_element, self.pt_module)
assert pt_value is not None, "Couldn't retrieve value from prim::GetAttr"
if not isinstance(pt_value, (torch.jit.ScriptModule, torch.jit.TracedModule)):
return ivalue_to_constant(pt_value)
else:
return []
def as_constant(self):
if not isinstance(self.graph_element, torch.Node):
return None
if not self.get_op_type() == "prim::Constant":
return None
pt_value = self._raw_output(0)
pt_type = pt_value.type()
if isinstance(pt_type, torch.TensorType):
return ivalue_to_constant(pt_value.toIValue())
if isinstance(pt_type, torch.ListType):
return self._as_constant_list(pt_value)
return ivalue_to_constant(pt_value.toIValue())
def as_string(self):
if self.get_op_type() == "prim::Constant":
pt_value = self._raw_output(0)
if str(pt_value.type()) in ["torch.StringType", "str"]:
return pt_value.toIValue()
elif str(pt_value.type()) == "Device":
return pt_value.toIValue().type
elif self.get_op_type() == "prim::device":
return self._get_device_string()
return None
@staticmethod
def _as_constant_list(pt_value: torch.Value):
# For now, treat a list as a 1D tensor; converters require this to avoid a massive
# rewrite of the code paths where constant attributes are queried.
pt_element_type = str(pt_value.type().getElementType())
ivalue = pt_value.toIValue()
is_known_type = pt_element_type in pt_to_ov_type_map
if is_known_type:
ovtype = pt_to_ov_type_map[pt_element_type]
ovshape = PartialShape([len(ivalue)])
ov_const = op.Constant(ovtype, ovshape.get_shape(), ivalue)
return ov_const.outputs()
def _get_device_string(self) -> str:
assert self.graph_element.kind() == "prim::device", "This function can be called for prim::device node."
value = self.raw_inputs[0]
if value.type().isSubtypeOf(torch.TensorType.get()):
tensor = typing.cast(torch.TensorType, value.type())
device = tensor.device()
if device:
return str(device)
# Device cannot be statically determined.
return "cpu"
def input_is_none(self, index: int) -> bool:
if index >= len(self.inputs()) or self._raw_input(index) is None:
return True
else:
r_input = self._raw_input(index)
if str(r_input.type()) in ["torch.NoneType", "NoneType"]:
return True
else:
in_node = r_input.node()
if in_node.kind() == "prim::GetAttr":
pt_value = get_value_from_getattr(in_node, self.pt_module)
return pt_value is None
return False
@staticmethod
def _transform_tensor_list_constants_to_listconstruct(graph: torch.Graph):
# Function replaces prim::Constant containing List of Tensors with
# prim::ListConstruct containing prim::Constant Tensors.
assert isinstance(graph, torch.Graph), "Function can be called only with parameters of type torch.Graph."
for node in graph.nodes():
if node.kind() != "prim::Constant":
continue
output_type = node.output().type()
allowed_types = [
output_type.isSubtypeOf(torch.ListType.ofTensors()),
output_type.isSubtypeOf(torch.ListType(torch.OptionalType.ofTensor())),
]
if not any(allowed_types):
continue
const_inputs = []
for val in node.output().toIValue():
const_input = graph.insertConstant(val)
const_input.node().moveBefore(node)
const_input.node().copyMetadata(node)
const_inputs.append(const_input)
replacement = graph.create("prim::ListConstruct", const_inputs)
replacement.insertBefore(node)
replacement.output().setType(torch.ListType.ofTensors())
replacement.copyMetadata(node)
node.output().replaceAllUsesWith(replacement.output())
@staticmethod
def _transform_optional_constants(graph: torch.Graph):
# Function replaces prim::Constant containing torch.OptionalType with
# prim::Constant containing torch.NoneType or type of IValue.
assert isinstance(graph, torch.Graph), "Function can be called only with parameters of type torch.Graph."
for node in graph.nodes():
if node.kind() != "prim::Constant":
continue
output_type = node.output().type()
if not isinstance(output_type, torch.OptionalType):
continue
value = node.output().toIValue()
const_input = graph.insertConstant(value)
const_input.node().moveBefore(node)
const_input.node().copyMetadata(node)
node.output().replaceAllUsesWith(const_input)

@@ -0,0 +1,98 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# flake8: noqa
# mypy: ignore-errors
import logging
import os
import torch
from torch._dynamo.backends.common import fake_tensor_unsupported
from torch._dynamo.backends.registry import register_backend
from torch._inductor.compile_fx import compile_fx
from openvino.frontend import FrontEndManager
from openvino.runtime import Core, Type, PartialShape
from openvino.frontend.pytorch.decoder import TorchScriptPythonDecoder
log = logging.getLogger(__name__)
"""
This is a preview feature in OpenVINO. The TorchScript backend
enables users to compile PyTorch models using torch.compile
with OpenVINO as a target backend in PyTorch applications.
Sample usage:
This sample code loads the resnet50 torchvision model and compiles it with
torch.compile using the OpenVINO backend. The compiled model can then be used
for inference. Only two lines of code need to be added to the PyTorch
application, as marked below:
1) import openvino.frontend.pytorch.torchdynamo.backend
model = torchvision.models.resnet50()
2) model = torch.compile(model, backend="openvino")
"""
@register_backend
@fake_tensor_unsupported
def openvino(subgraph, example_inputs):
return ts_openvino(subgraph, example_inputs)
def ts_openvino(subgraph, example_inputs):
try:
model = torch.jit.script(subgraph)
model.eval()
fr_model = torch.jit.freeze(model)
core = Core()
fe_manager = FrontEndManager()
fe = fe_manager.load_by_framework('pytorch')
dtype_mapping = {
torch.float64: Type.f64,
torch.float32: Type.f32,
torch.float16: Type.f16,
torch.int64: Type.i64,
torch.int32: Type.i32,
torch.uint8: Type.u8,
torch.int8: Type.i8,
torch.bool: Type.boolean,
}
decoder = TorchScriptPythonDecoder(fr_model)
# TODO: Use convert_model instead when mo --convert_model api becomes a part of OV runtime
im = fe.load(decoder)
om = fe.convert(im)
for idx, input_data in enumerate(example_inputs):
om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
om.inputs[idx].get_node().set_partial_shape(PartialShape(list(input_data.shape)))
om.validate_nodes_and_infer_types()
device = "CPU"
if (os.getenv("OPENVINO_TS_BACKEND_DEVICE") is not None):
device = os.getenv("OPENVINO_TS_BACKEND_DEVICE")
assert device in core.available_devices, "Specified device " + device + " is not in the list of OpenVINO Available Devices"
compiled_model = core.compile_model(om, device)
def _call(*args):
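# The flag below is stored on the function object itself: once OpenVINO execution
# fails, all subsequent calls permanently fall back to the original TorchScript subgraph.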
if not hasattr(_call, "execute_on_ov"):
_call.execute_on_ov = True
execute_on_ov = getattr(_call, "execute_on_ov")
if execute_on_ov:
ov_inputs = [a.detach().cpu().numpy() for a in args]
try:
res = compiled_model(ov_inputs)
except Exception as e:
log.debug(f"Failed in OpenVINO execution: {e}")
_call.execute_on_ov = False
return subgraph.forward(*args)
result = [torch.from_numpy(res[out]) for out in compiled_model.outputs]
return result
else:
return subgraph.forward(*args)
return _call
except Exception as e:
log.debug(f"Failed in compilation: {e}")
return compile_fx(subgraph, example_inputs)


@ -0,0 +1,19 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Package: openvino
Low level wrappers for the FrontEnd C++ API.
"""
# flake8: noqa
from openvino.utils import _add_openvino_libs_to_search_path
_add_openvino_libs_to_search_path()
try:
from openvino.frontend.tensorflow.py_tensorflow_frontend import ConversionExtensionTensorflow as ConversionExtension
from openvino.frontend.tensorflow.py_tensorflow_frontend import OpExtensionTensorflow as OpExtension
except ImportError as err:
raise ImportError("OpenVINO Tensorflow frontend is not available, please make sure the frontend is built. " "{}".format(err))


@ -0,0 +1,6 @@
# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# flake8: noqa
from openvino.helpers.packing import pack_data, unpack_data


@ -0,0 +1,87 @@
# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# flake8: noqa
import numpy as np
from typing import Union
from openvino.runtime import Type, Shape
def pack_data(array: np.ndarray, type: Type) -> np.ndarray:
"""Represent array values as u1,u4 or i4 openvino element type and pack them into uint8 numpy array.
If the number of elements in array is odd we pad them with zero value to be able to fit the bit
sequence into the uint8 array.
Example: two uint8 values - [7, 8] can be represented as uint4 values and be packed into one int8
value - [120], because [7, 8] bit representation is [0111, 1000] will be viewed
as [01111000], which is bit representation of [120].
:param array: numpy array with values to pack.
:type array: numpy array
:param type: Type to interpret the array values. Type must be u1, u4 or i4.
:type type: openvino.runtime.Type
"""
assert type in [Type.u1, Type.u4, Type.i4], "Packing algorithm supports only data types stored in 1, 2 or 4 bits"
minimum_regular_dtype = np.int8 if type == Type.i4 else np.uint8
casted_to_regular_type = array.astype(dtype=minimum_regular_dtype, casting="unsafe")
if not np.array_equal(casted_to_regular_type, array):
raise RuntimeError(f'The conversion of array "{array}" to dtype "{minimum_regular_dtype}" results in rounding')
data_size = casted_to_regular_type.size
num_bits = type.bitwidth
assert num_bits < 8 and 8 % num_bits == 0, "Packing algorithm supports only data types stored in 1, 2 or 4 bits"
num_values_fitting_into_uint8 = 8 // num_bits
pad = (-data_size) % num_values_fitting_into_uint8
flattened = casted_to_regular_type.flatten()
padded = np.concatenate((flattened, np.zeros([pad], dtype=minimum_regular_dtype))) # type: ignore
assert padded.size % num_values_fitting_into_uint8 == 0
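# Expand every value into its `num_bits` bits (little-endian), flip each group to
# big-endian bit order, and let np.packbits fold consecutive values into uint8 bytes.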
bit_order_little = (padded[:, None] & (1 << np.arange(num_bits)) > 0).astype(minimum_regular_dtype)
bit_order_big = np.flip(bit_order_little, axis=1) # type: ignore
bit_order_big_flattened = bit_order_big.flatten()
return np.packbits(bit_order_big_flattened)
def unpack_data(array: np.ndarray, type: Type, shape: Union[list, Shape]) -> np.ndarray:
"""Extract openvino element type values from array into new uint8/int8 array given shape.
Example: uint8 value [120] can be represented as two u4 values and be unpacked into [7, 8]
because [120] bit representation is [01111000] will be viewed as [0111, 1000],
which is bit representation of [7, 8].
:param array: numpy array to unpack.
:type array: numpy array
:param type: Type to extract from array values. Type must be u1, u4 or i4.
:type type: openvino.runtime.Type
:param shape: the new shape for the unpacked array.
:type shape: Union[list, openvino.runtime.Shape]
"""
assert type in [Type.u1, Type.u4, Type.i4], "Unpacking algorithm supports only data types stored in 1, 2 or 4 bits"
unpacked = np.unpackbits(array.view(np.uint8))
shape = list(shape)
if type.bitwidth == 1:
return np.resize(unpacked, shape)
else:
unpacked = unpacked.reshape(-1, type.bitwidth)
padding_shape = (unpacked.shape[0], 8 - type.bitwidth)
padding = np.ndarray(padding_shape, np.uint8) # type: np.ndarray
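# For signed i4 data, the padding must replicate the sign bit of each nibble so the
# repacked byte is a valid two's-complement int8; for unsigned types zero padding is enough.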
if type == Type.i4:
for axis, bits in enumerate(unpacked):
if bits[0] == 1:
padding[axis] = np.ones((padding_shape[1],), np.uint8)
else:
padding[axis] = np.zeros((padding_shape[1],), np.uint8)
else:
padding = np.zeros(padding_shape, np.uint8)
padded = np.concatenate((padding, unpacked), 1) # type: ignore
packed = np.packbits(padded, 1)
if type == Type.i4:
return np.resize(packed, shape).astype(dtype=np.int8)
else:
return np.resize(packed, shape)
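# A small round-trip sketch of the example from the docstrings above (values illustrative):
#
#   import numpy as np
#   from openvino.runtime import Type
#
#   packed = pack_data(np.array([7, 8], dtype=np.uint8), Type.u4)   # -> array([120], dtype=uint8)
#   restored = unpack_data(packed, Type.u4, [2])                    # -> array([7, 8], dtype=uint8)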


@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import os
import sys
if sys.platform == "win32":
# Installer, yum, pip installs openvino dlls to the different directories
# and those paths need to be visible to the openvino modules
#
# If you're using a custom installation of openvino,
# add the location of openvino dlls to your system PATH.
#
# looking for the libs in the pip installation path by default.
openvino_libs = [os.path.join(os.path.dirname(__file__), "..", "..", "openvino", "libs")]
# setupvars.bat script set all libs paths to OPENVINO_LIB_PATHS environment variable.
openvino_libs_installer = os.getenv("OPENVINO_LIB_PATHS")
if openvino_libs_installer:
openvino_libs.extend(openvino_libs_installer.split(";"))
for lib in openvino_libs:
lib_path = os.path.join(os.path.dirname(__file__), lib)
if os.path.isdir(lib_path):
# On Windows, with Python >= 3.8, DLLs are no longer imported from the PATH.
if (3, 8) <= sys.version_info:
os.add_dll_directory(os.path.abspath(lib_path))
else:
os.environ["PATH"] = os.path.abspath(lib_path) + ";" + os.environ["PATH"]
from .ie_api import *
__all__ = ["IENetwork", "TensorDesc", "IECore", "Blob", "PreProcessInfo", "get_version"]
__version__ = get_version() # type: ignore

File diff suppressed because it is too large


@ -0,0 +1,88 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# flake8: noqa
# type: ignore
import warnings
warnings.warn(
message="The module is private and following namespace "
"`offline_transformations` will be removed in the future.",
category=FutureWarning,
)
from openvino.utils import _add_openvino_libs_to_search_path, deprecated
_add_openvino_libs_to_search_path()
from openvino._pyopenvino import get_version
from openvino._pyopenvino import serialize as _base_serialize
import openvino._pyopenvino._offline_transformations as _base
__version__ = get_version()
@deprecated(
version="2023.1",
message="The module is private and following namespace "
"`offline_transformations` will be removed in "
"the future, use `openvino.runtime.passes` instead!",
)
def serialize(model, xml_path, bin_path, version):
_base_serialize(model, xml_path, bin_path, version)
@deprecated(version="2023.1", message="The module is private and following namespace " "`offline_transformations` will be removed in " "the future.")
def apply_fused_names_cleanup(model):
_base.apply_fused_names_cleanup(model)
@deprecated(version="2023.1", message="The module is private and following namespace " "`offline_transformations` will be removed in " "the future.")
def apply_moc_transformations(model, cf, smart_reshape=False):
_base.apply_moc_transformations(model, cf, smart_reshape)
@deprecated(version="2023.1", message="The module is private and following namespace " "`offline_transformations` will be removed in " "the future.")
def apply_moc_legacy_transformations(model, params_with_custom_types):
_base.apply_moc_legacy_transformations(model, params_with_custom_types)
@deprecated(version="2023.1", message="The module is private and following namespace " "`offline_transformations` will be removed in " "the future.")
def apply_pot_transformations(model, device):
_base.apply_pot_transformations(model, device)
@deprecated(version="2023.1", message="The module is private and following namespace " "`offline_transformations` will be removed in " "the future.")
def apply_low_latency_transformation(model, use_const_initializer):
_base.apply_low_latency_transformation(model, use_const_initializer)
@deprecated(version="2023.1", message="The module is private and following namespace " "`offline_transformations` will be removed in " "the future.")
def apply_pruning_transformation(model):
_base.apply_pruning_transformation(model)
@deprecated(version="2023.1", message="The module is private and following namespace " "`offline_transformations` will be removed in " "the future.")
def apply_make_stateful_transformation(model, param_res_names):
_base.apply_make_stateful_transformation(model, param_res_names)
@deprecated(version="2023.1", message="The module is private and following namespace " "`offline_transformations` will be removed in " "the future.")
def compress_model_transformation(model):
_base.compress_model_transformation(model)
@deprecated(version="2023.1",
message="The module is private and following namespace "
"`offline_transformations` will be removed in the future. "
"This transformation will be enabled as a part of read_model method of ov::Core "
"and convert method of ov::Frontend classes.")
def compress_quantize_weights_transformation(model):
_base.compress_quantize_weights_transformation(model)
@deprecated(version="2023.1", message="The module is private and following namespace " "`offline_transformations` will be removed in " "the future.")
def convert_sequence_to_tensor_iterator_transformation(model):
_base.convert_sequence_to_tensor_iterator_transformation(model)


@ -0,0 +1,30 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Package: openvino
Low level wrappers for the PrePostProcessing C++ API.
"""
# flake8: noqa
from openvino.utils import _add_openvino_libs_to_search_path
_add_openvino_libs_to_search_path()
from openvino._pyopenvino import get_version
__version__ = get_version()
# main classes
from openvino._pyopenvino.preprocess import InputInfo
from openvino._pyopenvino.preprocess import OutputInfo
from openvino._pyopenvino.preprocess import InputTensorInfo
from openvino._pyopenvino.preprocess import OutputTensorInfo
from openvino._pyopenvino.preprocess import InputModelInfo
from openvino._pyopenvino.preprocess import OutputModelInfo
from openvino._pyopenvino.preprocess import PrePostProcessor
from openvino._pyopenvino.preprocess import PreProcessSteps
from openvino._pyopenvino.preprocess import PostProcessSteps
from openvino._pyopenvino.preprocess import ColorFormat
from openvino._pyopenvino.preprocess import ResizeAlgorithm
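# A minimal preprocessing sketch, assuming a model loaded elsewhere (import paths,
# model file name and layouts are illustrative, not prescriptive):
#
#   from openvino.runtime import Core, Layout, Type
#   from openvino.preprocess import PrePostProcessor, ResizeAlgorithm
#
#   core = Core()
#   model = core.read_model("model.xml")
#   ppp = PrePostProcessor(model)
#   ppp.input().tensor().set_element_type(Type.u8).set_layout(Layout("NHWC"))
#   ppp.input().model().set_layout(Layout("NCHW"))
#   ppp.input().preprocess().resize(ResizeAlgorithm.RESIZE_LINEAR)
#   model = ppp.build()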


@ -0,0 +1,12 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# flake8: noqa
# type: ignore
import warnings
warnings.warn(message="The module is private and following namespace " "`pyopenvino` will be removed in the future", category=FutureWarning)
from openvino._pyopenvino import *


@ -0,0 +1,86 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""openvino module namespace, exposing factory functions for all ops and other classes."""
# noqa: F401
from openvino.utils import _add_openvino_libs_to_search_path
_add_openvino_libs_to_search_path()
from openvino._pyopenvino import get_version
__version__ = get_version()
# Openvino pybind bindings and python extended classes
from openvino._pyopenvino import Dimension
from openvino._pyopenvino import Model
from openvino._pyopenvino import Input
from openvino._pyopenvino import Output
from openvino._pyopenvino import Node
from openvino._pyopenvino import Type
from openvino._pyopenvino import PartialShape
from openvino._pyopenvino import Shape
from openvino._pyopenvino import Strides
from openvino._pyopenvino import CoordinateDiff
from openvino._pyopenvino import DiscreteTypeInfo
from openvino._pyopenvino import AxisSet
from openvino._pyopenvino import AxisVector
from openvino._pyopenvino import Coordinate
from openvino._pyopenvino import Layout
from openvino._pyopenvino import ConstOutput
from openvino._pyopenvino import layout_helpers
from openvino._pyopenvino import OVAny
from openvino._pyopenvino import RTMap
from openvino.runtime.ie_api import Core
from openvino.runtime.ie_api import CompiledModel
from openvino.runtime.ie_api import InferRequest
from openvino.runtime.ie_api import AsyncInferQueue
from openvino._pyopenvino import Version
from openvino._pyopenvino import Tensor
from openvino._pyopenvino import Extension
from openvino._pyopenvino import ProfilingInfo
from openvino._pyopenvino import get_batch
from openvino._pyopenvino import set_batch
from openvino._pyopenvino import serialize
from openvino._pyopenvino import shutdown
# Import opsets
from openvino.runtime import opset1
from openvino.runtime import opset2
from openvino.runtime import opset3
from openvino.runtime import opset4
from openvino.runtime import opset5
from openvino.runtime import opset6
from openvino.runtime import opset7
from openvino.runtime import opset8
from openvino.runtime import opset9
from openvino.runtime import opset10
from openvino.runtime import opset11
# Import properties API
from openvino.runtime import properties
# Helper functions for openvino module
from openvino.runtime.ie_api import tensor_from_file
from openvino.runtime.ie_api import compile_model
# Extend Node class to support binary operators
Node.__add__ = opset11.add
Node.__sub__ = opset11.subtract
Node.__mul__ = opset11.multiply
Node.__div__ = opset11.divide
Node.__truediv__ = opset11.divide
Node.__radd__ = lambda left, right: opset11.add(right, left)
Node.__rsub__ = lambda left, right: opset11.subtract(right, left)
Node.__rmul__ = lambda left, right: opset11.multiply(right, left)
Node.__rdiv__ = lambda left, right: opset11.divide(right, left)
Node.__rtruediv__ = lambda left, right: opset11.divide(right, left)
Node.__eq__ = opset11.equal
Node.__ne__ = opset11.not_equal
Node.__lt__ = opset11.less
Node.__le__ = opset11.less_equal
Node.__gt__ = opset11.greater
Node.__ge__ = opset11.greater_equal
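# With these overloads in place, graphs can be written with plain Python operators.
# A small sketch (parameter shapes are illustrative):
#
#   from openvino.runtime import opset11 as ops
#
#   a = ops.parameter([2, 2], name="a")
#   b = ops.parameter([2, 2], name="b")
#   expr = a + b * a          # dispatched to opset11.add / opset11.multiply
#   mask = a >= b             # dispatched to opset11.greater_equal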


@ -0,0 +1,17 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""openvino exceptions hierarchy. All exceptions are descendants of OVError."""
class OVError(Exception):
"""Base class for OV exceptions."""
class UserInputError(OVError):
"""User provided unexpected input."""
class OVTypeError(OVError, TypeError):
"""Type mismatch error."""


@ -0,0 +1,465 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from typing import Any, Iterable, Union, Dict, Optional
from pathlib import Path
import numpy as np
from openvino._pyopenvino import Model
from openvino._pyopenvino import Core as CoreBase
from openvino._pyopenvino import CompiledModel as CompiledModelBase
from openvino._pyopenvino import AsyncInferQueue as AsyncInferQueueBase
from openvino._pyopenvino import ConstOutput
from openvino._pyopenvino import Tensor
from openvino.runtime.utils.data_helpers import (
OVDict,
_InferRequestWrapper,
_data_dispatch,
tensor_from_file,
)
class InferRequest(_InferRequestWrapper):
"""InferRequest class represents infer request which can be run in asynchronous or synchronous manners."""
def infer(self, inputs: Any = None, shared_memory: bool = False) -> OVDict:
"""Infers specified input(s) in synchronous mode.
Blocks all methods of InferRequest while the request is running.
Calling any method during that time will raise an exception.
The allowed types of keys in the `inputs` dictionary are:
(1) `int`
(2) `str`
(3) `openvino.runtime.ConstOutput`
The allowed types of values in the `inputs` are:
(1) `numpy.ndarray` and all the types that are castable to it, e.g. `torch.Tensor`
(2) `openvino.runtime.Tensor`
It can be called with a single `openvino.runtime.Tensor` or `numpy.ndarray`,
but this works only for single-input models. When the model has more inputs,
the function throws an error.
:param inputs: Data to be set on input tensors.
:type inputs: Any, optional
:param shared_memory: Enables `shared_memory` mode.
If set to `False`, the data dispatcher will safely copy data
to existing Tensors (including up- or down-casting according to data type,
resizing of the input Tensor). Keeps Tensor inputs "as-is".
If set to `True` the data dispatcher tries to provide "zero-copy"
Tensors for every input in form of:
* `numpy.ndarray` and all the types that are castable to it, e.g. `torch.Tensor`
Data that is going to be copied:
* `numpy.ndarray` which are not C contiguous
* inputs which data types are mismatched from Infer Request's inputs
* inputs that should be in `BF16` data type
* scalar inputs (i.e. `np.float_`/`int`/`float`)
Keeps Tensor inputs "as-is".
Note: Use with extra care, shared data can be modified during runtime!
Note: Using `shared_memory` may result in extra memory overhead.
Default value: False
:type shared_memory: bool, optional
:return: Dictionary of results from output tensors with port/int/str keys.
:rtype: OVDict
"""
return OVDict(super().infer(_data_dispatch(
self,
inputs,
is_shared=shared_memory,
)))
def start_async(
self,
inputs: Any = None,
userdata: Any = None,
shared_memory: bool = False,
) -> None:
"""Starts inference of specified input(s) in asynchronous mode.
Returns immediately; inference also starts immediately.
Calling any method on the `InferRequest` object while the request is running
will lead to throwing exceptions.
The allowed types of keys in the `inputs` dictionary are:
(1) `int`
(2) `str`
(3) `openvino.runtime.ConstOutput`
The allowed types of values in the `inputs` are:
(1) `numpy.ndarray` and all the types that are castable to it, e.g. `torch.Tensor`
(2) `openvino.runtime.Tensor`
It can be called with a single `openvino.runtime.Tensor` or `numpy.ndarray`,
but this works only for single-input models. When the model has more inputs,
the function throws an error.
:param inputs: Data to be set on input tensors.
:type inputs: Any, optional
:param userdata: Any data that will be passed inside the callback.
:type userdata: Any
:param shared_memory: Enables `shared_memory` mode.
If set to `False`, the data dispatcher will safely copy data
to existing Tensors (including up- or down-casting according to data type,
resizing of the input Tensor). Keeps Tensor inputs "as-is".
If set to `True` the data dispatcher tries to provide "zero-copy"
Tensors for every input in form of:
* `numpy.ndarray` and all the types that are castable to it, e.g. `torch.Tensor`
Data that is going to be copied:
* `numpy.ndarray` which are not C contiguous
* inputs which data types are mismatched from Infer Request's inputs
* inputs that should be in `BF16` data type
* scalar inputs (i.e. `np.float_`/`int`/`float`)
Keeps Tensor inputs "as-is".
Note: Use with extra care, shared data can be modified during runtime!
Note: Using `shared_memory` may result in extra memory overhead.
Default value: False
:type shared_memory: bool, optional
"""
super().start_async(
_data_dispatch(
self,
inputs,
is_shared=shared_memory,
),
userdata,
)
@property
def results(self) -> OVDict:
"""Gets all outputs tensors of this InferRequest.
:return: Dictionary of results from output tensors with ports as keys.
:rtype: Dict[openvino.runtime.ConstOutput, numpy.array]
"""
return OVDict(super().results)
class CompiledModel(CompiledModelBase):
"""CompiledModel class.
CompiledModel represents a Model that is compiled for a specific device by applying
multiple optimization transformations, then mapping to compute kernels.
"""
def __init__(self, other: CompiledModelBase) -> None:
# Private member to store an already created InferRequest
self._infer_request: Optional[InferRequest] = None
super().__init__(other)
def create_infer_request(self) -> InferRequest:
"""Creates an inference request object used to infer the compiled model.
The created request has allocated input and output tensors.
:return: New InferRequest object.
:rtype: openvino.runtime.InferRequest
"""
return InferRequest(super().create_infer_request())
def infer_new_request(self, inputs: Union[dict, list, tuple, Tensor, np.ndarray] = None) -> OVDict:
"""Infers specified input(s) in synchronous mode.
Blocks all methods of CompiledModel while request is running.
The method creates a new temporary InferRequest and runs inference on it.
For performance-critical workflows and advanced pipelines, it is advised
to use a dedicated InferRequest instead.
The allowed types of keys in the `inputs` dictionary are:
(1) `int`
(2) `str`
(3) `openvino.runtime.ConstOutput`
The allowed types of values in the `inputs` are:
(1) `numpy.ndarray` and all the types that are castable to it, e.g. `torch.Tensor`
(2) `openvino.runtime.Tensor`
It can be called with a single `openvino.runtime.Tensor` or `numpy.ndarray`,
but this works only for single-input models. When the model has more inputs,
the function throws an error.
:param inputs: Data to be set on input tensors.
:type inputs: Union[Dict[keys, values], List[values], Tuple[values], Tensor, numpy.ndarray], optional
:return: Dictionary of results from output tensors with port/int/str keys.
:rtype: OVDict
"""
# It returns a wrapped python InferRequest and then calls the
# overloaded functions of the InferRequest class
return self.create_infer_request().infer(inputs)
def __call__(self,
inputs: Union[dict, list, tuple, Tensor, np.ndarray] = None,
shared_memory: bool = True) -> OVDict:
"""Callable infer wrapper for CompiledModel.
Infers specified input(s) in synchronous mode.
Blocks all methods of CompiledModel while request is running.
The method creates a new temporary InferRequest and runs inference on it.
For performance-critical workflows and advanced pipelines, it is advised
to use a dedicated InferRequest instead.
This method stores created `InferRequest` inside `CompiledModel` object,
which can be later reused in consecutive calls.
The allowed types of keys in the `inputs` dictionary are:
(1) `int`
(2) `str`
(3) `openvino.runtime.ConstOutput`
The allowed types of values in the `inputs` are:
(1) `numpy.ndarray` and all the types that are castable to it, e.g. `torch.Tensor`
(2) `openvino.runtime.Tensor`
It can be called with a single `openvino.runtime.Tensor` or `numpy.ndarray`,
but this works only for single-input models. When the model has more inputs,
the function throws an error.
:param inputs: Data to be set on input tensors.
:type inputs: Union[Dict[keys, values], List[values], Tuple[values], Tensor, numpy.ndarray], optional
:param shared_memory: Enables `shared_memory` mode.
If set to `False`, the data dispatcher will safely copy data
to existing Tensors (including up- or down-casting according to data type,
resizing of the input Tensor). Keeps Tensor inputs "as-is".
If set to `True` the data dispatcher tries to provide "zero-copy"
Tensors for every input in form of:
* `numpy.ndarray` and all the types that are castable to it, e.g. `torch.Tensor`
Data that is going to be copied:
* `numpy.ndarray` which are not C contiguous
* inputs which data types are mismatched from Infer Request's inputs
* inputs that should be in `BF16` data type
* scalar inputs (i.e. `np.float_`/`int`/`float`)
Keeps Tensor inputs "as-is".
Note: Use with extra care, shared data can be modified during runtime!
Note: Using `shared_memory` may result in extra memory overhead.
Default value: True
:type shared_memory: bool, optional
:return: Dictionary of results from output tensors with port/int/str as keys.
:rtype: OVDict
"""
if self._infer_request is None:
self._infer_request = self.create_infer_request()
return self._infer_request.infer(
inputs,
shared_memory=shared_memory,
)
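# A minimal sketch of the callable interface above (model path and input shape are
# illustrative):
#
#   import numpy as np
#   from openvino.runtime import Core
#
#   compiled = Core().compile_model("model.xml", "CPU")
#   results = compiled(np.zeros((1, 3, 224, 224), dtype=np.float32))   # OVDict keyed by output ports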
class AsyncInferQueue(AsyncInferQueueBase):
"""AsyncInferQueue with a pool of asynchronous requests.
AsyncInferQueue represents a helper that creates a pool of asynchronous
InferRequests and provides synchronization functions to control the flow of
a simple pipeline.
"""
def __iter__(self) -> Iterable[InferRequest]:
"""Allows to iterate over AsyncInferQueue.
:return: a generator that yields InferRequests.
:rtype: Iterable[openvino.runtime.InferRequest]
"""
return (InferRequest(x) for x in super().__iter__())
def __getitem__(self, i: int) -> InferRequest:
"""Gets InferRequest from the pool with given i id.
:param i: InferRequest id.
:type i: int
:return: InferRequest from the pool with the given id.
:rtype: openvino.runtime.InferRequest
"""
return InferRequest(super().__getitem__(i))
def start_async(
self,
inputs: Any = None,
userdata: Any = None,
shared_memory: bool = False,
) -> None:
"""Run asynchronous inference using the next available InferRequest from the pool.
The allowed types of keys in the `inputs` dictionary are:
(1) `int`
(2) `str`
(3) `openvino.runtime.ConstOutput`
The allowed types of values in the `inputs` are:
(1) `numpy.ndarray` and all the types that are castable to it, e.g. `torch.Tensor`
(2) `openvino.runtime.Tensor`
It can be called with a single `openvino.runtime.Tensor` or `numpy.ndarray`,
but this works only for single-input models. When the model has more inputs,
the function throws an error.
:param inputs: Data to be set on input tensors of the next available InferRequest.
:type inputs: Any, optional
:param userdata: Any data that will be passed to a callback.
:type userdata: Any, optional
:param shared_memory: Enables `shared_memory` mode.
If set to `False`, the data dispatcher will safely copy data
to existing Tensors (including up- or down-casting according to data type,
resizing of the input Tensor). Keeps Tensor inputs "as-is".
If set to `True` the data dispatcher tries to provide "zero-copy"
Tensors for every input in form of:
* `numpy.ndarray` and all the types that are castable to it, e.g. `torch.Tensor`
Data that is going to be copied:
* `numpy.ndarray` which are not C contiguous
* inputs which data types are mismatched from Infer Request's inputs
* inputs that should be in `BF16` data type
* scalar inputs (i.e. `np.float_`/`int`/`float`)
Keeps Tensor inputs "as-is".
Note: Use with extra care, shared data can be modified during runtime!
Note: Using `shared_memory` may result in extra memory overhead.
Default value: False
"""
super().start_async(
_data_dispatch(
self[self.get_idle_request_id()],
inputs,
is_shared=shared_memory,
),
userdata,
)
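# A minimal pipelining sketch for the queue above (`compiled` and `batches` are assumed
# to exist; the callback signature is (request, userdata)):
#
#   queue = AsyncInferQueue(compiled, 4)
#   queue.set_callback(lambda request, userdata: print(userdata, request.results))
#   for i, batch in enumerate(batches):
#       queue.start_async({0: batch}, userdata=i)
#   queue.wait_all()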
class Core(CoreBase):
"""Core class represents OpenVINO runtime Core entity.
User applications can create several Core class instances, but in this
case, the underlying plugins are created multiple times and not shared
between several Core instances. The recommended way is to have a single
Core instance per application.
"""
def compile_model(
self,
model: Union[Model, str, Path],
device_name: Optional[str] = None,
config: Optional[dict] = None,
) -> CompiledModel:
"""Creates a compiled model.
Creates a compiled model from a source Model object or
reads model and creates a compiled model from IR / ONNX / PDPD / TF and TFLite file.
This can be more efficient than the read_model + compile_model(model_in_memory_object) flow,
especially when caching is enabled and a cached model is available.
If device_name is not specified, the default OpenVINO device will be selected by AUTO plugin.
Users can create as many compiled models as they need, and use them simultaneously
(up to the limitation of the hardware resources).
:param model: Model acquired from read_model function or a path to a model in IR / ONNX / PDPD /
TF and TFLite format.
:type model: Union[openvino.runtime.Model, str, pathlib.Path]
:param device_name: Optional. Name of the device to load the model to. If not specified,
the default OpenVINO device will be selected by AUTO plugin.
:type device_name: str
:param config: Optional dict of pairs:
(property name, property value) relevant only for this load operation.
:type config: dict, optional
:return: A compiled model.
:rtype: openvino.runtime.CompiledModel
"""
if device_name is None:
return CompiledModel(
super().compile_model(model, {} if config is None else config),
)
return CompiledModel(
super().compile_model(model, device_name, {} if config is None else config),
)
def import_model(
self,
model_stream: bytes,
device_name: str,
config: Optional[dict] = None,
) -> CompiledModel:
"""Imports a compiled model from a previously exported one.
:param model_stream: Input stream, containing a model previously exported, using export_model method.
:type model_stream: bytes
:param device_name: Name of device to which compiled model is imported.
Note: if device_name is not used to compile the original model,
an exception is thrown.
:type device_name: str
:param config: Optional dict of pairs:
(property name, property value) relevant only for this load operation.
:type config: dict, optional
:return: A compiled model.
:rtype: openvino.runtime.CompiledModel
:Example:
.. code-block:: python
user_stream = compiled.export_model()
with open('./my_model', 'wb') as f:
f.write(user_stream)
# ...
new_compiled = core.import_model(user_stream, "CPU")
.. code-block:: python
user_stream = io.BytesIO()
compiled.export_model(user_stream)
with open('./my_model', 'wb') as f:
f.write(user_stream.getvalue()) # or read() if seek(0) was applied before
# ...
new_compiled = core.import_model(user_stream, "CPU")
"""
return CompiledModel(
super().import_model(
model_stream,
device_name,
{} if config is None else config,
),
)
def compile_model(model_path: Union[str, Path]) -> CompiledModel:
"""Compact method to compile model with AUTO plugin.
:param model_path: Path to file with model.
:type model_path: str, pathlib.Path
:return: A compiled model
:rtype: openvino.runtime.CompiledModel
"""
core = Core()
return core.compile_model(model_path, "AUTO")
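# A one-liner sketch using the helper above together with the callable CompiledModel
# (file name and input shape are illustrative):
#
#   import numpy as np
#   compiled = compile_model("model.onnx")
#   out = compiled(np.random.rand(1, 3, 224, 224).astype(np.float32))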


@ -0,0 +1,26 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Package: openvino.op
Low level wrappers for the c++ api in ov::op.
"""
# flake8: noqa
import numpy as np
from openvino._pyopenvino.op import Constant
"""Retrieve Constant inner data.
Internally uses PyBind11 Numpy's buffer protocol.
:return: Numpy array containing the internally stored constant data.
"""
Constant.get_data = lambda self: np.array(self, copy=True)
from openvino._pyopenvino.op import Parameter
from openvino._pyopenvino.op import if_op
from openvino._pyopenvino.op import loop
from openvino._pyopenvino.op import tensor_iterator
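# A small sketch of the `get_data` helper patched in above (values are illustrative):
#
#   from openvino.runtime import Type, Shape
#
#   const = Constant(Type.f32, Shape([2, 2]), [1.0, 2.0, 3.0, 4.0])
#   payload = const.get_data()   # independent numpy copy of the stored values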


@ -0,0 +1,22 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Package: openvino.op.util
Low level wrappers for the c++ api in ov::op::util.
"""
# flake8: noqa
from openvino._pyopenvino.op.util import UnaryElementwiseArithmetic
from openvino._pyopenvino.op.util import BinaryElementwiseComparison
from openvino._pyopenvino.op.util import BinaryElementwiseArithmetic
from openvino._pyopenvino.op.util import BinaryElementwiseLogical
from openvino._pyopenvino.op.util import ArithmeticReduction
from openvino._pyopenvino.op.util import IndexReduction
from openvino._pyopenvino.op.util import VariableInfo
from openvino._pyopenvino.op.util import Variable
from openvino._pyopenvino.op.util import MergedInputDescription
from openvino._pyopenvino.op.util import InvariantInputDescription
from openvino._pyopenvino.op.util import SliceInputDescription
from openvino._pyopenvino.op.util import ConcatOutputDescription
from openvino._pyopenvino.op.util import BodyOutputDescription


@ -0,0 +1,112 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from openvino.runtime.opset1.ops import absolute
from openvino.runtime.opset1.ops import absolute as abs
from openvino.runtime.opset1.ops import acos
from openvino.runtime.opset1.ops import add
from openvino.runtime.opset1.ops import asin
from openvino.runtime.opset1.ops import atan
from openvino.runtime.opset1.ops import avg_pool
from openvino.runtime.opset1.ops import batch_norm_inference
from openvino.runtime.opset1.ops import binary_convolution
from openvino.runtime.opset1.ops import broadcast
from openvino.runtime.opset1.ops import ceiling
from openvino.runtime.opset1.ops import ceiling as ceil
from openvino.runtime.opset1.ops import clamp
from openvino.runtime.opset1.ops import concat
from openvino.runtime.opset1.ops import constant
from openvino.runtime.opset1.ops import convert
from openvino.runtime.opset1.ops import convert_like
from openvino.runtime.opset1.ops import convolution
from openvino.runtime.opset1.ops import convolution_backprop_data
from openvino.runtime.opset1.ops import cos
from openvino.runtime.opset1.ops import cosh
from openvino.runtime.opset1.ops import ctc_greedy_decoder
from openvino.runtime.opset1.ops import deformable_convolution
from openvino.runtime.opset1.ops import deformable_psroi_pooling
from openvino.runtime.opset1.ops import depth_to_space
from openvino.runtime.opset1.ops import detection_output
from openvino.runtime.opset1.ops import divide
from openvino.runtime.opset1.ops import elu
from openvino.runtime.opset1.ops import equal
from openvino.runtime.opset1.ops import erf
from openvino.runtime.opset1.ops import exp
from openvino.runtime.opset1.ops import fake_quantize
from openvino.runtime.opset1.ops import floor
from openvino.runtime.opset1.ops import floor_mod
from openvino.runtime.opset1.ops import gather
from openvino.runtime.opset1.ops import gather_tree
from openvino.runtime.opset1.ops import greater
from openvino.runtime.opset1.ops import greater_equal
from openvino.runtime.opset1.ops import grn
from openvino.runtime.opset1.ops import group_convolution
from openvino.runtime.opset1.ops import group_convolution_backprop_data
from openvino.runtime.opset1.ops import hard_sigmoid
from openvino.runtime.opset1.ops import interpolate
from openvino.runtime.opset1.ops import less
from openvino.runtime.opset1.ops import less_equal
from openvino.runtime.opset1.ops import log
from openvino.runtime.opset1.ops import logical_and
from openvino.runtime.opset1.ops import logical_not
from openvino.runtime.opset1.ops import logical_or
from openvino.runtime.opset1.ops import logical_xor
from openvino.runtime.opset1.ops import lrn
from openvino.runtime.opset1.ops import lstm_cell
from openvino.runtime.opset1.ops import lstm_sequence
from openvino.runtime.opset1.ops import matmul
from openvino.runtime.opset1.ops import max_pool
from openvino.runtime.opset1.ops import maximum
from openvino.runtime.opset1.ops import minimum
from openvino.runtime.opset1.ops import mod
from openvino.runtime.opset1.ops import multiply
from openvino.runtime.opset1.ops import negative
from openvino.runtime.opset1.ops import non_max_suppression
from openvino.runtime.opset1.ops import normalize_l2
from openvino.runtime.opset1.ops import not_equal
from openvino.runtime.opset1.ops import one_hot
from openvino.runtime.opset1.ops import pad
from openvino.runtime.opset1.ops import parameter
from openvino.runtime.opset1.ops import power
from openvino.runtime.opset1.ops import prelu
from openvino.runtime.opset1.ops import prior_box
from openvino.runtime.opset1.ops import prior_box_clustered
from openvino.runtime.opset1.ops import psroi_pooling
from openvino.runtime.opset1.ops import proposal
from openvino.runtime.opset1.ops import range
from openvino.runtime.opset1.ops import reduce_logical_and
from openvino.runtime.opset1.ops import reduce_logical_or
from openvino.runtime.opset1.ops import reduce_max
from openvino.runtime.opset1.ops import reduce_mean
from openvino.runtime.opset1.ops import reduce_min
from openvino.runtime.opset1.ops import reduce_prod
from openvino.runtime.opset1.ops import reduce_sum
from openvino.runtime.opset1.ops import region_yolo
from openvino.runtime.opset1.ops import relu
from openvino.runtime.opset1.ops import reshape
from openvino.runtime.opset1.ops import result
from openvino.runtime.opset1.ops import reverse_sequence
from openvino.runtime.opset1.ops import select
from openvino.runtime.opset1.ops import selu
from openvino.runtime.opset1.ops import shape_of
from openvino.runtime.opset1.ops import sigmoid
from openvino.runtime.opset1.ops import sign
from openvino.runtime.opset1.ops import sin
from openvino.runtime.opset1.ops import sinh
from openvino.runtime.opset1.ops import softmax
from openvino.runtime.opset1.ops import space_to_depth
from openvino.runtime.opset1.ops import split
from openvino.runtime.opset1.ops import sqrt
from openvino.runtime.opset1.ops import squared_difference
from openvino.runtime.opset1.ops import squeeze
from openvino.runtime.opset1.ops import strided_slice
from openvino.runtime.opset1.ops import subtract
from openvino.runtime.opset1.ops import tan
from openvino.runtime.opset1.ops import tanh
from openvino.runtime.opset1.ops import tensor_iterator
from openvino.runtime.opset1.ops import tile
from openvino.runtime.opset1.ops import topk
from openvino.runtime.opset1.ops import transpose
from openvino.runtime.opset1.ops import unsqueeze
from openvino.runtime.opset1.ops import variadic_split

File diff suppressed because it is too large


@ -0,0 +1,178 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from openvino.runtime.opset1.ops import absolute
from openvino.runtime.opset1.ops import absolute as abs
from openvino.runtime.opset1.ops import acos
from openvino.runtime.opset4.ops import acosh
from openvino.runtime.opset8.ops import adaptive_avg_pool
from openvino.runtime.opset8.ops import adaptive_max_pool
from openvino.runtime.opset1.ops import add
from openvino.runtime.opset1.ops import asin
from openvino.runtime.opset4.ops import asinh
from openvino.runtime.opset3.ops import assign
from openvino.runtime.opset1.ops import atan
from openvino.runtime.opset4.ops import atanh
from openvino.runtime.opset1.ops import avg_pool
from openvino.runtime.opset5.ops import batch_norm_inference
from openvino.runtime.opset2.ops import batch_to_space
from openvino.runtime.opset1.ops import binary_convolution
from openvino.runtime.opset3.ops import broadcast
from openvino.runtime.opset3.ops import bucketize
from openvino.runtime.opset1.ops import ceiling
from openvino.runtime.opset1.ops import ceiling as ceil
from openvino.runtime.opset1.ops import clamp
from openvino.runtime.opset1.ops import concat
from openvino.runtime.opset1.ops import constant
from openvino.runtime.opset1.ops import convert
from openvino.runtime.opset1.ops import convert_like
from openvino.runtime.opset1.ops import convolution
from openvino.runtime.opset1.ops import convolution_backprop_data
from openvino.runtime.opset1.ops import cos
from openvino.runtime.opset1.ops import cosh
from openvino.runtime.opset1.ops import ctc_greedy_decoder
from openvino.runtime.opset6.ops import ctc_greedy_decoder_seq_len
from openvino.runtime.opset4.ops import ctc_loss
from openvino.runtime.opset3.ops import cum_sum
from openvino.runtime.opset3.ops import cum_sum as cumsum
from openvino.runtime.opset8.ops import deformable_convolution
from openvino.runtime.opset1.ops import deformable_psroi_pooling
from openvino.runtime.opset1.ops import depth_to_space
from openvino.runtime.opset8.ops import detection_output
from openvino.runtime.opset7.ops import dft
from openvino.runtime.opset1.ops import divide
from openvino.runtime.opset7.ops import einsum
from openvino.runtime.opset1.ops import elu
from openvino.runtime.opset3.ops import embedding_bag_offsets_sum
from openvino.runtime.opset3.ops import embedding_bag_packed_sum
from openvino.runtime.opset3.ops import embedding_segments_sum
from openvino.runtime.opset3.ops import extract_image_patches
from openvino.runtime.opset1.ops import equal
from openvino.runtime.opset1.ops import erf
from openvino.runtime.opset1.ops import exp
from openvino.runtime.opset9.ops import eye
from openvino.runtime.opset1.ops import fake_quantize
from openvino.runtime.opset1.ops import floor
from openvino.runtime.opset1.ops import floor_mod
from openvino.runtime.opset8.ops import gather
from openvino.runtime.opset6.ops import gather_elements
from openvino.runtime.opset8.ops import gather_nd
from openvino.runtime.opset1.ops import gather_tree
from openvino.runtime.opset7.ops import gelu
from openvino.runtime.opset9.ops import generate_proposals
from openvino.runtime.opset1.ops import greater
from openvino.runtime.opset1.ops import greater_equal
from openvino.runtime.opset9.ops import grid_sample
from openvino.runtime.opset1.ops import grn
from openvino.runtime.opset1.ops import group_convolution
from openvino.runtime.opset1.ops import group_convolution_backprop_data
from openvino.runtime.opset3.ops import gru_cell
from openvino.runtime.opset5.ops import gru_sequence
from openvino.runtime.opset1.ops import hard_sigmoid
from openvino.runtime.opset5.ops import hsigmoid
from openvino.runtime.opset4.ops import hswish
from openvino.runtime.opset7.ops import idft
from openvino.runtime.opset8.ops import if_op
from openvino.runtime.opset10.ops import interpolate
from openvino.runtime.opset9.ops import irdft
from openvino.runtime.opset10.ops import is_finite
from openvino.runtime.opset10.ops import is_inf
from openvino.runtime.opset10.ops import is_nan
from openvino.runtime.opset8.ops import i420_to_bgr
from openvino.runtime.opset8.ops import i420_to_rgb
from openvino.runtime.opset1.ops import less
from openvino.runtime.opset1.ops import less_equal
from openvino.runtime.opset1.ops import log
from openvino.runtime.opset1.ops import logical_and
from openvino.runtime.opset1.ops import logical_not
from openvino.runtime.opset1.ops import logical_or
from openvino.runtime.opset1.ops import logical_xor
from openvino.runtime.opset5.ops import log_softmax
from openvino.runtime.opset5.ops import loop
from openvino.runtime.opset1.ops import lrn
from openvino.runtime.opset4.ops import lstm_cell
from openvino.runtime.opset5.ops import lstm_sequence
from openvino.runtime.opset1.ops import matmul
from openvino.runtime.opset8.ops import matrix_nms
from openvino.runtime.opset8.ops import max_pool
from openvino.runtime.opset1.ops import maximum
from openvino.runtime.opset1.ops import minimum
from openvino.runtime.opset4.ops import mish
from openvino.runtime.opset1.ops import mod
from openvino.runtime.opset9.ops import multiclass_nms
from openvino.runtime.opset1.ops import multiply
from openvino.runtime.opset6.ops import mvn
from openvino.runtime.opset1.ops import negative
from openvino.runtime.opset9.ops import non_max_suppression
from openvino.runtime.opset3.ops import non_zero
from openvino.runtime.opset1.ops import normalize_l2
from openvino.runtime.opset1.ops import not_equal
from openvino.runtime.opset8.ops import nv12_to_bgr
from openvino.runtime.opset8.ops import nv12_to_rgb
from openvino.runtime.opset1.ops import one_hot
from openvino.runtime.opset1.ops import pad
from openvino.runtime.opset1.ops import parameter
from openvino.runtime.opset1.ops import power
from openvino.runtime.opset1.ops import prelu
from openvino.runtime.opset8.ops import prior_box
from openvino.runtime.opset1.ops import prior_box_clustered
from openvino.runtime.opset1.ops import psroi_pooling
from openvino.runtime.opset4.ops import proposal
from openvino.runtime.opset1.ops import range
from openvino.runtime.opset8.ops import random_uniform
from openvino.runtime.opset9.ops import rdft
from openvino.runtime.opset3.ops import read_value
from openvino.runtime.opset4.ops import reduce_l1
from openvino.runtime.opset4.ops import reduce_l2
from openvino.runtime.opset1.ops import reduce_logical_and
from openvino.runtime.opset1.ops import reduce_logical_or
from openvino.runtime.opset1.ops import reduce_max
from openvino.runtime.opset1.ops import reduce_mean
from openvino.runtime.opset1.ops import reduce_min
from openvino.runtime.opset1.ops import reduce_prod
from openvino.runtime.opset1.ops import reduce_sum
from openvino.runtime.opset1.ops import region_yolo
from openvino.runtime.opset2.ops import reorg_yolo
from openvino.runtime.opset1.ops import relu
from openvino.runtime.opset1.ops import reshape
from openvino.runtime.opset1.ops import result
from openvino.runtime.opset1.ops import reverse_sequence
from openvino.runtime.opset3.ops import rnn_cell
from openvino.runtime.opset5.ops import rnn_sequence
from openvino.runtime.opset9.ops import roi_align
from openvino.runtime.opset2.ops import roi_pooling
from openvino.runtime.opset7.ops import roll
from openvino.runtime.opset5.ops import round
from openvino.runtime.opset3.ops import scatter_elements_update
from openvino.runtime.opset3.ops import scatter_update
from openvino.runtime.opset1.ops import select
from openvino.runtime.opset1.ops import selu
from openvino.runtime.opset3.ops import shape_of
from openvino.runtime.opset3.ops import shuffle_channels
from openvino.runtime.opset1.ops import sigmoid
from openvino.runtime.opset1.ops import sign
from openvino.runtime.opset1.ops import sin
from openvino.runtime.opset1.ops import sinh
from openvino.runtime.opset8.ops import slice
from openvino.runtime.opset8.ops import softmax
from openvino.runtime.opset4.ops import softplus
from openvino.runtime.opset9.ops import softsign
from openvino.runtime.opset2.ops import space_to_batch
from openvino.runtime.opset1.ops import space_to_depth
from openvino.runtime.opset1.ops import split
from openvino.runtime.opset1.ops import sqrt
from openvino.runtime.opset1.ops import squared_difference
from openvino.runtime.opset1.ops import squeeze
from openvino.runtime.opset1.ops import strided_slice
from openvino.runtime.opset1.ops import subtract
from openvino.runtime.opset4.ops import swish
from openvino.runtime.opset1.ops import tan
from openvino.runtime.opset1.ops import tanh
from openvino.runtime.opset1.ops import tensor_iterator
from openvino.runtime.opset1.ops import tile
from openvino.runtime.opset3.ops import topk
from openvino.runtime.opset1.ops import transpose
from openvino.runtime.opset10.ops import unique
from openvino.runtime.opset1.ops import unsqueeze
from openvino.runtime.opset1.ops import variadic_split


@ -0,0 +1,173 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""Factory functions for all openvino ops."""
from functools import partial
from typing import List, Optional
from openvino.runtime import Node
from openvino.runtime.opset_utils import _get_node_factory
from openvino.runtime.utils.decorators import nameable_op
from openvino.runtime.utils.types import (
NodeInput,
as_nodes,
as_node,
make_constant_node,
)
_get_node_factory_opset4 = partial(_get_node_factory, "opset4")
_get_node_factory_opset10 = partial(_get_node_factory, "opset10")
# -------------------------------------------- ops ------------------------------------------------
@nameable_op
def interpolate(
image: NodeInput,
output_shape: NodeInput,
scales: NodeInput,
mode: str,
shape_calculation_mode: str,
pads_begin: Optional[List[int]] = None,
pads_end: Optional[List[int]] = None,
coordinate_transformation_mode: str = "half_pixel",
nearest_mode: str = "round_prefer_floor",
antialias: bool = False,
cube_coeff: float = -0.75,
axes: Optional[NodeInput] = None,
name: Optional[str] = None,
) -> Node:
"""Perform interpolation of independent slices in input tensor.
:param image: The node providing input tensor with data for interpolation.
:param output_shape: 1D tensor describing output shape for spatial axes.
:param scales: 1D tensor describing scales for spatial axes.
:param mode: Specifies type of interpolation. Possible values are: nearest, linear,
linear_onnx, cubic.
:param shape_calculation_mode:
Specifies which input, sizes or scales, is used to calculate an output
shape.
:param pads_begin: Specifies the number of pixels to add to the beginning of the image
being interpolated. Default is None.
:param pads_end: Specifies the number of pixels to add to the end of the image being
interpolated. Default is None.
:param coordinate_transformation_mode:
Specifies how to transform the coordinate in the resized tensor to the
coordinate in the original tensor. Default is "half_pixel".
:param nearest_mode: Specifies the rounding mode; it is used only when
mode == nearest. Default is "round_prefer_floor".
:param antialias: Specifies whether to perform anti-aliasing. Default is False.
:param cube_coeff: Specifies the parameter a for cubic interpolation. Default is -0.75.
:param axes: 1D tensor specifying dimension indices where interpolation is applied.
Default is None.
:param name: Optional name for the output node. Default is None.
:return: Node representing interpolation operation.
"""
attrs = {
"mode": mode,
"shape_calculation_mode": shape_calculation_mode,
"coordinate_transformation_mode": coordinate_transformation_mode,
"nearest_mode": nearest_mode,
"antialias": antialias,
"cube_coeff": cube_coeff,
}
attrs["pads_begin"] = [] if pads_begin is None else pads_begin
attrs["pads_end"] = [] if pads_end is None else pads_end
inputs = as_nodes(image, output_shape, scales) if axes is None else as_nodes(image, output_shape, scales, axes)
# This is an update of the operator version, so even though this is opset 10,
# the operator is taken from opset 4.
return _get_node_factory_opset4().create("Interpolate", inputs, attrs)
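# A usage sketch of the factory above (shapes, sizes and modes are illustrative):
#
#   import numpy as np
#   from openvino.runtime import opset10 as ops
#
#   image = ops.parameter([1, 3, 32, 32], dtype=np.float32, name="image")
#   sizes = ops.constant(np.array([64, 64], dtype=np.int64))
#   scales = ops.constant(np.array([2.0, 2.0], dtype=np.float32))
#   axes = ops.constant(np.array([2, 3], dtype=np.int64))
#   node = ops.interpolate(image, sizes, scales, "linear", "sizes", axes=axes)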
@nameable_op
def is_finite(data: NodeInput, name: Optional[str] = None) -> Node:
"""Performs element-wise mapping from NaN and Infinity to False. Other values are mapped to True.
:param data: A tensor of floating-point numeric type and arbitrary shape.
:param name: Optional name for the output node. The default is None.
:return: Node representing is_finite operation.
"""
return _get_node_factory_opset10().create("IsFinite", as_nodes(data))
@nameable_op
def is_inf(
data: NodeInput,
attributes: Optional[dict] = None,
name: Optional[str] = None,
) -> Node:
"""Return a node which performs IsInf operation.
:param data: The input tensor.
:param attributes: Optional dictionary containing IsInf attributes.
:param name: Optional name of the node.
Available attributes:
* detect_negative Specifies whether to map negative infinities to true in output map.
Range of values: true, false
Default value: true
Required: no
* detect_positive Specifies whether to map positive infinities to true in output map.
Range of values: true, false
Default value: true
Required: no
:return: A new IsInf node.
"""
if not attributes:
attributes = {}
return _get_node_factory_opset10().create("IsInf", as_nodes(data), attributes)
@nameable_op
def is_nan(data: NodeInput, name: Optional[str] = None) -> Node:
"""Performs element-wise mapping from NaN to True. Other values are mapped to False.
:param data: A tensor of floating point numeric type and arbitrary shape.
:param name: Optional name for the output node. Default is None.
:return: Node representing is_nan operation.
"""
return _get_node_factory_opset10().create("IsNaN", as_nodes(data))
@nameable_op
def unique(
data: NodeInput,
axis: Optional[NodeInput] = None,
sorted: Optional[bool] = True,
index_element_type: Optional[str] = "i64",
count_element_type: Optional[str] = "i64",
name: Optional[str] = None,
) -> Node:
"""Operator which selects and returns unique elements or unique slices of the input tensor.
:param data: Input data tensor.
:param axis: (Optional) An input tensor containing the axis value.
If not provided or None, data input is considered as a flattened tensor.
Default value: None.
:param sorted: (Optional) Controls the order of the returned unique values,
sorting in ascending order when true.
Default value: True.
:param index_element_type: (Optional) The data type set for outputs containing indices.
Default value: "i64".
:param count_element_type: (Optional) The data type set for the output with repetition count.
Default value: "i64".
:param name: (Optional) A name for the output node. Default value: None.
:return: Node representing Unique operation.
"""
if axis is None:
inputs = as_nodes(data)
else:
inputs = as_nodes(data, axis)
attributes = {
"sorted": sorted,
"index_element_type": index_element_type,
"count_element_type": count_element_type,
}
return _get_node_factory_opset10().create("Unique", inputs, attributes)


@ -0,0 +1,178 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from openvino.runtime.opset1.ops import absolute
from openvino.runtime.opset1.ops import absolute as abs
from openvino.runtime.opset1.ops import acos
from openvino.runtime.opset4.ops import acosh
from openvino.runtime.opset8.ops import adaptive_avg_pool
from openvino.runtime.opset8.ops import adaptive_max_pool
from openvino.runtime.opset1.ops import add
from openvino.runtime.opset1.ops import asin
from openvino.runtime.opset4.ops import asinh
from openvino.runtime.opset3.ops import assign
from openvino.runtime.opset1.ops import atan
from openvino.runtime.opset4.ops import atanh
from openvino.runtime.opset1.ops import avg_pool
from openvino.runtime.opset5.ops import batch_norm_inference
from openvino.runtime.opset2.ops import batch_to_space
from openvino.runtime.opset1.ops import binary_convolution
from openvino.runtime.opset3.ops import broadcast
from openvino.runtime.opset3.ops import bucketize
from openvino.runtime.opset1.ops import ceiling
from openvino.runtime.opset1.ops import ceiling as ceil
from openvino.runtime.opset1.ops import clamp
from openvino.runtime.opset1.ops import concat
from openvino.runtime.opset1.ops import constant
from openvino.runtime.opset1.ops import convert
from openvino.runtime.opset1.ops import convert_like
from openvino.runtime.opset1.ops import convolution
from openvino.runtime.opset1.ops import convolution_backprop_data
from openvino.runtime.opset1.ops import cos
from openvino.runtime.opset1.ops import cosh
from openvino.runtime.opset1.ops import ctc_greedy_decoder
from openvino.runtime.opset6.ops import ctc_greedy_decoder_seq_len
from openvino.runtime.opset4.ops import ctc_loss
from openvino.runtime.opset3.ops import cum_sum
from openvino.runtime.opset3.ops import cum_sum as cumsum
from openvino.runtime.opset8.ops import deformable_convolution
from openvino.runtime.opset1.ops import deformable_psroi_pooling
from openvino.runtime.opset1.ops import depth_to_space
from openvino.runtime.opset8.ops import detection_output
from openvino.runtime.opset7.ops import dft
from openvino.runtime.opset1.ops import divide
from openvino.runtime.opset7.ops import einsum
from openvino.runtime.opset1.ops import elu
from openvino.runtime.opset3.ops import embedding_bag_offsets_sum
from openvino.runtime.opset3.ops import embedding_bag_packed_sum
from openvino.runtime.opset3.ops import embedding_segments_sum
from openvino.runtime.opset3.ops import extract_image_patches
from openvino.runtime.opset1.ops import equal
from openvino.runtime.opset1.ops import erf
from openvino.runtime.opset1.ops import exp
from openvino.runtime.opset9.ops import eye
from openvino.runtime.opset1.ops import fake_quantize
from openvino.runtime.opset1.ops import floor
from openvino.runtime.opset1.ops import floor_mod
from openvino.runtime.opset8.ops import gather
from openvino.runtime.opset6.ops import gather_elements
from openvino.runtime.opset8.ops import gather_nd
from openvino.runtime.opset1.ops import gather_tree
from openvino.runtime.opset7.ops import gelu
from openvino.runtime.opset9.ops import generate_proposals
from openvino.runtime.opset1.ops import greater
from openvino.runtime.opset1.ops import greater_equal
from openvino.runtime.opset9.ops import grid_sample
from openvino.runtime.opset1.ops import grn
from openvino.runtime.opset1.ops import group_convolution
from openvino.runtime.opset1.ops import group_convolution_backprop_data
from openvino.runtime.opset3.ops import gru_cell
from openvino.runtime.opset5.ops import gru_sequence
from openvino.runtime.opset1.ops import hard_sigmoid
from openvino.runtime.opset5.ops import hsigmoid
from openvino.runtime.opset4.ops import hswish
from openvino.runtime.opset7.ops import idft
from openvino.runtime.opset8.ops import if_op
from openvino.runtime.opset11.ops import interpolate
from openvino.runtime.opset9.ops import irdft
from openvino.runtime.opset10.ops import is_finite
from openvino.runtime.opset10.ops import is_inf
from openvino.runtime.opset10.ops import is_nan
from openvino.runtime.opset8.ops import i420_to_bgr
from openvino.runtime.opset8.ops import i420_to_rgb
from openvino.runtime.opset1.ops import less
from openvino.runtime.opset1.ops import less_equal
from openvino.runtime.opset1.ops import log
from openvino.runtime.opset1.ops import logical_and
from openvino.runtime.opset1.ops import logical_not
from openvino.runtime.opset1.ops import logical_or
from openvino.runtime.opset1.ops import logical_xor
from openvino.runtime.opset5.ops import log_softmax
from openvino.runtime.opset5.ops import loop
from openvino.runtime.opset1.ops import lrn
from openvino.runtime.opset4.ops import lstm_cell
from openvino.runtime.opset5.ops import lstm_sequence
from openvino.runtime.opset1.ops import matmul
from openvino.runtime.opset8.ops import matrix_nms
from openvino.runtime.opset8.ops import max_pool
from openvino.runtime.opset1.ops import maximum
from openvino.runtime.opset1.ops import minimum
from openvino.runtime.opset4.ops import mish
from openvino.runtime.opset1.ops import mod
from openvino.runtime.opset9.ops import multiclass_nms
from openvino.runtime.opset1.ops import multiply
from openvino.runtime.opset6.ops import mvn
from openvino.runtime.opset1.ops import negative
from openvino.runtime.opset9.ops import non_max_suppression
from openvino.runtime.opset3.ops import non_zero
from openvino.runtime.opset1.ops import normalize_l2
from openvino.runtime.opset1.ops import not_equal
from openvino.runtime.opset8.ops import nv12_to_bgr
from openvino.runtime.opset8.ops import nv12_to_rgb
from openvino.runtime.opset1.ops import one_hot
from openvino.runtime.opset1.ops import pad
from openvino.runtime.opset1.ops import parameter
from openvino.runtime.opset1.ops import power
from openvino.runtime.opset1.ops import prelu
from openvino.runtime.opset8.ops import prior_box
from openvino.runtime.opset1.ops import prior_box_clustered
from openvino.runtime.opset1.ops import psroi_pooling
from openvino.runtime.opset4.ops import proposal
from openvino.runtime.opset1.ops import range
from openvino.runtime.opset8.ops import random_uniform
from openvino.runtime.opset9.ops import rdft
from openvino.runtime.opset3.ops import read_value
from openvino.runtime.opset4.ops import reduce_l1
from openvino.runtime.opset4.ops import reduce_l2
from openvino.runtime.opset1.ops import reduce_logical_and
from openvino.runtime.opset1.ops import reduce_logical_or
from openvino.runtime.opset1.ops import reduce_max
from openvino.runtime.opset1.ops import reduce_mean
from openvino.runtime.opset1.ops import reduce_min
from openvino.runtime.opset1.ops import reduce_prod
from openvino.runtime.opset1.ops import reduce_sum
from openvino.runtime.opset1.ops import region_yolo
from openvino.runtime.opset2.ops import reorg_yolo
from openvino.runtime.opset1.ops import relu
from openvino.runtime.opset1.ops import reshape
from openvino.runtime.opset1.ops import result
from openvino.runtime.opset1.ops import reverse_sequence
from openvino.runtime.opset3.ops import rnn_cell
from openvino.runtime.opset5.ops import rnn_sequence
from openvino.runtime.opset9.ops import roi_align
from openvino.runtime.opset2.ops import roi_pooling
from openvino.runtime.opset7.ops import roll
from openvino.runtime.opset5.ops import round
from openvino.runtime.opset3.ops import scatter_elements_update
from openvino.runtime.opset3.ops import scatter_update
from openvino.runtime.opset1.ops import select
from openvino.runtime.opset1.ops import selu
from openvino.runtime.opset3.ops import shape_of
from openvino.runtime.opset3.ops import shuffle_channels
from openvino.runtime.opset1.ops import sigmoid
from openvino.runtime.opset1.ops import sign
from openvino.runtime.opset1.ops import sin
from openvino.runtime.opset1.ops import sinh
from openvino.runtime.opset8.ops import slice
from openvino.runtime.opset8.ops import softmax
from openvino.runtime.opset4.ops import softplus
from openvino.runtime.opset9.ops import softsign
from openvino.runtime.opset2.ops import space_to_batch
from openvino.runtime.opset1.ops import space_to_depth
from openvino.runtime.opset1.ops import split
from openvino.runtime.opset1.ops import sqrt
from openvino.runtime.opset1.ops import squared_difference
from openvino.runtime.opset1.ops import squeeze
from openvino.runtime.opset1.ops import strided_slice
from openvino.runtime.opset1.ops import subtract
from openvino.runtime.opset4.ops import swish
from openvino.runtime.opset1.ops import tan
from openvino.runtime.opset1.ops import tanh
from openvino.runtime.opset1.ops import tensor_iterator
from openvino.runtime.opset1.ops import tile
from openvino.runtime.opset11.ops import topk
from openvino.runtime.opset1.ops import transpose
from openvino.runtime.opset10.ops import unique
from openvino.runtime.opset1.ops import unsqueeze
from openvino.runtime.opset1.ops import variadic_split

View File

@@ -0,0 +1,107 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""Factory functions for all openvino ops."""
from functools import partial
from typing import List, Optional
from openvino.runtime import Node
from openvino.runtime.opset_utils import _get_node_factory
from openvino.runtime.utils.decorators import nameable_op
from openvino.runtime.utils.types import (
NodeInput,
as_nodes,
)
_get_node_factory_opset11 = partial(_get_node_factory, "opset11")
# -------------------------------------------- ops ------------------------------------------------
@nameable_op
def interpolate(
image: NodeInput,
scales_or_sizes: NodeInput,
mode: str,
shape_calculation_mode: str,
pads_begin: Optional[List[int]] = None,
pads_end: Optional[List[int]] = None,
coordinate_transformation_mode: str = "half_pixel",
nearest_mode: str = "round_prefer_floor",
antialias: bool = False,
cube_coeff: float = -0.75,
axes: Optional[NodeInput] = None,
name: Optional[str] = None,
) -> Node:
"""Perfors the interpolation of the input tensor.
:param image: The node providing input tensor with data for interpolation.
:param scales_or_sizes:
1D tensor providing information used to calculate the output shape
of the operation. It might contain floats (scales) or integers (sizes).
:param mode: Specifies type of interpolation. Possible values are: nearest, linear,
linear_onnx, cubic, bilinear_pillow, bicubic_pillow.
:param shape_calculation_mode:
Specifies how the scales_or_sizes input should be interpreted.
:param pads_begin: Specifies the number of pixels to add to the beginning of the image
being interpolated. Default is None.
:param pads_end: Specifies the number of pixels to add to the end of the image being
interpolated. Default is None.
:param coordinate_transformation_mode:
Specifies how to transform the coordinate in the resized tensor to the
coordinate in the original tensor. Default is "half_pixel".
:param nearest_mode: Specifies the rounding mode; used only when mode == nearest.
Default is "round_prefer_floor".
:param antialias: Specifies whether to perform anti-aliasing. Default is False.
:param cube_coeff: Specifies the parameter a for cubic interpolation. Default is -0.75.
:param axes: 1D tensor specifying dimension indices where interpolation is applied.
The default is None.
:param name: Optional name for the output node. The default is None.
:return: Node representing the interpolation operation.
"""
attrs = {
"mode": mode,
"shape_calculation_mode": shape_calculation_mode,
"coordinate_transformation_mode": coordinate_transformation_mode,
"nearest_mode": nearest_mode,
"antialias": antialias,
"cube_coeff": cube_coeff,
}
attrs["pads_begin"] = [] if pads_begin is None else pads_begin
attrs["pads_end"] = [] if pads_end is None else pads_end
inputs = as_nodes(image, scales_or_sizes) if axes is None else as_nodes(image, scales_or_sizes, axes)
return _get_node_factory_opset11().create("Interpolate", inputs, attrs)
@nameable_op
def topk(
data: NodeInput,
k: NodeInput,
axis: int,
mode: str,
sort: str,
index_element_type: str = "i32",
stable: bool = False,
name: Optional[str] = None,
) -> Node:
"""Return a node which performs TopK.
:param data: Input data.
:param k: K.
:param axis: TopK Axis.
:param mode: Compute TopK largest ('max') or smallest ('min')
:param sort: Order of output elements (sort by: 'none', 'index' or 'value')
:param index_element_type: Type of output tensor with indices.
:param stable: Specifies whether the equivalent elements should maintain
their relative order from the input tensor during sorting.
:return: The new node which performs TopK
"""
return _get_node_factory_opset11().create(
"TopK",
as_nodes(data, k),
{"axis": axis, "mode": mode, "sort": sort, "index_element_type": index_element_type, "stable": stable},
)
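# Editorial note: minimal usage sketch, not part of the original opset11 module.
# It assumes the standard opset1 `parameter` and `constant` helpers; the
# `_example_*` name is hypothetical.
def _example_topk_largest() -> Node:
    """Select the 5 largest values (and their indices) along axis 1."""
    import numpy as np
    from openvino.runtime.opset1.ops import constant, parameter
    scores = parameter([8, 100], np.float32, name="scores")
    k = constant(5, dtype=np.int64)
    return topk(scores, k, axis=1, mode="max", sort="value", stable=True)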

View File

@@ -0,0 +1,118 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from openvino.runtime.opset1.ops import absolute
from openvino.runtime.opset1.ops import absolute as abs
from openvino.runtime.opset1.ops import acos
from openvino.runtime.opset1.ops import add
from openvino.runtime.opset1.ops import asin
from openvino.runtime.opset1.ops import atan
from openvino.runtime.opset1.ops import avg_pool
from openvino.runtime.opset1.ops import batch_norm_inference
from openvino.runtime.opset2.ops import batch_to_space
from openvino.runtime.opset1.ops import binary_convolution
from openvino.runtime.opset1.ops import broadcast
from openvino.runtime.opset1.ops import ceiling
from openvino.runtime.opset1.ops import ceiling as ceil
from openvino.runtime.opset1.ops import clamp
from openvino.runtime.opset1.ops import concat
from openvino.runtime.opset1.ops import constant
from openvino.runtime.opset1.ops import convert
from openvino.runtime.opset1.ops import convert_like
from openvino.runtime.opset1.ops import convolution
from openvino.runtime.opset1.ops import convolution_backprop_data
from openvino.runtime.opset1.ops import cos
from openvino.runtime.opset1.ops import cosh
from openvino.runtime.opset1.ops import ctc_greedy_decoder
from openvino.runtime.opset1.ops import deformable_convolution
from openvino.runtime.opset1.ops import deformable_psroi_pooling
from openvino.runtime.opset1.ops import depth_to_space
from openvino.runtime.opset1.ops import detection_output
from openvino.runtime.opset1.ops import divide
from openvino.runtime.opset1.ops import elu
from openvino.runtime.opset1.ops import equal
from openvino.runtime.opset1.ops import erf
from openvino.runtime.opset1.ops import exp
from openvino.runtime.opset1.ops import fake_quantize
from openvino.runtime.opset1.ops import floor
from openvino.runtime.opset1.ops import floor_mod
from openvino.runtime.opset1.ops import gather
from openvino.runtime.opset1.ops import gather_tree
from openvino.runtime.opset2.ops import gelu
from openvino.runtime.opset1.ops import greater
from openvino.runtime.opset1.ops import greater_equal
from openvino.runtime.opset1.ops import grn
from openvino.runtime.opset1.ops import group_convolution
from openvino.runtime.opset1.ops import group_convolution_backprop_data
from openvino.runtime.opset1.ops import hard_sigmoid
from openvino.runtime.opset1.ops import interpolate
from openvino.runtime.opset1.ops import less
from openvino.runtime.opset1.ops import less_equal
from openvino.runtime.opset1.ops import log
from openvino.runtime.opset1.ops import logical_and
from openvino.runtime.opset1.ops import logical_not
from openvino.runtime.opset1.ops import logical_or
from openvino.runtime.opset1.ops import logical_xor
from openvino.runtime.opset1.ops import lrn
from openvino.runtime.opset1.ops import lstm_cell
from openvino.runtime.opset1.ops import lstm_sequence
from openvino.runtime.opset1.ops import matmul
from openvino.runtime.opset1.ops import max_pool
from openvino.runtime.opset1.ops import maximum
from openvino.runtime.opset1.ops import minimum
from openvino.runtime.opset1.ops import mod
from openvino.runtime.opset1.ops import multiply
from openvino.runtime.opset2.ops import mvn
from openvino.runtime.opset1.ops import negative
from openvino.runtime.opset1.ops import non_max_suppression
from openvino.runtime.opset1.ops import normalize_l2
from openvino.runtime.opset1.ops import not_equal
from openvino.runtime.opset1.ops import one_hot
from openvino.runtime.opset1.ops import pad
from openvino.runtime.opset1.ops import parameter
from openvino.runtime.opset1.ops import power
from openvino.runtime.opset1.ops import prelu
from openvino.runtime.opset1.ops import prior_box
from openvino.runtime.opset1.ops import prior_box_clustered
from openvino.runtime.opset1.ops import psroi_pooling
from openvino.runtime.opset1.ops import proposal
from openvino.runtime.opset1.ops import range
from openvino.runtime.opset1.ops import reduce_logical_and
from openvino.runtime.opset1.ops import reduce_logical_or
from openvino.runtime.opset1.ops import reduce_max
from openvino.runtime.opset1.ops import reduce_mean
from openvino.runtime.opset1.ops import reduce_min
from openvino.runtime.opset1.ops import reduce_prod
from openvino.runtime.opset1.ops import reduce_sum
from openvino.runtime.opset1.ops import region_yolo
from openvino.runtime.opset2.ops import reorg_yolo
from openvino.runtime.opset1.ops import relu
from openvino.runtime.opset1.ops import reshape
from openvino.runtime.opset1.ops import result
from openvino.runtime.opset1.ops import reverse_sequence
from openvino.runtime.opset2.ops import roi_pooling
from openvino.runtime.opset1.ops import select
from openvino.runtime.opset1.ops import selu
from openvino.runtime.opset1.ops import shape_of
from openvino.runtime.opset1.ops import sigmoid
from openvino.runtime.opset1.ops import sign
from openvino.runtime.opset1.ops import sin
from openvino.runtime.opset1.ops import sinh
from openvino.runtime.opset1.ops import softmax
from openvino.runtime.opset2.ops import space_to_batch
from openvino.runtime.opset1.ops import space_to_depth
from openvino.runtime.opset1.ops import split
from openvino.runtime.opset1.ops import sqrt
from openvino.runtime.opset1.ops import squared_difference
from openvino.runtime.opset1.ops import squeeze
from openvino.runtime.opset1.ops import strided_slice
from openvino.runtime.opset1.ops import subtract
from openvino.runtime.opset1.ops import tan
from openvino.runtime.opset1.ops import tanh
from openvino.runtime.opset1.ops import tensor_iterator
from openvino.runtime.opset1.ops import tile
from openvino.runtime.opset1.ops import topk
from openvino.runtime.opset1.ops import transpose
from openvino.runtime.opset1.ops import unsqueeze
from openvino.runtime.opset1.ops import variadic_split

View File

@@ -0,0 +1,182 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""Factory functions for all openvino ops."""
from typing import Callable, Iterable, List, Optional, Set, Union
import numpy as np
from functools import partial
from openvino.runtime import Node, Shape
from openvino.runtime.op import Constant, Parameter
from openvino.runtime.opset_utils import _get_node_factory
from openvino.runtime.utils.decorators import binary_op, nameable_op, unary_op
from openvino.runtime.utils.input_validation import (
assert_list_of_ints,
check_valid_attributes,
is_non_negative_value,
is_positive_value,
)
from openvino.runtime.utils.node_factory import NodeFactory
from openvino.runtime.utils.types import (
NodeInput,
NumericData,
NumericType,
ScalarData,
TensorShape,
as_node,
as_nodes,
get_dtype,
get_element_type,
get_element_type_str,
make_constant_node,
)
_get_node_factory_opset2 = partial(_get_node_factory, "opset2")
# -------------------------------------------- ops ------------------------------------------------
@nameable_op
def batch_to_space(
data: NodeInput,
block_shape: NodeInput,
crops_begin: NodeInput,
crops_end: NodeInput,
name: Optional[str] = None,
) -> Node:
"""Perform BatchToSpace operation on the input tensor.
BatchToSpace permutes data from the batch dimension of the data tensor into spatial dimensions.
:param data: Node producing the data tensor.
:param block_shape: The sizes of the block of values to be moved.
:param crops_begin: Specifies the amount to crop from the beginning along each axis of `data`.
:param crops_end: Specifies the amount to crop from the end along each axis of `data`.
:param name: Optional output node name.
:return: The new node performing a BatchToSpace operation.
"""
return _get_node_factory_opset2().create(
"BatchToSpace",
as_nodes(data, block_shape, crops_begin, crops_end),
)
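# Editorial note: minimal usage sketch, not part of the original opset2 module.
# It assumes the standard opset1 `parameter` and `constant` helpers; the
# `_example_*` name is hypothetical.
def _example_batch_to_space() -> Node:
    """Fold a batch of 4 images into 2x2 spatial blocks of a single image."""
    from openvino.runtime.opset1.ops import constant, parameter
    data = parameter([4, 3, 10, 10], np.float32, name="data")
    block_shape = constant(np.array([1, 1, 2, 2], dtype=np.int64))
    crops_begin = constant(np.array([0, 0, 0, 0], dtype=np.int64))
    crops_end = constant(np.array([0, 0, 0, 0], dtype=np.int64))
    return batch_to_space(data, block_shape, crops_begin, crops_end)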
@unary_op
def gelu(node: NodeInput, name: Optional[str] = None) -> Node:
r"""Perform Gaussian Error Linear Unit operation element-wise on data from input node.
Computes GELU function:
\f[ f(x) = 0.5\cdot x\cdot(1 + erf(\dfrac{x}{\sqrt{2}})) \f]
For more information refer to [Gaussian Error Linear Unit (GELU)](https://arxiv.org/pdf/1606.08415.pdf)
:param node: Input tensor. One of: input node, array or scalar.
:param name: Optional output node name.
:return: The new node performing a GELU operation on its input data element-wise.
"""
return _get_node_factory_opset2().create("Gelu", [node])
@nameable_op
def mvn(
data: Node,
across_channels: bool = False,
normalize_variance: bool = False,
eps: float = 1e-9,
name: Optional[str] = None,
) -> Node:
r"""Perform Mean Variance Normalization operation on data from input node.
Computes MVN on the input tensor `data` (called `X`) using formula:
\f[ Y = \dfrac{X-EX}{\sqrt{E(X-EX)^2}} \f]
:param data: The node with data tensor.
:param across_channels: Denotes if mean values are shared across channels.
:param normalize_variance: Denotes whether to perform variance normalization.
:param eps: The number added to the variance to avoid division by zero
when normalizing the value. Scalar value.
:param name: Optional output node name.
:return: The new node performing a MVN operation on input tensor.
"""
return _get_node_factory_opset2().create(
"MVN",
[data],
{
"across_channels": across_channels,
"normalize_variance": normalize_variance,
"eps": eps,
},
)
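# Editorial note: minimal usage sketch, not part of the original opset2 module.
# It assumes the standard opset1 `parameter` helper; the `_example_*` name is hypothetical.
def _example_mvn_normalization() -> Node:
    """Normalize an NCHW tensor to zero mean and unit variance per channel."""
    from openvino.runtime.opset1.ops import parameter
    data = parameter([1, 3, 224, 224], np.float32, name="data")
    return mvn(data, across_channels=False, normalize_variance=True, eps=1e-9)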
@nameable_op
def reorg_yolo(input: Node, stride: List[int], name: Optional[str] = None) -> Node:
"""Return a node which produces the ReorgYolo operation.
:param input: Input data.
:param stride: Stride to reorganize input by.
:param name: Optional name for output node.
:return: ReorgYolo node.
"""
return _get_node_factory_opset2().create("ReorgYolo", [input], {"stride": stride})
@nameable_op
def roi_pooling(
input: NodeInput,
coords: NodeInput,
output_size: TensorShape,
spatial_scale: NumericData,
method: str,
name: Optional[str] = None,
) -> Node:
"""Return a node which produces an ROIPooling operation.
:param input: Input feature map `{N, C, ...}`.
:param coords: Coordinates of bounding boxes.
:param output_size: Height/Width of ROI output features (shape).
:param spatial_scale: Ratio of input feature map over input image size (float).
:param method: Method of pooling - string: "max" or "bilinear".
:return: ROIPooling node.
"""
method = method.lower()
return _get_node_factory_opset2().create(
"ROIPooling",
as_nodes(input, coords),
{
"output_size": Shape(output_size),
"spatial_scale": spatial_scale,
"method": method,
},
)
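# Editorial note: minimal usage sketch, not part of the original opset2 module.
# It assumes the standard opset1 `parameter` helper; the `_example_*` name and
# the chosen shapes/scale are hypothetical.
def _example_roi_pooling() -> Node:
    """Pool 10 ROIs from a feature map into fixed 6x6 outputs."""
    from openvino.runtime.opset1.ops import parameter
    feature_map = parameter([1, 256, 50, 50], np.float32, name="feature_map")
    rois = parameter([10, 5], np.float32, name="rois")  # [batch_id, x1, y1, x2, y2]
    return roi_pooling(feature_map, rois, output_size=[6, 6], spatial_scale=0.0625, method="max")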
@nameable_op
def space_to_batch(
data: NodeInput,
block_shape: NodeInput,
pads_begin: NodeInput,
pads_end: NodeInput,
name: Optional[str] = None,
) -> Node:
"""Perform SpaceToBatch operation on the input tensor.
SpaceToBatch permutes data tensor blocks of spatial data into batch dimension.
The operator returns a copy of the input tensor where values from spatial block dimensions
are moved to the batch dimension.
:param data: Node producing the data tensor.
:param block_shape: The sizes of the block of values to be moved.
:param pads_begin: Specifies the padding for the beginning along each axis of `data`.
:param pads_end: Specifies the padding for the ending along each axis of `data`.
:param name: Optional output node name.
:return: The new node performing a SpaceToBatch operation.
"""
return _get_node_factory_opset2().create(
"SpaceToBatch",
as_nodes(data, block_shape, pads_begin, pads_end),
)

View File

@@ -0,0 +1,134 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from openvino.runtime.opset1.ops import absolute
from openvino.runtime.opset1.ops import absolute as abs
from openvino.runtime.opset1.ops import acos
from openvino.runtime.opset1.ops import add
from openvino.runtime.opset1.ops import asin
from openvino.runtime.opset3.ops import assign
from openvino.runtime.opset1.ops import atan
from openvino.runtime.opset1.ops import avg_pool
from openvino.runtime.opset1.ops import batch_norm_inference
from openvino.runtime.opset2.ops import batch_to_space
from openvino.runtime.opset1.ops import binary_convolution
from openvino.runtime.opset3.ops import broadcast
from openvino.runtime.opset3.ops import bucketize
from openvino.runtime.opset1.ops import ceiling
from openvino.runtime.opset1.ops import ceiling as ceil
from openvino.runtime.opset1.ops import clamp
from openvino.runtime.opset1.ops import concat
from openvino.runtime.opset1.ops import constant
from openvino.runtime.opset1.ops import convert
from openvino.runtime.opset1.ops import convert_like
from openvino.runtime.opset1.ops import convolution
from openvino.runtime.opset1.ops import convolution_backprop_data
from openvino.runtime.opset1.ops import cos
from openvino.runtime.opset1.ops import cosh
from openvino.runtime.opset1.ops import ctc_greedy_decoder
from openvino.runtime.opset3.ops import cum_sum
from openvino.runtime.opset3.ops import cum_sum as cumsum
from openvino.runtime.opset1.ops import deformable_convolution
from openvino.runtime.opset1.ops import deformable_psroi_pooling
from openvino.runtime.opset1.ops import depth_to_space
from openvino.runtime.opset1.ops import detection_output
from openvino.runtime.opset1.ops import divide
from openvino.runtime.opset1.ops import elu
from openvino.runtime.opset3.ops import embedding_bag_offsets_sum
from openvino.runtime.opset3.ops import embedding_bag_packed_sum
from openvino.runtime.opset3.ops import embedding_segments_sum
from openvino.runtime.opset3.ops import extract_image_patches
from openvino.runtime.opset1.ops import equal
from openvino.runtime.opset1.ops import erf
from openvino.runtime.opset1.ops import exp
from openvino.runtime.opset1.ops import fake_quantize
from openvino.runtime.opset1.ops import floor
from openvino.runtime.opset1.ops import floor_mod
from openvino.runtime.opset1.ops import gather
from openvino.runtime.opset1.ops import gather_tree
from openvino.runtime.opset2.ops import gelu
from openvino.runtime.opset1.ops import greater
from openvino.runtime.opset1.ops import greater_equal
from openvino.runtime.opset1.ops import grn
from openvino.runtime.opset1.ops import group_convolution
from openvino.runtime.opset1.ops import group_convolution_backprop_data
from openvino.runtime.opset3.ops import gru_cell
from openvino.runtime.opset1.ops import hard_sigmoid
from openvino.runtime.opset1.ops import interpolate
from openvino.runtime.opset1.ops import less
from openvino.runtime.opset1.ops import less_equal
from openvino.runtime.opset1.ops import log
from openvino.runtime.opset1.ops import logical_and
from openvino.runtime.opset1.ops import logical_not
from openvino.runtime.opset1.ops import logical_or
from openvino.runtime.opset1.ops import logical_xor
from openvino.runtime.opset1.ops import lrn
from openvino.runtime.opset1.ops import lstm_cell
from openvino.runtime.opset1.ops import lstm_sequence
from openvino.runtime.opset1.ops import matmul
from openvino.runtime.opset1.ops import max_pool
from openvino.runtime.opset1.ops import maximum
from openvino.runtime.opset1.ops import minimum
from openvino.runtime.opset1.ops import mod
from openvino.runtime.opset1.ops import multiply
from openvino.runtime.opset2.ops import mvn
from openvino.runtime.opset1.ops import negative
from openvino.runtime.opset3.ops import non_max_suppression
from openvino.runtime.opset3.ops import non_zero
from openvino.runtime.opset1.ops import normalize_l2
from openvino.runtime.opset1.ops import not_equal
from openvino.runtime.opset1.ops import one_hot
from openvino.runtime.opset1.ops import pad
from openvino.runtime.opset1.ops import parameter
from openvino.runtime.opset1.ops import power
from openvino.runtime.opset1.ops import prelu
from openvino.runtime.opset1.ops import prior_box
from openvino.runtime.opset1.ops import prior_box_clustered
from openvino.runtime.opset1.ops import psroi_pooling
from openvino.runtime.opset1.ops import proposal
from openvino.runtime.opset1.ops import range
from openvino.runtime.opset3.ops import read_value
from openvino.runtime.opset1.ops import reduce_logical_and
from openvino.runtime.opset1.ops import reduce_logical_or
from openvino.runtime.opset1.ops import reduce_max
from openvino.runtime.opset1.ops import reduce_mean
from openvino.runtime.opset1.ops import reduce_min
from openvino.runtime.opset1.ops import reduce_prod
from openvino.runtime.opset1.ops import reduce_sum
from openvino.runtime.opset1.ops import region_yolo
from openvino.runtime.opset2.ops import reorg_yolo
from openvino.runtime.opset1.ops import relu
from openvino.runtime.opset1.ops import reshape
from openvino.runtime.opset1.ops import result
from openvino.runtime.opset1.ops import reverse_sequence
from openvino.runtime.opset3.ops import rnn_cell
from openvino.runtime.opset3.ops import roi_align
from openvino.runtime.opset2.ops import roi_pooling
from openvino.runtime.opset3.ops import scatter_elements_update
from openvino.runtime.opset3.ops import scatter_update
from openvino.runtime.opset1.ops import select
from openvino.runtime.opset1.ops import selu
from openvino.runtime.opset3.ops import shape_of
from openvino.runtime.opset3.ops import shuffle_channels
from openvino.runtime.opset1.ops import sigmoid
from openvino.runtime.opset1.ops import sign
from openvino.runtime.opset1.ops import sin
from openvino.runtime.opset1.ops import sinh
from openvino.runtime.opset1.ops import softmax
from openvino.runtime.opset2.ops import space_to_batch
from openvino.runtime.opset1.ops import space_to_depth
from openvino.runtime.opset1.ops import split
from openvino.runtime.opset1.ops import sqrt
from openvino.runtime.opset1.ops import squared_difference
from openvino.runtime.opset1.ops import squeeze
from openvino.runtime.opset1.ops import strided_slice
from openvino.runtime.opset1.ops import subtract
from openvino.runtime.opset1.ops import tan
from openvino.runtime.opset1.ops import tanh
from openvino.runtime.opset1.ops import tensor_iterator
from openvino.runtime.opset1.ops import tile
from openvino.runtime.opset3.ops import topk
from openvino.runtime.opset1.ops import transpose
from openvino.runtime.opset1.ops import unsqueeze
from openvino.runtime.opset1.ops import variadic_split

View File

@@ -0,0 +1,638 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""Factory functions for all openvino ops."""
from typing import Callable, Iterable, List, Optional, Set, Union
import numpy as np
from functools import partial
from openvino.runtime import Node, Shape
from openvino.runtime.op import Constant, Parameter
from openvino.runtime.opset_utils import _get_node_factory
from openvino.runtime.utils.decorators import binary_op, nameable_op, unary_op
from openvino.runtime.utils.input_validation import (
assert_list_of_ints,
check_valid_attributes,
is_non_negative_value,
is_positive_value,
)
from openvino.runtime.utils.node_factory import NodeFactory
from openvino.runtime.utils.types import (
NodeInput,
NumericData,
NumericType,
ScalarData,
TensorShape,
as_node,
as_nodes,
get_dtype,
get_element_type,
get_element_type_str,
make_constant_node,
)
_get_node_factory_opset3 = partial(_get_node_factory, "opset3")
# -------------------------------------------- ops ------------------------------------------------
@nameable_op
def assign(new_value: NodeInput, variable_id: str, name: Optional[str] = None) -> Node:
"""Return a node which produces the Assign operation.
:param new_value: Node producing a value to be assigned to a variable.
:param variable_id: Id of a variable to be updated.
:param name: Optional name for output node.
:return: Assign node
"""
return _get_node_factory_opset3().create(
"Assign",
[as_node(new_value)],
{"variable_id": variable_id},
)
@nameable_op
def broadcast(
data: NodeInput,
target_shape: NodeInput,
axes_mapping: Optional[NodeInput] = None,
broadcast_spec: str = "NUMPY",
name: Optional[str] = None,
) -> Node:
"""Create a node which broadcasts the input node's values along specified axes to a desired shape.
:param data: The node with input tensor data.
:param target_shape: The node with a new shape we want to broadcast tensor to.
:param axes_mapping: The node with axis positions (0-based) in the result
that are being broadcast.
:param broadcast_spec: The type of broadcasting that specifies mapping of input tensor axes
to output shape axes. Range of values: NUMPY, EXPLICIT, BIDIRECTIONAL.
:param name: Optional new name for output node.
:return: New node with broadcast shape.
"""
inputs = as_nodes(data, target_shape)
if broadcast_spec.upper() == "EXPLICIT":
inputs.append(as_node(axes_mapping))
return _get_node_factory_opset3().create(
"Broadcast",
inputs,
{"mode": broadcast_spec.upper()},
)
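# Editorial note: minimal usage sketch, not part of the original opset3 module.
# It assumes the standard opset1 `parameter` and `constant` helpers; the
# `_example_*` name is hypothetical.
def _example_broadcast_bias() -> Node:
    """Broadcast a per-channel bias of shape [3] to a full [2, 3, 4, 4] tensor."""
    from openvino.runtime.opset1.ops import constant, parameter
    bias = parameter([3], np.float32, name="bias")
    target_shape = constant(np.array([2, 3, 4, 4], dtype=np.int64))
    axes_mapping = constant(np.array([1], dtype=np.int64))
    return broadcast(bias, target_shape, axes_mapping, broadcast_spec="EXPLICIT")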
@nameable_op
def bucketize(
data: Node,
buckets: NodeInput,
output_type: str = "i64",
with_right_bound: bool = True,
name: Optional[str] = None,
) -> Node:
"""Return a node which produces the Bucketize operation.
:param data: Input data to bucketize
:param buckets: 1-D tensor of sorted unique boundaries for buckets.
:param output_type: Output tensor type, "i64" or "i32"; defaults to "i64".
:param with_right_bound: Indicates whether the bucket includes the right or the left
edge of the interval. Defaults to True (includes the right edge).
:param name: Optional name for output node.
:return: Bucketize node
"""
return _get_node_factory_opset3().create(
"Bucketize",
[data, as_node(buckets)],
{"output_type": output_type, "with_right_bound": with_right_bound},
)
@nameable_op
def cum_sum(
arg: NodeInput,
axis: NodeInput,
exclusive: bool = False,
reverse: bool = False,
name: Optional[str] = None,
) -> Node:
"""Construct a cumulative summation operation.
:param arg: The tensor to be summed.
:param axis: Zero-dimensional tensor specifying the axis along which the sum is performed.
:param exclusive: If set to True, the top element is not included in the sum.
:param reverse: If set to True, the sums are performed in the reverse direction.
:return: New node performing the operation
"""
return _get_node_factory_opset3().create(
"CumSum",
as_nodes(arg, axis),
{"exclusive": exclusive, "reverse": reverse},
)
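# Editorial note: minimal usage sketch, not part of the original opset3 module.
# It assumes the standard opset1 `parameter` and `constant` helpers; the
# `_example_*` name is hypothetical.
def _example_cum_sum_rows() -> Node:
    """Compute a running sum along axis 1 of a [4, 10] tensor."""
    from openvino.runtime.opset1.ops import constant, parameter
    data = parameter([4, 10], np.float32, name="data")
    axis = constant(1, dtype=np.int64)
    return cum_sum(data, axis, exclusive=False, reverse=False)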
@nameable_op
def embedding_bag_offsets_sum(
emb_table: Node,
indices: NodeInput,
offsets: NodeInput,
default_index: Optional[NodeInput] = None,
per_sample_weights: Optional[NodeInput] = None,
name: Optional[str] = None,
) -> Node:
"""Return a node which performs sums of bags of embeddings without the intermediate embeddings.
:param emb_table: Tensor containing the embedding lookup table.
:param indices: Tensor with indices.
:param offsets: Tensor containing the starting index positions of each bag in indices.
:param per_sample_weights: Tensor with weights for each sample.
:param default_index: Scalar containing default index in embedding table to fill empty bags.
:param name: Optional name for output node.
:return: The new node which performs EmbeddingBagOffsetsSum
"""
inputs = [emb_table, as_node(indices), as_node(offsets)]
if per_sample_weights is not None:
inputs.append(default_index)
inputs.append(per_sample_weights)
elif default_index is not None:
inputs.append(default_index)
return _get_node_factory_opset3().create("EmbeddingBagOffsetsSum", inputs, {})
@nameable_op
def embedding_bag_packed_sum(
emb_table: NodeInput,
indices: NodeInput,
per_sample_weights: Optional[NodeInput] = None,
name: Optional[str] = None,
) -> Node:
"""Return an EmbeddingBagPackedSum node.
EmbeddingBagPackedSum constructs an output tensor by replacing every index in a given
input tensor with a row (from the weights matrix) at that index.
:param emb_table: Tensor containing the embedding lookup table.
:param indices: Tensor with indices.
:param per_sample_weights: Weights to be multiplied with embedding table.
:param name: Optional name for output node.
:return: EmbeddingBagPackedSum node
"""
inputs = [as_node(emb_table), as_node(indices)]
if per_sample_weights is not None:
inputs.append(as_node(per_sample_weights))
return _get_node_factory_opset3().create("EmbeddingBagPackedSum", inputs, {})
@nameable_op
def embedding_segments_sum(
emb_table: Node,
indices: NodeInput,
segment_ids: NodeInput,
num_segments: Optional[NodeInput] = None,
default_index: Optional[NodeInput] = None,
per_sample_weights: Optional[NodeInput] = None,
name: Optional[str] = None,
) -> Node:
"""Return an EmbeddingSegmentsSum node.
EmbeddingSegmentsSum constructs an output tensor by replacing every index in a given
input tensor with a row (from the weights matrix) at that index
:param emb_table: Tensor containing the embedding lookup table.
:param indices: Tensor with indices.
:param segment_ids: Tensor with indices into the output Tensor
:param num_segments: Tensor with number of segments.
:param default_index: Scalar containing default index in embedding table to fill empty bags.
:param per_sample_weights: Weights to be multiplied with embedding table.
:param name: Optional name for output node.
:return: EmbeddingSegmentsSum node
"""
inputs = [as_node(emb_table), as_node(indices), as_node(segment_ids)]
if per_sample_weights is not None:
inputs.append(as_node(num_segments))
inputs.append(as_node(default_index))
inputs.append(as_node(per_sample_weights))
elif default_index is not None:
inputs.append(as_node(num_segments))
inputs.append(as_node(default_index))
elif num_segments is not None:
inputs.append(as_node(num_segments))
return _get_node_factory_opset3().create("EmbeddingSegmentsSum", inputs, {})
@nameable_op
def extract_image_patches(
image: NodeInput,
sizes: TensorShape,
strides: List[int],
rates: TensorShape,
auto_pad: str,
name: Optional[str] = None,
) -> Node:
"""Return a node which produces the ExtractImagePatches operation.
:param image: 4-D Input data to extract image patches.
:param sizes: Patch size in the format of [size_rows, size_cols].
:param strides: Patch movement stride in the format of [stride_rows, stride_cols]
:param rates: Element selection rate for creating a patch.
:param auto_pad: Padding type.
:param name: Optional name for output node.
:return: ExtractImagePatches node
"""
return _get_node_factory_opset3().create(
"ExtractImagePatches",
[as_node(image)],
{"sizes": sizes, "strides": strides, "rates": rates, "auto_pad": auto_pad},
)
@nameable_op
def gru_cell(
X: NodeInput,
initial_hidden_state: NodeInput,
W: NodeInput,
R: NodeInput,
B: NodeInput,
hidden_size: int,
activations: Optional[List[str]] = None,
activations_alpha: Optional[List[float]] = None,
activations_beta: Optional[List[float]] = None,
clip: float = 0.0,
linear_before_reset: bool = False,
name: Optional[str] = None,
) -> Node:
"""Perform GRUCell operation on the tensor from input node.
GRUCell represents a single GRU Cell that computes the output
using the formula described in the paper: https://arxiv.org/abs/1406.1078
Note this class represents only single *cell* and not whole *layer*.
:param X: The input tensor with shape: [batch_size, input_size].
:param initial_hidden_state: The hidden state tensor at current time step with shape:
[batch_size, hidden_size].
:param W: The weights for matrix multiplication, gate order: zrh.
Shape: [3*hidden_size, input_size].
:param R: The recurrence weights for matrix multiplication.
Shape: [3*hidden_size, hidden_size].
:param B: The sum of biases (weight and recurrence).
For linear_before_reset set True the shape is [4*hidden_size].
Otherwise the shape is [3*hidden_size].
:param hidden_size: The number of hidden units for recurrent cell.
Specifies hidden state size.
:param activations: The vector of activation functions used inside recurrent cell.
:param activations_alpha: The vector of alpha parameters for activation functions in
order respective to the activations list.
:param activations_beta: The vector of beta parameters for activation functions in order
respective to the activations list.
:param clip: The value defining clipping range [-clip, clip] on input of
activation functions.
:param linear_before_reset: Flag denotes if the layer behaves according to the modification
of GRUCell described in the formula in the ONNX documentation.
:param name: Optional output node name.
:return: The new node performing a GRUCell operation on tensor from input node.
"""
if activations is None:
activations = ["sigmoid", "tanh"]
if activations_alpha is None:
activations_alpha = []
if activations_beta is None:
activations_beta = []
input_nodes = as_nodes(X, initial_hidden_state, W, R, B)
attributes = {
"hidden_size": hidden_size,
"activations": activations,
"activations_alpha": activations_alpha,
"activations_beta": activations_beta,
"linear_before_reset": linear_before_reset,
"clip": clip,
}
return _get_node_factory_opset3().create("GRUCell", input_nodes, attributes)
@nameable_op
def non_max_suppression(
boxes: NodeInput,
scores: NodeInput,
max_output_boxes_per_class: Optional[NodeInput] = None,
iou_threshold: Optional[NodeInput] = None,
score_threshold: Optional[NodeInput] = None,
box_encoding: str = "corner",
sort_result_descending: bool = True,
output_type: str = "i64",
name: Optional[str] = None,
) -> Node:
"""Return a node which performs NonMaxSuppression.
:param boxes: Tensor with box coordinates.
:param scores: Tensor with box scores.
:param max_output_boxes_per_class: Tensor Specifying maximum number of boxes
to be selected per class.
:param iou_threshold: Tensor specifying intersection over union threshold
:param score_threshold: Tensor specifying minimum score to consider box for the processing.
:param box_encoding: Format of boxes data encoding.
:param sort_result_descending: Flag that specifies whether selected boxes should be sorted
across batches.
:param output_type: Output element type.
:return: The new node which performs NonMaxSuppression
"""
if max_output_boxes_per_class is None:
max_output_boxes_per_class = make_constant_node(0, np.int64)
if iou_threshold is None:
iou_threshold = make_constant_node(0, np.float32)
if score_threshold is None:
score_threshold = make_constant_node(0, np.float32)
inputs = as_nodes(boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold)
attributes = {
"box_encoding": box_encoding,
"sort_result_descending": sort_result_descending,
"output_type": output_type,
}
return _get_node_factory_opset3().create("NonMaxSuppression", inputs, attributes)
@nameable_op
def non_zero(data: NodeInput, output_type: str = "i64", name: Optional[str] = None) -> Node:
"""Return the indices of the elements that are non-zero.
:param data: Input data.
:param output_type: Output tensor type.
:return: The new node which performs NonZero
"""
return _get_node_factory_opset3().create(
"NonZero",
[as_node(data)],
{"output_type": output_type},
)
@nameable_op
def read_value(init_value: NodeInput, variable_id: str, name: Optional[str] = None) -> Node:
"""Return a node which produces the Assign operation.
:param init_value: Node producing a value to be returned instead of an unassigned variable.
:param variable_id: Id of a variable to be read.
:param name: Optional name for output node.
:return: ReadValue node
"""
return _get_node_factory_opset3().create(
"ReadValue",
[as_node(init_value)],
{"variable_id": variable_id},
)
@nameable_op
def rnn_cell(
X: NodeInput,
initial_hidden_state: NodeInput,
W: NodeInput,
R: NodeInput,
B: NodeInput,
hidden_size: int,
activations: List[str],
activations_alpha: List[float],
activations_beta: List[float],
clip: float = 0.0,
name: Optional[str] = None,
) -> Node:
"""Perform RNNCell operation on tensor from input node.
It follows notation and equations defined as in ONNX standard:
https://github.com/onnx/onnx/blob/master/docs/Operators.md#RNN
Note this class represents only single *cell* and not whole RNN *layer*.
:param X: The input tensor with shape: [batch_size, input_size].
:param initial_hidden_state: The hidden state tensor at current time step with shape:
[batch_size, hidden_size].
:param W: The weight tensor with shape: [hidden_size, input_size].
:param R: The recurrence weight tensor with shape: [hidden_size,
hidden_size].
:param B: The sum of biases (weight and recurrence) with shape: [hidden_size].
:param hidden_size: The number of hidden units for recurrent cell.
Specifies hidden state size.
:param activations: The vector of activation functions used inside recurrent cell.
:param activations_alpha: The vector of alpha parameters for activation functions in
order respective to the activations list.
:param activations_beta: The vector of beta parameters for activation functions in order
respective to the activations list.
:param clip: The value defining clipping range [-clip, clip] on input of
activation functions.
:param name: Optional output node name.
:return: The new node performing a RNNCell operation on tensor from input node.
"""
if activations is None:
activations = ["tanh"]
if activations_alpha is None:
activations_alpha = []
if activations_beta is None:
activations_beta = []
input_nodes = as_nodes(X, initial_hidden_state, W, R, B)
attributes = {
"hidden_size": hidden_size,
"activations": activations,
"activations_alpha": activations_alpha,
"activations_beta": activations_beta,
"clip": clip,
}
return _get_node_factory_opset3().create("RNNCell", input_nodes, attributes)
@nameable_op
def roi_align(
data: NodeInput,
rois: NodeInput,
batch_indices: NodeInput,
pooled_h: int,
pooled_w: int,
sampling_ratio: int,
spatial_scale: float,
mode: str,
name: Optional[str] = None,
) -> Node:
"""Return a node which performs ROIAlign.
:param data: Input data.
:param rois: RoIs (Regions of Interest) to pool over.
:param batch_indices: Tensor with each element denoting the index of
the corresponding image in the batch.
:param pooled_h: Height of the ROI output feature map.
:param pooled_w: Width of the ROI output feature map.
:param sampling_ratio: Number of bins over height and width to use to calculate
each output feature map element.
:param spatial_scale: Multiplicative spatial scale factor to translate ROI coordinates.
:param mode: Method to perform pooling to produce output feature map elements.
:return: The new node which performs ROIAlign
"""
inputs = as_nodes(data, rois, batch_indices)
attributes = {
"pooled_h": pooled_h,
"pooled_w": pooled_w,
"sampling_ratio": sampling_ratio,
"spatial_scale": spatial_scale,
"mode": mode,
}
return _get_node_factory_opset3().create("ROIAlign", inputs, attributes)
@nameable_op
def scatter_elements_update(
data: NodeInput,
indices: NodeInput,
updates: NodeInput,
axis: NodeInput,
name: Optional[str] = None,
) -> Node:
"""Return a node which produces a ScatterElementsUpdate operation.
:param data: The input tensor to be updated.
:param indices: The tensor with indexes which will be updated.
:param updates: The tensor with update values.
:param axis: The axis for scatter.
:return: ScatterElementsUpdate node
ScatterElementsUpdate creates a copy of the first input tensor with updated elements
specified with second and third input tensors.
For each entry in `updates`, the target index in `data` is obtained by combining
the corresponding entry in `indices` with the index of the entry itself: the
index-value for dimension equal to `axis` is obtained from the value of the
corresponding entry in `indices` and the index-value for dimension not equal
to `axis` is obtained from the index of the entry itself.
"""
return _get_node_factory_opset3().create(
"ScatterElementsUpdate",
as_nodes(data, indices, updates, axis),
)
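# Editorial note: minimal usage sketch, not part of the original opset3 module.
# It assumes the standard opset1 `parameter` and `constant` helpers; the
# `_example_*` name is hypothetical. `indices` and `updates` share the shape [3, 2].
def _example_scatter_elements_update() -> Node:
    """Overwrite two elements per row of a [3, 4] tensor along axis 1."""
    from openvino.runtime.opset1.ops import constant, parameter
    data = parameter([3, 4], np.float32, name="data")
    indices = constant(np.array([[0, 2], [1, 3], [0, 1]], dtype=np.int64))
    updates = parameter([3, 2], np.float32, name="updates")
    axis = constant(1, dtype=np.int64)
    return scatter_elements_update(data, indices, updates, axis)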
@nameable_op
def scatter_update(
data: Node,
indices: NodeInput,
updates: NodeInput,
axis: NodeInput,
name: Optional[str] = None,
) -> Node:
"""Return a node which produces a ScatterUpdate operation.
ScatterUpdate sets new values to slices from data addressed by indices.
:param data: The input tensor to be updated.
:param indices: The tensor with indexes which will be updated.
:param updates: The tensor with update values.
:param axis: The axis at which elements will be updated.
:return: ScatterUpdate node
"""
return _get_node_factory_opset3().create(
"ScatterUpdate",
as_nodes(data, indices, updates, axis),
)
@nameable_op
def shape_of(data: NodeInput, output_type: str = "i64", name: Optional[str] = None) -> Node:
"""Return a node which produces a tensor containing the shape of its input data.
:param data: The tensor containing the input data.
:param output_type: Output element type.
:return: ShapeOf node
"""
return _get_node_factory_opset3().create(
"ShapeOf",
[as_node(data)],
{"output_type": output_type},
)
@nameable_op
def shuffle_channels(data: Node, axis: int, group: int, name: Optional[str] = None) -> Node:
"""Perform permutation on data in the channel dimension of the input tensor.
:param data: The node with input tensor.
:param axis: Channel dimension index in the data tensor.
A negative value means that the index should be calculated
from the back of the input data shape.
:param group: The channel dimension specified by the axis parameter
should be split into this number of groups.
:param name: Optional output node name.
:return: The new node performing a permutation on data in the channel dimension
of the input tensor.
The operation is the equivalent with the following transformation of the input tensor
`data` of shape [N, C, H, W]:
`data_reshaped` = reshape(`data`, [N, group, C / group, H * W])
`data_transposed` = transpose(`data_reshaped`, [0, 2, 1, 3])
`output` = reshape(`data_transposed`, [N, C, H, W])
For example:
.. code-block:: python
Inputs: tensor of shape [1, 6, 2, 2]
data = [[[[ 0., 1.], [ 2., 3.]],
[[ 4., 5.], [ 6., 7.]],
[[ 8., 9.], [10., 11.]],
[[12., 13.], [14., 15.]],
[[16., 17.], [18., 19.]],
[[20., 21.], [22., 23.]]]]
axis = 1
group = 3
Output: tensor of shape [1, 6, 2, 2]
output = [[[[ 0., 1.], [ 2., 3.]],
[[ 8., 9.], [10., 11.]],
[[16., 17.], [18., 19.]],
[[ 4., 5.], [ 6., 7.]],
[[12., 13.], [14., 15.]],
[[20., 21.], [22., 23.]]]]
"""
return _get_node_factory_opset3().create(
"ShuffleChannels",
[as_node(data)],
{"axis": axis, "group": group},
)
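# Editorial note: minimal usage sketch, not part of the original opset3 module.
# It mirrors the [1, 6, 2, 2] example from the docstring above; the
# `_example_*` name is hypothetical.
def _example_shuffle_channels() -> Node:
    """Shuffle 6 channels in 3 groups, matching the docstring example."""
    from openvino.runtime.opset1.ops import parameter
    data = parameter([1, 6, 2, 2], np.float32, name="data")
    return shuffle_channels(data, axis=1, group=3)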
@nameable_op
def topk(
data: NodeInput,
k: NodeInput,
axis: int,
mode: str,
sort: str,
index_element_type: str = "i32",
name: Optional[str] = None,
) -> Node:
"""Return a node which performs TopK.
:param data: Input data.
:param k: K.
:param axis: TopK Axis.
:param mode: Compute TopK largest ('max') or smallest ('min')
:param sort: Order of output elements (sort by: 'none', 'index' or 'value')
:param index_element_type: Type of output tensor with indices.
:return: The new node which performs TopK (both indices and values)
"""
return _get_node_factory_opset3().create(
"TopK",
as_nodes(data, k),
{"axis": axis, "mode": mode, "sort": sort, "index_element_type": index_element_type},
)

View File

@@ -0,0 +1,144 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
from openvino.runtime.opset1.ops import absolute
from openvino.runtime.opset1.ops import absolute as abs
from openvino.runtime.opset1.ops import acos
from openvino.runtime.opset4.ops import acosh
from openvino.runtime.opset1.ops import add
from openvino.runtime.opset1.ops import asin
from openvino.runtime.opset4.ops import asinh
from openvino.runtime.opset3.ops import assign
from openvino.runtime.opset1.ops import atan
from openvino.runtime.opset4.ops import atanh
from openvino.runtime.opset1.ops import avg_pool
from openvino.runtime.opset1.ops import batch_norm_inference
from openvino.runtime.opset2.ops import batch_to_space
from openvino.runtime.opset1.ops import binary_convolution
from openvino.runtime.opset3.ops import broadcast
from openvino.runtime.opset3.ops import bucketize
from openvino.runtime.opset1.ops import ceiling
from openvino.runtime.opset1.ops import ceiling as ceil
from openvino.runtime.opset1.ops import clamp
from openvino.runtime.opset1.ops import concat
from openvino.runtime.opset1.ops import constant
from openvino.runtime.opset1.ops import convert
from openvino.runtime.opset1.ops import convert_like
from openvino.runtime.opset1.ops import convolution
from openvino.runtime.opset1.ops import convolution_backprop_data
from openvino.runtime.opset1.ops import cos
from openvino.runtime.opset1.ops import cosh
from openvino.runtime.opset1.ops import ctc_greedy_decoder
from openvino.runtime.opset4.ops import ctc_loss
from openvino.runtime.opset3.ops import cum_sum
from openvino.runtime.opset3.ops import cum_sum as cumsum
from openvino.runtime.opset1.ops import deformable_convolution
from openvino.runtime.opset1.ops import deformable_psroi_pooling
from openvino.runtime.opset1.ops import depth_to_space
from openvino.runtime.opset1.ops import detection_output
from openvino.runtime.opset1.ops import divide
from openvino.runtime.opset1.ops import elu
from openvino.runtime.opset3.ops import embedding_bag_offsets_sum
from openvino.runtime.opset3.ops import embedding_bag_packed_sum
from openvino.runtime.opset3.ops import embedding_segments_sum
from openvino.runtime.opset3.ops import extract_image_patches
from openvino.runtime.opset1.ops import equal
from openvino.runtime.opset1.ops import erf
from openvino.runtime.opset1.ops import exp
from openvino.runtime.opset1.ops import fake_quantize
from openvino.runtime.opset1.ops import floor
from openvino.runtime.opset1.ops import floor_mod
from openvino.runtime.opset1.ops import gather
from openvino.runtime.opset1.ops import gather_tree
from openvino.runtime.opset2.ops import gelu
from openvino.runtime.opset1.ops import greater
from openvino.runtime.opset1.ops import greater_equal
from openvino.runtime.opset1.ops import grn
from openvino.runtime.opset1.ops import group_convolution
from openvino.runtime.opset1.ops import group_convolution_backprop_data
from openvino.runtime.opset3.ops import gru_cell
from openvino.runtime.opset1.ops import hard_sigmoid
from openvino.runtime.opset4.ops import hswish
from openvino.runtime.opset1.ops import interpolate
from openvino.runtime.opset1.ops import less
from openvino.runtime.opset1.ops import less_equal
from openvino.runtime.opset1.ops import log
from openvino.runtime.opset1.ops import logical_and
from openvino.runtime.opset1.ops import logical_not
from openvino.runtime.opset1.ops import logical_or
from openvino.runtime.opset1.ops import logical_xor
from openvino.runtime.opset1.ops import lrn
from openvino.runtime.opset4.ops import lstm_cell
from openvino.runtime.opset1.ops import lstm_sequence
from openvino.runtime.opset1.ops import matmul
from openvino.runtime.opset1.ops import max_pool
from openvino.runtime.opset1.ops import maximum
from openvino.runtime.opset1.ops import minimum
from openvino.runtime.opset4.ops import mish
from openvino.runtime.opset1.ops import mod
from openvino.runtime.opset1.ops import multiply
from openvino.runtime.opset2.ops import mvn
from openvino.runtime.opset1.ops import negative
from openvino.runtime.opset4.ops import non_max_suppression
from openvino.runtime.opset3.ops import non_zero
from openvino.runtime.opset1.ops import normalize_l2
from openvino.runtime.opset1.ops import not_equal
from openvino.runtime.opset1.ops import one_hot
from openvino.runtime.opset1.ops import pad
from openvino.runtime.opset1.ops import parameter
from openvino.runtime.opset1.ops import power
from openvino.runtime.opset1.ops import prelu
from openvino.runtime.opset1.ops import prior_box
from openvino.runtime.opset1.ops import prior_box_clustered
from openvino.runtime.opset1.ops import psroi_pooling
from openvino.runtime.opset4.ops import proposal
from openvino.runtime.opset1.ops import range
from openvino.runtime.opset3.ops import read_value
from openvino.runtime.opset4.ops import reduce_l1
from openvino.runtime.opset4.ops import reduce_l2
from openvino.runtime.opset1.ops import reduce_logical_and
from openvino.runtime.opset1.ops import reduce_logical_or
from openvino.runtime.opset1.ops import reduce_max
from openvino.runtime.opset1.ops import reduce_mean
from openvino.runtime.opset1.ops import reduce_min
from openvino.runtime.opset1.ops import reduce_prod
from openvino.runtime.opset1.ops import reduce_sum
from openvino.runtime.opset1.ops import region_yolo
from openvino.runtime.opset2.ops import reorg_yolo
from openvino.runtime.opset1.ops import relu
from openvino.runtime.opset1.ops import reshape
from openvino.runtime.opset1.ops import result
from openvino.runtime.opset1.ops import reverse_sequence
from openvino.runtime.opset3.ops import rnn_cell
from openvino.runtime.opset3.ops import roi_align
from openvino.runtime.opset2.ops import roi_pooling
from openvino.runtime.opset3.ops import scatter_elements_update
from openvino.runtime.opset3.ops import scatter_update
from openvino.runtime.opset1.ops import select
from openvino.runtime.opset1.ops import selu
from openvino.runtime.opset3.ops import shape_of
from openvino.runtime.opset3.ops import shuffle_channels
from openvino.runtime.opset1.ops import sigmoid
from openvino.runtime.opset1.ops import sign
from openvino.runtime.opset1.ops import sin
from openvino.runtime.opset1.ops import sinh
from openvino.runtime.opset1.ops import softmax
from openvino.runtime.opset4.ops import softplus
from openvino.runtime.opset2.ops import space_to_batch
from openvino.runtime.opset1.ops import space_to_depth
from openvino.runtime.opset1.ops import split
from openvino.runtime.opset1.ops import sqrt
from openvino.runtime.opset1.ops import squared_difference
from openvino.runtime.opset1.ops import squeeze
from openvino.runtime.opset1.ops import strided_slice
from openvino.runtime.opset1.ops import subtract
from openvino.runtime.opset4.ops import swish
from openvino.runtime.opset1.ops import tan
from openvino.runtime.opset1.ops import tanh
from openvino.runtime.opset1.ops import tensor_iterator
from openvino.runtime.opset1.ops import tile
from openvino.runtime.opset3.ops import topk
from openvino.runtime.opset1.ops import transpose
from openvino.runtime.opset1.ops import unsqueeze
from openvino.runtime.opset1.ops import variadic_split

Some files were not shown because too many files have changed in this diff.