diff --git a/lib/yolov5-face_Jan1/LICENSE b/lib/yolov5-face_Jan1/LICENSE deleted file mode 100644 index 9e419e042..000000000 --- a/lib/yolov5-face_Jan1/LICENSE +++ /dev/null @@ -1,674 +0,0 @@ -GNU GENERAL PUBLIC LICENSE - Version 3, 29 June 2007 - - Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> - Everyone is permitted to copy and distribute verbatim copies - of this license document, but changing it is not allowed. - - Preamble - - The GNU General Public License is a free, copyleft license for -software and other kinds of works. - - The licenses for most software and other practical works are designed -to take away your freedom to share and change the works. By contrast, -the GNU General Public License is intended to guarantee your freedom to -share and change all versions of a program--to make sure it remains free -software for all its users. We, the Free Software Foundation, use the -GNU General Public License for most of our software; it applies also to -any other work released this way by its authors. You can apply it to -your programs, too. - - When we speak of free software, we are referring to freedom, not -price. Our General Public Licenses are designed to make sure that you -have the freedom to distribute copies of free software (and charge for -them if you wish), that you receive source code or can get it if you -want it, that you can change the software or use pieces of it in new -free programs, and that you know you can do these things. - - To protect your rights, we need to prevent others from denying you -these rights or asking you to surrender the rights. Therefore, you have -certain responsibilities if you distribute copies of the software, or if -you modify it: responsibilities to respect the freedom of others. - - For example, if you distribute copies of such a program, whether -gratis or for a fee, you must pass on to the recipients the same -freedoms that you received. You must make sure that they, too, receive -or can get the source code. And you must show them these terms so they -know their rights. - - Developers that use the GNU GPL protect your rights with two steps: -(1) assert copyright on the software, and (2) offer you this License -giving you legal permission to copy, distribute and/or modify it. - - For the developers' and authors' protection, the GPL clearly explains -that there is no warranty for this free software. For both users' and -authors' sake, the GPL requires that modified versions be marked as -changed, so that their problems will not be attributed erroneously to -authors of previous versions. - - Some devices are designed to deny users access to install or run -modified versions of the software inside them, although the manufacturer -can do so. This is fundamentally incompatible with the aim of -protecting users' freedom to change the software. The systematic -pattern of such abuse occurs in the area of products for individuals to -use, which is precisely where it is most unacceptable. Therefore, we -have designed this version of the GPL to prohibit the practice for those -products. If such problems arise substantially in other domains, we -stand ready to extend this provision to those domains in future versions -of the GPL, as needed to protect the freedom of users. - - Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of -software on general-purpose computers, but in those that do, we wish to -avoid the special danger that patents applied to a free program could -make it effectively proprietary. To prevent this, the GPL assures that -patents cannot be used to render the program non-free. - - The precise terms and conditions for copying, distribution and -modification follow. - - TERMS AND CONDITIONS - - 0. Definitions. - - "This License" refers to version 3 of the GNU General Public License. - - "Copyright" also means copyright-like laws that apply to other kinds of -works, such as semiconductor masks. - - "The Program" refers to any copyrightable work licensed under this -License. Each licensee is addressed as "you". "Licensees" and -"recipients" may be individuals or organizations. - - To "modify" a work means to copy from or adapt all or part of the work -in a fashion requiring copyright permission, other than the making of an -exact copy. The resulting work is called a "modified version" of the -earlier work or a work "based on" the earlier work. - - A "covered work" means either the unmodified Program or a work based -on the Program. - - To "propagate" a work means to do anything with it that, without -permission, would make you directly or secondarily liable for -infringement under applicable copyright law, except executing it on a -computer or modifying a private copy. Propagation includes copying, -distribution (with or without modification), making available to the -public, and in some countries other activities as well. - - To "convey" a work means any kind of propagation that enables other -parties to make or receive copies. Mere interaction with a user through -a computer network, with no transfer of a copy, is not conveying. - - An interactive user interface displays "Appropriate Legal Notices" -to the extent that it includes a convenient and prominently visible -feature that (1) displays an appropriate copyright notice, and (2) -tells the user that there is no warranty for the work (except to the -extent that warranties are provided), that licensees may convey the -work under this License, and how to view a copy of this License. If -the interface presents a list of user commands or options, such as a -menu, a prominent item in the list meets this criterion. - - 1. Source Code. - - The "source code" for a work means the preferred form of the work -for making modifications to it. "Object code" means any non-source -form of a work. - - A "Standard Interface" means an interface that either is an official -standard defined by a recognized standards body, or, in the case of -interfaces specified for a particular programming language, one that -is widely used among developers working in that language. - - The "System Libraries" of an executable work include anything, other -than the work as a whole, that (a) is included in the normal form of -packaging a Major Component, but which is not part of that Major -Component, and (b) serves only to enable use of the work with that -Major Component, or to implement a Standard Interface for which an -implementation is available to the public in source code form. A -"Major Component", in this context, means a major essential component -(kernel, window system, and so on) of the specific operating system -(if any) on which the executable work runs, or a compiler used to -produce the work, or an object code interpreter used to run it. 
- - The "Corresponding Source" for a work in object code form means all -the source code needed to generate, install, and (for an executable -work) run the object code and to modify the work, including scripts to -control those activities. However, it does not include the work's -System Libraries, or general-purpose tools or generally available free -programs which are used unmodified in performing those activities but -which are not part of the work. For example, Corresponding Source -includes interface definition files associated with source files for -the work, and the source code for shared libraries and dynamically -linked subprograms that the work is specifically designed to require, -such as by intimate data communication or control flow between those -subprograms and other parts of the work. - - The Corresponding Source need not include anything that users -can regenerate automatically from other parts of the Corresponding -Source. - - The Corresponding Source for a work in source code form is that -same work. - - 2. Basic Permissions. - - All rights granted under this License are granted for the term of -copyright on the Program, and are irrevocable provided the stated -conditions are met. This License explicitly affirms your unlimited -permission to run the unmodified Program. The output from running a -covered work is covered by this License only if the output, given its -content, constitutes a covered work. This License acknowledges your -rights of fair use or other equivalent, as provided by copyright law. - - You may make, run and propagate covered works that you do not -convey, without conditions so long as your license otherwise remains -in force. You may convey covered works to others for the sole purpose -of having them make modifications exclusively for you, or provide you -with facilities for running those works, provided that you comply with -the terms of this License in conveying all material for which you do -not control copyright. Those thus making or running the covered works -for you must do so exclusively on your behalf, under your direction -and control, on terms that prohibit them from making any copies of -your copyrighted material outside their relationship with you. - - Conveying under any other circumstances is permitted solely under -the conditions stated below. Sublicensing is not allowed; section 10 -makes it unnecessary. - - 3. Protecting Users' Legal Rights From Anti-Circumvention Law. - - No covered work shall be deemed part of an effective technological -measure under any applicable law fulfilling obligations under article -11 of the WIPO copyright treaty adopted on 20 December 1996, or -similar laws prohibiting or restricting circumvention of such -measures. - - When you convey a covered work, you waive any legal power to forbid -circumvention of technological measures to the extent such circumvention -is effected by exercising rights under this License with respect to -the covered work, and you disclaim any intention to limit operation or -modification of the work as a means of enforcing, against the work's -users, your or third parties' legal rights to forbid circumvention of -technological measures. - - 4. Conveying Verbatim Copies. 
- - You may convey verbatim copies of the Program's source code as you -receive it, in any medium, provided that you conspicuously and -appropriately publish on each copy an appropriate copyright notice; -keep intact all notices stating that this License and any -non-permissive terms added in accord with section 7 apply to the code; -keep intact all notices of the absence of any warranty; and give all -recipients a copy of this License along with the Program. - - You may charge any price or no price for each copy that you convey, -and you may offer support or warranty protection for a fee. - - 5. Conveying Modified Source Versions. - - You may convey a work based on the Program, or the modifications to -produce it from the Program, in the form of source code under the -terms of section 4, provided that you also meet all of these conditions: - - a) The work must carry prominent notices stating that you modified - it, and giving a relevant date. - - b) The work must carry prominent notices stating that it is - released under this License and any conditions added under section - 7. This requirement modifies the requirement in section 4 to - "keep intact all notices". - - c) You must license the entire work, as a whole, under this - License to anyone who comes into possession of a copy. This - License will therefore apply, along with any applicable section 7 - additional terms, to the whole of the work, and all its parts, - regardless of how they are packaged. This License gives no - permission to license the work in any other way, but it does not - invalidate such permission if you have separately received it. - - d) If the work has interactive user interfaces, each must display - Appropriate Legal Notices; however, if the Program has interactive - interfaces that do not display Appropriate Legal Notices, your - work need not make them do so. - - A compilation of a covered work with other separate and independent -works, which are not by their nature extensions of the covered work, -and which are not combined with it such as to form a larger program, -in or on a volume of a storage or distribution medium, is called an -"aggregate" if the compilation and its resulting copyright are not -used to limit the access or legal rights of the compilation's users -beyond what the individual works permit. Inclusion of a covered work -in an aggregate does not cause this License to apply to the other -parts of the aggregate. - - 6. Conveying Non-Source Forms. - - You may convey a covered work in object code form under the terms -of sections 4 and 5, provided that you also convey the -machine-readable Corresponding Source under the terms of this License, -in one of these ways: - - a) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by the - Corresponding Source fixed on a durable physical medium - customarily used for software interchange. 
- - b) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by a - written offer, valid for at least three years and valid for as - long as you offer spare parts or customer support for that product - model, to give anyone who possesses the object code either (1) a - copy of the Corresponding Source for all the software in the - product that is covered by this License, on a durable physical - medium customarily used for software interchange, for a price no - more than your reasonable cost of physically performing this - conveying of source, or (2) access to copy the - Corresponding Source from a network server at no charge. - - c) Convey individual copies of the object code with a copy of the - written offer to provide the Corresponding Source. This - alternative is allowed only occasionally and noncommercially, and - only if you received the object code with such an offer, in accord - with subsection 6b. - - d) Convey the object code by offering access from a designated - place (gratis or for a charge), and offer equivalent access to the - Corresponding Source in the same way through the same place at no - further charge. You need not require recipients to copy the - Corresponding Source along with the object code. If the place to - copy the object code is a network server, the Corresponding Source - may be on a different server (operated by you or a third party) - that supports equivalent copying facilities, provided you maintain - clear directions next to the object code saying where to find the - Corresponding Source. Regardless of what server hosts the - Corresponding Source, you remain obligated to ensure that it is - available for as long as needed to satisfy these requirements. - - e) Convey the object code using peer-to-peer transmission, provided - you inform other peers where the object code and Corresponding - Source of the work are being offered to the general public at no - charge under subsection 6d. - - A separable portion of the object code, whose source code is excluded -from the Corresponding Source as a System Library, need not be -included in conveying the object code work. - - A "User Product" is either (1) a "consumer product", which means any -tangible personal property which is normally used for personal, family, -or household purposes, or (2) anything designed or sold for incorporation -into a dwelling. In determining whether a product is a consumer product, -doubtful cases shall be resolved in favor of coverage. For a particular -product received by a particular user, "normally used" refers to a -typical or common use of that class of product, regardless of the status -of the particular user or of the way in which the particular user -actually uses, or expects or is expected to use, the product. A product -is a consumer product regardless of whether the product has substantial -commercial, industrial or non-consumer uses, unless such uses represent -the only significant mode of use of the product. - - "Installation Information" for a User Product means any methods, -procedures, authorization keys, or other information required to install -and execute modified versions of a covered work in that User Product from -a modified version of its Corresponding Source. The information must -suffice to ensure that the continued functioning of the modified object -code is in no case prevented or interfered with solely because -modification has been made. 
- - If you convey an object code work under this section in, or with, or -specifically for use in, a User Product, and the conveying occurs as -part of a transaction in which the right of possession and use of the -User Product is transferred to the recipient in perpetuity or for a -fixed term (regardless of how the transaction is characterized), the -Corresponding Source conveyed under this section must be accompanied -by the Installation Information. But this requirement does not apply -if neither you nor any third party retains the ability to install -modified object code on the User Product (for example, the work has -been installed in ROM). - - The requirement to provide Installation Information does not include a -requirement to continue to provide support service, warranty, or updates -for a work that has been modified or installed by the recipient, or for -the User Product in which it has been modified or installed. Access to a -network may be denied when the modification itself materially and -adversely affects the operation of the network or violates the rules and -protocols for communication across the network. - - Corresponding Source conveyed, and Installation Information provided, -in accord with this section must be in a format that is publicly -documented (and with an implementation available to the public in -source code form), and must require no special password or key for -unpacking, reading or copying. - - 7. Additional Terms. - - "Additional permissions" are terms that supplement the terms of this -License by making exceptions from one or more of its conditions. -Additional permissions that are applicable to the entire Program shall -be treated as though they were included in this License, to the extent -that they are valid under applicable law. If additional permissions -apply only to part of the Program, that part may be used separately -under those permissions, but the entire Program remains governed by -this License without regard to the additional permissions. - - When you convey a copy of a covered work, you may at your option -remove any additional permissions from that copy, or from any part of -it. (Additional permissions may be written to require their own -removal in certain cases when you modify the work.) You may place -additional permissions on material, added by you to a covered work, -for which you have or can give appropriate copyright permission. 
- - Notwithstanding any other provision of this License, for material you -add to a covered work, you may (if authorized by the copyright holders of -that material) supplement the terms of this License with terms: - - a) Disclaiming warranty or limiting liability differently from the - terms of sections 15 and 16 of this License; or - - b) Requiring preservation of specified reasonable legal notices or - author attributions in that material or in the Appropriate Legal - Notices displayed by works containing it; or - - c) Prohibiting misrepresentation of the origin of that material, or - requiring that modified versions of such material be marked in - reasonable ways as different from the original version; or - - d) Limiting the use for publicity purposes of names of licensors or - authors of the material; or - - e) Declining to grant rights under trademark law for use of some - trade names, trademarks, or service marks; or - - f) Requiring indemnification of licensors and authors of that - material by anyone who conveys the material (or modified versions of - it) with contractual assumptions of liability to the recipient, for - any liability that these contractual assumptions directly impose on - those licensors and authors. - - All other non-permissive additional terms are considered "further -restrictions" within the meaning of section 10. If the Program as you -received it, or any part of it, contains a notice stating that it is -governed by this License along with a term that is a further -restriction, you may remove that term. If a license document contains -a further restriction but permits relicensing or conveying under this -License, you may add to a covered work material governed by the terms -of that license document, provided that the further restriction does -not survive such relicensing or conveying. - - If you add terms to a covered work in accord with this section, you -must place, in the relevant source files, a statement of the -additional terms that apply to those files, or a notice indicating -where to find the applicable terms. - - Additional terms, permissive or non-permissive, may be stated in the -form of a separately written license, or stated as exceptions; -the above requirements apply either way. - - 8. Termination. - - You may not propagate or modify a covered work except as expressly -provided under this License. Any attempt otherwise to propagate or -modify it is void, and will automatically terminate your rights under -this License (including any patent licenses granted under the third -paragraph of section 11). - - However, if you cease all violation of this License, then your -license from a particular copyright holder is reinstated (a) -provisionally, unless and until the copyright holder explicitly and -finally terminates your license, and (b) permanently, if the copyright -holder fails to notify you of the violation by some reasonable means -prior to 60 days after the cessation. - - Moreover, your license from a particular copyright holder is -reinstated permanently if the copyright holder notifies you of the -violation by some reasonable means, this is the first time you have -received notice of violation of this License (for any work) from that -copyright holder, and you cure the violation prior to 30 days after -your receipt of the notice. - - Termination of your rights under this section does not terminate the -licenses of parties who have received copies or rights from you under -this License. 
If your rights have been terminated and not permanently -reinstated, you do not qualify to receive new licenses for the same -material under section 10. - - 9. Acceptance Not Required for Having Copies. - - You are not required to accept this License in order to receive or -run a copy of the Program. Ancillary propagation of a covered work -occurring solely as a consequence of using peer-to-peer transmission -to receive a copy likewise does not require acceptance. However, -nothing other than this License grants you permission to propagate or -modify any covered work. These actions infringe copyright if you do -not accept this License. Therefore, by modifying or propagating a -covered work, you indicate your acceptance of this License to do so. - - 10. Automatic Licensing of Downstream Recipients. - - Each time you convey a covered work, the recipient automatically -receives a license from the original licensors, to run, modify and -propagate that work, subject to this License. You are not responsible -for enforcing compliance by third parties with this License. - - An "entity transaction" is a transaction transferring control of an -organization, or substantially all assets of one, or subdividing an -organization, or merging organizations. If propagation of a covered -work results from an entity transaction, each party to that -transaction who receives a copy of the work also receives whatever -licenses to the work the party's predecessor in interest had or could -give under the previous paragraph, plus a right to possession of the -Corresponding Source of the work from the predecessor in interest, if -the predecessor has it or can get it with reasonable efforts. - - You may not impose any further restrictions on the exercise of the -rights granted or affirmed under this License. For example, you may -not impose a license fee, royalty, or other charge for exercise of -rights granted under this License, and you may not initiate litigation -(including a cross-claim or counterclaim in a lawsuit) alleging that -any patent claim is infringed by making, using, selling, offering for -sale, or importing the Program or any portion of it. - - 11. Patents. - - A "contributor" is a copyright holder who authorizes use under this -License of the Program or a work on which the Program is based. The -work thus licensed is called the contributor's "contributor version". - - A contributor's "essential patent claims" are all patent claims -owned or controlled by the contributor, whether already acquired or -hereafter acquired, that would be infringed by some manner, permitted -by this License, of making, using, or selling its contributor version, -but do not include claims that would be infringed only as a -consequence of further modification of the contributor version. For -purposes of this definition, "control" includes the right to grant -patent sublicenses in a manner consistent with the requirements of -this License. - - Each contributor grants you a non-exclusive, worldwide, royalty-free -patent license under the contributor's essential patent claims, to -make, use, sell, offer for sale, import and otherwise run, modify and -propagate the contents of its contributor version. - - In the following three paragraphs, a "patent license" is any express -agreement or commitment, however denominated, not to enforce a patent -(such as an express permission to practice a patent or covenant not to -sue for patent infringement). 
To "grant" such a patent license to a -party means to make such an agreement or commitment not to enforce a -patent against the party. - - If you convey a covered work, knowingly relying on a patent license, -and the Corresponding Source of the work is not available for anyone -to copy, free of charge and under the terms of this License, through a -publicly available network server or other readily accessible means, -then you must either (1) cause the Corresponding Source to be so -available, or (2) arrange to deprive yourself of the benefit of the -patent license for this particular work, or (3) arrange, in a manner -consistent with the requirements of this License, to extend the patent -license to downstream recipients. "Knowingly relying" means you have -actual knowledge that, but for the patent license, your conveying the -covered work in a country, or your recipient's use of the covered work -in a country, would infringe one or more identifiable patents in that -country that you have reason to believe are valid. - - If, pursuant to or in connection with a single transaction or -arrangement, you convey, or propagate by procuring conveyance of, a -covered work, and grant a patent license to some of the parties -receiving the covered work authorizing them to use, propagate, modify -or convey a specific copy of the covered work, then the patent license -you grant is automatically extended to all recipients of the covered -work and works based on it. - - A patent license is "discriminatory" if it does not include within -the scope of its coverage, prohibits the exercise of, or is -conditioned on the non-exercise of one or more of the rights that are -specifically granted under this License. You may not convey a covered -work if you are a party to an arrangement with a third party that is -in the business of distributing software, under which you make payment -to the third party based on the extent of your activity of conveying -the work, and under which the third party grants, to any of the -parties who would receive the covered work from you, a discriminatory -patent license (a) in connection with copies of the covered work -conveyed by you (or copies made from those copies), or (b) primarily -for and in connection with specific products or compilations that -contain the covered work, unless you entered into that arrangement, -or that patent license was granted, prior to 28 March 2007. - - Nothing in this License shall be construed as excluding or limiting -any implied license or other defenses to infringement that may -otherwise be available to you under applicable patent law. - - 12. No Surrender of Others' Freedom. - - If conditions are imposed on you (whether by court order, agreement or -otherwise) that contradict the conditions of this License, they do not -excuse you from the conditions of this License. If you cannot convey a -covered work so as to satisfy simultaneously your obligations under this -License and any other pertinent obligations, then as a consequence you may -not convey it at all. For example, if you agree to terms that obligate you -to collect a royalty for further conveying from those to whom you convey -the Program, the only way you could satisfy both those terms and this -License would be to refrain entirely from conveying the Program. - - 13. Use with the GNU Affero General Public License. 
- - Notwithstanding any other provision of this License, you have -permission to link or combine any covered work with a work licensed -under version 3 of the GNU Affero General Public License into a single -combined work, and to convey the resulting work. The terms of this -License will continue to apply to the part which is the covered work, -but the special requirements of the GNU Affero General Public License, -section 13, concerning interaction through a network will apply to the -combination as such. - - 14. Revised Versions of this License. - - The Free Software Foundation may publish revised and/or new versions of -the GNU General Public License from time to time. Such new versions will -be similar in spirit to the present version, but may differ in detail to -address new problems or concerns. - - Each version is given a distinguishing version number. If the -Program specifies that a certain numbered version of the GNU General -Public License "or any later version" applies to it, you have the -option of following the terms and conditions either of that numbered -version or of any later version published by the Free Software -Foundation. If the Program does not specify a version number of the -GNU General Public License, you may choose any version ever published -by the Free Software Foundation. - - If the Program specifies that a proxy can decide which future -versions of the GNU General Public License can be used, that proxy's -public statement of acceptance of a version permanently authorizes you -to choose that version for the Program. - - Later license versions may give you additional or different -permissions. However, no additional obligations are imposed on any -author or copyright holder as a result of your choosing to follow a -later version. - - 15. Disclaimer of Warranty. - - THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY -APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT -HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY -OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, -THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR -PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM -IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF -ALL NECESSARY SERVICING, REPAIR OR CORRECTION. - - 16. Limitation of Liability. - - IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING -WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS -THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY -GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE -USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF -DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD -PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), -EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF -SUCH DAMAGES. - - 17. Interpretation of Sections 15 and 16. - - If the disclaimer of warranty and limitation of liability provided -above cannot be given local legal effect according to their terms, -reviewing courts shall apply local law that most closely approximates -an absolute waiver of all civil liability in connection with the -Program, unless a warranty or assumption of liability accompanies a -copy of the Program in return for a fee. 
- - END OF TERMS AND CONDITIONS - - How to Apply These Terms to Your New Programs - - If you develop a new program, and you want it to be of the greatest -possible use to the public, the best way to achieve this is to make it -free software which everyone can redistribute and change under these terms. - - To do so, attach the following notices to the program. It is safest -to attach them to the start of each source file to most effectively -state the exclusion of warranty; and each file should have at least -the "copyright" line and a pointer to where the full notice is found. - - <one line to give the program's name and a brief idea of what it does.> - Copyright (C) <year> <name of author> - - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 3 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program. If not, see <https://www.gnu.org/licenses/>. - -Also add information on how to contact you by electronic and paper mail. - - If the program does terminal interaction, make it output a short -notice like this when it starts in an interactive mode: - - <program> Copyright (C) <year> <name of author> - This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. - This is free software, and you are welcome to redistribute it - under certain conditions; type `show c' for details. - -The hypothetical commands `show w' and `show c' should show the appropriate -parts of the General Public License. Of course, your program's commands -might be different; for a GUI interface, you would use an "about box". - - You should also get your employer (if you work as a programmer) or school, -if any, to sign a "copyright disclaimer" for the program, if necessary. -For more information on this, and how to apply and follow the GNU GPL, see -<https://www.gnu.org/licenses/>. - - The GNU General Public License does not permit incorporating your program -into proprietary programs. If your program is a subroutine library, you -may consider it more useful to permit linking proprietary applications with -the library. If this is what you want to do, use the GNU Lesser General -Public License instead of this License. But first, please read -<https://www.gnu.org/licenses/why-not-lgpl.html>. \ No newline at end of file diff --git a/lib/yolov5-face_Jan1/README.md b/lib/yolov5-face_Jan1/README.md deleted file mode 100755 index 40e4a51c1..000000000 --- a/lib/yolov5-face_Jan1/README.md +++ /dev/null @@ -1,154 +0,0 @@ -## What's New - -**2021.11**: BlazeFace -| Method | multi scale | Easy | Medium | Hard | Model Size(MB) | Link | -| -------------------- | ----------- | ----- | ------ | ----- | -------------- | ----- | -| BlazeFace | True | 88.5 | 85.5 | 73.1 | 0.472 | https://github.com/PaddlePaddle/PaddleDetection | -| BlazeFace-FPN-SSH | True | 90.7 | 88.3 | 79.3 | 0.479 | https://github.com/PaddlePaddle/PaddleDetection | -| yolov5-blazeface | True | 90.4 | 88.7 | 78.0 | 0.493 | https://pan.baidu.com/s/1RHp8wa615OuDVhsO-qrMpQ pwd:r3v3 | -| yolov5-blazeface-fpn | True | 90.8 | 89.4 | 79.1 | 0.493 | - | - -**2021.08**: YOLOv5-face ported to TensorRT. Inference time measured on an RTX 2080 Ti.
-|Backbone|PyTorch |TensorRT_FP16 | -|:---:|:----:|:----:| -|yolov5n-0.5|11.9ms|2.9ms| -|yolov5n-face|20.7ms|2.5ms| -|yolov5s-face|25.2ms|3.0ms| -|yolov5m-face|61.2ms|3.0ms| -|yolov5l-face|109.6ms|3.6ms| -> Note: (1) times are for model inference only; (2) resolution is 640x640 - - -**2021.08**: Added the new training dataset [Multi-Task-Facial](https://drive.google.com/file/d/1Pwd6ga06cDjeOX20RSC1KWiT888Q9IpM/view?usp=sharing), improving large-face detection. -| Method | Easy | Medium | Hard | -| -------------------- | ----- | ------ | ----- | -| ***YOLOv5s*** | 94.56 | 92.92 | 83.84 | -| ***YOLOv5m*** | 95.46 | 93.87 | 85.54 | - - -## Introduction - -YOLOv5-face is a real-time, high-accuracy face detector. - -![](data/images/yolov5-face-p6.png) - -## Performance - -Single-scale inference at VGA resolution (the longer side is scaled to 640). - -***Large family*** - -| Method | Backbone | Easy | Medium | Hard | \#Params(M) | \#Flops(G) | -| :------------------ | -------------- | ----- | ------ | ----- | ----------- | ---------- | -| DSFD (CVPR19) | ResNet152 | 94.29 | 91.47 | 71.39 | 120.06 | 259.55 | -| RetinaFace (CVPR20) | ResNet50 | 94.92 | 91.90 | 64.17 | 29.50 | 37.59 | -| HAMBox (CVPR20) | ResNet50 | 95.27 | 93.76 | 76.75 | 30.24 | 43.28 | -| TinaFace (Arxiv20) | ResNet50 | 95.61 | 94.25 | 81.43 | 37.98 | 172.95 | -| SCRFD-34GF(Arxiv21) | Bottleneck Res | 96.06 | 94.92 | 85.29 | 9.80 | 34.13 | -| SCRFD-10GF(Arxiv21) | Basic Res | 95.16 | 93.87 | 83.05 | 3.86 | 9.98 | -| - | - | - | - | - | - | - | -| ***YOLOv5s*** | CSPNet | 94.67 | 92.75 | 83.03 | 7.075 | 5.751 | -| **YOLOv5s6** | CSPNet | 95.48 | 93.66 | 82.8 | 12.386 | 6.280 | -| ***YOLOv5m*** | CSPNet | 95.30 | 93.76 | 85.28 | 21.063 | 18.146 | -| **YOLOv5m6** | CSPNet | 95.66 | 94.1 | 85.2 | 35.485 | 19.773 | -| ***YOLOv5l*** | CSPNet | 95.78 | 94.30 | 86.13 | 46.627 | 41.607 | -| ***YOLOv5l6*** | CSPNet | 96.38 | 94.90 | 85.88 | 76.674 | 45.279 | - - -***Small family*** - -| Method | Backbone | Easy | Medium | Hard | \#Params(M) | \#Flops(G) | -| -------------------- | --------------- | ----- | ------ | ----- | ----------- | ---------- | -| RetinaFace (CVPR20) | MobileNet0.25 | 87.78 | 81.16 | 47.32 | 0.44 | 0.802 | -| FaceBoxes (IJCB17) | | 76.17 | 57.17 | 24.18 | 1.01 | 0.275 | -| SCRFD-0.5GF(Arxiv21) | Depth-wise Conv | 90.57 | 88.12 | 68.51 | 0.57 | 0.508 | -| SCRFD-2.5GF(Arxiv21) | Basic Res | 93.78 | 92.16 | 77.87 | 0.67 | 2.53 | -| - | - | - | - | - | - | - | -| ***YOLOv5n*** | ShuffleNetv2 | 93.74 | 91.54 | 80.32 | 1.726 | 2.111 | -| ***YOLOv5n-0.5*** | ShuffleNetv2 | 90.76 | 88.12 | 73.82 | 0.447 | 0.571 | - - - -## Pretrained-Models - -| Name | Easy | Medium | Hard | FLOPs(G) | Params(M) | Link | -| ----------- | ----- | ------ | ----- | -------- | --------- | ------------------------------------------------------------ | -| yolov5n-0.5 | 90.76 | 88.12 | 73.82 | 0.571 | 0.447 | Link: https://pan.baidu.com/s/1UgiKwzFq5NXI2y-Zui1kiA pwd: s5ow, https://drive.google.com/file/d/1XJ8w55Y9Po7Y5WP4X1Kg1a77ok2tL_KY/view?usp=sharing | -| yolov5n | 93.61 | 91.52 | 80.53 | 2.111 | 1.726 | Link: https://pan.baidu.com/s/1xsYns6cyB84aPDgXB7sNDQ pwd: lw9j, https://drive.google.com/file/d/18oenL6tjFkdR1f5IgpYeQfDFqU4w3jEr/view?usp=sharing | -| yolov5s | 94.33 | 92.61 | 83.15 | 5.751 | 7.075 | Link: https://pan.baidu.com/s/1fyzLxZYx7Ja1_PCIWRhxbw pwd: eq0q, https://drive.google.com/file/d/1zxaHeLDyID9YU4-hqK7KNepXIwbTkRIO/view?usp=sharing | -| yolov5m | 95.30 | 93.76 | 85.28 | 18.146 | 21.063 | Link: https://pan.baidu.com/s/1oePvd2K6R4-gT0g7EERmdQ pwd: jmtk, 
https://drive.google.com/file/d/1Sx-KEGXSxvPMS35JhzQKeRBiqC98VDDI | -| yolov5l | 95.78 | 94.30 | 86.13 | 41.607 | 46.627 | Link: https://pan.baidu.com/s/11l4qSEgA2-c7e8lpRt8iFw pwd: 0mq7, https://drive.google.com/file/d/16F-3AjdQBn9p3nMhStUxfDNAE_1bOF_r | - -## Data preparation - -1. Download the WIDERFace datasets. -2. Download the annotation files from [google drive](https://drive.google.com/file/d/1tU_IjyOwGQfGNUvZGwWWM4SwxKp2PUQ8/view?usp=sharing). - -```shell -python3 train2yolo.py -python3 val2yolo.py -``` - - - -## Training - -```shell -CUDA_VISIBLE_DEVICES="0,1,2,3" python3 train.py --data data/widerface.yaml --cfg models/yolov5s.yaml --weights 'pretrained models' -``` - - - -## WIDERFace Evaluation - -```shell -python3 test_widerface.py --weights 'your test model' --img-size 640 - -cd widerface_evaluate -python3 evaluation.py -``` - -#### Test - -![](data/images/result.jpg) - - -#### Android demo - -https://github.com/FeiGeChuanShu/ncnn_Android_face/tree/main/ncnn-android-yolov5_face - -#### opencv dnn demo - -https://github.com/hpc203/yolov5-face-landmarks-opencv-v2 - -#### References - -https://github.com/ultralytics/yolov5 - -https://github.com/DayBreak-u/yolo-face-with-landmark - -https://github.com/xialuxi/yolov5_face_landmark - -https://github.com/biubug6/Pytorch_Retinaface - -https://github.com/deepinsight/insightface - - -#### Citation -- If you find this work useful, please cite: - - @article{YOLO5Face, - title = {YOLO5Face: Why Reinventing a Face Detector}, - author = {Delong Qi and Weijun Tan and Qi Yao and Jingfeng Liu}, - journal = {arXiv preprint arXiv:2105.12931}, - year = {2021} - } - -#### Main Contributors -https://github.com/derronqi - -https://github.com/changhy666 - -https://github.com/bobo0810 - diff --git a/lib/yolov5-face_Jan1/README_DISPENSION.md b/lib/yolov5-face_Jan1/README_DISPENSION.md deleted file mode 100755 index 154586172..000000000 --- a/lib/yolov5-face_Jan1/README_DISPENSION.md +++ /dev/null @@ -1,40 +0,0 @@ -## DISPENSION -## INTOXIVISION PROJECT - YOLOV5-FACE -## JANUARY 1, 2022 -## Lucas Wan (lucas.wan@dal.ca) - -**TO RUN** - -Ensure that all required packages are installed (see requirements.txt). - -python3 detect_face.py --image "/image-location" - -Edit detect_face.py to change the write location. - -**INFO** - -Uses the pretrained yolov5m6_face model, which has the best recorded accuracy. - -The landmarks output gives the X, Y coordinates of [Left Eye, Right Eye, Nose, Left Mouth, Right Mouth, Left Inner Eyebrow, Right Inner Eyebrow]. - -X = 0 is the left of the image (right = positive); Y = 0 is the top of the image (down = positive). X and Y are normalized to [0, 1]. - -Eyebrow locations are calculated from the eye locations using average distances between pupils (63 mm) and from pupil to top of eyebrow (25 mm); a minimal sketch of this calculation follows this file's diff. - -Note that this folder only includes the files required for running the pretrained model (it cannot be used to train a new model). - -**REFERENCES** - -https://github.com/ultralytics/yolov5 - -https://github.com/deepcam-cn/yolov5-face - -https://www.techrxiv.org/articles/preprint/TFW_Annotated_Thermal_Faces_in_the_Wild_Dataset/17004538 - -**TO DO** - -Combine landmark location information from multiple images (obtain an average from a burst of frames). - -Identify the central person (currently landmarks are output for only one person, who could be someone off to the side). - -Determine which packages in requirements.txt can be omitted.
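The eyebrow estimate described in README_DISPENSION above is straightforward to express in code. Below is a minimal, illustrative Python sketch (not part of the deleted files): it assumes the normalized [0, 1] landmark convention the README states, and scales the stated 25 mm pupil-to-eyebrow distance by the observed inter-pupil distance (63 mm on average). The function name and example coordinates are hypothetical; the actual detect_face.py implementation may differ.

```python
def estimate_inner_eyebrows(left_eye, right_eye):
    """Estimate inner-eyebrow positions from pupil landmarks.

    left_eye, right_eye: normalized (x, y) tuples; origin at the image's
    top-left, x growing rightward and y growing downward, as in the README.
    Returns (left_brow, right_brow) in the same normalized coordinates.
    """
    # Inter-pupil distance in normalized image units (~63 mm on a real face).
    ipd = ((right_eye[0] - left_eye[0]) ** 2 +
           (right_eye[1] - left_eye[1]) ** 2) ** 0.5
    # Scale the average 25 mm pupil-to-eyebrow distance by the observed IPD.
    brow_offset = ipd * (25.0 / 63.0)
    # y grows downward, so an eyebrow sits at a smaller y than its pupil.
    return ((left_eye[0], left_eye[1] - brow_offset),
            (right_eye[0], right_eye[1] - brow_offset))


if __name__ == "__main__":
    # Made-up pupil coordinates for a roughly centered, upright face.
    left_brow, right_brow = estimate_inner_eyebrows((0.42, 0.50), (0.58, 0.50))
    print(left_brow, right_brow)  # each brow ~0.063 above its pupil here
```

This sketch ignores head roll (a tilted face shifts the eyebrows along the eye axis rather than straight up), which is consistent with the README's use of simple average distances.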
diff --git a/lib/yolov5-face_Jan1/data/images/100.png b/lib/yolov5-face_Jan1/data/images/100.png deleted file mode 100644 index a3831ec0a..000000000 Binary files a/lib/yolov5-face_Jan1/data/images/100.png and /dev/null differ diff --git a/lib/yolov5-face_Jan1/data/images/thermal1.png b/lib/yolov5-face_Jan1/data/images/thermal1.png deleted file mode 100644 index f78225bf5..000000000 Binary files a/lib/yolov5-face_Jan1/data/images/thermal1.png and /dev/null differ diff --git a/lib/yolov5-face_Jan1/models/__init__.py b/lib/yolov5-face_Jan1/models/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/lib/yolov5-face_Jan1/models/__pycache__/__init__.cpython-310.pyc b/lib/yolov5-face_Jan1/models/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 03779e8ae..000000000 Binary files a/lib/yolov5-face_Jan1/models/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/lib/yolov5-face_Jan1/models/__pycache__/__init__.cpython-36.pyc b/lib/yolov5-face_Jan1/models/__pycache__/__init__.cpython-36.pyc deleted file mode 100644 index b52e26006..000000000 Binary files a/lib/yolov5-face_Jan1/models/__pycache__/__init__.cpython-36.pyc and /dev/null differ diff --git a/lib/yolov5-face_Jan1/models/__pycache__/common.cpython-310.pyc b/lib/yolov5-face_Jan1/models/__pycache__/common.cpython-310.pyc deleted file mode 100644 index 1c80b302a..000000000 Binary files a/lib/yolov5-face_Jan1/models/__pycache__/common.cpython-310.pyc and /dev/null differ diff --git a/lib/yolov5-face_Jan1/models/__pycache__/common.cpython-36.pyc b/lib/yolov5-face_Jan1/models/__pycache__/common.cpython-36.pyc deleted file mode 100644 index 29481346b..000000000 Binary files a/lib/yolov5-face_Jan1/models/__pycache__/common.cpython-36.pyc and /dev/null differ diff --git a/lib/yolov5-face_Jan1/models/__pycache__/experimental.cpython-310.pyc b/lib/yolov5-face_Jan1/models/__pycache__/experimental.cpython-310.pyc deleted file mode 100644 index a904a954a..000000000 Binary files a/lib/yolov5-face_Jan1/models/__pycache__/experimental.cpython-310.pyc and /dev/null differ diff --git a/lib/yolov5-face_Jan1/models/__pycache__/experimental.cpython-36.pyc b/lib/yolov5-face_Jan1/models/__pycache__/experimental.cpython-36.pyc deleted file mode 100644 index eb057ccdb..000000000 Binary files a/lib/yolov5-face_Jan1/models/__pycache__/experimental.cpython-36.pyc and /dev/null differ diff --git a/lib/yolov5-face_Jan1/models/__pycache__/yolo.cpython-310.pyc b/lib/yolov5-face_Jan1/models/__pycache__/yolo.cpython-310.pyc deleted file mode 100644 index f3b927de2..000000000 Binary files a/lib/yolov5-face_Jan1/models/__pycache__/yolo.cpython-310.pyc and /dev/null differ diff --git a/lib/yolov5-face_Jan1/models/__pycache__/yolo.cpython-36.pyc b/lib/yolov5-face_Jan1/models/__pycache__/yolo.cpython-36.pyc deleted file mode 100644 index 99c4d9418..000000000 Binary files a/lib/yolov5-face_Jan1/models/__pycache__/yolo.cpython-36.pyc and /dev/null differ diff --git a/lib/yolov5-face_Jan1/models/common.py b/lib/yolov5-face_Jan1/models/common.py deleted file mode 100644 index 40a19fa43..000000000 --- a/lib/yolov5-face_Jan1/models/common.py +++ /dev/null @@ -1,439 +0,0 @@ -# This file contains modules common to various models - -import math - -import numpy as np -import requests -import torch -import torch.nn as nn -from PIL import Image, ImageDraw - -from utils.datasets import letterbox -from utils.general import non_max_suppression, make_divisible, scale_coords, xyxy2xywh -from utils.plots import color_list - -def autopad(k, 
p=None): # kernel, padding - # Pad to 'same' - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - -def channel_shuffle(x, groups): - batchsize, num_channels, height, width = x.data.size() - channels_per_group = num_channels // groups - - # reshape - x = x.view(batchsize, groups, channels_per_group, height, width) - x = torch.transpose(x, 1, 2).contiguous() - - # flatten - x = x.view(batchsize, -1, height, width) - return x - -def DWConv(c1, c2, k=1, s=1, act=True): - # Depthwise convolution - return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act) - -class Conv(nn.Module): - # Standard convolution - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Conv, self).__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - #self.act = self.act = nn.LeakyReLU(0.1, inplace=True) if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def fuseforward(self, x): - return self.act(self.conv(x)) - -class StemBlock(nn.Module): - def __init__(self, c1, c2, k=3, s=2, p=None, g=1, act=True): - super(StemBlock, self).__init__() - self.stem_1 = Conv(c1, c2, k, s, p, g, act) - self.stem_2a = Conv(c2, c2 // 2, 1, 1, 0) - self.stem_2b = Conv(c2 // 2, c2, 3, 2, 1) - self.stem_2p = nn.MaxPool2d(kernel_size=2,stride=2,ceil_mode=True) - self.stem_3 = Conv(c2 * 2, c2, 1, 1, 0) - - def forward(self, x): - stem_1_out = self.stem_1(x) - stem_2a_out = self.stem_2a(stem_1_out) - stem_2b_out = self.stem_2b(stem_2a_out) - stem_2p_out = self.stem_2p(stem_1_out) - out = self.stem_3(torch.cat((stem_2b_out,stem_2p_out),1)) - return out - -class Bottleneck(nn.Module): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super(Bottleneck, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - -class BottleneckCSP(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(BottleneckCSP, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) - self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) - self.act = nn.LeakyReLU(0.1, inplace=True) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1)))) - - -class C3(nn.Module): - # CSP Bottleneck with 3 convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(C3, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1) # act=FReLU(c2) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, 
e=1.0) for _ in range(n)]) - - def forward(self, x): - return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1)) - -class ShuffleV2Block(nn.Module): - def __init__(self, inp, oup, stride): - super(ShuffleV2Block, self).__init__() - - if not (1 <= stride <= 3): - raise ValueError('illegal stride value') - self.stride = stride - - branch_features = oup // 2 - assert (self.stride != 1) or (inp == branch_features << 1) - - if self.stride > 1: - self.branch1 = nn.Sequential( - self.depthwise_conv(inp, inp, kernel_size=3, stride=self.stride, padding=1), - nn.BatchNorm2d(inp), - nn.Conv2d(inp, branch_features, kernel_size=1, stride=1, padding=0, bias=False), - nn.BatchNorm2d(branch_features), - nn.SiLU(), - ) - else: - self.branch1 = nn.Sequential() - - self.branch2 = nn.Sequential( - nn.Conv2d(inp if (self.stride > 1) else branch_features, branch_features, kernel_size=1, stride=1, padding=0, bias=False), - nn.BatchNorm2d(branch_features), - nn.SiLU(), - self.depthwise_conv(branch_features, branch_features, kernel_size=3, stride=self.stride, padding=1), - nn.BatchNorm2d(branch_features), - nn.Conv2d(branch_features, branch_features, kernel_size=1, stride=1, padding=0, bias=False), - nn.BatchNorm2d(branch_features), - nn.SiLU(), - ) - - @staticmethod - def depthwise_conv(i, o, kernel_size, stride=1, padding=0, bias=False): - return nn.Conv2d(i, o, kernel_size, stride, padding, bias=bias, groups=i) - - def forward(self, x): - if self.stride == 1: - x1, x2 = x.chunk(2, dim=1) - out = torch.cat((x1, self.branch2(x2)), dim=1) - else: - out = torch.cat((self.branch1(x), self.branch2(x)), dim=1) - out = channel_shuffle(out, 2) - return out - -class BlazeBlock(nn.Module): - def __init__(self, in_channels,out_channels,mid_channels=None,stride=1): - super(BlazeBlock, self).__init__() - mid_channels = mid_channels or in_channels - assert stride in [1, 2] - if stride>1: - self.use_pool = True - else: - self.use_pool = False - - self.branch1 = nn.Sequential( - nn.Conv2d(in_channels=in_channels,out_channels=mid_channels,kernel_size=5,stride=stride,padding=2,groups=in_channels), - nn.BatchNorm2d(mid_channels), - nn.Conv2d(in_channels=mid_channels,out_channels=out_channels,kernel_size=1,stride=1), - nn.BatchNorm2d(out_channels), - ) - - if self.use_pool: - self.shortcut = nn.Sequential( - nn.MaxPool2d(kernel_size=stride, stride=stride), - nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1), - nn.BatchNorm2d(out_channels), - ) - - self.relu = nn.SiLU(inplace=True) - - def forward(self, x): - branch1 = self.branch1(x) - out = (branch1+self.shortcut(x)) if self.use_pool else (branch1+x) - return self.relu(out) - -class DoubleBlazeBlock(nn.Module): - def __init__(self,in_channels,out_channels,mid_channels=None,stride=1): - super(DoubleBlazeBlock, self).__init__() - mid_channels = mid_channels or in_channels - assert stride in [1, 2] - if stride > 1: - self.use_pool = True - else: - self.use_pool = False - - self.branch1 = nn.Sequential( - nn.Conv2d(in_channels=in_channels, out_channels=in_channels, kernel_size=5, stride=stride,padding=2,groups=in_channels), - nn.BatchNorm2d(in_channels), - nn.Conv2d(in_channels=in_channels, out_channels=mid_channels, kernel_size=1, stride=1), - nn.BatchNorm2d(mid_channels), - nn.SiLU(inplace=True), - nn.Conv2d(in_channels=mid_channels, out_channels=mid_channels, kernel_size=5, stride=1,padding=2), - nn.BatchNorm2d(mid_channels), - nn.Conv2d(in_channels=mid_channels, out_channels=out_channels, kernel_size=1, stride=1), - 
nn.BatchNorm2d(out_channels), - ) - - if self.use_pool: - self.shortcut = nn.Sequential( - nn.MaxPool2d(kernel_size=stride, stride=stride), - nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1), - nn.BatchNorm2d(out_channels), - ) - - self.relu = nn.SiLU(inplace=True) - - def forward(self, x): - branch1 = self.branch1(x) - out = (branch1 + self.shortcut(x)) if self.use_pool else (branch1 + x) - return self.relu(out) - - -class SPP(nn.Module): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, k=(5, 9, 13)): - super(SPP, self).__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Focus, self).__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act) - # self.contract = Contract(gain=2) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)) - # return self.conv(self.contract(x)) - - -class Contract(nn.Module): - # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - N, C, H, W = x.size() # assert (H / s == 0) and (W / s == 0), 'Indivisible gain' - s = self.gain - x = x.view(N, C, H // s, s, W // s, s) # x(1,64,40,2,40,2) - x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40) - return x.view(N, C * s * s, H // s, W // s) # x(1,256,40,40) - - -class Expand(nn.Module): - # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - N, C, H, W = x.size() # assert C / s ** 2 == 0, 'Indivisible gain' - s = self.gain - x = x.view(N, s, s, C // s ** 2, H, W) # x(1,2,2,16,80,80) - x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2) - return x.view(N, C // s ** 2, H * s, W * s) # x(1,16,160,160) - - -class Concat(nn.Module): - # Concatenate a list of tensors along dimension - def __init__(self, dimension=1): - super(Concat, self).__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class NMS(nn.Module): - # Non-Maximum Suppression (NMS) module - conf = 0.25 # confidence threshold - iou = 0.45 # IoU threshold - classes = None # (optional list) filter by class - - def __init__(self): - super(NMS, self).__init__() - - def forward(self, x): - return non_max_suppression(x[0], conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) - -class autoShape(nn.Module): - # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - img_size = 640 # inference size (pixels) - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - classes = None # (optional list) filter by class - - def __init__(self, model): - super(autoShape, self).__init__() - self.model = model.eval() - - def autoshape(self): - print('autoShape already enabled, skipping... 
') # model already converted to model.autoshape() - return self - - def forward(self, imgs, size=640, augment=False, profile=False): - # Inference from various sources. For height=720, width=1280, RGB images example inputs are: - # filename: imgs = 'data/samples/zidane.jpg' - # URI: = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/zidane.jpg' - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(720,1280,3) - # PIL: = Image.open('image.jpg') # HWC x(720,1280,3) - # numpy: = np.zeros((720,1280,3)) # HWC - # torch: = torch.zeros(16,3,720,1280) # BCHW - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images - - p = next(self.model.parameters()) # for device and type - if isinstance(imgs, torch.Tensor): # torch - return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference - - # Pre-process - n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images - shape0, shape1 = [], [] # image and inference shapes - for i, im in enumerate(imgs): - if isinstance(im, str): # filename or uri - im = Image.open(requests.get(im, stream=True).raw if im.startswith('http') else im) # open - im = np.array(im) # to numpy - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[:, :, :3] if im.ndim == 3 else np.tile(im[:, :, None], 3) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = (size / max(s)) # gain - shape1.append([y * g for y in s]) - imgs[i] = im # update - shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape - x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad - x = np.stack(x, 0) if n > 1 else x[0][None] # stack - x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255. 
# uint8 to fp16/32 - - # Inference - with torch.no_grad(): - y = self.model(x, augment, profile)[0] # forward - y = non_max_suppression(y, conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) # NMS - - # Post-process - for i in range(n): - scale_coords(shape1, y[i][:, :4], shape0[i]) - - return Detections(imgs, y, self.names) - - -class Detections: - # detections class for YOLOv5 inference results - def __init__(self, imgs, pred, names=None): - super(Detections, self).__init__() - d = pred[0].device # device - gn = [torch.tensor([*[im.shape[i] for i in [1, 0, 1, 0]], 1., 1.], device=d) for im in imgs] # normalizations - self.imgs = imgs # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) - - def display(self, pprint=False, show=False, save=False, render=False): - colors = color_list() - for i, (img, pred) in enumerate(zip(self.imgs, self.pred)): - str = f'Image {i + 1}/{len(self.pred)}: {img.shape[0]}x{img.shape[1]} ' - if pred is not None: - for c in pred[:, -1].unique(): - n = (pred[:, -1] == c).sum() # detections per class - str += f'{n} {self.names[int(c)]}s, ' # add to string - if show or save or render: - img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img # from np - for *box, conf, cls in pred: # xyxy, confidence, class - # str += '%s %.2f, ' % (names[int(cls)], conf) # label - ImageDraw.Draw(img).rectangle(box, width=4, outline=colors[int(cls) % 10]) # plot - if pprint: - print(str) - if show: - img.show(f'Image {i}') # show - if save: - f = f'results{i}.jpg' - str += f"saved to '{f}'" - img.save(f) # save - if render: - self.imgs[i] = np.asarray(img) - - def print(self): - self.display(pprint=True) # print results - - def show(self): - self.display(show=True) # show results - - def save(self): - self.display(save=True) # save results - - def render(self): - self.display(render=True) # render results - return self.imgs - - def __len__(self): - return self.n - - def tolist(self): - # return a list of Detections objects, i.e. 'for result in results.tolist():' - x = [Detections([self.imgs[i]], [self.pred[i]], self.names) for i in range(self.n)] - for d in x: - for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - setattr(d, k, getattr(d, k)[0]) # pop out of list - return x - - -class Classify(nn.Module): - # Classification head, i.e. 
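The call pattern the autoShape/Detections pair is designed for looks roughly like this; the checkpoint and sample paths are placeholders, and the snippet assumes the repository root is on PYTHONPATH:

    import torch
    from models.experimental import attempt_load

    model = attempt_load('yolov5s-face.pt', map_location=torch.device('cpu'))  # hypothetical weights
    model = model.autoshape()                  # wrap: preprocessing + inference + NMS
    results = model('data/samples/test.jpg')   # filename, URL, ndarray, PIL image or tensor
    results.print()                            # per-image summary string
    results.save()                             # writes results0.jpg, results1.jpg, ...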
x(b,c1,20,20) to x(b,c2) - def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups - super(Classify, self).__init__() - self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1) - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1) - self.flat = nn.Flatten() - - def forward(self, x): - z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list - return self.flat(self.conv(z)) # flatten to x(b,c2) diff --git a/lib/yolov5-face_Jan1/models/experimental.py b/lib/yolov5-face_Jan1/models/experimental.py deleted file mode 100644 index 72dc877c8..000000000 --- a/lib/yolov5-face_Jan1/models/experimental.py +++ /dev/null @@ -1,133 +0,0 @@ -# This file contains experimental modules - -import numpy as np -import torch -import torch.nn as nn - -from models.common import Conv, DWConv -from utils.google_utils import attempt_download - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super(CrossConv, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class Sum(nn.Module): - # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070 - def __init__(self, n, weight=False): # n: number of inputs - super(Sum, self).__init__() - self.weight = weight # apply weights boolean - self.iter = range(n - 1) # iter object - if weight: - self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights - - def forward(self, x): - y = x[0] # no weight - if self.weight: - w = torch.sigmoid(self.w) * 2 - for i in self.iter: - y = y + x[i + 1] * w[i] - else: - for i in self.iter: - y = y + x[i + 1] - return y - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups - super(GhostConv, self).__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act) - - def forward(self, x): - y = self.cv1(x) - return torch.cat([y, self.cv2(y)], 1) - - -class GhostBottleneck(nn.Module): - # Ghost Bottleneck https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k, s): - super(GhostBottleneck, self).__init__() - c_ = c2 // 2 - self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), - Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity() - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - - -class MixConv2d(nn.Module): - # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595 - def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): - super(MixConv2d, self).__init__() - groups = len(k) - if equal_ch: # equal c_ per group - i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(groups)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * groups - a = np.eye(groups + 1, groups, k=-1) - a -= np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = 
np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.LeakyReLU(0.1, inplace=True) - - def forward(self, x): - return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) - - -class Ensemble(nn.ModuleList): - # Ensemble of models - def __init__(self): - super(Ensemble, self).__init__() - - def forward(self, x, augment=False): - y = [] - for module in self: - y.append(module(x, augment)[0]) - # y = torch.stack(y).max(0)[0] # max ensemble - # y = torch.stack(y).mean(0) # mean ensemble - y = torch.cat(y, 1) # nms ensemble - return y, None # inference, train output - - -def attempt_load(weights, map_location=None): - # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a - model = Ensemble() - for w in weights if isinstance(weights, list) else [weights]: - attempt_download(w) - model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval()) # load FP32 model - - # Compatibility updates - for m in model.modules(): - if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: - m.inplace = True # pytorch 1.7.0 compatibility - elif type(m) is Conv: - m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility - - if len(model) == 1: - return model[-1] # return model - else: - print('Ensemble created with %s\n' % weights) - for k in ['names', 'stride']: - setattr(model, k, getattr(model[-1], k)) - return model # return ensemble diff --git a/lib/yolov5-face_Jan1/models/export.py b/lib/yolov5-face_Jan1/models/export.py deleted file mode 100644 index 5de04cc56..000000000 --- a/lib/yolov5-face_Jan1/models/export.py +++ /dev/null @@ -1,112 +0,0 @@ -"""Exports a YOLOv5 *.pt model to ONNX and TorchScript formats - -Usage: - $ export PYTHONPATH="$PWD" && python models/export.py --weights ./weights/yolov5s.pt --img 640 --batch 1 -""" - -import argparse -import sys -import time - -sys.path.append('./') # to run '$ python *.py' files in subdirectories - -import torch -import torch.nn as nn - -import models -from models.experimental import attempt_load -from utils.activations import Hardswish, SiLU -from utils.general import set_logging, check_img_size -import onnx - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default='./yolov5s.pt', help='weights path') # from yolov5/models/ - parser.add_argument('--img_size', nargs='+', type=int, default=[640, 640], help='image size') # height, width - parser.add_argument('--batch_size', type=int, default=1, help='batch size') - parser.add_argument('--onnx2pb', action='store_true', default=False, help='export onnx to pb') - opt = parser.parse_args() - opt.img_size *= 2 if len(opt.img_size) == 1 else 1 # expand - print(opt) - set_logging() - t = time.time() - - # Load PyTorch model - model = attempt_load(opt.weights, map_location=torch.device('cpu')) # load FP32 model - model.eval() - labels = model.names - - # Checks - gs = int(max(model.stride)) # grid size (max stride) - opt.img_size = [check_img_size(x, gs) for x in opt.img_size] # verify img_size are gs-multiples - - # Input - img = torch.zeros(opt.batch_size, 3, *opt.img_size) # image size(1,3,320,192) iDetection - - # Update model - for k, m in model.named_modules(): - m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility - if isinstance(m, models.common.Conv): # assign 
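For the equal_ch branch of MixConv2d above, a toy check of how the output channels are divided among the kernel sizes:

    import torch

    groups, c2 = 2, 16                                 # two kernel sizes, 16 output channels
    i = torch.linspace(0, groups - 1E-6, c2).floor()   # group id assigned to each channel
    c_ = [int((i == g).sum()) for g in range(groups)]
    print(c_)  # [8, 8]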
export-friendly activations - if isinstance(m.act, nn.Hardswish): - m.act = Hardswish() - elif isinstance(m.act, nn.SiLU): - m.act = SiLU() - # elif isinstance(m, models.yolo.Detect): - # m.forward = m.forward_export # assign forward (optional) - if isinstance(m, models.common.ShuffleV2Block):#shufflenet block nn.SiLU - for i in range(len(m.branch1)): - if isinstance(m.branch1[i], nn.SiLU): - m.branch1[i] = SiLU() - for i in range(len(m.branch2)): - if isinstance(m.branch2[i], nn.SiLU): - m.branch2[i] = SiLU() - model.model[-1].export = True # set Detect() layer export=True - y = model(img) # dry run - - # ONNX export - print('\nStarting ONNX export with onnx %s...' % onnx.__version__) - f = opt.weights.replace('.pt', '.onnx') # filename - model.fuse() # only for ONNX - input_names=['data'] - output_names=['stride_' + str(int(x)) for x in model.stride] - torch.onnx.export(model, img, f, verbose=False, opset_version=12, input_names=input_names, - output_names=output_names) - - # Checks - onnx_model = onnx.load(f) # load onnx model - onnx.checker.check_model(onnx_model) # check onnx model - # print(onnx.helper.printable_graph(onnx_model.graph)) # print a human readable model - print('ONNX export success, saved as %s' % f) - # Finish - print('\nExport complete (%.2fs). Visualize with https://github.com/lutzroeder/netron.' % (time.time() - t)) - - # PB export - if opt.onnx2pb: - print('download the newest onnx_tf by https://github.com/onnx/onnx-tensorflow/tree/master/onnx_tf') - from onnx_tf.backend import prepare - import tensorflow as tf - - outpb = f.replace('.onnx', '.pb') # filename - # strict=True maybe leads to KeyError: 'pyfunc_0', check: https://github.com/onnx/onnx-tensorflow/issues/167 - tf_rep = prepare(onnx_model, strict=False) # prepare tf representation - tf_rep.export_graph(outpb) # export the model - - out_onnx = tf_rep.run(img) # onnx output - - # check pb - with tf.Graph().as_default(): - graph_def = tf.GraphDef() - with open(outpb, "rb") as f: - graph_def.ParseFromString(f.read()) - tf.import_graph_def(graph_def, name="") - with tf.Session() as sess: - init = tf.global_variables_initializer() - input_x = sess.graph.get_tensor_by_name(input_names[0]+':0') # input - outputs = [] - for i in output_names: - outputs.append(sess.graph.get_tensor_by_name(i+':0')) - out_pb = sess.run(outputs, feed_dict={input_x: img}) - - print(f'out_pytorch {y}') - print(f'out_onnx {out_onnx}') - print(f'out_pb {out_pb}') diff --git a/lib/yolov5-face_Jan1/models/yolo.py b/lib/yolov5-face_Jan1/models/yolo.py deleted file mode 100644 index 11b4efed4..000000000 --- a/lib/yolov5-face_Jan1/models/yolo.py +++ /dev/null @@ -1,343 +0,0 @@ -import argparse -import logging -import math -import sys -from copy import deepcopy -from pathlib import Path - -import torch -import torch.nn as nn - -sys.path.append('./') # to run '$ python *.py' files in subdirectories -logger = logging.getLogger(__name__) - -from models.common import Conv, Bottleneck, SPP, DWConv, Focus, BottleneckCSP, C3, ShuffleV2Block, Concat, NMS, autoShape, StemBlock, BlazeBlock, DoubleBlazeBlock -from models.experimental import MixConv2d, CrossConv -from utils.autoanchor import check_anchor_order -from utils.general import make_divisible, check_file, set_logging -from utils.torch_utils import time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, \ - select_device, copy_attr - -try: - import thop # for FLOPS computation -except ImportError: - thop = None - - -class Detect(nn.Module): - stride = None # strides computed 
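One way to smoke-test the exported graph, assuming onnxruntime is installed and the filename matches the --weights argument above (export.py writes weights.replace('.pt', '.onnx')); the input name is 'data' per input_names in this script, but querying the session avoids hard-coding it:

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession('yolov5s.onnx', providers=['CPUExecutionProvider'])
    img = np.zeros((1, 3, 640, 640), dtype=np.float32)  # same shape as the dry-run input
    name = sess.get_inputs()[0].name
    outs = sess.run(None, {name: img})                  # one array per stride_* head
    print([o.shape for o in outs])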
during build - export = False # onnx export - export_cat = False # onnx export cat output - - def __init__(self, nc=80, anchors=(), ch=()): # detection layer - super(Detect, self).__init__() - self.nc = nc # number of classes - #self.no = nc + 5 # number of outputs per anchor - self.no = nc + 5 + 10 # number of outputs per anchor - - self.nl = len(anchors) # number of detection layers - self.na = len(anchors[0]) // 2 # number of anchors - self.grid = [torch.zeros(1)] * self.nl # init grid - a = torch.tensor(anchors).float().view(self.nl, -1, 2) - self.register_buffer('anchors', a) # shape(nl,na,2) - self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2) - self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv - - def forward(self, x): - # x = x.copy() # for profiling - z = [] # inference output - # self.training |= self.export - if self.export: - for i in range(self.nl): - x[i] = self.m[i](x[i]) - bs, _, ny, nx = x[i].shape # x(bs,48,20,20) to x(bs,3,20,20,16) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - return x - if self.export_cat: - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = torch.full_like(x[i], 0) - y = y + torch.cat((x[i][:, :, :, :, 0:5].sigmoid(), torch.cat((x[i][:, :, :, :, 5:15], x[i][:, :, :, :, 15:15+self.nc].sigmoid()), 4)), 4) - - box_xy = (y[:, :, :, :, 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i] # xy - box_wh = (y[:, :, :, :, 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - # box_conf = torch.cat((box_xy, torch.cat((box_wh, y[:, :, :, :, 4:5]), 4)), 4) - - landm1 = y[:, :, :, :, 5:7] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] # landmark x1 y1 - landm2 = y[:, :, :, :, 7:9] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] # landmark x2 y2 - landm3 = y[:, :, :, :, 9:11] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] # landmark x3 y3 - landm4 = y[:, :, :, :, 11:13] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] # landmark x4 y4 - landm5 = y[:, :, :, :, 13:15] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] # landmark x5 y5 - # landm = torch.cat((landm1, torch.cat((landm2, torch.cat((landm3, torch.cat((landm4, landm5), 4)), 4)), 4)), 4) - # y = torch.cat((box_conf, torch.cat((landm, y[:, :, :, :, 15:15+self.nc]), 4)), 4) - y = torch.cat([box_xy, box_wh, y[:, :, :, :, 4:5], landm1, landm2, landm3, landm4, landm5, y[:, :, :, :, 15:15+self.nc]], -1) - - z.append(y.view(bs, -1, self.no)) - return torch.cat(z, 1) - - for i in range(self.nl): - x[i] = self.m[i](x[i]) # conv - bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85) - x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous() - - if not self.training: # inference - if self.grid[i].shape[2:4] != x[i].shape[2:4]: - self.grid[i] = self._make_grid(nx, ny).to(x[i].device) - - y = torch.full_like(x[i], 0) - class_range = list(range(5)) + list(range(15,15+self.nc)) - y[..., class_range] = x[i][..., class_range].sigmoid() - y[..., 5:15] = x[i][..., 5:15] - #y = x[i].sigmoid() - - y[..., 0:2] = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i].to(x[i].device)) * self.stride[i] # xy - y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh - - #y[..., 5:15] = y[..., 5:15] * 8 - 4 - y[..., 5:7] = y[..., 5:7] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i] # landmark x1 y1 - y[..., 7:9] = y[..., 7:9] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]# landmark x2 y2 - y[..., 9:11] = y[..., 9:11] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]# landmark x3 y3 - y[..., 11:13] = y[..., 11:13] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]# landmark x4 y4 - y[..., 13:15] = y[..., 13:15] * self.anchor_grid[i] + self.grid[i].to(x[i].device) * self.stride[i]# landmark x5 y5 - - #y[..., 5:7] = (y[..., 5:7] * 2 -1) * self.anchor_grid[i] # landmark x1 y1 - #y[..., 7:9] = (y[..., 7:9] * 2 -1) * self.anchor_grid[i] # landmark x2 y2 - #y[..., 9:11] = (y[..., 9:11] * 2 -1) * self.anchor_grid[i] # landmark x3 y3 - #y[..., 11:13] = (y[..., 11:13] * 2 -1) * self.anchor_grid[i] # landmark x4 y4 - #y[..., 13:15] = (y[..., 13:15] * 2 -1) * self.anchor_grid[i] # landmark x5 y5 - - z.append(y.view(bs, -1, self.no)) - - return x if self.training else (torch.cat(z, 1), x) - - @staticmethod - def _make_grid(nx=20, ny=20): - yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)]) - return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float() - - -class Model(nn.Module): - def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None): # model, input channels, number of classes - super(Model, self).__init__() - if isinstance(cfg, dict): - self.yaml = cfg # model dict - else: # is *.yaml - import yaml # for torch hub - self.yaml_file = Path(cfg).name - with open(cfg) as f: - self.yaml = yaml.load(f, Loader=yaml.FullLoader) # model dict - - # Define model - ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels - if nc and nc != self.yaml['nc']: - logger.info('Overriding model.yaml nc=%g with nc=%g' % (self.yaml['nc'], nc)) - self.yaml['nc'] = nc # override yaml value - self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist - self.names = [str(i) for i in range(self.yaml['nc'])] # default names - # print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))]) - - # Build strides, anchors - m = self.model[-1] # Detect() - if isinstance(m, Detect): - s = 128 # 2x min stride - m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward - m.anchors /= m.stride.view(-1, 1, 1) - check_anchor_order(m) - self.stride = m.stride - self._initialize_biases() # only run once - # print('Strides: %s' % m.stride.tolist()) - - # Init weights, biases - initialize_weights(self) - self.info() - logger.info('') - - def forward(self, x, augment=False, profile=False): - if augment: - img_size = x.shape[-2:] # height, width - s = [1, 0.83, 0.67] # scales - f = [None, 3, None] # flips (2-ud, 3-lr) - y = [] # outputs - for si, fi in zip(s, f): - xi = scale_img(x.flip(fi) if fi else x, si) - yi = self.forward_once(xi)[0] # forward - # cv2.imwrite('img%g.jpg' % s, 255 * xi[0].numpy().transpose((1, 2, 0))[:, :, ::-1]) # save - yi[..., :4] /= si # de-scale - if fi == 2: - yi[..., 1] = img_size[0] - yi[..., 1] # de-flip ud - elif fi == 3: - yi[..., 0] = img_size[1] - yi[..., 0] # de-flip lr - y.append(yi) - return torch.cat(y, 1), None # augmented inference, train - else: - return self.forward_once(x, profile) # single-scale inference, train - - def forward_once(self, x, profile=False): - y, dt 
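A worked instance of the inference-path box decode above, with arbitrary raw values; note that, unlike the box terms, the landmark offsets are left unsigmoided and decoded as y * anchor_grid + grid * stride:

    import torch

    stride = 8.0
    anchor = torch.tensor([10.0, 13.0])   # anchor w,h in pixels at this stride
    grid_xy = torch.tensor([4.0, 7.0])    # cell offsets from _make_grid
    t = torch.tensor([0.2, -0.1, 0.3, 0.4]).sigmoid()  # raw tx, ty, tw, th

    xy = (t[:2] * 2. - 0.5 + grid_xy) * stride  # centre may land slightly outside its cell
    wh = (t[2:] * 2.) ** 2 * anchor             # width/height capped at 4x the anchor
    print(xy, wh)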
= [], [] # outputs - for m in self.model: - if m.f != -1: # if not from previous layer - x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers - - if profile: - o = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPS - t = time_synchronized() - for _ in range(10): - _ = m(x) - dt.append((time_synchronized() - t) * 100) - print('%10.1f%10.0f%10.1fms %-40s' % (o, m.np, dt[-1], m.type)) - - x = m(x) # run - y.append(x if m.i in self.save else None) # save output - - if profile: - print('%.1fms total' % sum(dt)) - return x - - def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency - # https://arxiv.org/abs/1708.02002 section 3.3 - # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1. - m = self.model[-1] # Detect() module - for mi, s in zip(m.m, m.stride): # from - b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85) - b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image) - b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls - mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True) - - def _print_biases(self): - m = self.model[-1] # Detect() module - for mi in m.m: # from - b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85) - print(('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean())) - - # def _print_weights(self): - # for m in self.model.modules(): - # if type(m) is Bottleneck: - # print('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights - - def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers - print('Fusing layers... ') - for m in self.model.modules(): - if type(m) is Conv and hasattr(m, 'bn'): - m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv - delattr(m, 'bn') # remove batchnorm - m.forward = m.fuseforward # update forward - self.info() - return self - - def nms(self, mode=True): # add or remove NMS module - present = type(self.model[-1]) is NMS # last layer is NMS - if mode and not present: - print('Adding NMS... ') - m = NMS() # module - m.f = -1 # from - m.i = self.model[-1].i + 1 # index - self.model.add_module(name='%s' % m.i, module=m) # add - self.eval() - elif not mode and present: - print('Removing NMS... ') - self.model = self.model[:-1] # remove - return self - - def autoshape(self): # add autoShape module - print('Adding autoShape... 
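fuse() above delegates to fuse_conv_and_bn from utils/torch_utils.py, which this hunk does not show. A sketch of the folding it performs (dilation handling omitted), with a numeric check:

    import torch
    import torch.nn as nn

    def fuse_conv_and_bn_sketch(conv, bn):
        # fold BN into the conv: w' = s * w, b' = s * (b - mean) + beta, with s = gamma / sqrt(var + eps)
        fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                          conv.stride, conv.padding, groups=conv.groups, bias=True)
        s = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused.weight.data.copy_(conv.weight * s.view(-1, 1, 1, 1))
        b = torch.zeros(conv.out_channels) if conv.bias is None else conv.bias
        fused.bias.data.copy_(s * (b - bn.running_mean) + bn.bias)
        return fused

    conv, bn = nn.Conv2d(3, 8, 3, padding=1, bias=False), nn.BatchNorm2d(8).eval()
    x = torch.randn(1, 3, 16, 16)
    print(torch.allclose(bn(conv(x)), fuse_conv_and_bn_sketch(conv, bn)(x), atol=1e-6))  # True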
') - m = autoShape(self) # wrap model - copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes - return m - - def info(self, verbose=False, img_size=640): # print model information - model_info(self, verbose, img_size) - - -def parse_model(d, ch): # model_dict, input_channels(3) - logger.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments')) - anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'] - na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors - no = na * (nc + 5) # number of outputs = anchors * (classes + 5) - - layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out - for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args - m = eval(m) if isinstance(m, str) else m # eval strings - for j, a in enumerate(args): - try: - args[j] = eval(a) if isinstance(a, str) else a # eval strings - except: - pass - - n = max(round(n * gd), 1) if n > 1 else n # depth gain - if m in [Conv, Bottleneck, SPP, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP, C3, ShuffleV2Block, StemBlock, BlazeBlock, DoubleBlazeBlock]: - c1, c2 = ch[f], args[0] - - # Normal - # if i > 0 and args[0] != no: # channel expansion factor - # ex = 1.75 # exponential (default 2.0) - # e = math.log(c2 / ch[1]) / math.log(2) - # c2 = int(ch[1] * ex ** e) - # if m != Focus: - - c2 = make_divisible(c2 * gw, 8) if c2 != no else c2 - - # Experimental - # if i > 0 and args[0] != no: # channel expansion factor - # ex = 1 + gw # exponential (default 2.0) - # ch1 = 32 # ch[1] - # e = math.log(c2 / ch1) / math.log(2) # level 1-n - # c2 = int(ch1 * ex ** e) - # if m != Focus: - # c2 = make_divisible(c2, 8) if c2 != no else c2 - - args = [c1, c2, *args[1:]] - if m in [BottleneckCSP, C3]: - args.insert(2, n) - n = 1 - elif m is nn.BatchNorm2d: - args = [ch[f]] - elif m is Concat: - c2 = sum([ch[-1 if x == -1 else x + 1] for x in f]) - elif m is Detect: - args.append([ch[x + 1] for x in f]) - if isinstance(args[1], int): # number of anchors - args[1] = [list(range(args[1] * 2))] * len(f) - else: - c2 = ch[f] - - m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module - t = str(m)[8:-2].replace('__main__.', '') # module type - np = sum([x.numel() for x in m_.parameters()]) # number params - m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params - logger.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print - save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist - layers.append(m_) - ch.append(c2) - return nn.Sequential(*layers), sorted(save) - - -from thop import profile -from thop import clever_format -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml') - parser.add_argument('--device', default='', help='cuda device, i.e. 
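parse_model rounds scaled channel counts with make_divisible, imported from utils/general.py. For reference, the yolov5 utility amounts to this one-liner:

    import math

    def make_divisible(x, divisor):
        # round up to the nearest multiple of divisor (keeps channel counts hardware-friendly)
        return math.ceil(x / divisor) * divisor

    print(make_divisible(48 * 0.5, 8))  # 24: width_multiple 0.5 applied to 48 channels
    print(make_divisible(20, 8))        # 24: always rounds up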
0 or 0,1,2,3 or cpu') - opt = parser.parse_args() - opt.cfg = check_file(opt.cfg) # check file - set_logging() - device = select_device(opt.device) - - # Create model - model = Model(opt.cfg).to(device) - stride = model.stride.max() - if stride == 32: - input = torch.Tensor(1, 3, 480, 640).to(device) - else: - input = torch.Tensor(1, 3, 512, 640).to(device) - model.train() - print(model) - flops, params = profile(model, inputs=(input, )) - flops, params = clever_format([flops, params], "%.3f") - print('Flops:', flops, ',Params:' ,params) diff --git a/lib/yolov5-face_Jan1/requirements.txt b/lib/yolov5-face_Jan1/requirements.txt deleted file mode 100755 index 22b51fc49..000000000 --- a/lib/yolov5-face_Jan1/requirements.txt +++ /dev/null @@ -1,36 +0,0 @@ -# pip install -r requirements.txt - -# Base ---------------------------------------- -matplotlib>=3.2.2 -numpy>=1.18.5 -opencv-python>=4.1.2 -Pillow>=7.1.2 -PyYAML>=5.3.1 -requests>=2.23.0 -scipy>=1.4.1 -torch>=1.7.0 -torchvision>=0.8.1 -tqdm>=4.41.0 - -# Logging ------------------------------------- -tensorboard>=2.4.1 -# wandb - -# Plotting ------------------------------------ -pandas>=1.1.4 -seaborn>=0.11.0 - -# Export -------------------------------------- -# coremltools>=4.1 # CoreML export -# onnx>=1.9.0 # ONNX export -# onnx-simplifier>=0.3.6 # ONNX simplifier -# scikit-learn==0.19.2 # CoreML quantization -# tensorflow>=2.4.1 # TFLite export -# tensorflowjs>=3.9.0 # TF.js export - -# Extras -------------------------------------- -# albumentations>=1.0.3 -# Cython # for pycocotools https://github.com/cocodataset/cocoapi/issues/172 -# pycocotools>=2.0 # COCO mAP -# roboflow -thop # FLOPs computation diff --git a/lib/yolov5-face_Jan1/runs/train/exp/events.out.tfevents.1639845652.lucasacm-Legion-5-15ITH6.3761.0 b/lib/yolov5-face_Jan1/runs/train/exp/events.out.tfevents.1639845652.lucasacm-Legion-5-15ITH6.3761.0 deleted file mode 100644 index 9b1415e30..000000000 Binary files a/lib/yolov5-face_Jan1/runs/train/exp/events.out.tfevents.1639845652.lucasacm-Legion-5-15ITH6.3761.0 and /dev/null differ diff --git a/lib/yolov5-face_Jan1/runs/train/exp/hyp.yaml b/lib/yolov5-face_Jan1/runs/train/exp/hyp.yaml deleted file mode 100644 index cfe751135..000000000 --- a/lib/yolov5-face_Jan1/runs/train/exp/hyp.yaml +++ /dev/null @@ -1,28 +0,0 @@ -lr0: 0.01 -lrf: 0.2 -momentum: 0.937 -weight_decay: 0.0005 -warmup_epochs: 3.0 -warmup_momentum: 0.8 -warmup_bias_lr: 0.1 -box: 0.05 -cls: 0.5 -landmark: 0.005 -cls_pw: 1.0 -obj: 1.0 -obj_pw: 1.0 -iou_t: 0.2 -anchor_t: 4.0 -fl_gamma: 0.0 -hsv_h: 0.015 -hsv_s: 0.7 -hsv_v: 0.4 -degrees: 0.0 -translate: 0.1 -scale: 0.5 -shear: 0.5 -perspective: 0.0 -flipud: 0.0 -fliplr: 0.5 -mosaic: 0.5 -mixup: 0.0 diff --git a/lib/yolov5-face_Jan1/runs/train/exp/opt.yaml b/lib/yolov5-face_Jan1/runs/train/exp/opt.yaml deleted file mode 100644 index 8cac7da3d..000000000 --- a/lib/yolov5-face_Jan1/runs/train/exp/opt.yaml +++ /dev/null @@ -1,34 +0,0 @@ -weights: pretrained models -cfg: models/yolov5s.yaml -data: data/widerface.yaml -hyp: data/hyp.scratch.yaml -epochs: 250 -batch_size: 16 -img_size: -- 800 -- 800 -rect: false -resume: false -nosave: false -notest: false -noautoanchor: false -evolve: false -bucket: '' -cache_images: false -image_weights: false -device: '' -multi_scale: false -single_cls: false -adam: false -sync_bn: false -local_rank: -1 -log_imgs: 16 -log_artifacts: false -workers: 4 -project: runs/train -name: exp -exist_ok: false -total_batch_size: 16 -world_size: 1 -global_rank: -1 -save_dir: runs/train/exp diff --git 
a/lib/yolov5-face_Jan1/runs/train/exp/weights/yolov5m6_face.pt b/lib/yolov5-face_Jan1/runs/train/exp/weights/yolov5m6_face.pt deleted file mode 100644 index 60c608962..000000000 Binary files a/lib/yolov5-face_Jan1/runs/train/exp/weights/yolov5m6_face.pt and /dev/null differ diff --git a/lib/yolov5-face_Jan1/utils/__init__.py b/lib/yolov5-face_Jan1/utils/__init__.py deleted file mode 100644 index e69de29bb..000000000
diff --git a/lib/yolov5-face_Jan1/utils/activations.py b/lib/yolov5-face_Jan1/utils/activations.py deleted file mode 100644 index aa3ddf071..000000000 --- a/lib/yolov5-face_Jan1/utils/activations.py +++ /dev/null @@ -1,72 +0,0 @@ -# Activation functions - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -# SiLU https://arxiv.org/pdf/1606.08415.pdf ---------------------------------------------------------------------------- -class SiLU(nn.Module): # export-friendly version of nn.SiLU() - @staticmethod - def forward(x): - return x * torch.sigmoid(x) - - -class Hardswish(nn.Module): # export-friendly version of nn.Hardswish() - @staticmethod - def forward(x): - # return x * F.hardsigmoid(x) # for torchscript and CoreML - return x * F.hardtanh(x + 3, 0., 6.) / 6. 
# for torchscript, CoreML and ONNX - - -class MemoryEfficientSwish(nn.Module): - class F(torch.autograd.Function): - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return x * torch.sigmoid(x) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - sx = torch.sigmoid(x) - return grad_output * (sx * (1 + x * (1 - sx))) - - def forward(self, x): - return self.F.apply(x) - - -# Mish https://github.com/digantamisra98/Mish -------------------------------------------------------------------------- -class Mish(nn.Module): - @staticmethod - def forward(x): - return x * F.softplus(x).tanh() - - -class MemoryEfficientMish(nn.Module): - class F(torch.autograd.Function): - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x))) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - sx = torch.sigmoid(x) - fx = F.softplus(x).tanh() - return grad_output * (fx + x * sx * (1 - fx * fx)) - - def forward(self, x): - return self.F.apply(x) - - -# FReLU https://arxiv.org/abs/2007.11824 ------------------------------------------------------------------------------- -class FReLU(nn.Module): - def __init__(self, c1, k=3): # ch_in, kernel - super().__init__() - self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False) - self.bn = nn.BatchNorm2d(c1) - - def forward(self, x): - return torch.max(x, self.bn(self.conv(x))) diff --git a/lib/yolov5-face_Jan1/utils/autoanchor.py b/lib/yolov5-face_Jan1/utils/autoanchor.py deleted file mode 100644 index 5dba9f1ea..000000000 --- a/lib/yolov5-face_Jan1/utils/autoanchor.py +++ /dev/null @@ -1,155 +0,0 @@ -# Auto-anchor utils - -import numpy as np -import torch -import yaml -from scipy.cluster.vq import kmeans -from tqdm import tqdm - -from utils.general import colorstr - - -def check_anchor_order(m): - # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary - a = m.anchor_grid.prod(-1).view(-1) # anchor area - da = a[-1] - a[0] # delta a - ds = m.stride[-1] - m.stride[0] # delta s - if da.sign() != ds.sign(): # same order - print('Reversing anchor order') - m.anchors[:] = m.anchors.flip(0) - m.anchor_grid[:] = m.anchor_grid.flip(0) - - -def check_anchors(dataset, model, thr=4.0, imgsz=640): - # Check anchor fit to data, recompute if necessary - prefix = colorstr('autoanchor: ') - print(f'\n{prefix}Analyzing anchors... ', end='') - m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() - shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True) - scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale - wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh - - def metric(k): # compute metric - r = wh[:, None] / k[None] - x = torch.min(r, 1. / r).min(2)[0] # ratio metric - best = x.max(1)[0] # best_x - aat = (x > 1. / thr).float().sum(1).mean() # anchors above threshold - bpr = (best > 1. / thr).float().mean() # best possible recall - return bpr, aat - - bpr, aat = metric(m.anchor_grid.clone().cpu().view(-1, 2)) - print(f'anchors/target = {aat:.2f}, Best Possible Recall (BPR) = {bpr:.4f}', end='') - if bpr < 0.98: # threshold to recompute - print('. 
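The metric closure above is the heart of the autoanchor check: each label is scored by its worst width/height ratio against each anchor, and BPR is the fraction of labels whose best anchor clears 1/thr. A toy run (the third label's extreme aspect ratio is exactly the kind of case that drags BPR under the 0.98 recompute trigger):

    import torch

    wh = torch.tensor([[12., 20.], [45., 90.], [8., 300.]])  # stand-in label sizes
    k = torch.tensor([[10., 16.], [40., 80.]])               # stand-in anchors
    r = wh[:, None] / k[None]
    x = torch.min(r, 1. / r).min(2)[0]       # worst-dimension ratio per label/anchor pair
    best = x.max(1)[0]                       # best anchor per label
    bpr = (best > 1. / 4.0).float().mean()   # 0.6667 here
    aat = (x > 1. / 4.0).float().sum(1).mean()
    print(bpr, aat)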
Attempting to improve anchors, please wait...') - na = m.anchor_grid.numel() // 2 # number of anchors - new_anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) - new_bpr = metric(new_anchors.reshape(-1, 2))[0] - if new_bpr > bpr: # replace anchors - new_anchors = torch.tensor(new_anchors, device=m.anchors.device).type_as(m.anchors) - m.anchor_grid[:] = new_anchors.clone().view_as(m.anchor_grid) # for inference - m.anchors[:] = new_anchors.clone().view_as(m.anchors) / m.stride.to(m.anchors.device).view(-1, 1, 1) # loss - check_anchor_order(m) - print(f'{prefix}New anchors saved to model. Update model *.yaml to use these anchors in the future.') - else: - print(f'{prefix}Original anchors better than new anchors. Proceeding with original anchors.') - print('') # newline - - -def kmean_anchors(path='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True): - """ Creates kmeans-evolved anchors from training dataset - - Arguments: - path: path to dataset *.yaml, or a loaded dataset - n: number of anchors - img_size: image size used for training - thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0 - gen: generations to evolve anchors using genetic algorithm - verbose: print all results - - Return: - k: kmeans evolved anchors - - Usage: - from utils.autoanchor import *; _ = kmean_anchors() - """ - thr = 1. / thr - prefix = colorstr('autoanchor: ') - - def metric(k, wh): # compute metrics - r = wh[:, None] / k[None] - x = torch.min(r, 1. / r).min(2)[0] # ratio metric - # x = wh_iou(wh, torch.tensor(k)) # iou metric - return x, x.max(1)[0] # x, best_x - - def anchor_fitness(k): # mutation fitness - _, best = metric(torch.tensor(k, dtype=torch.float32), wh) - return (best * (best > thr).float()).mean() # fitness - - def print_results(k): - k = k[np.argsort(k.prod(1))] # sort small to large - x, best = metric(k, wh0) - bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr - print(f'{prefix}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr') - print(f'{prefix}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' - f'past_thr={x[x > thr].mean():.3f}-mean: ', end='') - for i, x in enumerate(k): - print('%i,%i' % (round(x[0]), round(x[1])), end=', ' if i < len(k) - 1 else '\n') # use in *.cfg - return k - - if isinstance(path, str): # *.yaml file - with open(path) as f: - data_dict = yaml.load(f, Loader=yaml.SafeLoader) # model dict - from utils.datasets import LoadImagesAndLabels - dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True) - else: - dataset = path # dataset - - # Get label wh - shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True) - wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh - - # Filter - i = (wh0 < 3.0).any(1).sum() - if i: - print(f'{prefix}WARNING: Extremely small objects found. 
{i} of {len(wh0)} labels are < 3 pixels in size.') - wh = wh0[(wh0 >= 2.0).any(1)] # filter > 2 pixels - # wh = wh * (np.random.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 - - # Kmeans calculation - print(f'{prefix}Running kmeans for {n} anchors on {len(wh)} points...') - s = wh.std(0) # sigmas for whitening - k, dist = kmeans(wh / s, n, iter=30) # points, mean distance - k *= s - wh = torch.tensor(wh, dtype=torch.float32) # filtered - wh0 = torch.tensor(wh0, dtype=torch.float32) # unfiltered - k = print_results(k) - - # Plot - # k, d = [None] * 20, [None] * 20 - # for i in tqdm(range(1, 21)): - # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance - # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True) - # ax = ax.ravel() - # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.') - # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh - # ax[0].hist(wh[wh[:, 0]<100, 0],400) - # ax[1].hist(wh[wh[:, 1]<100, 1],400) - # fig.savefig('wh.png', dpi=200) - - # Evolve - npr = np.random - f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma - pbar = tqdm(range(gen), desc=f'{prefix}Evolving anchors with Genetic Algorithm:') # progress bar - for _ in pbar: - v = np.ones(sh) - while (v == 1).all(): # mutate until a change occurs (prevent duplicates) - v = ((npr.random(sh) < mp) * npr.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0) - kg = (k.copy() * v).clip(min=2.0) - fg = anchor_fitness(kg) - if fg > f: - f, k = fg, kg.copy() - pbar.desc = f'{prefix}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}' - if verbose: - print_results(k) - - return print_results(k) diff --git a/lib/yolov5-face_Jan1/utils/datasets.py b/lib/yolov5-face_Jan1/utils/datasets.py deleted file mode 100755 index feb5dc1dc..000000000 --- a/lib/yolov5-face_Jan1/utils/datasets.py +++ /dev/null @@ -1,1019 +0,0 @@ -# Dataset utils and dataloaders - -import glob -import logging -import math -import os -import random -import shutil -import time -from itertools import repeat -from multiprocessing.pool import ThreadPool -from pathlib import Path -from threading import Thread - -import cv2 -import numpy as np -import torch -import torch.nn.functional as F -from PIL import Image, ExifTags -from torch.utils.data import Dataset -from tqdm import tqdm - -from utils.general import xyxy2xywh, xywh2xyxy, xywhn2xyxy, clean_str -from utils.torch_utils import torch_distributed_zero_first - -# Parameters -help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' -img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng'] # acceptable image suffixes -vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes -logger = logging.getLogger(__name__) - -# Get orientation exif tag -for orientation in ExifTags.TAGS.keys(): - if ExifTags.TAGS[orientation] == 'Orientation': - break - - -def get_hash(files): - # Returns a single hash value of a list of files - return sum(os.path.getsize(f) for f in files if os.path.isfile(f)) - - -def exif_size(img): - # Returns exif-corrected PIL size - s = img.size # (width, height) - try: - rotation = dict(img._getexif().items())[orientation] - if rotation == 6: # rotation 270 - s = (s[1], s[0]) - elif rotation == 8: # rotation 90 - s = (s[1], s[0]) - except: - pass - - return s - - -def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False, - rank=-1, world_size=1, workers=8, image_weights=False, 
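The kmeans step in kmean_anchors above whitens widths and heights by their per-dimension sigma before clustering, then scales the centroids back, matching scipy's expectation of unit-variance input. A self-contained sketch on stand-in data:

    import numpy as np
    from scipy.cluster.vq import kmeans

    wh = np.random.default_rng(0).uniform(5, 200, size=(500, 2))  # fake label sizes (pixels)
    s = wh.std(0)                  # per-dimension sigma for whitening
    k, _ = kmeans(wh / s, 9, iter=30)
    k *= s                         # back to pixel units
    print(np.sort(k.prod(1)))      # anchors small-to-large, as print_results sorts them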
quad=False, prefix=''): - # Make sure only the first process in DDP process the dataset first, and the following others can use the cache - with torch_distributed_zero_first(rank): - dataset = LoadImagesAndLabels(path, imgsz, batch_size, - augment=augment, # augment images - hyp=hyp, # augmentation hyperparameters - rect=rect, # rectangular training - cache_images=cache, - single_cls=opt.single_cls, - stride=int(stride), - pad=pad, - image_weights=image_weights, - prefix=prefix) - - batch_size = min(batch_size, len(dataset)) - nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers]) # number of workers - sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None - loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader - # Use torch.utils.data.DataLoader() if dataset.properties will update during training else InfiniteDataLoader() - dataloader = loader(dataset, - batch_size=batch_size, - num_workers=nw, - sampler=sampler, - pin_memory=True, - collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn) - return dataloader, dataset - - -class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader): - """ Dataloader that reuses workers - - Uses same syntax as vanilla DataLoader - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler)) - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for i in range(len(self)): - yield next(self.iterator) - - -class _RepeatSampler(object): - """ Sampler that repeats forever - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) - - -class LoadImages: # for inference - def __init__(self, path, img_size=640): - p = str(Path(path)) # os-agnostic - p = os.path.abspath(p) # absolute path - if '*' in p: - files = sorted(glob.glob(p, recursive=True)) # glob - elif os.path.isdir(p): - files = sorted(glob.glob(os.path.join(p, '*.*'))) # dir - elif os.path.isfile(p): - files = [p] # files - else: - raise Exception(f'ERROR: {p} does not exist') - - images = [x for x in files if x.split('.')[-1].lower() in img_formats] - videos = [x for x in files if x.split('.')[-1].lower() in vid_formats] - ni, nv = len(images), len(videos) - - self.img_size = img_size - self.files = images + videos - self.nf = ni + nv # number of files - self.video_flag = [False] * ni + [True] * nv - self.mode = 'image' - if any(videos): - self.new_video(videos[0]) # new video - else: - self.cap = None - assert self.nf > 0, f'No images or videos found in {p}. 
' \ - f'Supported formats are:\nimages: {img_formats}\nvideos: {vid_formats}' - - def __iter__(self): - self.count = 0 - return self - - def __next__(self): - if self.count == self.nf: - raise StopIteration - path = self.files[self.count] - - if self.video_flag[self.count]: - # Read video - self.mode = 'video' - ret_val, img0 = self.cap.read() - if not ret_val: - self.count += 1 - self.cap.release() - if self.count == self.nf: # last video - raise StopIteration - else: - path = self.files[self.count] - self.new_video(path) - ret_val, img0 = self.cap.read() - - self.frame += 1 - print(f'video {self.count + 1}/{self.nf} ({self.frame}/{self.nframes}) {path}: ', end='') - - else: - # Read image - self.count += 1 - img0 = cv2.imread(path) # BGR - assert img0 is not None, 'Image Not Found ' + path - print(f'image {self.count}/{self.nf} {path}: ', end='') - - # Padded resize - img = letterbox(img0, new_shape=self.img_size)[0] - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return path, img, img0, self.cap - - def new_video(self, path): - self.frame = 0 - self.cap = cv2.VideoCapture(path) - self.nframes = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - - def __len__(self): - return self.nf # number of files - - -class LoadWebcam: # for inference - def __init__(self, pipe='0', img_size=640): - self.img_size = img_size - - if pipe.isnumeric(): - pipe = eval(pipe) # local camera - # pipe = 'rtsp://192.168.1.64/1' # IP camera - # pipe = 'rtsp://username:password@192.168.1.64/1' # IP camera with login - # pipe = 'http://wmccpinetop.axiscam.net/mjpg/video.mjpg' # IP golf camera - - self.pipe = pipe - self.cap = cv2.VideoCapture(pipe) # video capture object - self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 3) # set buffer size - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - if cv2.waitKey(1) == ord('q'): # q to quit - self.cap.release() - cv2.destroyAllWindows() - raise StopIteration - - # Read frame - if self.pipe == 0: # local camera - ret_val, img0 = self.cap.read() - img0 = cv2.flip(img0, 1) # flip left-right - else: # IP camera - n = 0 - while True: - n += 1 - self.cap.grab() - if n % 30 == 0: # skip frames - ret_val, img0 = self.cap.retrieve() - if ret_val: - break - - # Print - assert ret_val, f'Camera Error {self.pipe}' - img_path = 'webcam.jpg' - print(f'webcam {self.count}: ', end='') - - # Padded resize - img = letterbox(img0, new_shape=self.img_size)[0] - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return img_path, img, img0, None - - def __len__(self): - return 0 - - -class LoadStreams: # multiple IP or RTSP cameras - def __init__(self, sources='streams.txt', img_size=640): - self.mode = 'stream' - self.img_size = img_size - - if os.path.isfile(sources): - with open(sources, 'r') as f: - sources = [x.strip() for x in f.read().strip().splitlines() if len(x.strip())] - else: - sources = [sources] - - n = len(sources) - self.imgs = [None] * n - self.sources = [clean_str(x) for x in sources] # clean source names for later - for i, s in enumerate(sources): - # Start the thread to read frames from the video stream - print(f'{i + 1}/{n}: {s}... 
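The loaders above all call letterbox, which lives further down in utils/datasets.py, outside this hunk. A simplified sketch of the usual yolov5 letterbox, with the auto/scaleFill/scaleup options dropped:

    import cv2
    import numpy as np

    def letterbox_sketch(img, new_shape=(640, 640), color=(114, 114, 114)):
        # resize preserving aspect ratio, then pad symmetrically to new_shape
        h, w = img.shape[:2]
        r = min(new_shape[0] / h, new_shape[1] / w)         # gain
        new_unpad = (int(round(w * r)), int(round(h * r)))  # (width, height) for cv2.resize
        dw = (new_shape[1] - new_unpad[0]) / 2              # padding per side
        dh = (new_shape[0] - new_unpad[1]) / 2
        if (w, h) != new_unpad:
            img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
        top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
        left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
        img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)
        return img, r, (dw, dh)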
', end='') - cap = cv2.VideoCapture(eval(s) if s.isnumeric() else s) - assert cap.isOpened(), f'Failed to open {s}' - w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - fps = cap.get(cv2.CAP_PROP_FPS) % 100 - _, self.imgs[i] = cap.read() # guarantee first frame - thread = Thread(target=self.update, args=([i, cap]), daemon=True) - print(f' success ({w}x{h} at {fps:.2f} FPS).') - thread.start() - print('') # newline - - # check for common shapes - s = np.stack([letterbox(x, new_shape=self.img_size)[0].shape for x in self.imgs], 0) # inference shapes - self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal - if not self.rect: - print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.') - - def update(self, index, cap): - # Read next stream frame in a daemon thread - n = 0 - while cap.isOpened(): - n += 1 - # _, self.imgs[index] = cap.read() - cap.grab() - if n == 4: # read every 4th frame - _, self.imgs[index] = cap.retrieve() - n = 0 - time.sleep(0.01) # wait time - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - img0 = self.imgs.copy() - if cv2.waitKey(1) == ord('q'): # q to quit - cv2.destroyAllWindows() - raise StopIteration - - # Letterbox - img = [letterbox(x, new_shape=self.img_size, auto=self.rect)[0] for x in img0] - - # Stack - img = np.stack(img, 0) - - # Convert - img = img[:, :, :, ::-1].transpose(0, 3, 1, 2) # BGR to RGB, to bsx3x416x416 - img = np.ascontiguousarray(img) - - return self.sources, img, img0, None - - def __len__(self): - return 0 # 1E12 frames = 32 streams at 30 FPS for 30 years - - -def img2label_paths(img_paths): - # Define label paths as a function of image paths - sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - return [x.replace(sa, sb, 1).replace('.' 
+ x.split('.')[-1], '.txt') for x in img_paths] - - -class LoadImagesAndLabels(Dataset): # for training/testing - def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False, - cache_images=False, single_cls=False, stride=32, pad=0.0, prefix=''): - self.img_size = img_size - self.augment = augment - self.hyp = hyp - self.image_weights = image_weights - self.rect = False if image_weights else rect - self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training) - self.mosaic_border = [-img_size // 2, -img_size // 2] - self.stride = stride - - try: - f = [] # image files - for p in path if isinstance(path, list) else [path]: - p = Path(p) # os-agnostic - if p.is_dir(): # dir - f += glob.glob(str(p / '**' / '*.*'), recursive=True) - elif p.is_file(): # file - with open(p, 'r') as t: - t = t.read().strip().splitlines() - parent = str(p.parent) + os.sep - f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path - else: - raise Exception(f'{prefix}{p} does not exist') - self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in img_formats]) - assert self.img_files, f'{prefix}No images found' - except Exception as e: - raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {help_url}') - - # Check cache - self.label_files = img2label_paths(self.img_files) # labels - cache_path = Path(self.label_files[0]).parent.with_suffix('.cache') # cached labels - if cache_path.is_file(): - cache = torch.load(cache_path) # load - if cache['hash'] != get_hash(self.label_files + self.img_files) or 'results' not in cache: # changed - cache = self.cache_labels(cache_path, prefix) # re-cache - else: - cache = self.cache_labels(cache_path, prefix) # cache - - # Display cache - [nf, nm, ne, nc, n] = cache.pop('results') # found, missing, empty, corrupted, total - desc = f"Scanning '{cache_path}' for images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted" - tqdm(None, desc=prefix + desc, total=n, initial=n) - assert nf > 0 or not augment, f'{prefix}No labels in {cache_path}. Can not train without labels. 
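img2label_paths above swaps the first /images/ path component for /labels/ and the file suffix for .txt. A quick illustration with a hypothetical path:

    import os

    def img2label_paths(img_paths):
        sa, sb = os.sep + 'images' + os.sep, os.sep + 'labels' + os.sep
        return [x.replace(sa, sb, 1).replace('.' + x.split('.')[-1], '.txt') for x in img_paths]

    print(img2label_paths([os.path.join('widerface', 'images', 'train', '0_Parade_0001.jpg')]))
    # ['widerface/labels/train/0_Parade_0001.txt'] (separators per OS)

Note the suffix swap replaces the first occurrence of the extension string, so a directory name containing '.jpg' earlier in the path would also be rewritten.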
See {help_url}' - - # Read cache - cache.pop('hash') # remove hash - labels, shapes = zip(*cache.values()) - self.labels = list(labels) - self.shapes = np.array(shapes, dtype=np.float64) - self.img_files = list(cache.keys()) # update - self.label_files = img2label_paths(cache.keys()) # update - if single_cls: - for x in self.labels: - x[:, 0] = 0 - - n = len(shapes) # number of images - bi = np.floor(np.arange(n) / batch_size).astype(np.int) # batch index - nb = bi[-1] + 1 # number of batches - self.batch = bi # batch index of image - self.n = n - self.indices = range(n) - - # Rectangular Training - if self.rect: - # Sort by aspect ratio - s = self.shapes # wh - ar = s[:, 1] / s[:, 0] # aspect ratio - irect = ar.argsort() - self.img_files = [self.img_files[i] for i in irect] - self.label_files = [self.label_files[i] for i in irect] - self.labels = [self.labels[i] for i in irect] - self.shapes = s[irect] # wh - ar = ar[irect] - - # Set training image shapes - shapes = [[1, 1]] * nb - for i in range(nb): - ari = ar[bi == i] - mini, maxi = ari.min(), ari.max() - if maxi < 1: - shapes[i] = [maxi, 1] - elif mini > 1: - shapes[i] = [1, 1 / mini] - - self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(np.int) * stride - - # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM) - self.imgs = [None] * n - if cache_images: - gb = 0 # Gigabytes of cached images - self.img_hw0, self.img_hw = [None] * n, [None] * n - results = ThreadPool(8).imap(lambda x: load_image(*x), zip(repeat(self), range(n))) # 8 threads - pbar = tqdm(enumerate(results), total=n) - for i, x in pbar: - self.imgs[i], self.img_hw0[i], self.img_hw[i] = x # img, hw_original, hw_resized = load_image(self, i) - gb += self.imgs[i].nbytes - pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB)' - - def cache_labels(self, path=Path('./labels.cache'), prefix=''): - # Cache dataset labels, check images and read shapes - x = {} # dict - nm, nf, ne, nc = 0, 0, 0, 0 # number missing, found, empty, duplicate - pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files)) - for i, (im_file, lb_file) in enumerate(pbar): - try: - # verify images - im = Image.open(im_file) - im.verify() # PIL verify - shape = exif_size(im) # image size - assert (shape[0] > 9) & (shape[1] > 9), 'image size <10 pixels' - - # verify labels - if os.path.isfile(lb_file): - nf += 1 # label found - with open(lb_file, 'r') as f: - l = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - if len(l): - assert l.shape[1] == 5, 'labels require 5 columns each' - assert (l >= 0).all(), 'negative labels' - assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels' - assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels' - else: - ne += 1 # label empty - l = np.zeros((0, 5), dtype=np.float32) - else: - nm += 1 # label missing - l = np.zeros((0, 5), dtype=np.float32) - x[im_file] = [l, shape] - except Exception as e: - nc += 1 - print(f'{prefix}WARNING: Ignoring corrupted image and/or label {im_file}: {e}') - - pbar.desc = f"{prefix}Scanning '{path.parent / path.stem}' for images and labels... " \ - f"{nf} found, {nm} missing, {ne} empty, {nc} corrupted" - - if nf == 0: - print(f'{prefix}WARNING: No labels found in {path}. 
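
A worked sketch of the rectangular-training shape rule above under assumed values (img_size=640, stride=32, pad=0): a batch of uniformly wide images (all h/w < 1) letterboxes to a reduced height, rounded up to a stride multiple.

import numpy as np

ari = np.array([0.5, 0.6])  # h/w aspect ratios of one sorted batch (hypothetical)
shape = [ari.max(), 1]      # the maxi < 1 branch above
batch_shape = np.ceil(np.array(shape) * 640 / 32 + 0.0).astype(int) * 32
print(batch_shape)          # [384 640] -> this batch trains at 384x640 instead of 640x640
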
See {help_url}') - - x['hash'] = get_hash(self.label_files + self.img_files) - x['results'] = [nf, nm, ne, nc, i + 1] - torch.save(x, path) # save for next time - logging.info(f'{prefix}New cache created: {path}') - return x - - def __len__(self): - return len(self.img_files) - - # def __iter__(self): - # self.count = -1 - # print('ran dataset iter') - # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF) - # return self - - def __getitem__(self, index): - index = self.indices[index] # linear, shuffled, or image_weights - - hyp = self.hyp - mosaic = self.mosaic and random.random() < hyp['mosaic'] - if mosaic: - # Load mosaic - img, labels = load_mosaic(self, index) - shapes = None - - # MixUp https://arxiv.org/pdf/1710.09412.pdf - if random.random() < hyp['mixup']: - img2, labels2 = load_mosaic(self, random.randint(0, self.n - 1)) - r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0 - img = (img * r + img2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - - else: - # Load image - img, (h0, w0), (h, w) = load_image(self, index) - - # Letterbox - shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape - img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment) - shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling - - labels = self.labels[index].copy() - if labels.size: # normalized xywh to pixel xyxy format - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1]) - - if self.augment: - # Augment imagespace - if not mosaic: - img, labels = random_perspective(img, labels, - degrees=hyp['degrees'], - translate=hyp['translate'], - scale=hyp['scale'], - shear=hyp['shear'], - perspective=hyp['perspective']) - - # Augment colorspace - augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v']) - - # Apply cutouts - # if random.random() < 0.9: - # labels = cutout(img, labels) - - nL = len(labels) # number of labels - if nL: - labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh - labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1 - labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1 - - if self.augment: - # flip up-down - if random.random() < hyp['flipud']: - img = np.flipud(img) - if nL: - labels[:, 2] = 1 - labels[:, 2] - - # flip left-right - if random.random() < hyp['fliplr']: - img = np.fliplr(img) - if nL: - labels[:, 1] = 1 - labels[:, 1] - - labels_out = torch.zeros((nL, 6)) - if nL: - labels_out[:, 1:] = torch.from_numpy(labels) - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - - return torch.from_numpy(img), labels_out, self.img_files[index], shapes - - @staticmethod - def collate_fn(batch): - img, label, path, shapes = zip(*batch) # transposed - for i, l in enumerate(label): - l[:, 0] = i # add target image index for build_targets() - return torch.stack(img, 0), torch.cat(label, 0), path, shapes - - @staticmethod - def collate_fn4(batch): - img, label, path, shapes = zip(*batch) # transposed - n = len(shapes) // 4 - img4, label4, path4, shapes4 = [], [], path[:n], shapes[:n] - - ho = torch.tensor([[0., 0, 0, 1, 0, 0]]) - wo = torch.tensor([[0., 0, 1, 0, 0, 0]]) - s = torch.tensor([[1, 1, .5, .5, .5, .5]]) # scale - for i in range(n): # zidane torch.zeros(16,3,720,1280) # BCHW - i *= 4 - if random.random() < 0.5: - im = F.interpolate(img[i].unsqueeze(0).float(), scale_factor=2., 
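
A hedged sketch of the MixUp blend in __getitem__ above: Beta(8, 8) concentrates the ratio near 0.5 so both source images stay visible, and the two label arrays are simply concatenated.

import numpy as np

rng = np.random.default_rng(0)
img, img2 = rng.integers(0, 256, (2, 4, 4, 3), dtype=np.uint8)  # two stand-in images
r = rng.beta(8.0, 8.0)                                          # mixup ratio near 0.5
mixed = (img * r + img2 * (1 - r)).astype(np.uint8)
print(mixed.shape, round(float(r), 2))                          # (4, 4, 3) and r close to 0.5
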
mode='bilinear', align_corners=False)[ - 0].type(img[i].type()) - l = label[i] - else: - im = torch.cat((torch.cat((img[i], img[i + 1]), 1), torch.cat((img[i + 2], img[i + 3]), 1)), 2) - l = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s - img4.append(im) - label4.append(l) - - for i, l in enumerate(label4): - l[:, 0] = i # add target image index for build_targets() - - return torch.stack(img4, 0), torch.cat(label4, 0), path4, shapes4 - - -# Ancillary functions -------------------------------------------------------------------------------------------------- -def load_image(self, index): - # loads 1 image from dataset, returns img, original hw, resized hw - img = self.imgs[index] - if img is None: # not cached - path = self.img_files[index] - img = cv2.imread(path) # BGR - assert img is not None, 'Image Not Found ' + path - h0, w0 = img.shape[:2] # orig hw - r = self.img_size / max(h0, w0) # resize image to img_size - if r != 1: # always resize down, only resize up if training with augmentation - interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR - img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp) - return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized - else: - return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized - - -def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5): - r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains - hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV)) - dtype = img.dtype # uint8 - - x = np.arange(0, 256, dtype=np.int16) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype) - cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed - - # Histogram equalization - # if random.random() < 0.2: - # for i in range(3): - # img[:, :, i] = cv2.equalizeHist(img[:, :, i]) - - -def load_mosaic(self, index): - # loads images in a 4-mosaic - - labels4 = [] - s = self.img_size - yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y - indices = [index] + [self.indices[random.randint(0, self.n - 1)] for _ in range(3)] # 3 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - labels = self.labels[index].copy() - if labels.size: - labels[:, 1:] 
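
A small sketch of the LUT gain trick in augment_hsv above. OpenCV stores hue in [0, 180), so the hue table wraps modulo 180 while the saturation/value tables clip to [0, 255]; the gain values here are assumed.

import numpy as np

x = np.arange(0, 256, dtype=np.int16)
lut_hue = ((x * 1.2) % 180).astype(np.uint8)         # hypothetical hue gain 1.2
lut_val = np.clip(x * 0.8, 0, 255).astype(np.uint8)  # hypothetical value gain 0.8
print(lut_hue[179], lut_val[255])                    # 34 (214.8 wrapped mod 180) and 204
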
= xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - labels4.append(labels) - - # Concat/clip labels - if len(labels4): - labels4 = np.concatenate(labels4, 0) - np.clip(labels4[:, 1:], 0, 2 * s, out=labels4[:, 1:]) # use with random_perspective - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - img4, labels4 = random_perspective(img4, labels4, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img4, labels4 - - -def load_mosaic9(self, index): - # loads images in a 9-mosaic - - labels9 = [] - s = self.img_size - indices = [index] + [self.indices[random.randint(0, self.n - 1)] for _ in range(8)] # 8 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img9 - if i == 0: # center - img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - h0, w0 = h, w - c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - elif i == 1: # top - c = s, s - h, s + w, s - elif i == 2: # top right - c = s + wp, s - h, s + wp + w, s - elif i == 3: # right - c = s + w0, s, s + w0 + w, s + h - elif i == 4: # bottom right - c = s + w0, s + hp, s + w0 + w, s + hp + h - elif i == 5: # bottom - c = s + w0 - w, s + h0, s + w0, s + h0 + h - elif i == 6: # bottom left - c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - elif i == 7: # left - c = s - w, s + h0 - h, s, s + h0 - elif i == 8: # top left - c = s - w, s + h0 - hp - h, s, s + h0 - hp - - padx, pady = c[:2] - x1, y1, x2, y2 = [max(x, 0) for x in c] # allocate coords - - # Labels - labels = self.labels[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - labels9.append(labels) - - # Image - img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax] - hp, wp = h, w # height, width previous - - # Offset - yc, xc = [int(random.uniform(0, s)) for x in self.mosaic_border] # mosaic center x, y - img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] - - # Concat/clip labels - if len(labels9): - labels9 = np.concatenate(labels9, 0) - labels9[:, [1, 3]] -= xc - labels9[:, [2, 4]] -= yc - - np.clip(labels9[:, 1:], 0, 2 * s, out=labels9[:, 1:]) # use with random_perspective - # img9, labels9 = replicate(img9, labels9) # replicate - - # Augment - img9, labels9 = random_perspective(img9, labels9, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img9, labels9 - - -def replicate(img, labels): - # Replicate labels - h, w = img.shape[:2] - boxes = labels[:, 1:].astype(int) - x1, y1, x2, y2 = boxes.T - s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels) - for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices - x1b, y1b, x2b, y2b = boxes[i] - bh, bw = y2b - y1b, x2b - x1b - yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y - x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh] - img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0) - - return img, labels - - -def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, 
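
Worked numbers for the i == 0 (top-left) tile arithmetic in load_mosaic above, with hypothetical sizes: (xc, yc) is the random mosaic centre on the 2s x 2s canvas, and the source crop is sized to exactly fill the destination region.

s, xc, yc, w, h = 640, 700, 500, 800, 600
x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc  # destination on the canvas
x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h  # matching crop of the source image
print((x1a, y1a, x2a, y2a), (x1b, y1b, x2b, y2b))            # (0, 0, 700, 500) (100, 100, 800, 600)
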
scaleFill=False, scaleup=True): - # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232 - shape = img.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better test mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, 64), np.mod(dh, 64) # wh padding - elif scaleFill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return img, ratio, (dw, dh) - - -def random_perspective(img, targets=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = img.shape[0] + border[0] * 2 # shape(h,w,c) - width = img.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -img.shape[1] / 2 # x translation (pixels) - C[1, 2] = -img.shape[0] / 2 # y translation (pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations - s = random.uniform(1 - scale, 1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels) - T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # ax[0].imshow(img[:, :, ::-1]) # base - # ax[1].imshow(img2[:, :, ::-1]) # warped - - # Transform label coordinates - n = len(targets) - if n: - # warp points - xy = np.ones((n * 4, 3)) - xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @ 
M.T # transform - if perspective: - xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 8) # rescale - else: # affine - xy = xy[:, :2].reshape(n, 8) - - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - xy = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - - # # apply angle-based reduction of bounding boxes - # radians = a * math.pi / 180 - # reduction = max(abs(math.sin(radians)), abs(math.cos(radians))) ** 0.5 - # x = (xy[:, 2] + xy[:, 0]) / 2 - # y = (xy[:, 3] + xy[:, 1]) / 2 - # w = (xy[:, 2] - xy[:, 0]) * reduction - # h = (xy[:, 3] - xy[:, 1]) * reduction - # xy = np.concatenate((x - w / 2, y - h / 2, x + w / 2, y + h / 2)).reshape(4, n).T - - # clip boxes - xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width) - xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=xy.T) - targets = targets[i] - targets[:, 1:5] = xy[i] - - return img, targets - - -def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n) - # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio - w1, h1 = box1[2] - box1[0], box1[3] - box1[1] - w2, h2 = box2[2] - box2[0], box2[3] - box2[1] - ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio - return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates - - -def cutout(image, labels): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - h, w = image.shape[:2] - - def bbox_ioa(box1, box2): - # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. boxes are x1y1x2y2 - box2 = box2.transpose() - - # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - - # Intersection area - inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ - (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0) - - # box2 area - box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16 - - # Intersection over box2 area - return inter_area / box2_area - - # create random masks - scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction - for s in scales: - mask_h = random.randint(1, int(h * s)) - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)] - - # return unobscured labels - if len(labels) and s > 0.03: - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - labels = labels[ioa < 0.60] # remove >60% obscured labels - - return labels - - -def create_folder(path='./new'): - # Create folder - if os.path.exists(path): - shutil.rmtree(path) # delete output folder - os.makedirs(path) # make new output folder - - -def flatten_recursive(path='../coco128'): - # Flatten a recursive directory by bringing all files to top level - new_path = Path(path + '_flat') - create_folder(new_path) - for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)): - shutil.copyfile(file, new_path / Path(file).name) - - -def extract_boxes(path='../coco128/'): # from utils.datasets import *; 
extract_boxes('../coco128') - # Convert detection dataset into classification dataset, with one directory per class - - path = Path(path) # images dir - shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing - files = list(path.rglob('*.*')) - n = len(files) # number of files - for im_file in tqdm(files, total=n): - if im_file.suffix[1:] in img_formats: - # image - im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - h, w = im.shape[:2] - - # labels - lb_file = Path(img2label_paths([str(im_file)])[0]) - if Path(lb_file).exists(): - with open(lb_file, 'r') as f: - lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - - for j, x in enumerate(lb): - c = int(x[0]) # class - f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - if not f.parent.is_dir(): - f.parent.mkdir(parents=True) - - b = x[1:] * [w, h, w, h] # box - # b[2:] = b[2:].max() # rectangle to square - b[2:] = b[2:] * 1.2 + 3 # pad - b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - - b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - b[[1, 3]] = np.clip(b[[1, 3]], 0, h) - assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - - -def autosplit(path='../coco128', weights=(0.9, 0.1, 0.0)): # from utils.datasets import *; autosplit('../coco128') - """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - # Arguments - path: Path to images directory - weights: Train, val, test weights (list) - """ - path = Path(path) # images dir - files = list(path.rglob('*.*')) - n = len(files) # number of files - indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing - for i, img in tqdm(zip(indices, files), total=n): - if img.suffix[1:] in img_formats: - with open(path / txt[i], 'a') as f: - f.write(str(img) + '\n') # add image to txt file diff --git a/lib/yolov5-face_Jan1/utils/face_datasets.py b/lib/yolov5-face_Jan1/utils/face_datasets.py deleted file mode 100755 index efd6f4927..000000000 --- a/lib/yolov5-face_Jan1/utils/face_datasets.py +++ /dev/null @@ -1,834 +0,0 @@ -import glob -import logging -import math -import os -import random -import shutil -import time -from itertools import repeat -from multiprocessing.pool import ThreadPool -from pathlib import Path -from threading import Thread - -import cv2 -import numpy as np -import torch -from PIL import Image, ExifTags -from torch.utils.data import Dataset -from tqdm import tqdm - -from utils.general import xyxy2xywh, xywh2xyxy, clean_str -from utils.torch_utils import torch_distributed_zero_first - - -# Parameters -help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' -img_formats = ['bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng'] # acceptable image suffixes -vid_formats = ['mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'] # acceptable video suffixes -logger = logging.getLogger(__name__) - -# Get orientation exif tag -for orientation in ExifTags.TAGS.keys(): - if ExifTags.TAGS[orientation] == 'Orientation': - break - -def get_hash(files): - # Returns a single hash value of a list of files - return sum(os.path.getsize(f) for f in files if os.path.isfile(f)) - -def img2label_paths(img_paths): - # Define label paths as a function of image paths - sa, sb = os.sep + 
'images' + os.sep, os.sep + 'labels' + os.sep # /images/, /labels/ substrings - return [x.replace(sa, sb, 1).replace('.' + x.split('.')[-1], '.txt') for x in img_paths] - -def exif_size(img): - # Returns exif-corrected PIL size - s = img.size # (width, height) - try: - rotation = dict(img._getexif().items())[orientation] - if rotation == 6: # rotation 270 - s = (s[1], s[0]) - elif rotation == 8: # rotation 90 - s = (s[1], s[0]) - except: - pass - - return s - -def create_dataloader(path, imgsz, batch_size, stride, opt, hyp=None, augment=False, cache=False, pad=0.0, rect=False, - rank=-1, world_size=1, workers=8, image_weights=False, quad=False, prefix=''): - # Make sure only the first process in DDP process the dataset first, and the following others can use the cache - with torch_distributed_zero_first(rank): - dataset = LoadFaceImagesAndLabels(path, imgsz, batch_size, - augment=augment, # augment images - hyp=hyp, # augmentation hyperparameters - rect=rect, # rectangular training - cache_images=cache, - single_cls=opt.single_cls, - stride=int(stride), - pad=pad, - image_weights=image_weights, - ) - - batch_size = min(batch_size, len(dataset)) - nw = min([os.cpu_count() // world_size, batch_size if batch_size > 1 else 0, workers]) # number of workers - sampler = torch.utils.data.distributed.DistributedSampler(dataset) if rank != -1 else None - loader = torch.utils.data.DataLoader if image_weights else InfiniteDataLoader - # Use torch.utils.data.DataLoader() if dataset.properties will update during training else InfiniteDataLoader() - dataloader = loader(dataset, - batch_size=batch_size, - num_workers=nw, - sampler=sampler, - pin_memory=True, - collate_fn=LoadFaceImagesAndLabels.collate_fn4 if quad else LoadFaceImagesAndLabels.collate_fn) - return dataloader, dataset -class InfiniteDataLoader(torch.utils.data.dataloader.DataLoader): - """ Dataloader that reuses workers - - Uses same syntax as vanilla DataLoader - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler)) - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for i in range(len(self)): - yield next(self.iterator) -class _RepeatSampler(object): - """ Sampler that repeats forever - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) - -class LoadFaceImagesAndLabels(Dataset): # for training/testing - def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, rect=False, image_weights=False, - cache_images=False, single_cls=False, stride=32, pad=0.0, rank=-1): - self.img_size = img_size - self.augment = augment - self.hyp = hyp - self.image_weights = image_weights - self.rect = False if image_weights else rect - self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training) - self.mosaic_border = [-img_size // 2, -img_size // 2] - self.stride = stride - - try: - f = [] # image files - for p in path if isinstance(path, list) else [path]: - p = Path(p) # os-agnostic - if p.is_dir(): # dir - f += glob.glob(str(p / '**' / '*.*'), recursive=True) - elif p.is_file(): # file - with open(p, 'r') as t: - t = t.read().strip().splitlines() - parent = str(p.parent) + os.sep - f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path - else: - raise Exception('%s 
does not exist' % p) - self.img_files = sorted([x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in img_formats]) - assert self.img_files, 'No images found' - except Exception as e: - raise Exception('Error loading data from %s: %s\nSee %s' % (path, e, help_url)) - - # Check cache - self.label_files = img2label_paths(self.img_files) # labels - cache_path = Path(self.label_files[0]).parent.with_suffix('.cache') # cached labels - if cache_path.is_file(): - cache = torch.load(cache_path) # load - if cache['hash'] != get_hash(self.label_files + self.img_files) or 'results' not in cache: # changed - cache = self.cache_labels(cache_path) # re-cache - else: - cache = self.cache_labels(cache_path) # cache - - # Display cache - [nf, nm, ne, nc, n] = cache.pop('results') # found, missing, empty, corrupted, total - desc = f"Scanning '{cache_path}' for images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupted" - tqdm(None, desc=desc, total=n, initial=n) - assert nf > 0 or not augment, f'No labels found in {cache_path}. Can not train without labels. See {help_url}' - - # Read cache - cache.pop('hash') # remove hash - labels, shapes = zip(*cache.values()) - self.labels = list(labels) - self.shapes = np.array(shapes, dtype=np.float64) - self.img_files = list(cache.keys()) # update - self.label_files = img2label_paths(cache.keys()) # update - if single_cls: - for x in self.labels: - x[:, 0] = 0 - - n = len(shapes) # number of images - bi = np.floor(np.arange(n) / batch_size).astype(np.int) # batch index - nb = bi[-1] + 1 # number of batches - self.batch = bi # batch index of image - self.n = n - self.indices = range(n) - - # Rectangular Training - if self.rect: - # Sort by aspect ratio - s = self.shapes # wh - ar = s[:, 1] / s[:, 0] # aspect ratio - irect = ar.argsort() - self.img_files = [self.img_files[i] for i in irect] - self.label_files = [self.label_files[i] for i in irect] - self.labels = [self.labels[i] for i in irect] - self.shapes = s[irect] # wh - ar = ar[irect] - - # Set training image shapes - shapes = [[1, 1]] * nb - for i in range(nb): - ari = ar[bi == i] - mini, maxi = ari.min(), ari.max() - if maxi < 1: - shapes[i] = [maxi, 1] - elif mini > 1: - shapes[i] = [1, 1 / mini] - - self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(np.int) * stride - - # Cache images into memory for faster training (WARNING: large datasets may exceed system RAM) - self.imgs = [None] * n - if cache_images: - gb = 0 # Gigabytes of cached images - self.img_hw0, self.img_hw = [None] * n, [None] * n - results = ThreadPool(8).imap(lambda x: load_image(*x), zip(repeat(self), range(n))) # 8 threads - pbar = tqdm(enumerate(results), total=n) - for i, x in pbar: - self.imgs[i], self.img_hw0[i], self.img_hw[i] = x # img, hw_original, hw_resized = load_image(self, i) - gb += self.imgs[i].nbytes - pbar.desc = 'Caching images (%.1fGB)' % (gb / 1E9) - - def cache_labels(self, path=Path('./labels.cache')): - # Cache dataset labels, check images and read shapes - x = {} # dict - nm, nf, ne, nc = 0, 0, 0, 0 # number missing, found, empty, duplicate - pbar = tqdm(zip(self.img_files, self.label_files), desc='Scanning images', total=len(self.img_files)) - for i, (im_file, lb_file) in enumerate(pbar): - try: - # verify images - im = Image.open(im_file) - im.verify() # PIL verify - shape = exif_size(im) # image size - assert (shape[0] > 9) & (shape[1] > 9), 'image size <10 pixels' - - # verify labels - if os.path.isfile(lb_file): - nf += 1 # label found - with open(lb_file, 'r') 
as f: - l = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - if len(l): - assert l.shape[1] == 15, 'labels require 15 columns each' - assert (l >= -1).all(), 'negative labels' - assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels' - assert np.unique(l, axis=0).shape[0] == l.shape[0], 'duplicate labels' - else: - ne += 1 # label empty - l = np.zeros((0, 15), dtype=np.float32) - else: - nm += 1 # label missing - l = np.zeros((0, 15), dtype=np.float32) - x[im_file] = [l, shape] - except Exception as e: - nc += 1 - print('WARNING: Ignoring corrupted image and/or label %s: %s' % (im_file, e)) - - pbar.desc = f"Scanning '{path.parent / path.stem}' for images and labels... " \ - f"{nf} found, {nm} missing, {ne} empty, {nc} corrupted" - - if nf == 0: - print(f'WARNING: No labels found in {path}. See {help_url}') - - x['hash'] = get_hash(self.label_files + self.img_files) - x['results'] = [nf, nm, ne, nc, i + 1] - torch.save(x, path) # save for next time - logging.info(f"New cache created: {path}") - return x - - def __len__(self): - return len(self.img_files) - - # def __iter__(self): - # self.count = -1 - # print('ran dataset iter') - # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF) - # return self - - def __getitem__(self, index): - index = self.indices[index] # linear, shuffled, or image_weights - - hyp = self.hyp - mosaic = self.mosaic and random.random() < hyp['mosaic'] - if mosaic: - # Load mosaic - img, labels = load_mosaic_face(self, index) - shapes = None - - # MixUp https://arxiv.org/pdf/1710.09412.pdf - if random.random() < hyp['mixup']: - img2, labels2 = load_mosaic_face(self, random.randint(0, self.n - 1)) - r = np.random.beta(8.0, 8.0) # mixup ratio, alpha=beta=8.0 - img = (img * r + img2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - - else: - # Load image - img, (h0, w0), (h, w) = load_image(self, index) - - # Letterbox - shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape - img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment) - shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling - - # Load labels - labels = [] - x = self.labels[index] - if x.size > 0: - # Normalized xywh to pixel xyxy format - labels = x.copy() - labels[:, 1] = ratio[0] * w * (x[:, 1] - x[:, 3] / 2) + pad[0] # pad width - labels[:, 2] = ratio[1] * h * (x[:, 2] - x[:, 4] / 2) + pad[1] # pad height - labels[:, 3] = ratio[0] * w * (x[:, 1] + x[:, 3] / 2) + pad[0] - labels[:, 4] = ratio[1] * h * (x[:, 2] + x[:, 4] / 2) + pad[1] - - #labels[:, 5] = ratio[0] * w * x[:, 5] + pad[0] # pad width - labels[:, 5] = np.array(x[:, 5] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 5] + pad[0]) + ( - np.array(x[:, 5] > 0, dtype=np.int32) - 1) - labels[:, 6] = np.array(x[:, 6] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 6] + pad[1]) + ( - np.array(x[:, 6] > 0, dtype=np.int32) - 1) - labels[:, 7] = np.array(x[:, 7] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 7] + pad[0]) + ( - np.array(x[:, 7] > 0, dtype=np.int32) - 1) - labels[:, 8] = np.array(x[:, 8] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 8] + pad[1]) + ( - np.array(x[:, 8] > 0, dtype=np.int32) - 1) - labels[:, 9] = np.array(x[:, 9] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 9] + pad[0]) + ( - np.array(x[:, 9] > 0, dtype=np.int32) - 1) - labels[:, 10] = np.array(x[:, 10] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 10] + pad[1]) + ( - np.array(x[:, 10] > 0, dtype=np.int32) - 1)
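
A worked example of the visibility masking applied to every landmark column above (values are hypothetical): with mask = (x > 0), the expression mask * (scale * x + pad) + (mask - 1) letterboxes visible coordinates and pins missing landmarks (encoded as values <= 0) at exactly -1.

import numpy as np

x5 = np.array([0.25, -1.0])       # one visible, one missing landmark x (normalized)
w, ratio0, pad0 = 640, 1.0, 16.0  # assumed letterbox parameters
mask = np.array(x5 > 0, dtype=np.int32)
out = mask * (ratio0 * w * x5 + pad0) + (mask - 1)
print(out)                        # [176.  -1.] -> scaled when visible, -1 when missing
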
- labels[:, 11] = np.array(x[:, 11] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 11] + pad[0]) + ( - np.array(x[:, 11] > 0, dtype=np.int32) - 1) - labels[:, 12] = np.array(x[:, 12] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 12] + pad[1]) + ( - np.array(x[:, 12] > 0, dtype=np.int32) - 1) - labels[:, 13] = np.array(x[:, 13] > 0, dtype=np.int32) * (ratio[0] * w * x[:, 13] + pad[0]) + ( - np.array(x[:, 13] > 0, dtype=np.int32) - 1) - labels[:, 14] = np.array(x[:, 14] > 0, dtype=np.int32) * (ratio[1] * h * x[:, 14] + pad[1]) + ( - np.array(x[:, 14] > 0, dtype=np.int32) - 1) - - if self.augment: - # Augment imagespace - if not mosaic: - img, labels = random_perspective(img, labels, - degrees=hyp['degrees'], - translate=hyp['translate'], - scale=hyp['scale'], - shear=hyp['shear'], - perspective=hyp['perspective']) - - # Augment colorspace - augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v']) - - # Apply cutouts - # if random.random() < 0.9: - # labels = cutout(img, labels) - - nL = len(labels) # number of labels - if nL: - labels[:, 1:5] = xyxy2xywh(labels[:, 1:5]) # convert xyxy to xywh - labels[:, [2, 4]] /= img.shape[0] # normalized height 0-1 - labels[:, [1, 3]] /= img.shape[1] # normalized width 0-1 - - labels[:, [5, 7, 9, 11, 13]] /= img.shape[1] # normalized landmark x 0-1 - labels[:, [5, 7, 9, 11, 13]] = np.where(labels[:, [5, 7, 9, 11, 13]] < 0, -1, labels[:, [5, 7, 9, 11, 13]]) - labels[:, [6, 8, 10, 12, 14]] /= img.shape[0] # normalized landmark y 0-1 - labels[:, [6, 8, 10, 12, 14]] = np.where(labels[:, [6, 8, 10, 12, 14]] < 0, -1, labels[:, [6, 8, 10, 12, 14]]) - - if self.augment: - # flip up-down - if random.random() < hyp['flipud']: - img = np.flipud(img) - if nL: - labels[:, 2] = 1 - labels[:, 2] - - labels[:, 6] = np.where(labels[:, 6] < 0, -1, 1 - labels[:, 6]) - labels[:, 8] = np.where(labels[:, 8] < 0, -1, 1 - labels[:, 8]) - labels[:, 10] = np.where(labels[:, 10] < 0, -1, 1 - labels[:, 10]) - labels[:, 12] = np.where(labels[:, 12] < 0, -1, 1 - labels[:, 12]) - labels[:, 14] = np.where(labels[:, 14] < 0, -1, 1 - labels[:, 14]) - - # flip left-right - if random.random() < hyp['fliplr']: - img = np.fliplr(img) - if nL: - labels[:, 1] = 1 - labels[:, 1] - - labels[:, 5] = np.where(labels[:, 5] < 0, -1, 1 - labels[:, 5]) - labels[:, 7] = np.where(labels[:, 7] < 0, -1, 1 - labels[:, 7]) - labels[:, 9] = np.where(labels[:, 9] < 0, -1, 1 - labels[:, 9]) - labels[:, 11] = np.where(labels[:, 11] < 0, -1, 1 - labels[:, 11]) - labels[:, 13] = np.where(labels[:, 13] < 0, -1, 1 - labels[:, 13]) - - # when mirroring left-right, the left/right eyes and left/right mouth corners cannot be told apart, so swap them to keep their semantics consistent for the network - eye_left = np.copy(labels[:, [5, 6]]) - mouth_left = np.copy(labels[:, [11, 12]]) - labels[:, [5, 6]] = labels[:, [7, 8]] - labels[:, [7, 8]] = eye_left - labels[:, [11, 12]] = labels[:, [13, 14]] - labels[:, [13, 14]] = mouth_left - - labels_out = torch.zeros((nL, 16)) - if nL: - labels_out[:, 1:] = torch.from_numpy(labels) - #showlabels(img, labels[:, 1:5], labels[:, 5:15]) - - # Convert - img = img[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - img = np.ascontiguousarray(img) - #print(index, ' --- labels_out: ', labels_out) - #if nL: - #print( ' : landmarks : ', torch.max(labels_out[:, 5:15]), ' --- ', torch.min(labels_out[:, 5:15])) - return torch.from_numpy(img), labels_out, self.img_files[index], shapes - - @staticmethod - def collate_fn(batch): - img, label, path, shapes = zip(*batch) # transposed - for i, l in enumerate(label): - l[:, 0] = i # add target image 
index for build_targets() - return torch.stack(img, 0), torch.cat(label, 0), path, shapes - - -def showlabels(img, boxs, landmarks): - for box in boxs: - x,y,w,h = box[0] * img.shape[1], box[1] * img.shape[0], box[2] * img.shape[1], box[3] * img.shape[0] - #cv2.rectangle(image, (x,y), (x+w,y+h), (0,255,0), 2) - cv2.rectangle(img, (int(x - w/2), int(y - h/2)), (int(x + w/2), int(y + h/2)), (0, 255, 0), 2) - - for landmark in landmarks: - #cv2.circle(img,(60,60),30,(0,0,255)) - for i in range(5): - cv2.circle(img, (int(landmark[2*i] * img.shape[1]), int(landmark[2*i+1]*img.shape[0])), 3 ,(0,0,255), -1) - cv2.imshow('test', img) - cv2.waitKey(0) - - -def load_mosaic_face(self, index): - # loads images in a mosaic - labels4 = [] - s = self.img_size - yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border] # mosaic center x, y - indices = [index] + [self.indices[random.randint(0, self.n - 1)] for _ in range(3)] # 3 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = load_image(self, index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - x = self.labels[index] - labels = x.copy() - if x.size > 0: # Normalized xywh to pixel xyxy format - #box, x1,y1,x2,y2 - labels[:, 1] = w * (x[:, 1] - x[:, 3] / 2) + padw - labels[:, 2] = h * (x[:, 2] - x[:, 4] / 2) + padh - labels[:, 3] = w * (x[:, 1] + x[:, 3] / 2) + padw - labels[:, 4] = h * (x[:, 2] + x[:, 4] / 2) + padh - #10 landmarks - - labels[:, 5] = np.array(x[:, 5] > 0, dtype=np.int32) * (w * x[:, 5] + padw) + (np.array(x[:, 5] > 0, dtype=np.int32) - 1) - labels[:, 6] = np.array(x[:, 6] > 0, dtype=np.int32) * (h * x[:, 6] + padh) + (np.array(x[:, 6] > 0, dtype=np.int32) - 1) - labels[:, 7] = np.array(x[:, 7] > 0, dtype=np.int32) * (w * x[:, 7] + padw) + (np.array(x[:, 7] > 0, dtype=np.int32) - 1) - labels[:, 8] = np.array(x[:, 8] > 0, dtype=np.int32) * (h * x[:, 8] + padh) + (np.array(x[:, 8] > 0, dtype=np.int32) - 1) - labels[:, 9] = np.array(x[:, 9] > 0, dtype=np.int32) * (w * x[:, 9] + padw) + (np.array(x[:, 9] > 0, dtype=np.int32) - 1) - labels[:, 10] = np.array(x[:, 10] > 0, dtype=np.int32) * (h * x[:, 10] + padh) + (np.array(x[:, 10] > 0, dtype=np.int32) - 1) - labels[:, 11] = np.array(x[:, 11] > 0, dtype=np.int32) * (w * x[:, 11] + padw) + (np.array(x[:, 11] > 0, dtype=np.int32) - 1) - labels[:, 12] = np.array(x[:, 12] > 0, dtype=np.int32) * (h * x[:, 12] + padh) + (np.array(x[:, 12] > 0, dtype=np.int32) - 1) - labels[:, 13] = np.array(x[:, 13] > 0, dtype=np.int32) * (w * x[:, 13] + padw) + (np.array(x[:, 13] > 0, dtype=np.int32) - 1) - labels[:, 14] = np.array(x[:, 14] > 0, 
dtype=np.int32) * (h * x[:, 14] + padh) + (np.array(x[:, 14] > 0, dtype=np.int32) - 1) - labels4.append(labels) - - # Concat/clip labels - if len(labels4): - labels4 = np.concatenate(labels4, 0) - np.clip(labels4[:, 1:5], 0, 2 * s, out=labels4[:, 1:5]) # use with random_perspective - # img4, labels4 = replicate(img4, labels4) # replicate - - #landmarks - labels4[:, 5:] = np.where(labels4[:, 5:] < 0, -1, labels4[:, 5:]) - labels4[:, 5:] = np.where(labels4[:, 5:] > 2 * s, -1, labels4[:, 5:]) - - labels4[:, 5] = np.where(labels4[:, 6] == -1, -1, labels4[:, 5]) - labels4[:, 6] = np.where(labels4[:, 5] == -1, -1, labels4[:, 6]) - - labels4[:, 7] = np.where(labels4[:, 8] == -1, -1, labels4[:, 7]) - labels4[:, 8] = np.where(labels4[:, 7] == -1, -1, labels4[:, 8]) - - labels4[:, 9] = np.where(labels4[:, 10] == -1, -1, labels4[:, 9]) - labels4[:, 10] = np.where(labels4[:, 9] == -1, -1, labels4[:, 10]) - - labels4[:, 11] = np.where(labels4[:, 12] == -1, -1, labels4[:, 11]) - labels4[:, 12] = np.where(labels4[:, 11] == -1, -1, labels4[:, 12]) - - labels4[:, 13] = np.where(labels4[:, 14] == -1, -1, labels4[:, 13]) - labels4[:, 14] = np.where(labels4[:, 13] == -1, -1, labels4[:, 14]) - - # Augment - img4, labels4 = random_perspective(img4, labels4, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - return img4, labels4 - - -# Ancillary functions -------------------------------------------------------------------------------------------------- -def load_image(self, index): - # loads 1 image from dataset, returns img, original hw, resized hw - img = self.imgs[index] - if img is None: # not cached - path = self.img_files[index] - img = cv2.imread(path) # BGR - assert img is not None, 'Image Not Found ' + path - h0, w0 = img.shape[:2] # orig hw - r = self.img_size / max(h0, w0) # resize image to img_size - if r != 1: # always resize down, only resize up if training with augmentation - interp = cv2.INTER_AREA if r < 1 and not self.augment else cv2.INTER_LINEAR - img = cv2.resize(img, (int(w0 * r), int(h0 * r)), interpolation=interp) - return img, (h0, w0), img.shape[:2] # img, hw_original, hw_resized - else: - return self.imgs[index], self.img_hw0[index], self.img_hw[index] # img, hw_original, hw_resized - - -def augment_hsv(img, hgain=0.5, sgain=0.5, vgain=0.5): - r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains - hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV)) - dtype = img.dtype # uint8 - - x = np.arange(0, 256, dtype=np.int16) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - img_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))).astype(dtype) - cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img) # no return needed - - # Histogram equalization - # if random.random() < 0.2: - # for i in range(3): - # img[:, :, i] = cv2.equalizeHist(img[:, :, i]) - -def replicate(img, labels): - # Replicate labels - h, w = img.shape[:2] - boxes = labels[:, 1:].astype(int) - x1, y1, x2, y2 = boxes.T - s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels) - for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices - x1b, y1b, x2b, y2b = boxes[i] - bh, bw = y2b - y1b, x2b - x1b - yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y - x1a, y1a, x2a, y2a = 
[xc, yc, xc + bw, yc + bh] - img[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0) - - return img, labels - - -def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True): - # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232 - shape = img.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better test mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, 64), np.mod(dh, 64) # wh padding - elif scaleFill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return img, ratio, (dw, dh) - - -def random_perspective(img, targets=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(.1, .1), scale=(.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = img.shape[0] + border[0] * 2 # shape(h,w,c) - width = img.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -img.shape[1] / 2 # x translation (pixels) - C[1, 2] = -img.shape[0] / 2 # y translation (pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations - s = random.uniform(1 - scale, 1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels) - T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # 
ax[0].imshow(img[:, :, ::-1]) # base - # ax[1].imshow(img2[:, :, ::-1]) # warped - - # Transform label coordinates - n = len(targets) - if n: - # warp points - #xy = np.ones((n * 4, 3)) - xy = np.ones((n * 9, 3)) - xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]].reshape(n * 9, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @ M.T # transform - if perspective: - xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 18) # rescale - else: # affine - xy = xy[:, :2].reshape(n, 18) - - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - - landmarks = xy[:, [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]] - mask = np.array(targets[:, 5:] > 0, dtype=np.int32) - landmarks = landmarks * mask - landmarks = landmarks + mask - 1 - - landmarks = np.where(landmarks < 0, -1, landmarks) - landmarks[:, [0, 2, 4, 6, 8]] = np.where(landmarks[:, [0, 2, 4, 6, 8]] > width, -1, landmarks[:, [0, 2, 4, 6, 8]]) - landmarks[:, [1, 3, 5, 7, 9]] = np.where(landmarks[:, [1, 3, 5, 7, 9]] > height, -1,landmarks[:, [1, 3, 5, 7, 9]]) - - landmarks[:, 0] = np.where(landmarks[:, 1] == -1, -1, landmarks[:, 0]) - landmarks[:, 1] = np.where(landmarks[:, 0] == -1, -1, landmarks[:, 1]) - - landmarks[:, 2] = np.where(landmarks[:, 3] == -1, -1, landmarks[:, 2]) - landmarks[:, 3] = np.where(landmarks[:, 2] == -1, -1, landmarks[:, 3]) - - landmarks[:, 4] = np.where(landmarks[:, 5] == -1, -1, landmarks[:, 4]) - landmarks[:, 5] = np.where(landmarks[:, 4] == -1, -1, landmarks[:, 5]) - - landmarks[:, 6] = np.where(landmarks[:, 7] == -1, -1, landmarks[:, 6]) - landmarks[:, 7] = np.where(landmarks[:, 6] == -1, -1, landmarks[:, 7]) - - landmarks[:, 8] = np.where(landmarks[:, 9] == -1, -1, landmarks[:, 8]) - landmarks[:, 9] = np.where(landmarks[:, 8] == -1, -1, landmarks[:, 9]) - - targets[:,5:] = landmarks - - xy = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - - # # apply angle-based reduction of bounding boxes - # radians = a * math.pi / 180 - # reduction = max(abs(math.sin(radians)), abs(math.cos(radians))) ** 0.5 - # x = (xy[:, 2] + xy[:, 0]) / 2 - # y = (xy[:, 3] + xy[:, 1]) / 2 - # w = (xy[:, 2] - xy[:, 0]) * reduction - # h = (xy[:, 3] - xy[:, 1]) * reduction - # xy = np.concatenate((x - w / 2, y - h / 2, x + w / 2, y + h / 2)).reshape(4, n).T - - # clip boxes - xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width) - xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=xy.T) - targets = targets[i] - targets[:, 1:5] = xy[i] - - return img, targets - - -def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1): # box1(4,n), box2(4,n) - # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio - w1, h1 = box1[2] - box1[0], box1[3] - box1[1] - w2, h2 = box2[2] - box2[0], box2[3] - box2[1] - ar = np.maximum(w2 / (h2 + 1e-16), h2 / (w2 + 1e-16)) # aspect ratio - return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + 1e-16) > area_thr) & (ar < ar_thr) # candidates - - -def cutout(image, labels): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - h, w = image.shape[:2] - - def bbox_ioa(box1, box2): - # Returns the intersection over box2 area given box1, box2. box1 is 4, box2 is nx4. 
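
A small sketch of the 9-points-per-target warp above (4 box corners plus 5 landmarks, giving 18 columns), using an assumed pure-translation affine so the expected output is easy to verify by hand.

import numpy as np

t = np.array([[0, 10, 10, 30, 30, 12, 12, 28, 12, 20, 20, 12, 28, 28, 28]], dtype=float)
n = len(t)
xy = np.ones((n * 9, 3))
xy[:, :2] = t[:, [1, 2, 3, 4, 1, 4, 3, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]].reshape(n * 9, 2)
M = np.eye(3); M[0, 2] = 5.0   # hypothetical shift of +5 px in x
xy = (xy @ M.T)[:, :2].reshape(n, 18)
print(xy[0, :2], xy[0, 8:10])  # corner (15., 10.) and first landmark (17., 12.)
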
boxes are x1y1x2y2 - box2 = box2.transpose() - - # Get the coordinates of bounding boxes - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - - # Intersection area - inter_area = (np.minimum(b1_x2, b2_x2) - np.maximum(b1_x1, b2_x1)).clip(0) * \ - (np.minimum(b1_y2, b2_y2) - np.maximum(b1_y1, b2_y1)).clip(0) - - # box2 area - box2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1) + 1e-16 - - # Intersection over box2 area - return inter_area / box2_area - - # create random masks - scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction - for s in scales: - mask_h = random.randint(1, int(h * s)) - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - image[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)] - - # return unobscured labels - if len(labels) and s > 0.03: - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - labels = labels[ioa < 0.60] # remove >60% obscured labels - - return labels - - -def create_folder(path='./new'): - # Create folder - if os.path.exists(path): - shutil.rmtree(path) # delete output folder - os.makedirs(path) # make new output folder - - -def flatten_recursive(path='../coco128'): - # Flatten a recursive directory by bringing all files to top level - new_path = Path(path + '_flat') - create_folder(new_path) - for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)): - shutil.copyfile(file, new_path / Path(file).name) - - -def extract_boxes(path='../coco128/'): # from utils.datasets import *; extract_boxes('../coco128') - # Convert detection dataset into classification dataset, with one directory per class - - path = Path(path) # images dir - shutil.rmtree(path / 'classifier') if (path / 'classifier').is_dir() else None # remove existing - files = list(path.rglob('*.*')) - n = len(files) # number of files - for im_file in tqdm(files, total=n): - if im_file.suffix[1:] in img_formats: - # image - im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - h, w = im.shape[:2] - - # labels - lb_file = Path(img2label_paths([str(im_file)])[0]) - if Path(lb_file).exists(): - with open(lb_file, 'r') as f: - lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - - for j, x in enumerate(lb): - c = int(x[0]) # class - f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - if not f.parent.is_dir(): - f.parent.mkdir(parents=True) - - b = x[1:] * [w, h, w, h] # box - # b[2:] = b[2:].max() # rectangle to square - b[2:] = b[2:] * 1.2 + 3 # pad - b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(np.int) - - b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - b[[1, 3]] = np.clip(b[[1, 3]], 0, h) - assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - - -def autosplit(path='../coco128', weights=(0.9, 0.1, 0.0)): # from utils.datasets import *; autosplit('../coco128') - """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - # Arguments - path: Path to images directory - weights: Train, val, test weights (list) - """ - path = Path(path) # images dir - files = list(path.rglob('*.*')) - n = len(files) # number of files - indices = 
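
A quick check of the intersection-over-box2-area rule that cutout uses above to drop labels that end up more than 60% covered by a random mask (boxes here are hypothetical).

import numpy as np

def bbox_ioa(box1, box2):  # same shape as the helper above
    box2 = box2.transpose()
    inter = (np.minimum(box1[2], box2[2]) - np.maximum(box1[0], box2[0])).clip(0) * \
            (np.minimum(box1[3], box2[3]) - np.maximum(box1[1], box2[1])).clip(0)
    return inter / ((box2[2] - box2[0]) * (box2[3] - box2[1]) + 1e-16)

mask = np.array([0.0, 0.0, 50.0, 50.0])  # a cutout patch
labels = np.array([[0, 10, 10, 40, 40], [0, 60, 60, 90, 90]], dtype=float)
print(bbox_ioa(mask, labels[:, 1:5]))    # [1. 0.] -> the first label would be removed
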
random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing - for i, img in tqdm(zip(indices, files), total=n): - if img.suffix[1:] in img_formats: - with open(path / txt[i], 'a') as f: - f.write(str(img) + '\n') # add image to txt file diff --git a/lib/yolov5-face_Jan1/utils/general.py b/lib/yolov5-face_Jan1/utils/general.py deleted file mode 100755 index 204de55d3..000000000 --- a/lib/yolov5-face_Jan1/utils/general.py +++ /dev/null @@ -1,646 +0,0 @@ -# General utils - -import glob -import logging -import math -import os -import random -import re -import subprocess -import time -from pathlib import Path - -import cv2 -import numpy as np -import torch -import torchvision -import yaml - -from utils.google_utils import gsutil_getsize -from utils.metrics import fitness -from utils.torch_utils import init_torch_seeds - -# Settings -torch.set_printoptions(linewidth=320, precision=5, profile='long') -np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5 -cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader) -os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads - - -def set_logging(rank=-1): - logging.basicConfig( - format="%(message)s", - level=logging.INFO if rank in [-1, 0] else logging.WARN) - - -def init_seeds(seed=0): - # Initialize random number generator (RNG) seeds - random.seed(seed) - np.random.seed(seed) - init_torch_seeds(seed) - - -def get_latest_run(search_dir='.'): - # Return path to most recent 'last.pt' in /runs (i.e. to --resume from) - last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True) - return max(last_list, key=os.path.getctime) if last_list else '' - - -def check_online(): - # Check internet connectivity - import socket - try: - socket.create_connection(("1.1.1.1", 53)) # check host accessibility - return True - except OSError: - return False - - -def check_git_status(): - # Recommend 'git pull' if code is out of date - print(colorstr('github: '), end='') - try: - assert Path('.git').exists(), 'skipping check (not a git repository)' - assert not Path('/workspace').exists(), 'skipping check (Docker image)' # not Path('/.dockerenv').exists() - assert check_online(), 'skipping check (offline)' - - cmd = 'git fetch && git config --get remote.origin.url' # github repo url - url = subprocess.check_output(cmd, shell=True).decode()[:-1] - cmd = 'git rev-list $(git rev-parse --abbrev-ref HEAD)..origin/master --count' # commits behind - n = int(subprocess.check_output(cmd, shell=True)) - if n > 0: - print(f"⚠️ WARNING: code is out of date by {n} {'commits' if n > 1 else 'commit'}. 
" - f"Use 'git pull' to update or 'git clone {url}' to download latest.") - else: - print(f'up to date with {url} ✅') - except Exception as e: - print(e) - - -def check_requirements(file='requirements.txt'): - # Check installed dependencies meet requirements - import pkg_resources - requirements = pkg_resources.parse_requirements(Path(file).open()) - requirements = [x.name + ''.join(*x.specs) if len(x.specs) else x.name for x in requirements] - pkg_resources.require(requirements) # DistributionNotFound or VersionConflict exception if requirements not met - - -def check_img_size(img_size, s=32): - # Verify img_size is a multiple of stride s - new_size = make_divisible(img_size, int(s)) # ceil gs-multiple - if new_size != img_size: - print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size)) - return new_size - - -def check_file(file): - # Search for file if not found - if os.path.isfile(file) or file == '': - return file - else: - files = glob.glob('./**/' + file, recursive=True) # find file - assert len(files), 'File Not Found: %s' % file # assert file was found - assert len(files) == 1, "Multiple files match '%s', specify exact path: %s" % (file, files) # assert unique - return files[0] # return file - - -def check_dataset(dict): - # Download dataset if not found locally - val, s = dict.get('val'), dict.get('download') - if val and len(val): - val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path - if not all(x.exists() for x in val): - print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()]) - if s and len(s): # download script - print('Downloading %s ...' % s) - if s.startswith('http') and s.endswith('.zip'): # URL - f = Path(s).name # filename - torch.hub.download_url_to_file(s, f) - r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip - else: # bash script - r = os.system(s) - print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value - else: - raise Exception('Dataset not found.') - - -def make_divisible(x, divisor): - # Returns x evenly divisible by divisor - return math.ceil(x / divisor) * divisor - - -def clean_str(s): - # Cleans a string by replacing special characters with underscore _ - return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s) - - -def one_cycle(y1=0.0, y2=1.0, steps=100): - # lambda function for sinusoidal ramp from y1 to y2 - return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1 - - -def colorstr(*input): - # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. 
colorstr('blue', 'hello world') - *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string - colors = {'black': '\033[30m', # basic colors - 'red': '\033[31m', - 'green': '\033[32m', - 'yellow': '\033[33m', - 'blue': '\033[34m', - 'magenta': '\033[35m', - 'cyan': '\033[36m', - 'white': '\033[37m', - 'bright_black': '\033[90m', # bright colors - 'bright_red': '\033[91m', - 'bright_green': '\033[92m', - 'bright_yellow': '\033[93m', - 'bright_blue': '\033[94m', - 'bright_magenta': '\033[95m', - 'bright_cyan': '\033[96m', - 'bright_white': '\033[97m', - 'end': '\033[0m', # misc - 'bold': '\033[1m', - 'underline': '\033[4m'} - return ''.join(colors[x] for x in args) + f'{string}' + colors['end'] - - -def labels_to_class_weights(labels, nc=80): - # Get class weights (inverse frequency) from training labels - if labels[0] is None: # no labels loaded - return torch.Tensor() - - labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO - classes = labels[:, 0].astype(np.int) # labels = [class xywh] - weights = np.bincount(classes, minlength=nc) # occurrences per class - - # Prepend gridpoint count (for uCE training) - # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image - # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start - - weights[weights == 0] = 1 # replace empty bins with 1 - weights = 1 / weights # number of targets per class - weights /= weights.sum() # normalize - return torch.from_numpy(weights) - - -def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)): - # Produces image weights based on class_weights and image contents - class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels]) - image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1) - # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample - return image_weights - - -def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) - # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ - # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n') - # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n') - # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco - # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet - x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, - 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, - 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] - return x - - -def xyxy2xywh(x): - # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - -def xywh2xyxy(x): - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x - y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y - y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x - y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom 
right y - return y - - -def xywhn2xyxy(x, w=640, h=640, padw=32, padh=32): - # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x - y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y - y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x - y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y - return y - - -def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None): - # Rescale coords (xyxy) from img1_shape to img0_shape - if ratio_pad is None: # calculate from img0_shape - gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new - pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding - else: - gain = ratio_pad[0][0] - pad = ratio_pad[1] - - coords[:, [0, 2]] -= pad[0] # x padding - coords[:, [1, 3]] -= pad[1] # y padding - coords[:, :4] /= gain - clip_coords(coords, img0_shape) - return coords - - -def clip_coords(boxes, img_shape): - # Clip bounding xyxy bounding boxes to image shape (height, width) - boxes[:, 0].clamp_(0, img_shape[1]) # x1 - boxes[:, 1].clamp_(0, img_shape[0]) # y1 - boxes[:, 2].clamp_(0, img_shape[1]) # x2 - boxes[:, 3].clamp_(0, img_shape[0]) # y2 - - -def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-9): - # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4 - box2 = box2.T - - # Get the coordinates of bounding boxes - if x1y1x2y2: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - else: # transform from xywh to xyxy - b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 - b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 - b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 - b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - union = w1 * h1 + w2 * h2 - inter + eps - - iou = inter / union - if GIoU or DIoU or CIoU: - # convex (smallest enclosing box) width - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared - rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + - (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared - if DIoU: - return iou - rho2 / c2 # DIoU - elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * \ - torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - with torch.no_grad(): - alpha = v / ((1 + eps) - iou + v) - return iou - (rho2 / c2 + v * alpha) # CIoU - else: # GIoU https://arxiv.org/pdf/1902.09630.pdf - c_area = cw * ch + eps # convex area - return iou - (c_area - union) / c_area # GIoU - else: - return iou # IoU - - -def box_iou(box1, box2): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. 
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - # iou = inter / (area1 + area2 - inter) - return inter / (area1[:, None] + area2 - inter) - - -def wh_iou(wh1, wh2): - # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2 - wh1 = wh1[:, None] # [N,1,2] - wh2 = wh2[None] # [1,M,2] - inter = torch.min(wh1, wh2).prod(2) # [N,M] - # iou = inter / (area1 + area2 - inter) - return inter / (wh1.prod(2) + wh2.prod(2) - inter) - -def jaccard_diou(box_a, box_b, iscrowd:bool=False): - use_batch = True - if box_a.dim() == 2: - use_batch = False - box_a = box_a[None, ...] - box_b = box_b[None, ...] - - inter = intersect(box_a, box_b) - area_a = ((box_a[:, :, 2]-box_a[:, :, 0]) * - (box_a[:, :, 3]-box_a[:, :, 1])).unsqueeze(2).expand_as(inter) # [A,B] - area_b = ((box_b[:, :, 2]-box_b[:, :, 0]) * - (box_b[:, :, 3]-box_b[:, :, 1])).unsqueeze(1).expand_as(inter) # [A,B] - union = area_a + area_b - inter - x1 = ((box_a[:, :, 2]+box_a[:, :, 0]) / 2).unsqueeze(2).expand_as(inter) - y1 = ((box_a[:, :, 3]+box_a[:, :, 1]) / 2).unsqueeze(2).expand_as(inter) - x2 = ((box_b[:, :, 2]+box_b[:, :, 0]) / 2).unsqueeze(1).expand_as(inter) - y2 = ((box_b[:, :, 3]+box_b[:, :, 1]) / 2).unsqueeze(1).expand_as(inter) - - t1 = box_a[:, :, 1].unsqueeze(2).expand_as(inter) - b1 = box_a[:, :, 3].unsqueeze(2).expand_as(inter) - l1 = box_a[:, :, 0].unsqueeze(2).expand_as(inter) - r1 = box_a[:, :, 2].unsqueeze(2).expand_as(inter) - - t2 = box_b[:, :, 1].unsqueeze(1).expand_as(inter) - b2 = box_b[:, :, 3].unsqueeze(1).expand_as(inter) - l2 = box_b[:, :, 0].unsqueeze(1).expand_as(inter) - r2 = box_b[:, :, 2].unsqueeze(1).expand_as(inter) - - cr = torch.max(r1, r2) - cl = torch.min(l1, l2) - ct = torch.min(t1, t2) - cb = torch.max(b1, b2) - D = (((x2 - x1)**2 + (y2 - y1)**2) / ((cr-cl)**2 + (cb-ct)**2 + 1e-7)) - out = inter / area_a if iscrowd else inter / (union + 1e-7) - D ** 0.7 - return out if use_batch else out.squeeze(0) - - -def non_max_suppression_face(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, labels=()): - """Performs Non-Maximum Suppression (NMS) on inference results - Returns: - detections with shape: nx6 (x1, y1, x2, y2, conf, cls) - """ - - nc = prediction.shape[2] - 15 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Settings - min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label = nc > 1 # multiple labels per box (adds 0.5ms/img) - merge = False # use merge-NMS - - t = time.time() - output = [torch.zeros((0, 16), device=prediction.device)] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # Cat apriori labels if autolabelling - if labels and len(labels[xi]): - l = labels[xi] - v = torch.zeros((len(l), nc + 15), device=x.device) - v[:, 
:4] = l[:, 1:5] # box - v[:, 4] = 1.0 # conf - v[range(len(l)), l[:, 0].long() + 15] = 1.0 # cls - x = torch.cat((x, v), 0) - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 15:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, landmarks, cls) - if multi_label: - i, j = (x[:, 15:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 15, None], x[:, 5:15] ,j[:, None].float()), 1) - else: # best class only - conf, j = x[:, 15:].max(1, keepdim=True) - x = torch.cat((box, conf, x[:, 5:15], j.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # If none remain process next image - n = x.shape[0] # number of boxes - if not n: - continue - - # Batched NMS - c = x[:, 15:16] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - #if i.shape[0] > max_det: # limit detections - # i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.time() - t) > time_limit: - break # time limit exceeded - - return output - - -def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, labels=()): - """Performs Non-Maximum Suppression (NMS) on inference results - - Returns: - detections with shape: nx6 (x1, y1, x2, y2, conf, cls) - """ - - nc = prediction.shape[2] - 5 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Settings - # (pixels) minimum and maximum box width and height - min_wh, max_wh = 2, 4096 - #max_det = 300 # maximum number of detections per image - #max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label = nc > 1 # multiple labels per box (adds 0.5ms/img) - merge = False # use merge-NMS - - t = time.time() - output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # Cat apriori labels if autolabelling - if labels and len(labels[xi]): - l = labels[xi] - v = torch.zeros((len(l), nc + 5), device=x.device) - v[:, :4] = l[:, 1:5] # box - v[:, 4] = 1.0 # conf - v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls - x = torch.cat((x, v), 0) - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - conf, j = 
x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[ - conf.view(-1) > conf_thres] - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # Check shape - n = x.shape[0] # number of boxes - if not n: # no boxes - continue - #elif n > max_nms: # excess boxes - # x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - x = x[x[:, 4].argsort(descending=True)] # sort by confidence - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - #if i.shape[0] > max_det: # limit detections - # i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.time() - t) > time_limit: - print(f'WARNING: NMS time limit {time_limit}s exceeded') - break # time limit exceeded - - return output - - -def strip_optimizer(f='weights/best.pt', s=''): # from utils.general import *; strip_optimizer() - # Strip optimizer from 'f' to finalize training, optionally save as 's' - x = torch.load(f, map_location=torch.device('cpu')) - for key in 'optimizer', 'training_results', 'wandb_id': - x[key] = None - x['epoch'] = -1 - x['model'].half() # to FP16 - for p in x['model'].parameters(): - p.requires_grad = False - torch.save(x, s or f) - mb = os.path.getsize(s or f) / 1E6 # filesize - print('Optimizer stripped from %s,%s %.1fMB' % (f, (' saved as %s,' % s) if s else '', mb)) - - -def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''): - # Print mutation results to evolve.txt (for use with train.py --evolve) - a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys - b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c)) - - if bucket: - url = 'gs://%s/evolve.txt' % bucket - if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0): - os.system('gsutil cp %s .' 
% url) # download evolve.txt if larger than local - - with open('evolve.txt', 'a') as f: # append result - f.write(c + b + '\n') - x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows - x = x[np.argsort(-fitness(x))] # sort - np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness - - # Save yaml - for i, k in enumerate(hyp.keys()): - hyp[k] = float(x[0, i + 7]) - with open(yaml_file, 'w') as f: - results = tuple(x[0, :7]) - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n') - yaml.dump(hyp, f, sort_keys=False) - - if bucket: - os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload - - -def apply_classifier(x, model, img, im0): - # applies a second stage classifier to yolo outputs - im0 = [im0] if isinstance(im0, np.ndarray) else im0 - for i, d in enumerate(x): # per image - if d is not None and len(d): - d = d.clone() - - # Reshape and pad cutouts - b = xyxy2xywh(d[:, :4]) # boxes - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square - b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad - d[:, :4] = xywh2xyxy(b).long() - - # Rescale boxes from img_size to im0 size - scale_coords(img.shape[2:], d[:, :4], im0[i].shape) - - # Classes - pred_cls1 = d[:, 5].long() - ims = [] - for j, a in enumerate(d): # per item - cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])] - im = cv2.resize(cutout, (224, 224)) # BGR - # cv2.imwrite('test%i.jpg' % j, cutout) - - # BGR to RGB, to 3x416x416 - im = im[:, :, ::-1].transpose(2, 0, 1) - im = np.ascontiguousarray( - im, dtype=np.float32) # uint8 to float32 - im /= 255.0 # 0 - 255 to 0.0 - 1.0 - ims.append(im) - - pred_cls2 = model(torch.Tensor(ims).to(d.device) - ).argmax(1) # classifier prediction - # retain matching class detections - x[i] = x[i][pred_cls1 == pred_cls2] - - return x - - -def increment_path(path, exist_ok=True, sep=''): - # Increment path, i.e. runs/exp --> runs/exp{sep}0, runs/exp{sep}1 etc. - path = Path(path) # os-agnostic - if (path.exists() and exist_ok) or (not path.exists()): - return str(path) - else: - dirs = glob.glob(f"{path}{sep}*") # similar paths - matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs] - i = [int(m.groups()[0]) for m in matches if m] # indices - n = max(i) + 1 if i else 2 # increment number - return f"{path}{sep}{n}" # update path diff --git a/lib/yolov5-face_Jan1/utils/google_utils.py b/lib/yolov5-face_Jan1/utils/google_utils.py deleted file mode 100644 index 024dc7802..000000000 --- a/lib/yolov5-face_Jan1/utils/google_utils.py +++ /dev/null @@ -1,122 +0,0 @@ -# Google utils: https://cloud.google.com/storage/docs/reference/libraries - -import os -import platform -import subprocess -import time -from pathlib import Path - -import requests -import torch - - -def gsutil_getsize(url=''): - # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du - s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8') - return eval(s.split(' ')[0]) if len(s) else 0 # bytes - - -def attempt_download(file, repo='ultralytics/yolov5'): - # Attempt file download if does not exist - file = Path(str(file).strip().replace("'", '').lower()) - - if not file.exists(): - try: - response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api - assets = [x['name'] for x in response['assets']] # release assets, i.e. ['yolov5s.pt', 'yolov5m.pt', ...] 
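# Illustrative response shape (an assumption about the GitHub Releases API, not part of the original file):
#   {"tag_name": "v4.0", "assets": [{"name": "yolov5s.pt", ...}, {"name": "yolov5m.pt", ...}]}
# so `assets` above collects the downloadable asset names and `tag` below pins the matching release.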
- tag = response['tag_name'] # i.e. 'v1.0' - except: # fallback plan - assets = ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt'] - tag = subprocess.check_output('git tag', shell=True).decode('utf-8').split('\n')[-2] - - name = file.name - if name in assets: - msg = f'{file} missing, try downloading from https://github.com/{repo}/releases/' - redundant = False # second download option - try: # GitHub - url = f'https://github.com/{repo}/releases/download/{tag}/{name}' - print(f'Downloading {url} to {file}...') - torch.hub.download_url_to_file(url, file) - assert file.exists() and file.stat().st_size > 1E6 # check - except Exception as e: # GCP - print(f'Download error: {e}') - assert redundant, 'No secondary mirror' - url = f'https://storage.googleapis.com/{repo}/ckpt/{name}' - print(f'Downloading {url} to {file}...') - os.system(f'curl -L {url} -o {file}') # torch.hub.download_url_to_file(url, weights) - finally: - if not file.exists() or file.stat().st_size < 1E6: # check - file.unlink(missing_ok=True) # remove partial downloads - print(f'ERROR: Download failure: {msg}') - print('') - return - - -def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'): - # Downloads a file from Google Drive. from yolov5.utils.google_utils import *; gdrive_download() - t = time.time() - file = Path(file) - cookie = Path('cookie') # gdrive cookie - print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='') - file.unlink(missing_ok=True) # remove existing file - cookie.unlink(missing_ok=True) # remove existing cookie - - # Attempt file download - out = "NUL" if platform.system() == "Windows" else "/dev/null" - os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}') - if os.path.exists('cookie'): # large file - s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}' - else: # small file - s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"' - r = os.system(s) # execute, capture return - cookie.unlink(missing_ok=True) # remove existing cookie - - # Error check - if r != 0: - file.unlink(missing_ok=True) # remove partial - print('Download error ') # raise Exception('Download error') - return r - - # Unzip if archive - if file.suffix == '.zip': - print('unzipping... 
', end='') - os.system(f'unzip -q {file}') # unzip - file.unlink() # remove zip to free space - - print(f'Done ({time.time() - t:.1f}s)') - return r - - -def get_token(cookie="./cookie"): - with open(cookie) as f: - for line in f: - if "download" in line: - return line.split()[-1] - return "" - -# def upload_blob(bucket_name, source_file_name, destination_blob_name): -# # Uploads a file to a bucket -# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python -# -# storage_client = storage.Client() -# bucket = storage_client.get_bucket(bucket_name) -# blob = bucket.blob(destination_blob_name) -# -# blob.upload_from_filename(source_file_name) -# -# print('File {} uploaded to {}.'.format( -# source_file_name, -# destination_blob_name)) -# -# -# def download_blob(bucket_name, source_blob_name, destination_file_name): -# # Downloads a blob from a bucket -# storage_client = storage.Client() -# bucket = storage_client.get_bucket(bucket_name) -# blob = bucket.blob(source_blob_name) -# -# blob.download_to_filename(destination_file_name) -# -# print('Blob {} downloaded to {}.'.format( -# source_blob_name, -# destination_file_name)) diff --git a/lib/yolov5-face_Jan1/utils/infer_utils.py b/lib/yolov5-face_Jan1/utils/infer_utils.py deleted file mode 100755 index 9dc428cd4..000000000 --- a/lib/yolov5-face_Jan1/utils/infer_utils.py +++ /dev/null @@ -1,36 +0,0 @@ -import torch - - - -def decode_infer(output, stride, gt_per_grid, numclass): - # Decode a raw head output into (x1, y1, x2, y2, conf, class probs). - # gt_per_grid and numclass are passed explicitly here; the original referenced - # self.gt_per_grid / self.numclass inside a free function, which cannot run. - sh = torch.tensor(output.shape) - bz = sh[0] # batch size - gridsize = sh[-1] - - output = output.permute(0, 2, 3, 1) - output = output.view(bz, gridsize, gridsize, gt_per_grid, 5+numclass) - x1y1, x2y2, conf, prob = torch.split( - output, [2, 2, 1, numclass], dim=4) - - shiftx = torch.arange(0, gridsize, dtype=torch.float32) - shifty = torch.arange(0, gridsize, dtype=torch.float32) - shifty, shiftx = torch.meshgrid([shiftx, shifty]) - shiftx = shiftx.unsqueeze(-1).repeat(bz, 1, 1, gt_per_grid) - shifty = shifty.unsqueeze(-1).repeat(bz, 1, 1, gt_per_grid) - - xy_grid = torch.stack([shiftx, shifty], dim=4).to(output.device) # was hard-coded .cuda(); follow the input device - x1y1 = (xy_grid+0.5-torch.exp(x1y1))*stride - x2y2 = (xy_grid+0.5+torch.exp(x2y2))*stride - - xyxy = torch.cat((x1y1, x2y2), dim=4) - conf = torch.sigmoid(conf) - prob = torch.sigmoid(prob) - output = torch.cat((xyxy, conf, prob), 4) - output = output.view(bz, -1, 5+numclass) - return output \ No newline at end of file diff --git a/lib/yolov5-face_Jan1/utils/loss.py b/lib/yolov5-face_Jan1/utils/loss.py deleted file mode 100644 index 8211db9f5..000000000 --- a/lib/yolov5-face_Jan1/utils/loss.py +++ /dev/null @@ -1,304 +0,0 @@ -# Loss functions - -import torch -import torch.nn as nn -import numpy as np -from utils.general import bbox_iou -from utils.torch_utils import is_parallel - - -def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441 - # return positive, negative label smoothing BCE targets - return 1.0 - 0.5 * eps, 0.5 * eps - - -class BCEBlurWithLogitsLoss(nn.Module): - # BCEWithLogitsLoss() with reduced missing label effects.
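# Usage sketch (illustrative, not from the original file): a drop-in replacement for
# nn.BCEWithLogitsLoss when some positive labels may be missing from the annotations:
#   criterion = BCEBlurWithLogitsLoss(alpha=0.05)
#   loss = criterion(pred_logits, targets)  # down-weights confident predictions on zero targets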
- def __init__(self, alpha=0.05): - super(BCEBlurWithLogitsLoss, self).__init__() - self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss() - self.alpha = alpha - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - pred = torch.sigmoid(pred) # prob from logits - dx = pred - true # reduce only missing label effects - # dx = (pred - true).abs() # reduce missing label and false label effects - alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4)) - loss *= alpha_factor - return loss.mean() - - -class FocalLoss(nn.Module): - # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super(FocalLoss, self).__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = 'none' # required to apply FL to each element - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - # p_t = torch.exp(-loss) - # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability - - # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py - pred_prob = torch.sigmoid(pred) # prob from logits - p_t = true * pred_prob + (1 - true) * (1 - pred_prob) - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = (1.0 - p_t) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == 'mean': - return loss.mean() - elif self.reduction == 'sum': - return loss.sum() - else: # 'none' - return loss - - -class QFocalLoss(nn.Module): - # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super(QFocalLoss, self).__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = 'none' # required to apply FL to each element - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - - pred_prob = torch.sigmoid(pred) # prob from logits - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = torch.abs(true - pred_prob) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == 'mean': - return loss.mean() - elif self.reduction == 'sum': - return loss.sum() - else: # 'none' - return loss - -class WingLoss(nn.Module): - def __init__(self, w=10, e=2): - super(WingLoss, self).__init__() - # https://arxiv.org/pdf/1711.06753v4.pdf Figure 5 - self.w = w - self.e = e - self.C = self.w - self.w * np.log(1 + self.w / self.e) - - def forward(self, x, t, sigma=1): - weight = torch.ones_like(t) - weight[torch.where(t==-1)] = 0 - diff = weight * (x - t) - abs_diff = diff.abs() - flag = (abs_diff.data < self.w).float() - y = flag * self.w * torch.log(1 + abs_diff / self.e) + (1 - flag) * (abs_diff - self.C) - return y.sum() - -class LandmarksLoss(nn.Module): - # BCEwithLogitLoss() with reduced missing label effects. 
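# Usage sketch (illustrative, not from the original file): Wing loss over the 5 facial
# landmarks, with the mask zeroing out points labelled -1 (not annotated):
#   lmk_criterion = LandmarksLoss(alpha=1.0)
#   loss = lmk_criterion(pred_lmks, target_lmks, mask)  # pred/target/mask all shaped (n, 10)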
- def __init__(self, alpha=1.0): - super(LandmarksLoss, self).__init__() - self.loss_fcn = WingLoss()#nn.SmoothL1Loss(reduction='sum') - self.alpha = alpha - - def forward(self, pred, truel, mask): - loss = self.loss_fcn(pred*mask, truel*mask) - return loss / (torch.sum(mask) + 10e-14) - - -def compute_loss(p, targets, model): # predictions, targets, model - device = targets.device - lcls, lbox, lobj, lmark = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - tcls, tbox, indices, anchors, tlandmarks, lmks_mask = build_targets(p, targets, model) # targets - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device)) # weight=model.class_weights) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device)) - - landmarks_loss = LandmarksLoss(1.0) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - cp, cn = smooth_BCE(eps=0.0) - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - # Losses - nt = 0 # number of targets - no = len(p) # number of outputs - balance = [4.0, 1.0, 0.4] if no == 3 else [4.0, 1.0, 0.4, 0.1] # P3-5 or P3-6 - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = indices[i] # image, anchor, gridy, gridx - tobj = torch.zeros_like(pi[..., 0], device=device) # target obj - - n = b.shape[0] # number of targets - if n: - nt += n # cumulative targets - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - pxy = ps[:, :2].sigmoid() * 2. - 0.5 - pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - pbox = torch.cat((pxy, pwh), 1) # predicted box - iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - model.gr) + model.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - if model.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, 15:], cn, device=device) # targets - t[range(n), tcls[i]] = cp - lcls += BCEcls(ps[:, 15:], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - #landmarks loss - #plandmarks = ps[:,5:15].sigmoid() * 8. - 4. - plandmarks = ps[:,5:15] - - plandmarks[:, 0:2] = plandmarks[:, 0:2] * anchors[i] - plandmarks[:, 2:4] = plandmarks[:, 2:4] * anchors[i] - plandmarks[:, 4:6] = plandmarks[:, 4:6] * anchors[i] - plandmarks[:, 6:8] = plandmarks[:, 6:8] * anchors[i] - plandmarks[:, 8:10] = plandmarks[:,8:10] * anchors[i] - - lmark += landmarks_loss(plandmarks, tlandmarks[i], lmks_mask[i]) - - - lobj += BCEobj(pi[..., 4], tobj) * balance[i] # obj loss - - s = 3 / no # output count scaling - lbox *= h['box'] * s - lobj *= h['obj'] * s * (1.4 if no == 4 else 1.) 
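# Worked example (illustrative): with the usual 3 output layers, s = 3 / no = 1.0, so the
# gains h['box'] and h['obj'] apply unscaled here, as do h['cls'] and h['landmark'] below;
# with a 4-layer P3-P6 model, s = 0.75 and the objectness loss gets an extra 1.4 factor.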
- lcls *= h['cls'] * s - lmark *= h['landmark'] * s - - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls + lmark - return loss * bs, torch.cat((lbox, lobj, lcls, lmark, loss)).detach() - - -def build_targets(p, targets, model): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - na, nt = det.na, targets.shape[0] # number of anchors, targets - tcls, tbox, indices, anch, landmarks, lmks_mask = [], [], [], [], [], [] - #gain = torch.ones(7, device=targets.device) # normalized to gridspace gain - gain = torch.ones(17, device=targets.device) - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(det.nl): - anchors = det.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - #landmarks 10 - gain[6:16] = torch.tensor(p[i].shape)[[3, 2, 3, 2, 3, 2, 3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < model.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. < g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 16].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - tbox.append(torch.cat((gxy - gij, gwh), 1)) # box - anch.append(anchors[a]) # anchors - tcls.append(c) # class - - #landmarks - lks = t[:,6:16] - #lks_mask = lks > 0 - #lks_mask = lks_mask.float() - lks_mask = torch.where(lks < 0, torch.full_like(lks, 0.), torch.full_like(lks, 1.0)) - - # The landmark coordinates should be divided by the anchor width/height so the model learns against a consistent reference; dividing by gwh would encode each landmark differently, with no common scale - - lks[:, [0, 1]] = (lks[:, [0, 1]] - gij) - lks[:, [2, 3]] = (lks[:, [2, 3]] - gij) - lks[:, [4, 5]] = (lks[:, [4, 5]] - gij) - lks[:, [6, 7]] = (lks[:, [6, 7]] - gij) - lks[:, [8, 9]] = (lks[:, [8, 9]] - gij) - - ''' - #anch_w = torch.ones(5, device=targets.device).fill_(anchors[0][0]) - #anch_wh = torch.ones(5, device=targets.device) - anch_f_0 = (a == 0).unsqueeze(1).repeat(1, 5) - anch_f_1 = (a == 1).unsqueeze(1).repeat(1, 5) - anch_f_2 = (a == 2).unsqueeze(1).repeat(1, 5) - lks[:, [0, 2, 4, 6, 8]] = torch.where(anch_f_0, lks[:, [0, 2, 4, 6, 8]] / anchors[0][0], lks[:, [0, 2, 4, 6, 8]]) - lks[:, [0, 2, 4, 6, 8]] = torch.where(anch_f_1, lks[:, [0, 2, 4, 6, 8]] / anchors[1][0], lks[:, [0, 2, 4, 6, 8]]) - lks[:, [0, 2, 4, 6, 8]] = torch.where(anch_f_2, lks[:, [0, 2, 4, 6, 8]] / anchors[2][0], lks[:, [0, 2, 4, 6, 8]]) - - lks[:, [1, 3, 5, 7, 9]] = torch.where(anch_f_0, lks[:, [1, 3, 5, 7, 9]] / 
anchors[0][1], lks[:, [1, 3, 5, 7, 9]]) - lks[:, [1, 3, 5, 7, 9]] = torch.where(anch_f_1, lks[:, [1, 3, 5, 7, 9]] / anchors[1][1], lks[:, [1, 3, 5, 7, 9]]) - lks[:, [1, 3, 5, 7, 9]] = torch.where(anch_f_2, lks[:, [1, 3, 5, 7, 9]] / anchors[2][1], lks[:, [1, 3, 5, 7, 9]]) - - #new_lks = lks[lks_mask>0] - #print('new_lks: min --- ', torch.min(new_lks), ' max --- ', torch.max(new_lks)) - - lks_mask_1 = torch.where(lks < -3, torch.full_like(lks, 0.), torch.full_like(lks, 1.0)) - lks_mask_2 = torch.where(lks > 3, torch.full_like(lks, 0.), torch.full_like(lks, 1.0)) - - lks_mask_new = lks_mask * lks_mask_1 * lks_mask_2 - lks_mask_new[:, 0] = lks_mask_new[:, 0] * lks_mask_new[:, 1] - lks_mask_new[:, 1] = lks_mask_new[:, 0] * lks_mask_new[:, 1] - lks_mask_new[:, 2] = lks_mask_new[:, 2] * lks_mask_new[:, 3] - lks_mask_new[:, 3] = lks_mask_new[:, 2] * lks_mask_new[:, 3] - lks_mask_new[:, 4] = lks_mask_new[:, 4] * lks_mask_new[:, 5] - lks_mask_new[:, 5] = lks_mask_new[:, 4] * lks_mask_new[:, 5] - lks_mask_new[:, 6] = lks_mask_new[:, 6] * lks_mask_new[:, 7] - lks_mask_new[:, 7] = lks_mask_new[:, 6] * lks_mask_new[:, 7] - lks_mask_new[:, 8] = lks_mask_new[:, 8] * lks_mask_new[:, 9] - lks_mask_new[:, 9] = lks_mask_new[:, 8] * lks_mask_new[:, 9] - ''' - lks_mask_new = lks_mask - lmks_mask.append(lks_mask_new) - landmarks.append(lks) - #print('lks: ', lks.size()) - - return tcls, tbox, indices, anch, landmarks, lmks_mask diff --git a/lib/yolov5-face_Jan1/utils/metrics.py b/lib/yolov5-face_Jan1/utils/metrics.py deleted file mode 100644 index 99d5bcfaf..000000000 --- a/lib/yolov5-face_Jan1/utils/metrics.py +++ /dev/null @@ -1,200 +0,0 @@ -# Model validation metrics - -from pathlib import Path - -import matplotlib.pyplot as plt -import numpy as np -import torch - -from . import general - - -def fitness(x): - # Model fitness as a weighted combination of metrics - w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] - return (x[:, :4] * w).sum(1) - - -def ap_per_class(tp, conf, pred_cls, target_cls, plot=False, save_dir='precision-recall_curve.png', names=[]): - """ Compute the average precision, given the recall and precision curves. - Source: https://github.com/rafaelpadilla/Object-Detection-Metrics. - # Arguments - tp: True positives (nparray, nx1 or nx10). - conf: Objectness value from 0-1 (nparray). - pred_cls: Predicted object classes (nparray). - target_cls: True object classes (nparray). - plot: Plot precision-recall curve at mAP@0.5 - save_dir: Plot save directory - # Returns - The average precision as computed in py-faster-rcnn. - """ - - # Sort by objectness - i = np.argsort(-conf) - tp, conf, pred_cls = tp[i], conf[i], pred_cls[i] - - # Find unique classes - unique_classes = np.unique(target_cls) - - # Create Precision-Recall curve and compute AP for each class - px, py = np.linspace(0, 1, 1000), [] # for plotting - pr_score = 0.1 # score to evaluate P and R https://github.com/ultralytics/yolov3/issues/898 - s = [unique_classes.shape[0], tp.shape[1]] # number class, number iou thresholds (i.e. 
10 for mAP0.5...0.95) - ap, p, r = np.zeros(s), np.zeros(s), np.zeros(s) - for ci, c in enumerate(unique_classes): - i = pred_cls == c - n_l = (target_cls == c).sum() # number of labels - n_p = i.sum() # number of predictions - - if n_p == 0 or n_l == 0: - continue - else: - # Accumulate FPs and TPs - fpc = (1 - tp[i]).cumsum(0) - tpc = tp[i].cumsum(0) - - # Recall - recall = tpc / (n_l + 1e-16) # recall curve - r[ci] = np.interp(-pr_score, -conf[i], recall[:, 0]) # r at pr_score, negative x, xp because xp decreases - - # Precision - precision = tpc / (tpc + fpc) # precision curve - p[ci] = np.interp(-pr_score, -conf[i], precision[:, 0]) # p at pr_score - - # AP from recall-precision curve - for j in range(tp.shape[1]): - ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j]) - if plot and (j == 0): - py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 - - # Compute F1 score (harmonic mean of precision and recall) - f1 = 2 * p * r / (p + r + 1e-16) - - if plot: - plot_pr_curve(px, py, ap, save_dir, names) - - return p, r, ap, f1, unique_classes.astype('int32') - - -def compute_ap(recall, precision): - """ Compute the average precision, given the recall and precision curves - # Arguments - recall: The recall curve (list) - precision: The precision curve (list) - # Returns - Average precision, precision curve, recall curve - """ - - # Append sentinel values to beginning and end - mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01])) - mpre = np.concatenate(([1.], precision, [0.])) - - # Compute the precision envelope - mpre = np.flip(np.maximum.accumulate(np.flip(mpre))) - - # Integrate area under curve - method = 'interp' # methods: 'continuous', 'interp' - if method == 'interp': - x = np.linspace(0, 1, 101) # 101-point interp (COCO) - ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate - else: # 'continuous' - i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve - - return ap, mpre, mrec - - -class ConfusionMatrix: - # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix - def __init__(self, nc, conf=0.25, iou_thres=0.45): - self.matrix = np.zeros((nc + 1, nc + 1)) - self.nc = nc # number of classes - self.conf = conf - self.iou_thres = iou_thres - - def process_batch(self, detections, labels): - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. 
- Arguments: - detections (Array[N, 6]), x1, y1, x2, y2, conf, class - labels (Array[M, 5]), class, x1, y1, x2, y2 - Returns: - None, updates confusion matrix accordingly - """ - detections = detections[detections[:, 4] > self.conf] - gt_classes = labels[:, 0].int() - detection_classes = detections[:, 5].int() - iou = general.box_iou(labels[:, 1:], detections[:, :4]) - - x = torch.where(iou > self.iou_thres) - if x[0].shape[0]: - matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() - if x[0].shape[0] > 1: - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 1], return_index=True)[1]] - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 0], return_index=True)[1]] - else: - matches = np.zeros((0, 3)) - - n = matches.shape[0] > 0 - m0, m1, _ = matches.transpose().astype(np.int16) - for i, gc in enumerate(gt_classes): - j = m0 == i - if n and sum(j) == 1: - self.matrix[gc, detection_classes[m1[j]]] += 1 # correct - else: - self.matrix[gc, self.nc] += 1 # background FP - - if n: - for i, dc in enumerate(detection_classes): - if not any(m1 == i): - self.matrix[self.nc, dc] += 1 # background FN - - def matrix(self): - return self.matrix - - def plot(self, save_dir='', names=()): - try: - import seaborn as sn - - array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize - array[array < 0.005] = np.nan # don't annotate (would appear as 0.00) - - fig = plt.figure(figsize=(12, 9), tight_layout=True) - sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size - labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels - sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True, - xticklabels=names + ['background FN'] if labels else "auto", - yticklabels=names + ['background FP'] if labels else "auto").set_facecolor((1, 1, 1)) - fig.axes[0].set_xlabel('True') - fig.axes[0].set_ylabel('Predicted') - fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250) - except Exception as e: - pass - - def print(self): - for i in range(self.nc + 1): - print(' '.join(map(str, self.matrix[i]))) - - -# Plots ---------------------------------------------------------------------------------------------------------------- - -def plot_pr_curve(px, py, ap, save_dir='.', names=()): - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - py = np.stack(py, axis=1) - - if 0 < len(names) < 21: # show mAP in legend if < 10 classes - for i, y in enumerate(py.T): - ax.plot(px, y, linewidth=1, label=f'{names[i]} %.3f' % ap[i, 0]) # plot(recall, precision) - else: - ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision) - - ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean()) - ax.set_xlabel('Recall') - ax.set_ylabel('Precision') - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") - fig.savefig(Path(save_dir) / 'precision_recall_curve.png', dpi=250) diff --git a/lib/yolov5-face_Jan1/utils/plots.py b/lib/yolov5-face_Jan1/utils/plots.py deleted file mode 100644 index 0c008f165..000000000 --- a/lib/yolov5-face_Jan1/utils/plots.py +++ /dev/null @@ -1,413 +0,0 @@ -# Plotting utils - -import glob -import math -import os -import random -from copy import copy -from pathlib import Path - -import cv2 -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import seaborn as 
sns -import torch -import yaml -from PIL import Image, ImageDraw -from scipy.signal import butter, filtfilt - -from utils.general import xywh2xyxy, xyxy2xywh -from utils.metrics import fitness - -# Settings -matplotlib.rc('font', **{'size': 11}) -matplotlib.use('Agg') # for writing to files only - - -def color_list(): - # Return first 10 plt colors as (r,g,b) https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb - def hex2rgb(h): - return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - - return [hex2rgb(h) for h in plt.rcParams['axes.prop_cycle'].by_key()['color']] - - -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - return butter(order, normal_cutoff, btype='low', analog=False) - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def plot_one_box(x, img, color=None, label=None, line_thickness=None): - # Plots one bounding box on image img - tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - color = color or [random.randint(0, 255) for _ in range(3)] - c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3])) - cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) - if label: - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 - cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - -def plot_wh_methods(): # from utils.plots import *; plot_wh_methods() - # Compares the two methods for width-height anchor multiplication - # https://github.com/ultralytics/yolov3/issues/168 - x = np.arange(-4.0, 4.0, .1) - ya = np.exp(x) - yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2 - - fig = plt.figure(figsize=(6, 3), tight_layout=True) - plt.plot(x, ya, '.-', label='YOLOv3') - plt.plot(x, yb ** 2, '.-', label='YOLOv5 ^2') - plt.plot(x, yb ** 1.6, '.-', label='YOLOv5 ^1.6') - plt.xlim(left=-4, right=4) - plt.ylim(bottom=0, top=6) - plt.xlabel('input') - plt.ylabel('output') - plt.grid() - plt.legend() - fig.savefig('comparison.png', dpi=200) - - -def output_to_target(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - for *box, conf, cls in o.cpu().numpy(): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) - return np.array(targets) - - -def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16): - # Plot image grid with labels - - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - - # un-normalise - if np.max(images[0]) <= 1: - images *= 255 - - tl = 3 # line thickness - tf = max(tl - 1, 
1) # font thickness - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = np.ceil(bs ** 0.5) # number of subplots (square) - - # Check if we should resize - scale_factor = max_size / max(h, w) - if scale_factor < 1: - h = math.ceil(scale_factor * h) - w = math.ceil(scale_factor * w) - - # colors = color_list() # list of colors - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init - for i, img in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - - block_x = int(w * (i // ns)) - block_y = int(h * (i % ns)) - - img = img.transpose(1, 2, 0) - if scale_factor < 1: - img = cv2.resize(img, (w, h)) - - mosaic[block_y:block_y + h, block_x:block_x + w, :] = img - if len(targets) > 0: - image_targets = targets[targets[:, 0] == i] - boxes = xywh2xyxy(image_targets[:, 2:6]).T - classes = image_targets[:, 1].astype('int') - labels = image_targets.shape[1] == 6 # labels if no conf column - conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred) - - if boxes.shape[1]: - if boxes.max() <= 1.01: # if normalized with tolerance 0.01 - boxes[[0, 2]] *= w # scale to pixels - boxes[[1, 3]] *= h - elif scale_factor < 1: # absolute coords need scale if image scales - boxes *= scale_factor - boxes[[0, 2]] += block_x - boxes[[1, 3]] += block_y - for j, box in enumerate(boxes.T): - cls = int(classes[j]) - # color = colors[cls % len(colors)] - cls = names[cls] if names else cls - if labels or conf[j] > 0.25: # 0.25 conf thresh - label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j]) - plot_one_box(box, mosaic, label=label, color=None, line_thickness=tl) - - # Draw image filename labels - if paths: - label = Path(paths[i]).name[:40] # trim to 40 char - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf, - lineType=cv2.LINE_AA) - - # Image border - cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3) - - if fname: - r = min(1280. 
/ max(h, w) / ns, 1.0) # ratio to limit image size - mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA) - # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save - Image.fromarray(mosaic).save(fname) # PIL save - return mosaic - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - plt.close() - - -def plot_test_txt(): # from utils.plots import *; plot_test() - # Plot test.txt histograms - x = np.loadtxt('test.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std())) - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_study_txt(path='study/', x=None): # from utils.plots import *; plot_study_txt() - # Plot study.txt generated by test.py - fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True) - ax = ax.ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']]: - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)'] - for i in range(7): - ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[6, :j], y[3, :j] * 1E2, '.-', linewidth=2, markersize=8, - label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') - - ax2.grid() - ax2.set_yticks(np.arange(30, 60, 5)) - ax2.set_xlim(0, 30) - ax2.set_ylim(29, 51) - ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - plt.savefig('test_study.png', dpi=300) - - -def plot_labels(labels, save_dir=Path(''), loggers=None): - # plot dataset labels - print('Plotting labels... 
') - c, b = labels[:, 0], labels[:, 1:5].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - colors = color_list() - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - - # seaborn correlogram - sns.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) - plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) - plt.close() - - # matplotlib labels - matplotlib.use('svg') # faster - ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() - ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - ax[0].set_xlabel('classes') - sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) - sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) - - # rectangles - labels[:, 1:3] = 0.5 # center - labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 - img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) - # for cls, *box in labels[:1000]: - # ImageDraw.Draw(img).rectangle(box, width=1, outline=colors[int(cls) % 10]) # plot - ax[1].imshow(img) - ax[1].axis('off') - - for a in [0, 1, 2, 3]: - for s in ['top', 'right', 'left', 'bottom']: - ax[a].spines[s].set_visible(False) - - plt.savefig(save_dir / 'labels.jpg', dpi=200) - matplotlib.use('Agg') - plt.close() - - # loggers - for k, v in loggers.items() or {}: - if k == 'wandb' and v: - v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}) - - -def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution() - # Plot hyperparameter evolution results in evolve.txt - with open(yaml_file) as f: - hyp = yaml.load(f, Loader=yaml.SafeLoader) - x = np.loadtxt('evolve.txt', ndmin=2) - f = fitness(x) - # weights = (f - f.min()) ** 2 # for weighted results - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - for i, (k, v) in enumerate(hyp.items()): - y = x[:, i + 7] - # mu = (y * weights).sum() / weights.sum() # best weighted result - mu = y[f.argmax()] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print('%15s: %.3g' % (k, mu)) - plt.savefig('evolve.png', dpi=200) - print('\nPlot saved as evolve.png') - - -def profile_idetection(start=0, stop=0, labels=(), save_dir=''): - # Plot iDetection '*.txt' per-image logs. 
from utils.plots import *; profile_idetection() - ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() - s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] - files = list(Path(save_dir).glob('frames*.txt')) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows - n = results.shape[1] # number of rows - x = np.arange(start, min(stop, n) if stop else n) - results = results[:, x] - t = (results[0] - results[0].min()) # set t0=0s - results[0] = x - for i, a in enumerate(ax): - if i < len(results): - label = labels[fi] if len(labels) else f.stem.replace('frames_', '') - a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) - a.set_title(s[i]) - a.set_xlabel('time (s)') - # if fi == len(files) - 1: - # a.set_ylim(bottom=0) - for side in ['top', 'right']: - a.spines[side].set_visible(False) - else: - a.remove() - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) - - -def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay() - # Plot training 'results*.txt', overlaying train and val losses - s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends - t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles - for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')): - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True) - ax = ax.ravel() - for i in range(5): - for j in [i, i + 5]: - y = results[j, x] - ax[i].plot(x, y, marker='.', label=s[j]) - # y_smooth = butter_lowpass_filtfilt(y) - # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j]) - - ax[i].set_title(t[i]) - ax[i].legend() - ax[i].set_ylabel(f) if i == 0 else None # add filename - fig.savefig(f.replace('.txt', '.png'), dpi=200) - - -def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''): - # Plot training 'results*.txt'. from utils.plots import *; plot_results(save_dir='runs/train/exp') - fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) - ax = ax.ravel() - s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall', - 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95'] - if bucket: - # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id] - files = ['results%g.txt' % x for x in id] - c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id) - os.system(c) - else: - files = list(Path(save_dir).glob('results*.txt')) - assert len(files), 'No results.txt files found in %s, nothing to plot.' 
% os.path.abspath(save_dir) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - for i in range(10): - y = results[i, x] - if i in [0, 1, 2, 5, 6, 7]: - y[y == 0] = np.nan # don't show zero loss values - # y /= y[0] # normalize - label = labels[fi] if len(labels) else f.stem - ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8) - ax[i].set_title(s[i]) - # if i in [5, 6, 7]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - fig.savefig(Path(save_dir) / 'results.png', dpi=200) diff --git a/lib/yolov5-face_Jan1/utils/torch_utils.py b/lib/yolov5-face_Jan1/utils/torch_utils.py deleted file mode 100644 index 2cb09e71c..000000000 --- a/lib/yolov5-face_Jan1/utils/torch_utils.py +++ /dev/null @@ -1,294 +0,0 @@ -# PyTorch utils - -import logging -import math -import os -import subprocess -import time -from contextlib import contextmanager -from copy import deepcopy -from pathlib import Path - -import torch -import torch.backends.cudnn as cudnn -import torch.nn as nn -import torch.nn.functional as F -import torchvision - -try: - import thop # for FLOPS computation -except ImportError: - thop = None -logger = logging.getLogger(__name__) - - -@contextmanager -def torch_distributed_zero_first(local_rank: int): - """ - Decorator to make all processes in distributed training wait for each local_master to do something. - """ - if local_rank not in [-1, 0]: - torch.distributed.barrier() - yield - if local_rank == 0: - torch.distributed.barrier() - - -def init_torch_seeds(seed=0): - # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html - torch.manual_seed(seed) - if seed == 0: # slower, more reproducible - cudnn.benchmark, cudnn.deterministic = False, True - else: # faster, less reproducible - cudnn.benchmark, cudnn.deterministic = True, False - - -def git_describe(): - # return human-readable git description, i.e. 
v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe - if Path('.git').exists(): - return subprocess.check_output('git describe --tags --long --always', shell=True).decode('utf-8')[:-1] - else: - return '' - - -def select_device(device='', batch_size=None): - # device = 'cpu' or '0' or '0,1,2,3' - s = f'YOLOv5 {git_describe()} torch {torch.__version__} ' # string - cpu = device.lower() == 'cpu' - if cpu: - os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False - elif device: # non-cpu device requested - os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable - assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability - - cuda = not cpu and torch.cuda.is_available() - if cuda: - n = torch.cuda.device_count() - if n > 1 and batch_size: # check that batch_size is compatible with device_count - assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}' - space = ' ' * len(s) - for i, d in enumerate(device.split(',') if device else range(n)): - p = torch.cuda.get_device_properties(i) - s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB - else: - s += 'CPU\n' - - logger.info(s) # skip a line - return torch.device('cuda:0' if cuda else 'cpu') - - -def time_synchronized(): - # pytorch-accurate time - if torch.cuda.is_available(): - torch.cuda.synchronize() - return time.time() - - -def profile(x, ops, n=100, device=None): - # profile a pytorch module or list of modules. Example usage: - # x = torch.randn(16, 3, 640, 640) # input - # m1 = lambda x: x * torch.sigmoid(x) - # m2 = nn.SiLU() - # profile(x, [m1, m2], n=100) # profile speed over 100 iterations - - device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') - x = x.to(device) - x.requires_grad = True - print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '') - print(f"\n{'Params':>12s}{'GFLOPS':>12s}{'forward (ms)':>16s}{'backward (ms)':>16s}{'input':>24s}{'output':>24s}") - for m in ops if isinstance(ops, list) else [ops]: - m = m.to(device) if hasattr(m, 'to') else m # device - m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m # type - dtf, dtb, t = 0., 0., [0., 0., 0.] 
# dt forward, backward - try: - flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPS - except: - flops = 0 - - for _ in range(n): - t[0] = time_synchronized() - y = m(x) - t[1] = time_synchronized() - try: - _ = y.sum().backward() - t[2] = time_synchronized() - except: # no backward method - t[2] = float('nan') - dtf += (t[1] - t[0]) * 1000 / n # ms per op forward - dtb += (t[2] - t[1]) * 1000 / n # ms per op backward - - s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list' - s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list' - p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters - print(f'{p:12.4g}{flops:12.4g}{dtf:16.4g}{dtb:16.4g}{str(s_in):>24s}{str(s_out):>24s}') - - -def is_parallel(model): - return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel) - - -def intersect_dicts(da, db, exclude=()): - # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values - return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape} - - -def initialize_weights(model): - for m in model.modules(): - t = type(m) - if t is nn.Conv2d: - pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif t is nn.BatchNorm2d: - m.eps = 1e-3 - m.momentum = 0.03 - elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]: - m.inplace = True - - -def find_modules(model, mclass=nn.Conv2d): - # Finds layer indices matching module class 'mclass' - return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)] - - -def sparsity(model): - # Return global model sparsity - a, b = 0., 0. - for p in model.parameters(): - a += p.numel() - b += (p == 0).sum() - return b / a - - -def prune(model, amount=0.3): - # Prune model to requested global sparsity - import torch.nn.utils.prune as prune - print('Pruning model... ', end='') - for name, m in model.named_modules(): - if isinstance(m, nn.Conv2d): - prune.l1_unstructured(m, name='weight', amount=amount) # prune - prune.remove(m, 'weight') # make permanent - print(' %.3g global sparsity' % sparsity(model)) - - -def fuse_conv_and_bn(conv, bn): - # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ - fusedconv = nn.Conv2d(conv.in_channels, - conv.out_channels, - kernel_size=conv.kernel_size, - stride=conv.stride, - padding=conv.padding, - groups=conv.groups, - bias=True).requires_grad_(False).to(conv.weight.device) - - # prepare filters - w_conv = conv.weight.clone().view(conv.out_channels, -1) - w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) - fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.size())) - - # prepare spatial bias - b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias - b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps)) - fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) - - return fusedconv - - -def model_info(model, verbose=False, img_size=640): - # Model information. img_size may be int or list, i.e. 
img_size=640 or img_size=[640, 320] - n_p = sum(x.numel() for x in model.parameters()) # number parameters - n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients - if verbose: - print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma')) - for i, (name, p) in enumerate(model.named_parameters()): - name = name.replace('module_list.', '') - print('%5g %40s %9s %12g %20s %10.3g %10.3g' % - (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std())) - - try: # FLOPS - from thop import profile - stride = int(model.stride.max()) if hasattr(model, 'stride') else 32 - img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input - flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPS - img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float - fs = ', %.1f GFLOPS' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPS - except (ImportError, Exception): - fs = '' - - logger.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}") - - -def load_classifier(name='resnet101', n=2): - # Loads a pretrained model reshaped to n-class output - model = torchvision.models.__dict__[name](pretrained=True) - - # ResNet model properties - # input_size = [3, 224, 224] - # input_space = 'RGB' - # input_range = [0, 1] - # mean = [0.485, 0.456, 0.406] - # std = [0.229, 0.224, 0.225] - - # Reshape output to n classes - filters = model.fc.weight.shape[1] - model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True) - model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True) - model.fc.out_features = n - return model - - -def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416) - # scales img(bs,3,y,x) by ratio constrained to gs-multiple - if ratio == 1.0: - return img - else: - h, w = img.shape[2:] - s = (int(h * ratio), int(w * ratio)) # new size - img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize - if not same_shape: # pad/crop img - h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)] - return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean - - -def copy_attr(a, b, include=(), exclude=()): - # Copy attributes from b to a, options to only include [...] and to exclude [...] - for k, v in b.__dict__.items(): - if (len(include) and k not in include) or k.startswith('_') or k in exclude: - continue - else: - setattr(a, k, v) - - -class ModelEMA: - """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models - Keep a moving average of everything in the model state_dict (parameters and buffers). - This is intended to allow functionality like - https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage - A smoothed version of the weights is necessary for some training schemes to perform well. - This class is sensitive where it is initialized in the sequence of model init, - GPU assignment and distributed training wrappers. 
- """ - - def __init__(self, model, decay=0.9999, updates=0): - # Create EMA - self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA - # if next(model.parameters()).device.type != 'cpu': - # self.ema.half() # FP16 EMA - self.updates = updates # number of EMA updates - self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs) - for p in self.ema.parameters(): - p.requires_grad_(False) - - def update(self, model): - # Update EMA parameters - with torch.no_grad(): - self.updates += 1 - d = self.decay(self.updates) - - msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict - for k, v in self.ema.state_dict().items(): - if v.dtype.is_floating_point: - v *= d - v += (1. - d) * msd[k].detach() - - def update_attr(self, model, include=(), exclude=('process_group', 'reducer')): - # Update EMA attributes - copy_attr(self.ema, model, include, exclude) diff --git a/modules/fjpalmvein/C/BIRCapData.dat b/modules/fjpalmvein/C/BIRCapData.dat new file mode 100644 index 000000000..975b8ed5f Binary files /dev/null and b/modules/fjpalmvein/C/BIRCapData.dat differ diff --git a/modules/fjpalmvein/C/BIRData.dat b/modules/fjpalmvein/C/BIRData.dat new file mode 100644 index 000000000..770fd64c3 Binary files /dev/null and b/modules/fjpalmvein/C/BIRData.dat differ diff --git a/modules/fjpalmvein/C/BioAPI_sample_C_Verify b/modules/fjpalmvein/C/BioAPI_sample_C_Verify new file mode 100755 index 000000000..99bb72a86 Binary files /dev/null and b/modules/fjpalmvein/C/BioAPI_sample_C_Verify differ diff --git a/modules/fjpalmvein/C/BioAPI_sample_C_Verify.c b/modules/fjpalmvein/C/BioAPI_sample_C_Verify.c index c26ce813e..9da99d02e 100644 --- a/modules/fjpalmvein/C/BioAPI_sample_C_Verify.c +++ b/modules/fjpalmvein/C/BioAPI_sample_C_Verify.c @@ -23,7 +23,7 @@ -#define APPLICATION_KEY "your application key" +#define APPLICATION_KEY "P6Kiuy2L4CifuBuK" #define ENROLL_FILENAME "BIRData.dat" #define CAPTURE_FILENAME "BIRCapData.dat" #define SILHOUETTE_FILENAME "silhouette.bmp" @@ -193,7 +193,7 @@ int main(int argc, char **argv) if ( fp != NULL ) { fwrite(ucEnrolledBIR, sizeof(unsigned char), datasize, fp); fclose(fp); - printf(" FILE: %s (DataSize=%d)\n", ENROLL_FILENAME, datasize); + printf(" FILE: %s (DataSize=%ld)\n", ENROLL_FILENAME, datasize); } } @@ -296,7 +296,7 @@ int main(int argc, char **argv) if ( fp != NULL ) { fwrite(ucCapturedBIR, sizeof(unsigned char), datasize, fp); fclose(fp); - printf(" FILE: %s (DataSize=%d)\n", CAPTURE_FILENAME, datasize); + printf(" FILE: %s (DataSize=%ld)\n", CAPTURE_FILENAME, datasize); } // ----------------------------------------------------------------- diff --git a/modules/fjpalmvein/C/BioAPI_sample_C_Verify.o b/modules/fjpalmvein/C/BioAPI_sample_C_Verify.o new file mode 100644 index 000000000..2a656f999 Binary files /dev/null and b/modules/fjpalmvein/C/BioAPI_sample_C_Verify.o differ diff --git a/modules/fjpalmvein/C/LM/PvAPITrc.dat b/modules/fjpalmvein/C/LM/PvAPITrc.dat index bf009f002..978cfd343 100644 Binary files a/modules/fjpalmvein/C/LM/PvAPITrc.dat and b/modules/fjpalmvein/C/LM/PvAPITrc.dat differ diff --git a/modules/fjpalmvein/C/LM/PvAPITrc01.dat b/modules/fjpalmvein/C/LM/PvAPITrc01.dat new file mode 100644 index 000000000..8ef2b3d7c Binary files /dev/null and b/modules/fjpalmvein/C/LM/PvAPITrc01.dat differ diff --git a/modules/fjpalmvein/C/LM/foo b/modules/fjpalmvein/C/LM/foo new file mode 100644 index 000000000..cb7ba9883 --- /dev/null +++ 
b/modules/fjpalmvein/C/LM/foo @@ -0,0 +1,15 @@ +total 5396 +-rw-rw-r-- 1 carl carl 253 Dec 19 2017 PvAPI.INI +-rw-rw-r-- 1 carl carl 2706 Oct 21 2021 F3BC4BSP.DAT +-rwxr-xr-x 1 carl carl 237104 Mar 17 2022 libf3bc4com.so +-rwxrwxr-x 1 carl carl 2045856 Mar 17 2022 libf3bc4cap.so +-rwxr-xr-x 1 carl carl 1350272 Mar 17 2022 libf3bc4mat.so +-rwxr-xr-x 1 carl carl 499824 Mar 17 2022 libf3bc4bsp.so +-rwxr-xr-x 1 carl carl 146328 Mar 17 2022 libf3bc4bio.so +-rw-r--r-- 1 carl carl 22 Mar 17 2022 pvfwvl.txt +-rw-rw-r-- 1 carl carl 403 May 31 22:36 F3BC4SDK.LIC +-rw-rw-rw- 1 root root 1048224 Jun 7 17:40 PvAPITrc01.dat +drwxrwxr-x 5 carl carl 4096 Jun 7 20:17 .. +-rw-rw-rw- 1 carl carl 159033 Jun 7 20:21 PvAPITrc.dat +-rw-rw-r-- 1 carl carl 0 Jun 7 20:22 foo +drwxr-xr-x 2 carl carl 4096 Jun 7 20:22 . diff --git a/modules/fjpalmvein/C/LM/foobar b/modules/fjpalmvein/C/LM/foobar new file mode 100644 index 000000000..375073bb3 --- /dev/null +++ b/modules/fjpalmvein/C/LM/foobar @@ -0,0 +1,16 @@ +total 5428 +-rw-rw-r-- 1 carl carl 253 Dec 19 2017 PvAPI.INI +-rw-rw-r-- 1 carl carl 2706 Oct 21 2021 F3BC4BSP.DAT +-rwxr-xr-x 1 carl carl 237104 Mar 17 2022 libf3bc4com.so +-rwxrwxr-x 1 carl carl 2045856 Mar 17 2022 libf3bc4cap.so +-rwxr-xr-x 1 carl carl 1350272 Mar 17 2022 libf3bc4mat.so +-rwxr-xr-x 1 carl carl 499824 Mar 17 2022 libf3bc4bsp.so +-rwxr-xr-x 1 carl carl 146328 Mar 17 2022 libf3bc4bio.so +-rw-r--r-- 1 carl carl 22 Mar 17 2022 pvfwvl.txt +-rw-rw-r-- 1 carl carl 403 May 31 22:36 F3BC4SDK.LIC +-rw-rw-rw- 1 root root 1048224 Jun 7 17:40 PvAPITrc01.dat +drwxrwxr-x 5 carl carl 4096 Jun 7 20:17 .. +-rw-rw-r-- 1 carl carl 786 Jun 7 20:22 foo +-rw-rw-rw- 1 carl carl 187399 Jun 7 20:22 PvAPITrc.dat +-rw-rw-r-- 1 carl carl 0 Jun 7 20:22 foobar +drwxr-xr-x 2 carl carl 4096 Jun 7 20:22 . 
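A note on the BioAPI_sample_C_Verify.c hunks above: swapping %d for %ld fixes a printf format/argument mismatch, which is undefined behaviour whenever datasize is wider than int. The declaration of datasize is not visible in those hunks, so a 64-bit long or size_t is assumed in this minimal sketch of the same fix:

    #include <stdio.h>

    int main(void)
    {
        size_t datasize = 123456;                  /* hypothetical stand-in for the sample's variable */
        /* printf("DataSize=%d\n", datasize);        mismatched specifier: undefined behaviour on LP64 */
        printf("DataSize=%ld\n", (long)datasize);  /* the style of fix the diff applies */
        printf("DataSize=%zu\n", datasize);        /* C99 %zu matches size_t exactly */
        return 0;
    }

If datasize really is a size_t, %zu avoids the cast entirely.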
diff --git a/modules/fjpalmvein/C/Makefile b/modules/fjpalmvein/C/Makefile index f8d75ebc3..70b38bae5 100644 --- a/modules/fjpalmvein/C/Makefile +++ b/modules/fjpalmvein/C/Makefile @@ -27,6 +27,12 @@ $(VERIFY).o : $(VERIFY).c clean: $(RM) *~ *.o $(IDENTIFY) $(VERIFY) +handjob: handjob.o + $(CC) -o handjob handjob.o $(LDFLAGS) $(LDLIBS) + +handjob.o : handjob.c + $(CC) $(CFLAGS) -c -o handjob.o handjob.c + %.o: %.c $(CC) $(CFLAGS) -c -o $@ $< diff --git a/modules/fjpalmvein/C/fjpalmvein-main/93-unicon-palmvene.rules.x b/modules/fjpalmvein/C/fjpalmvein-main/93-unicon-palmvene.rules.x new file mode 100644 index 000000000..3e543547e --- /dev/null +++ b/modules/fjpalmvein/C/fjpalmvein-main/93-unicon-palmvene.rules.x @@ -0,0 +1,10 @@ +ACTION=="add",\ + KERNEL=="fjveincam*",\ + DRIVERS=="fjveincam",\ + MODE="0666",\ + SUBSYSTEM=="usbmisc",\ + ATTRS{idVendor}=="04c5",\ + ATTRS{idProduct}=="1526",\ + SYMLINK+="usb/fjveincam%n",\ + RUN+="/bin/bash -c 'date >> /tmp/fjpv'",\ + RUN+="/bin/bash -c 'echo $kernel _ $devpath _ $number id=$id MM=$major:$minor $name $sys >> /tmp/fjpv'" diff --git a/modules/fjpalmvein/C/fjpalmvein-main/a.out b/modules/fjpalmvein/C/fjpalmvein-main/a.out deleted file mode 100755 index 8714d52e1..000000000 Binary files a/modules/fjpalmvein/C/fjpalmvein-main/a.out and /dev/null differ diff --git a/modules/fjpalmvein/C/fjpalmvein-main/drivertest b/modules/fjpalmvein/C/fjpalmvein-main/drivertest index 8c42c533f..170d5eb4e 100755 Binary files a/modules/fjpalmvein/C/fjpalmvein-main/drivertest and b/modules/fjpalmvein/C/fjpalmvein-main/drivertest differ diff --git a/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.c.carl b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.c.carl new file mode 100644 index 000000000..5f3089fab --- /dev/null +++ b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.c.carl @@ -0,0 +1,937 @@ +/** + * USB PalmSecure Sensor driver (kernel-2.6) + * + * Copyright (C) 2012 FUJITSU FRONTECH LIMITED + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * Notes: + * Heavily based on usb_skeleton.c + * Copyright (C) 2001-2004 Greg Kroah-Hartman (greg@kroah.com) + * + * History: + * + * 2012-07-06 - V31L01 + * - first version + * + * Problems? Try... 
+ * lsusb or lsusb -vd MANU:PROD // swap in the device values << FUJITSU PalmSecure-F Pro + * sudo udevadm info -a -n /dev/usb/fjveincam0 // get info about the device << major=180 minor=0 + * cat /sys/class/usbmisc/fjveincam0/ * // + * ls -al /sys/class/usbmisc/fjveincam0/device/driver/module - coresize, + + + */ + +#include <linux/kernel.h> /* reconstructed include set for a USB misc driver; the original header names were lost (assumption) */ +#include <linux/errno.h> +#include <linux/init.h> //+ +#include <linux/slab.h> +#include <linux/module.h> + // kref.h- +#include <linux/usb.h> +#include <linux/mutex.h> //+ +#include <linux/sched/signal.h> //+ +#include <linux/uaccess.h> + + +/* Define these values to match your devices */ +#define VENDOR_ID 0x04C5 +#define PRODUCT_ID 0x1526 + +/* table of devices that work with this driver */ +static struct usb_device_id fjveincam_table [] = { + { USB_DEVICE(VENDOR_ID, PRODUCT_ID) }, + { } /* Terminating entry */ +}; +MODULE_DEVICE_TABLE(usb, fjveincam_table); + + +/* Get a minor range for your devices from the usb maintainer */ +#define USB_subminor_BASE 160 + + +/* Structure to hold all of our device specific stuff */ +struct fjveincam { + struct usb_device *udev; + unsigned char subminor; /* minor number - used in disconnect() */ + char confirmed; /* Not zero if the device is used (Not in phase of confirming) */ + int open_count; /* count the number of openers */ + char *obuf, *ibuf; /* transfer buffers */ + char bulk_in_ep; /* Endpoint assignments */ + char bulk_out_ep; /* Endpoint assignments */ + wait_queue_head_t wait_q; /* wait-queue for checking sensors */ + struct mutex io_mutex; /* lock to prevent concurrent reads or writes */ + int o_timeout; /* counter of open time out */ + int r_error; /* counter of read error */ + int r_lasterr; /* read last error */ + int w_error; /* counter of write error */ + int w_lasterr; /* write last error */ +}; +#define to_skel_dev(d) container_of(d, struct fjveincam, kref) + +static struct usb_driver usb_fjveincam_driver; +//skel static void fjveincam_draw_down(struct usb_fjveincam *dev); + + + +/* our private defines. if this grows any larger, use your own .h file */ +#include "fjveincam.h" + + +#define CONFIG_FJVEINCAM_DEBUGXXX + + +// +// # # ##### ###### ##### ## # #### +// # # # # # # # # # # +// # # # ##### # # # # # #### +// # # # # ##### ###### # # +// # # # # # # # # # # # +// ###### # # ###### # # # # ###### #### +// + +/* Endpoint direction check macros */ +#define IS_EP_BULK(ep) ((ep)->bmAttributes == USB_ENDPOINT_XFER_BULK ? 1 : 0) +#define IS_EP_BULK_IN(ep) (IS_EP_BULK(ep) && ((ep)->bEndpointAddress & USB_ENDPOINT_DIR_MASK) == USB_DIR_IN) +#define IS_EP_BULK_OUT(ep) (IS_EP_BULK(ep) && ((ep)->bEndpointAddress & USB_ENDPOINT_DIR_MASK) == USB_DIR_OUT) + +/* Version Information */ +#define DRIVER_VERSION "V31L01" +//#define DRIVER_VERSION "V34L77" +#define DRIVER_AUTHOR "Fujitsu Frontech Ltd. Modified by Carl Goodwin (Dispension Inc)" +#define DRIVER_DESC "FUJITSU PalmSecure Sensor driver for Ubuntu22" + +/* minor number defines */ + + +/* Waiting time for sensor confirming. */ +/* Change this value when the time-out happens before the sensor confirming ends. 
*/ +#define SENSOR_CONFIRMED_WAIT_TIME 1 + +/* Read timeouts -- R_NAK_TIMEOUT * R_EXPIRE = Number of seconds */ +#define R_NAK_TIMEOUT (50) /* Default number of X seconds to wait */ +#define R_EXPIRE 1 /* Number of attempts to wait X seconds */ + +/* Write timeouts */ +#define W_NAK_TIMEOUT (50) /* Default number of X seconds to wait */ + +/* Ioctl timeouts */ +#define C_NAK_TIMEOUT (100) /* Default number of X seconds to wait */ + +/* Allocate buffer byte size */ +#define IBUF_SIZE 32768 +#define OBUF_SIZE 4096 + +/* Flag of sensor state of use */ +#define SENSOR_NOT_CONFIRMED 0 /* Sensor is not used or is in phase of confirming. */ +#define SENSOR_CONFIRMED 1 /* Sensor is now used */ + + + + +static DEFINE_MUTEX(fjveincam_mutex); /* Initializes to unlocked */ + + +// +static void dbg(int line, char * func, char * remark, unsigned long num){ + pr_notice(">>>>>>>>>>>>>>.. USB Driver: %s @ %d (%s): %s = %lu", __FILE__, line, remark, func, num); +} + + + + + + + + + + +// ####### ### # ####### +// # # # # +// # # # # +// ##### # # ##### +// # # # # +// # # # # +// # ### ####### ####### +// +// @func +static int usb_fjveincam_open(struct inode *inode, struct file *file) +{ + + struct fjveincam *dev; + struct usb_interface *interface; + int subminor; + int retval = 0; + long wait; + + // does this even run? + dbg(__LINE__, "usb_fjveincam_open", "********* fjveincam open", ENODEV); + pr_notice("**************81 FFFFFFFFFUCK"); + return -ENODEV; + + + mutex_lock(&fjveincam_mutex); + + subminor = iminor(inode); + + dbg(__LINE__, "usb_fjveincam_open", "open", subminor); + + interface = usb_find_interface(&usb_fjveincam_driver, subminor); + if (!interface) { + pr_err("%s - error, can't find device for minor %d\n", + __func__, subminor); + retval = -ENODEV; + goto exit; + } + + dev = usb_get_intfdata(interface); + if ((!dev) || (!dev->udev)) { + dbg(__LINE__, "usb_fjveincam_open", "device not present", 0L); + retval = -ENODEV; + goto exit; + } + + mutex_lock(&(dev->io_mutex)); + + if (dev->open_count) { + /* Another process has opened. */ + if (dev->confirmed == SENSOR_CONFIRMED) { + /* The sensor was confirmed. */ + dbg(__LINE__, "usb_fjveincam_open", "device already open", 0L); + retval = -EBUSY; + goto exit; + } + + mutex_unlock(&(dev->io_mutex)); + + /* Wait until the sensor is confirmed or closed, because another process is open. */ + /* Change SENSOR_CONFIRMED_WAIT_TIME value when the time-out happens before the sensor is confirmed. */ + wait = wait_event_interruptible_timeout(dev->wait_q, + (!dev->open_count)||(dev->confirmed==SENSOR_CONFIRMED), + SENSOR_CONFIRMED_WAIT_TIME); + + mutex_lock(&(dev->io_mutex)); + if (wait == 0) { + /* Time-out happens before the sensor is confirmed. */ + dbg(__LINE__, "usb_fjveincam_open", "preconfirmation timeout", 0L); + dev->o_timeout++; + dev->confirmed=SENSOR_CONFIRMED; + retval = -EBUSY; + goto exit; + } + else if (dev->confirmed==SENSOR_CONFIRMED) { + /* Another process completed the sensor confirming, and started the use of the sensor. */ + dbg(__LINE__, "usb_fjveincam_open", "device already open", 0L); + retval = -EBUSY; + goto exit; + } + else if(wait == -ERESTARTSYS) { + retval = -ERESTARTSYS; + goto exit; + } + /* else { + // Another process closed the sensor. 
+ } */ + } + + init_waitqueue_head(&dev->wait_q); + dev->open_count = 1; + file->private_data = dev; /* Used by the read and write methods */ + +exit: + mutex_unlock(&(dev->io_mutex)); + mutex_unlock(&fjveincam_mutex); + + return retval; + +} + +// @func +static int usb_fjveincam_release(struct inode *inode, struct file *file) +{ + struct fjveincam *dev = file->private_data; + mutex_lock(&(dev->io_mutex)); + + dev->confirmed = SENSOR_NOT_CONFIRMED; + dev->open_count = 0; + file->private_data = NULL; + + if (!dev->udev) { + /* The device was unplugged while open - need to clean up */ + dbg(__LINE__, "funczz", "device was unplugged while open .. tidying up", 0L); + + mutex_unlock(&(dev->io_mutex)); + kfree(dev->ibuf); + kfree(dev->obuf); + kfree(dev); + + return 0; + } + + wake_up_interruptible(&dev->wait_q); /* Wake_up the process waiting in open() function. */ + dbg(__LINE__, "usb_fjveincam_close", "closing...", 0L); + mutex_unlock(&(dev->io_mutex)); + + return 0; +} + + +// ### # ####### +// # # # # +// # # # # +// # # # # +// # # # # +// # # # # +// ### # ####### +// +// @func +static ssize_t usb_fjveincam_read(struct file *file, char *buffer, + size_t count, loff_t *ppos) +{ + struct fjveincam *dev = file->private_data; + struct usb_device *udev; + + ssize_t bytes_read = 0; /* Overall count of bytes_read */ + ssize_t ret = 0; + + int subminor; + int partial; /* Number of bytes successfully read */ + int this_read; /* Max number of bytes to read */ + int result; + int r_expire = R_EXPIRE; + + char *ibuf; + struct timespec64 CURRENT_TIME; + + ktime_get_ts64(&CURRENT_TIME); + + mutex_lock(&(dev->io_mutex)); + + subminor = dev->subminor; + + udev = dev->udev; + if (!udev) { + /* The device was unplugged before the file was released */ + dbg(__LINE__, "usb_fjveincam_read", "device was unplugged", 0L); + ret = -ENODEV; + goto out_error; + } + + ibuf = dev->ibuf; + + file->f_path.dentry->d_inode->i_atime = CURRENT_TIME; + while (count > 0) { + if (signal_pending(current)) { + dbg(__LINE__, "usb_fjveincam_read", "signal detected", 0L); + ret = -ERESTARTSYS; + break; + } + + this_read = (count >= IBUF_SIZE) ? 
IBUF_SIZE : count; + + result = usb_bulk_msg(udev, usb_rcvbulkpipe(udev, dev->bulk_in_ep), ibuf, this_read, &partial, R_NAK_TIMEOUT); + //dbg("%s: minor:%d result:%d this_read:%d partial:%d count:%d", "funczz", subminor, result, this_read, partial, count); + dbg(__LINE__, "usb_fjveincam_read", "partial read", 0L); + + dev->r_lasterr = result; + if (result == -ETIMEDOUT) { /* NAK */ + dev->r_error++; + if (!partial) { /* No data */ + if (--r_expire <= 0) { /* Give it up */ + dbg(__LINE__, "usb_fjveincam_read", "excessive NAKs", 0L); + ret = result; + break; + } else { /* Keep trying to read data */ + set_current_state(TASK_INTERRUPTIBLE); schedule_timeout(R_NAK_TIMEOUT); /* state must be set first or schedule_timeout() returns immediately */ + continue; + } + } else { /* Timeout w/ some data */ + goto data_recvd; + } + } + + if (result == -EPIPE) { /* No hope */ + dev->r_error++; + if(usb_clear_halt(udev, usb_rcvbulkpipe(udev, dev->bulk_in_ep))) { + dbg(__LINE__, "usb_fjveincam_read", "failed to clear endpoint halt condition", 0L); + } + ret = result; + break; + } else if ((result < 0) && (result != -EREMOTEIO)) { /* USB error codes are negative */ + dev->r_error++; + dbg(__LINE__, "usb_fjveincam_read", "an error occurred", 0L); + ret = -EIO; + break; + } + +data_recvd: + + if (partial) { /* Data returned */ + if (copy_to_user(buffer, ibuf, partial)) { + dbg(__LINE__, "usb_fjveincam_read", "failed to copy data to user space", 0L); + ret = -EFAULT; + break; + } + count -= partial; /* Compensate for short reads */ + bytes_read += partial; /* Keep tally of what actually was read */ + buffer += partial; + } else { + ret = 0; + break; + } + } + +out_error: + + dbg(__LINE__, "usb_fjveincam_read", "bytes were read", 0L); + + mutex_unlock(&(dev->io_mutex)); + + return ret ? ret : bytes_read; +} + + +// @func +static ssize_t usb_fjveincam_write(struct file *file, const char *buffer, + size_t count, loff_t *ppos) +{ + struct fjveincam *dev = file->private_data; + struct usb_device *udev; + + ssize_t bytes_written = 0; /* Overall count of bytes written */ + ssize_t ret = 0; + + int subminor; + int this_write; /* Number of bytes to write */ + int partial; /* Number of bytes successfully written */ + int result = 0; + + char *obuf; + struct timespec64 CURRENT_TIME; + + ktime_get_ts64(&CURRENT_TIME); + mutex_lock(&(dev->io_mutex)); + + subminor = dev->subminor; + + udev = dev->udev; + if (!udev) { + dbg(__LINE__, "usb_fjveincam_write", "device was unplugged", 0L); + ret = -ENODEV; + goto out_error; + } + + obuf = dev->obuf; + file->f_path.dentry->d_inode->i_atime = CURRENT_TIME; + + while (count > 0) { + if (signal_pending(current)) { + ret = -ERESTARTSYS; + break; + } + + this_write = (count >= OBUF_SIZE) ? 
OBUF_SIZE : count; + + if (copy_from_user(dev->obuf, buffer, this_write)) { + ret = -EFAULT; + break; + } + + result = usb_bulk_msg(udev,usb_sndbulkpipe(udev, dev->bulk_out_ep), obuf, this_write, &partial, W_NAK_TIMEOUT); + dbg(__LINE__, "usb_fjveincam_write", "bulk data sent", 0L); + + dev->w_lasterr = result; + if (result == -ETIMEDOUT) { /* NAK */ + dbg(__LINE__, "usb_fjveincam_write", "excess NAKs", 0L); + dev->w_error++; + ret = result; + break; + } else if (result < 0) { /* We should not get any I/O errors */ + dbg(__LINE__, "usb_fjveincam_write", "error detected", 0L); + dev->w_error++; + ret = -EIO; + break; + } + + if (partial != this_write) { /* Unable to write all contents of obuf */ + dev->w_error++; + ret = -EIO; + break; + } + + if (partial) { /* Data written */ + buffer += partial; + count -= partial; + bytes_written += partial; + } else { /* No data written */ + ret = 0; + break; + } + } + +out_error: + + mutex_unlock(&(dev->io_mutex)); + + return ret ? ret : bytes_written; +} + + + + + +// ### ####### ##### ####### # +// # # # # # # # +// # # # # # # +// # # # # # # +// # # # # # # +// # # # # # # # +// ### ####### ##### # ####### +// +// @func +static long usb_fjveincam_unlocked_ioctl(struct file *file, uint cmd, ulong arg) +{ + struct fjveincam *dev = file->private_data; + struct usb_device *udev; + char obuf[256]; + int subminor; + int retval = 0; + return -99; + + memset(&obuf,0,sizeof(obuf)); + printk(">>>>>>>>> IOCTL %d\n", cmd); + mutex_lock(&(dev->io_mutex)); + + subminor = dev->subminor; + + dbg(__LINE__, "usb_fjveincam_ioctl", "ioctl", 0L); + + if (!dev->udev) { + dbg(__LINE__, "usb_fjveincam_ioctl", "device was unplugged", 0L); + retval = -ENODEV; + goto out_error; + } + + + switch (cmd) + { + case USB_FJVEINCAMV30_IOCTL_CTRLMSG: + case USB_FJVEINCAM_IOCTL_CTRLMSG: + { + struct fjveincam_cmsg user_cmsg; + struct { + struct usb_ctrlrequest req; + unsigned char *data; + } cmsg; + int pipe, nb, ret; + unsigned char buf[974]; + dbg(__LINE__, "usb_fjveincam_ioctl", "USB_FJVEINCAM_IOCTL_CTRLMSG", 0L); + udev = dev->udev; + + dbg(__LINE__, "usb_fjveincam_ioctl", "dealing with an ioctl", 0L); + + if (copy_from_user(&user_cmsg, (void *)arg, sizeof(user_cmsg))) { + retval = -EFAULT; + break; + } + cmsg.req.bRequestType = user_cmsg.req.bRequestType; + cmsg.req.bRequest = user_cmsg.req.bRequest; + cmsg.req.wValue = user_cmsg.req.wValue; + cmsg.req.wIndex = user_cmsg.req.wIndex; + cmsg.req.wLength = user_cmsg.req.wLength; + cmsg.data = user_cmsg.data; + + nb = cmsg.req.wLength; + + if (nb > sizeof(buf)) { + retval = -EINVAL; + break; + } + + if ((cmsg.req.bRequestType & 0x80) == 0) { + pipe = usb_sndctrlpipe(udev, 0); + if (nb > 0 && copy_from_user(buf, cmsg.data, nb)) { + retval = -EFAULT; + break; + } + } else { + pipe = usb_rcvctrlpipe(udev, 0); + } + + ret = usb_control_msg(udev, pipe, + cmsg.req.bRequest, + cmsg.req.bRequestType, + cmsg.req.wValue, + cmsg.req.wIndex, + buf, nb, C_NAK_TIMEOUT); + + dbg(__LINE__, "usb_fjveincam_ioctl", "request", 0L); + + sprintf(obuf,"%s: minor:%d request result:%d cmd[%02X:%04X:%04X:%04X] rsp[%02X:%02X:%02X:%02X]", + "funczz", subminor, ret, + cmsg.req.bRequest, cmsg.req.wValue, cmsg.req.wIndex, cmsg.req.wLength, + buf[0], buf[1], buf[2], buf[3]); + dbg(__LINE__, "usb_fjveincam_ioctl", obuf, 0L); + + + if (ret < 0) { + dbg(__LINE__, "usb_fjveincam_ioctl", "error detected", 0L); + retval = -EIO; + break; + } + + if (nb < ret) { + ret = nb; + } + if (nb > 0 && (cmsg.req.bRequestType & 0x80) && copy_to_user(cmsg.data, buf, ret)) { + retval 
= -EFAULT; + } + + break; + } + + case USB_FJVEINCAMV30_IOCTL_CHECK: + case USB_FJVEINCAM_IOCTL_CHECK: + dbg(__LINE__, "usb_fjveincam_ioctl", "USB_FJVEINCAM_IOCTL_CHECK", 0L); + break; + + /* Notification of the end of sensor confirming. */ + case USB_FJVEINCAMV30_IOCTL_CONFIRM: + case USB_FJVEINCAM_IOCTL_CONFIRM: + { + dbg(__LINE__, "usb_fjveincam_ioctl", "USB_FJVEINCAM_IOCTL_CONFIRM", 0L); + dev->confirmed = SENSOR_CONFIRMED; /* Sensor confirming was completed, and started the use of the sensor. */ + wake_up_interruptible(&dev->wait_q); /* Wake_up the process waiting in open() function. */ + dbg(__LINE__, "usb_fjveincam_ioctl", "sensor was checked", 0L); + break; + } + + case USB_FJVEINCAMV30_IOCTL_INFO: + case USB_FJVEINCAM_IOCTL_INFO: + { + struct fjveincam_info info; + dbg(__LINE__, "usb_fjveincam_ioctl", "USB_FJVEINCAM_IOCTL_INFO", 0L); + + info.magic = FJPV_MAGIC; /* Magic number for indicating Fujitsu Palmsecure sensor driver. */ + info.minor = subminor; + info.o_timeout = dev->o_timeout; + info.r_error = dev->r_error; + info.r_lasterr = dev->r_lasterr; + info.w_error = dev->w_error; + info.w_lasterr = dev->w_lasterr; + strncpy((char*)info.version, DRIVER_VERSION, sizeof(info.version)); + if (copy_to_user((void *)arg, &info, sizeof(info))) + retval = -EFAULT; + + break; + } + + default: + dbg(__LINE__, "usb_fjveincam_ioctl", "invalid request code", 0L); + retval = -ENOIOCTLCMD; + break; + } + +out_error: + + mutex_unlock(&(dev->io_mutex)); + + dbg(__LINE__, "usb_fjveincam_ioctl", "OK...", 0L); + + return retval; +} + + + +// @config +static struct file_operations usb_fjveincam_fops = { + .owner = THIS_MODULE, + .open = usb_fjveincam_open, + .release = usb_fjveincam_release, + .read = usb_fjveincam_read, + .write = usb_fjveincam_write, + .unlocked_ioctl = usb_fjveincam_unlocked_ioctl, +}; + + +// @config +static struct usb_class_driver fjveincam_class = { + .name = "usb/fjveincam%d", + .fops = &usb_fjveincam_fops, + .minor_base = USB_subminor_BASE, +}; + + + + +// +// ##### ##### #### ##### ###### +// # # # # # # # # # +// # # # # # # ##### ##### +// ##### ##### # # # # # +// # # # # # # # # +// # # # #### ##### ###### +// +// Runs when the *device* is plugged in +// @func +static int usb_fjveincam_probe(struct usb_interface *intf, + const struct usb_device_id *id) +{ + struct usb_device *udev = interface_to_usbdev(intf); + struct fjveincam *dev; + struct usb_host_interface *interface; + struct usb_endpoint_descriptor *endpoint; + + int ep_cnt; + int retval; + + char have_bulk_in, have_bulk_out; + char name[20]; + char buf[128]; + + + // Dump usb_interface structure + pr_info("Dumping usb_interface structure:\n"); + pr_info(" Interface number: %d\n", intf->cur_altsetting->desc.bInterfaceNumber); + pr_info(" Interface class: 0x%02x\n", intf->cur_altsetting->desc.bInterfaceClass); + // Add more fields as needed + + // Dump usb_device_id structure + pr_info("Dumping usb_device_id structure:\n"); + pr_info(" Matched vendor ID: 0x%04x\n", id->idVendor); + pr_info(" Matched product ID: 0x%04x\n", id->idProduct); + // Add more fields as needed + + memset(&buf,0,sizeof(buf)); + dbg(__LINE__, "usb_fjveincam_probe", "probed; [device id]", 0L); + + sprintf(buf, "vendor id 0x%x, device id 0x%x, portnum:%d minor_base:%d", + udev->descriptor.idVendor, udev->descriptor.idProduct, + udev->portnum, USB_subminor_BASE); + dbg(__LINE__, "usb_fjveincam_probe", buf, 0L); + + +/* + * After this point we can be a little noisy about what we are trying to + * configure. 
+ */ + + if (udev->descriptor.bNumConfigurations != 1) { + dbg(__LINE__, "funczz", "only one device configuration is supported", 0L); + return -ENODEV; + } + +/* + * Start checking for two bulk endpoints. + */ + + interface = &intf->altsetting[0]; + + dbg(__LINE__, "usb_fjveincam_probe", "endpoints", interface->desc.bNumEndpoints); + + if (interface->desc.bNumEndpoints != 2) { + dbg(__LINE__, "usb_fjveincam_probe", "**ERROR** endpoint count", interface->desc.bNumEndpoints); + return -EIO; + } + + ep_cnt = have_bulk_in = have_bulk_out = 0; + + while (ep_cnt < interface->desc.bNumEndpoints) { + endpoint = &interface->endpoint[ep_cnt].desc; + + if (!have_bulk_in && IS_EP_BULK_IN(endpoint)) { + ep_cnt++; + have_bulk_in = endpoint->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK; + dbg(__LINE__, "usb_fjveincam_probe", "bulk in", 0L); + continue; + } + + if (!have_bulk_out && IS_EP_BULK_OUT(endpoint)) { + ep_cnt++; + have_bulk_out = endpoint->bEndpointAddress & USB_ENDPOINT_NUMBER_MASK; + dbg(__LINE__, "usb_fjveincam_probe", "bulk out", 0L); + continue; + } + + dbg(__LINE__, "usb_fjveincam_probe", "**ERROR** not a bulk endpoint", 0L); + return -EIO; /* Shouldn't ever get here unless we have something weird */ + } + +/* + * Perform a quick check to make sure that everything worked as it + * should have. + */ + + if (!have_bulk_in || !have_bulk_out) { + dbg(__LINE__, "usb_fjveincam_probe", "**ERROR** bulk in/out both required", 0L); + return -EIO; + } + +/* + * Determine a minor number and initialize the structure associated + * with it. + */ + if (!(dev = kzalloc (sizeof (struct fjveincam), GFP_KERNEL))) { + dbg(__LINE__, "usb_fjveincam_probe", "**ERROR** insufficient memory", 0L); + return -ENOMEM; + } + mutex_init(&(dev->io_mutex)); /* Initializes to unlocked */ + +/* Ok, now initialize all the relevant values */ + if (!(dev->obuf = (char *)kmalloc(OBUF_SIZE, GFP_KERNEL))) { + dbg(__LINE__, "usb_fjveincam_probe", "**ERROR** insufficient output memory", 0L); + kfree(dev); + return -ENOMEM; + } + + if (!(dev->ibuf = (char *)kmalloc(IBUF_SIZE, GFP_KERNEL))) { + dbg(__LINE__, "usb_fjveincam_probe", "**ERROR** insufficient input memory", 0L); + kfree(dev->obuf); + kfree(dev); + return -ENOMEM; + } + + usb_get_dev(udev); + dev->bulk_in_ep = have_bulk_in; + dev->bulk_out_ep = have_bulk_out; + dev->udev = udev; + dev->open_count = 0; + dev->confirmed = SENSOR_NOT_CONFIRMED; + + usb_set_intfdata(intf, dev); + + retval = usb_register_dev(intf, &fjveincam_class); + if (retval) { + dbg(__LINE__, "usb_fjveincam_probe", "**ERROR** unable to get a minor number", 0L); + usb_set_intfdata(intf, NULL); + kfree(dev->ibuf); + kfree(dev->obuf); + kfree(dev); + return -ENOMEM; + } + + dbg(__LINE__, "usb_fjveincam_probe", "have a minor", intf->minor); + dev->subminor = intf->minor; + + snprintf(name, sizeof(name), fjveincam_class.name, + intf->minor - fjveincam_class.minor_base); + dev_info(&intf->dev, "USB PalmVeinCam device now attached to %s\n", name); + + dbg(__LINE__, "usb_fjveincam_probe: have a name", name, 0L); + + return 0; +} + + + + + +// ##### ####### # # # # +// # # # # ## # ## # +// # # # # # # # # # +// # # # # # # # # # +// # # # # # # # # # +// # # # # # ## # ## +// ##### ####### # # # # +// +// Runs when the *device* is disconnected, or the module is unloaded +// @func +static void usb_fjveincam_disconnect(struct usb_interface *interface) +{ + struct fjveincam *dev = usb_get_intfdata(interface); + int subminor = interface->minor; + + usb_set_intfdata(interface, NULL); + + /* give back our minor */ + 
usb_deregister_dev (interface, &fjveincam_class); + + mutex_lock(&fjveincam_mutex); /* If there is a process in open(), wait for return. */ + mutex_lock(&(dev->io_mutex)); + + dev_info(&interface->dev, "USB PalmVeinCam #%d now disconnected\n", (subminor - fjveincam_class.minor_base)); + + usb_driver_release_interface(&usb_fjveincam_driver, + dev->udev->actconfig->interface[0]); + + if (dev->open_count) { + /* The device is still open - cleanup must be delayed */ + dbg(__LINE__, "usb_fjveincam_disconnect", "device was unplugged while open", 0L); + dev->udev = 0; + mutex_unlock(&(dev->io_mutex)); + mutex_unlock(&fjveincam_mutex); + return; + } + + dbg(__LINE__, "usb_fjveincam_disconnect", "deallocating...", 0L); + + mutex_unlock(&(dev->io_mutex)); + mutex_unlock(&fjveincam_mutex); + + kfree(dev->ibuf); + kfree(dev->obuf); + kfree(dev); +} + + + + + +// ###### ####### ##### ### ##### ####### ### ###### # # +// # # # # # # # # # ### # # ## # +// # # # # # # # # # # # # # +// ###### ##### # #### # ##### # # ###### # # # +// # # # # # # # # # # # # # +// # # # # # # # # # # # # ## +// # # ####### ##### ### ##### # # # # # +// +// Runs when the *module* is loaded +// @func +static int __init usb_fjveincam_init(void){ + int result; + + // register this driver with the USB subsystem - fires on driver module insmod + dbg(__LINE__, "usb_fjveincam_init", "USB registration with ioctl %lu", USB_FJVEINCAM_IOCTL_INFO); + result = usb_register(&usb_fjveincam_driver); + if (result){ + dbg(__LINE__, "usb_fjveincam_init", "USB registration failed", 0L); + } + dbg(__LINE__, "usb_fjveincam_init", "registration complete", result); + return result; +} +// This runs when the *module* is unloaded +// @func +static void __exit usb_fjveincam_exit(void) +{ + // deregister this driver with the USB subsystem - fires on driver module rmmod + dbg(__LINE__, "usb_fjveincam_exit", "USB de-registration with ioctl %lu", USB_FJVEINCAM_IOCTL_INFO); + usb_deregister(&usb_fjveincam_driver); + dbg(__LINE__, "usb_fjveincam_exit", "removing the driver", 0L); +} +module_init(usb_fjveincam_init); +module_exit(usb_fjveincam_exit); + + + + + +// @config +static struct usb_driver usb_fjveincam_driver = { + .name = "fjveincam", + .probe = usb_fjveincam_probe, + .disconnect = usb_fjveincam_disconnect, + .id_table = fjveincam_table, + .no_dynamic_id = 1 +}; + + + + +MODULE_AUTHOR(DRIVER_AUTHOR); +MODULE_DESCRIPTION(DRIVER_DESC); +MODULE_LICENSE("GPL v2"); + + \ No newline at end of file diff --git a/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.ko b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.ko index 7452a5158..c1f11df41 100644 Binary files a/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.ko and b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.ko differ diff --git a/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.mod.c b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.mod.c index 6fe1a64a3..96830daac 100644 --- a/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.mod.c +++ b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.mod.c @@ -31,14 +31,13 @@ __used __section("__versions") = { { 0xdf85ea06, "usb_deregister" }, { 0xf63cc4cc, "usb_register_driver" }, { 0xa024a396, "usb_clear_halt" }, + { 0x6b10bee1, "_copy_to_user" }, + { 0xd4afa9de, "usb_control_msg" }, { 0x228fca22, "usb_bulk_msg" }, + { 0x13c49cc2, "_copy_from_user" }, + { 0x88db9f48, "__check_object_size" }, { 0xa7bfbf2f, "current_task" }, { 0x5e515be6, "ktime_get_ts64" }, - { 0x6b10bee1, "_copy_to_user" }, - { 0x56470118, "__warn_printk" }, - { 0xd4afa9de, "usb_control_msg" }, - { 
0x88db9f48, "__check_object_size" }, - { 0x13c49cc2, "_copy_from_user" }, { 0x656e4a6e, "snprintf" }, { 0x40a9a344, "usb_register_dev" }, { 0x1e3192f4, "usb_get_dev" }, @@ -46,7 +45,6 @@ __used __section("__versions") = { { 0xcefb0c9f, "__mutex_init" }, { 0xf35141b2, "kmem_cache_alloc_trace" }, { 0x26087692, "kmalloc_caches" }, - { 0x3c3ff9fd, "sprintf" }, { 0xd0da656b, "__stack_chk_fail" }, { 0x92540fbf, "finish_wait" }, { 0x8ddd8aad, "schedule_timeout" }, @@ -56,13 +54,13 @@ __used __section("__versions") = { { 0xd9a5ea54, "__init_waitqueue_head" }, { 0x2546aa39, "usb_find_interface" }, { 0x3eeb2322, "__wake_up" }, + { 0x5b8239ca, "__x86_return_thunk" }, { 0x37a0cba, "kfree" }, { 0x3213f038, "mutex_unlock" }, { 0x30350852, "usb_driver_release_interface" }, { 0xe6e002cf, "_dev_info" }, { 0x4dfa8d4b, "mutex_lock" }, { 0x665cdc8a, "usb_deregister_dev" }, - { 0x5b8239ca, "__x86_return_thunk" }, { 0x92997ed8, "_printk" }, { 0xbdfb6dbb, "__fentry__" }, }; @@ -73,4 +71,4 @@ MODULE_ALIAS("usb:v04C5p1084d*dc*dsc*dp*ic*isc*ip*in*"); MODULE_ALIAS("usb:v04C5p125Ad*dc*dsc*dp*ic*isc*ip*in*"); MODULE_ALIAS("usb:v04C5p1526d*dc*dsc*dp*ic*isc*ip*in*"); -MODULE_INFO(srcversion, "808114ED83ED71E3194151A"); +MODULE_INFO(srcversion, "58936B95B19315871CE1C0D"); diff --git a/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.mod.o b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.mod.o index 68eaea12a..b99c1980f 100644 Binary files a/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.mod.o and b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.mod.o differ diff --git a/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.o b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.o index 7d01f0d8a..4f1bff360 100644 Binary files a/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.o and b/modules/fjpalmvein/C/fjpalmvein-main/fjveincam.o differ diff --git a/modules/fjpalmvein/C/fjpalmvein-main/foo b/modules/fjpalmvein/C/fjpalmvein-main/foo index f33026d40..48c24b52b 100644 --- a/modules/fjpalmvein/C/fjpalmvein-main/foo +++ b/modules/fjpalmvein/C/fjpalmvein-main/foo @@ -74,7 +74,8 @@ virgil version: 21.101.1 serial: Unknown slot: AM4 - size: 3693MHz + size: 3492MHz + capacity: 3500MHz width: 64 bits clock: 100MHz capabilities: lm fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp x86-64 constant_tsc rep_good acc_power nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb bpext ptsc mwaitx cpb hw_pstate ssbd vmmcall fsgsbase bmi1 avx2 smep bmi2 xsaveopt arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov cpufreq @@ -177,6 +178,15 @@ virgil capabilities: usb-2.00 bidirectional configuration: driver=usblp maxpower=2mA speed=480Mbit/s *-usb:1 + description: Generic USB device + product: FUJITSU PalmSecure-F Pro + vendor: FUJITSU + physical id: 2 + bus info: usb@4:2 + version: 2.00 + capabilities: usb-2.00 + configuration: driver=fjveincam maxpower=480mA speed=480Mbit/s + *-usb:2 description: USB hub product: USB 2.0 Hub vendor: Terminus Technology Inc. 
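The lshw hunk above adds the PalmSecure-F Pro on a direct port (bus info usb@4:2), and the hunk below removes the old entry under the Terminus hub (usb@4:6.1); in both, driver=fjveincam is bound. A quick user-space check that the device node exists and carries the USB misc-device major the driver comments mention (major=180) might look like the following; the node path /dev/usb/fjveincam0 follows the driver's class name pattern and is an assumption:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>

    int main(void)
    {
        struct stat st;

        if (stat("/dev/usb/fjveincam0", &st) != 0) {   /* assumed node path */
            perror("stat /dev/usb/fjveincam0");
            return 1;
        }
        printf("char device, major=%u minor=%u\n",
               major(st.st_rdev), minor(st.st_rdev));
        return 0;
    }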
@@ -185,16 +195,7 @@ virgil version: 1.11 capabilities: usb-2.00 configuration: driver=hub maxpower=100mA slots=4 speed=480Mbit/s - *-usb:0 - description: Generic USB device - product: FUJITSU PalmSecure-F Pro - vendor: FUJITSU - physical id: 1 - bus info: usb@4:6.1 - version: 2.00 - capabilities: usb-2.00 - configuration: driver=fjveincam maxpower=480mA speed=480Mbit/s - *-usb:1 + *-usb description: Mouse product: USB Receiver vendor: Logitech @@ -210,7 +211,7 @@ virgil logical name: /dev/input/event7 logical name: /dev/input/mouse1 capabilities: usb - *-usb:2 + *-usb:3 description: Bluetooth wireless interface product: Bluetooth Radio vendor: Realtek diff --git a/modules/fjpalmvein/C/fjpalmvein-main/trace.log b/modules/fjpalmvein/C/fjpalmvein-main/trace.log new file mode 100644 index 000000000..777254edd --- /dev/null +++ b/modules/fjpalmvein/C/fjpalmvein-main/trace.log @@ -0,0 +1,46 @@ +execve("./drivertest", ["./drivertest", "2"], 0x7fff1b858518 /* 18 vars */) = 0 +brk(NULL) = 0x564c7cc47000 +arch_prctl(0x3001 /* ARCH_??? */, 0x7ffdb60bebb0) = -1 EINVAL (Invalid argument) +mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f5d63ff9000 +access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) +openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 +newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=106883, ...}, AT_EMPTY_PATH) = 0 +mmap(NULL, 106883, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f5d63f9c000 +close(3) = 0 +openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 +read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\237\2\0\0\0\0\0"..., 832) = 832 +pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784 +pread64(3, "\4\0\0\0 \0\0\0\5\0\0\0GNU\0\2\0\0\300\4\0\0\0\3\0\0\0\0\0\0\0"..., 48, 848) = 48 +pread64(3, "\4\0\0\0\24\0\0\0\3\0\0\0GNU\0i8\235HZ\227\223\333\350s\360\352,\223\340."..., 68, 896) = 68 +newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=2216304, ...}, AT_EMPTY_PATH) = 0 +pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784 +mmap(NULL, 2260560, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f5d63d74000 +mmap(0x7f5d63d9c000, 1658880, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x28000) = 0x7f5d63d9c000 +mmap(0x7f5d63f31000, 360448, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bd000) = 0x7f5d63f31000 +mmap(0x7f5d63f89000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x214000) = 0x7f5d63f89000 +mmap(0x7f5d63f8f000, 52816, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f5d63f8f000 +close(3) = 0 +mmap(NULL, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f5d63fb9000 +arch_prctl(ARCH_SET_FS, 0x7f5d63fb9740) = 0 +set_tid_address(0x7f5d63fb9a10) = 6850 +set_robust_list(0x7f5d63fb9a20, 24) = 0 +rseq(0x7f5d63fba0e0, 0x20, 0, 0x53053053) = 0 +mprotect(0x7f5d63f89000, 16384, PROT_READ) = 0 +mprotect(0x564c7be7b000, 4096, PROT_READ) = 0 +mprotect(0x7f5d63ff4000, 8192, PROT_READ) = 0 +prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0 +munmap(0x7f5d63f9c000, 106883) = 0 +newfstatat(1, "", {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x4), ...}, AT_EMPTY_PATH) = 0 +getrandom("\xf8\x81\x10\x55\xbd\x94\x86\xa0", 8, GRND_NONBLOCK) = 8 +brk(NULL) = 0x564c7cc47000 +brk(0x564c7cc68000) = 0x564c7cc68000 +write(1, "#1 /dev/usb/fjveincam2\n", 23) = 23 +openat(AT_FDCWD, "/dev/usb/fjveincam2", O_RDWR) = -1 ENOENT (No 
such file or directory) +dup(2) = 3 +fcntl(3, F_GETFL) = 0x2 (flags O_RDWR) +newfstatat(3, "", {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x4), ...}, AT_EMPTY_PATH) = 0 +write(3, "open: No such file or directory\n", 32) = 32 +close(3) = 0 +write(1, "Failed to open USB device /dev/u"..., 50) = 50 +exit_group(-1) = ? ++++ exited with 255 +++ diff --git a/modules/fjpalmvein/C/fjpalmvein-main/working.tgz b/modules/fjpalmvein/C/fjpalmvein-main/working.tgz new file mode 100644 index 000000000..23557e473 Binary files /dev/null and b/modules/fjpalmvein/C/fjpalmvein-main/working.tgz differ diff --git a/modules/fjpalmvein/C/silhouette.bmp b/modules/fjpalmvein/C/silhouette.bmp new file mode 100644 index 000000000..53244f218 Binary files /dev/null and b/modules/fjpalmvein/C/silhouette.bmp differ
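The trace.log capture above shows drivertest 2 failing with ENOENT because only lower-numbered fjveincam nodes exist. A minimal probe in the spirit of drivertest could open the node and query the driver's INFO ioctl; this sketch assumes the driver's fjveincam.h header (which the source includes for USB_FJVEINCAM_IOCTL_INFO and struct fjveincam_info) is also usable from user space:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include "fjveincam.h"   /* assumed to define the ioctl numbers and struct fjveincam_info */

    int main(int argc, char **argv)
    {
        char path[64];
        struct fjveincam_info info;
        int fd;

        /* pick the node by number, defaulting to 0, like drivertest's argv[1] */
        snprintf(path, sizeof(path), "/dev/usb/fjveincam%s", argc > 1 ? argv[1] : "0");
        fd = open(path, O_RDWR);
        if (fd < 0) {
            perror(path);    /* the strace above ends here with ENOENT */
            return 1;
        }
        if (ioctl(fd, USB_FJVEINCAM_IOCTL_INFO, &info) == 0)
            printf("minor=%d version=%s r_error=%d w_error=%d\n",
                   info.minor, info.version, info.r_error, info.w_error);
        close(fd);
        return 0;
    }

Note that the fjveincam.c.carl debug build shown earlier short-circuits open() with -ENODEV and the ioctl handler with -99, so this probe only gets useful answers from the non-debug driver.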