Compare commits

No commits in common. "3c4cbfd300f98c17a9c969d65cf82e1f8653ccaf" and "862d563f08b6740e018b1de133d98e0739c3c8b0" have entirely different histories.

10 changed files with 216 additions and 420 deletions

View File

@@ -1,113 +0,0 @@
# Yoloserv 2026 Project
## Project Objectives
### Repo too large for GitLab / Streamline the repo for deployment
### Move camera_stream from core to YOLO
#### Problems with camera_stream
Camera stream is a process that provides a live video feed to the UI. It has led to better success with the face matching process and has allowed for a more seamless user experience. Right now the camera_stream functionality is in the CORE repo and runs continuously in the background. There are a few issues with this:
- It occupies and holds the camera, which causes issues with other applications that use core but not the camera_stream.
- It uses a full CPU core to run, which is not maintainable as our applications scale.
#### Solution
The solution was to move the camera_stream functionality to the YOLO repo and make it a UKDI variable that can be enabled or disabled. However, this failed as it added too much delay and lag when loading the stream and moving frames from YOLO to core and back to YOLO. The decision was made to keep the camera_stream in core for now but init it from a UKDI var, as sketched below.
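A minimal sketch of that gating, assuming the flag is called `camera_stream` and lives in the `/etc/ukdi.json` config referenced elsewhere in this repo (both the key name and the startup hook are assumptions):

```python
import json

def open_stream():
    # Stand-in for the real camera initialisation in core.
    print("starting camera stream")

def start_camera_stream_if_enabled(conf_path="/etc/ukdi.json"):
    # Read the UKDI config and only hold the camera when the
    # (assumed) "camera_stream" flag is enabled.
    with open(conf_path) as f:
        conf = json.load(f)
    if conf.get("camera_stream", False):
        open_stream()
    else:
        print("camera_stream disabled in UKDI config; camera left free")
```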
#### Flow Diagram
The flow has been broken into two parts to show how the camera stream moves.
```mermaid
flowchart LR
UI[UI]
CORE[Core]
YOLO[YOLO]
CAM[Camera]
UI --> | Open Camera Stream | CORE
CORE --> | Open Camera Stream | YOLO
YOLO --> | Open Camera Stream | CAM
CAM --> | Frames | YOLO
YOLO --> | Frames | CORE
```
The flow below is implemented in the method `cam_livefeed` in the file `yoloserv.py`; a sketch of the approach follows the diagram.
```mermaid
flowchart LR
UI[UI]
CORE[CORE]
CAM[Camera]
UI --> | Open Camera Stream | CORE
CORE --> | Open Camera Stream | CAM
CAM --> | Frames | CORE
CORE --> | Camera Stream | UI
```
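As a rough sketch of that approach, assuming core serves the camera as a multipart MJPEG stream with CherryPy and OpenCV (the repo's handlers use `@cherrypy.expose`, but everything here apart from the method name `cam_livefeed` is illustrative):

```python
import cherrypy
import cv2

class Core(object):
    @cherrypy.expose
    def cam_livefeed(self):
        # Serve camera frames as a multipart JPEG stream so the UI can
        # render a live feed. Device index and encoding are assumptions.
        cherrypy.response.headers["Content-Type"] = (
            "multipart/x-mixed-replace; boundary=frame")

        def frames():
            cap = cv2.VideoCapture(0)   # core owns the camera directly
            try:
                while True:
                    ok, frame = cap.read()
                    if not ok:
                        break
                    ok, jpg = cv2.imencode(".jpg", frame)
                    if ok:
                        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                               + jpg.tobytes() + b"\r\n")
            finally:
                cap.release()
        return frames()
    cam_livefeed._cp_config = {"response.stream": True}
```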
### Yoloserv pipeline
```mermaid
sequenceDiagram
participant UI
participant CORE
participant CAM
participant YOLO
participant para
rect rgba(200, 220, 255, 0.35)
UI->>CORE: pic_still
CORE->>CAM: capture still
CAM-->>CORE: still image
CORE-->>UI: still image
end
rect rgba(220, 255, 220, 0.35)
UI->>CORE: facematch (regula)
CORE->>YOLO: facematch
YOLO->>para: facematch
para-->>YOLO: result
YOLO-->>CORE: result
CORE-->>UI: result
end
```
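The facematch leg of this pipeline is exposed by YOLO through the CherryPy handler `svc_match_faces` (visible in the `yoloserv.py` diff below). Here is a sketch of how core might call it over HTTP; the host, port and relative file names are assumptions, echoing the test paths used elsewhere in this diff:

```python
import json
import urllib.parse
import urllib.request

def facematch(yolo="http://localhost:8081"):
    # Parameter names follow the svc_match_faces signature in yoloserv.py;
    # the endpoint host/port and the file names are assumptions.
    query = urllib.parse.urlencode({
        "dev1": "localcam", "fil1": "localcam.jpg",   "scl1s": "0",
        "dev2": "regula",   "fil2": "Portrait_0.jpg", "scl2s": "0.25",
    })
    url = "%s/svc_match_faces?%s" % (yolo, query)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())
```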
### Implement 2D detection
Dispension faced the problem of needing a detection model in place for fully autonomous operation. To resolve this we implemented a 2D detection model that can be used to detect faces in a video stream, using Paravision's 2D detection model. Refer to para.md for more information on installing models.
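The calls involved are all visible in the `para.py` diff later in this comparison; as a minimal sketch under those imports (the image path is an assumption and error handling is omitted):

```python
from paravision.recognition.sdk import SDK
from paravision.recognition.types import Settings, ValidnessCheck
from paravision.liveness2d import SDK as Liveness2DSDK
from paravision.liveness2d.types import (
    Settings as Liveness2DSettings,
    ValidnessSettings,
)
import paravision.recognition.utils as pru

# Configure recognition with the 2D liveness check enabled,
# mirroring Paravision.init() in para.py.
settings = Settings()
settings.detection_model = "default"
settings.validness_checks = ValidnessCheck.LIVENESS
sdk = SDK(settings=settings)
liveness2d_sdk = Liveness2DSDK(settings=Liveness2DSettings())

# Detect the most prominent face in one frame (path assumed).
img = pru.load_image("/tmp/localcam.jpg")
inference = sdk.get_faces(
    [img], qualities=True, landmarks=True, embeddings=True
).image_inferences[0]
face = inference.faces[inference.most_prominent_face_index()]

# Validness and liveness checks, as in Paravision.validness().
print(liveness2d_sdk.check_validness(face, ValidnessSettings(face), sdk))
print(liveness2d_sdk.get_liveness(face))
```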
Here are the results from a live user test:
```bash
<ValidnessResult (isValid=0, sharpness=-1.000000, quality=-1.000000, acceptability=-1.000000, frontality=-1.000000, mask=-1.000000, image_avg_pixel=115.375671, image_bright_pixel_pct=0.145549, image_dark_pixel_pct=0.030642, face_height_pct=0.318821, face_width_pct=0.208163, face_width=133.224304, face_height=153.033981, face_roll_angle=3.102657, face_positions=[0.377501,-0.001564,0.585664,0.317257])>
HERE IS THE LIVENESS RESULT <LivenessResult (liveProbability=0.999998, spoofProbability=0.000002)>
HERE IS THE VALIDNESS RESULT <ValidnessResult (isValid=0, sharpness=0.023801, quality=-1.000000, acceptability=-1.000000, frontality=97.000000, mask=0.008516, image_avg_pixel=58.403454, image_bright_pixel_pct=0.000000, image_dark_pixel_pct=0.000000, face_height_pct=0.551221, face_width_pct=0.579525, face_width=373.213989, face_height=446.488739, face_roll_angle=0.489146, face_positions=[0.182151,0.242310,0.761676,0.793531])>
HERE IS THE LIVENESS RESULT <LivenessResult (liveProbability=0.057056, spoofProbability=0.830750)>
```
## Improvements
Ideally we should be using the first frame to complete the facematch process rather than only for rendering. This would speed things up by a few seconds but is something for the future; a sketch of the idea follows.
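A sketch of that idea with hypothetical names: hand the first decoded frame to the matcher on a background thread while the stream keeps rendering.

```python
import threading

def stream_with_early_match(frames, match):
    # Hypothetical: reuse the first frame for the facematch instead of
    # waiting for a separate still capture later in the flow.
    first = next(frames)
    threading.Thread(target=match, args=(first,), daemon=True).start()
    yield first              # the first frame is still rendered
    for frame in frames:
        yield frame
```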

View File

@@ -1,37 +0,0 @@
# Gitea Setup
## Table of Contents
- [Gitea Setup](#gitea-setup)
- [Table of Contents](#table-of-contents)
- [Installation](#installation)
- [Configuration](#configuration)
- [Usage](#usage)
- [Troubleshooting](#troubleshooting)
The problem we faced is that yoloserv was too big to be hosted on GitLab. GitLab would also suspend the repo if bills were not paid and user seats were maxed. Obviously this was not a viable solution. We explored self-hosting without an interface just to have repos, but when I read that Gitea did not need Docker and could be run as a system service, I tried it out.
## Setting up the Server
First step was setting up the server to run Gitea. I scanned my local network with `arp-scan --localnet` to find the IP address of the server. I could not find my server running, so I had to reset the DHCP client: use `nmcli connection show` to find the connection name, then `nmcli connection down <connection name>` and `nmcli connection up <connection name>` to cycle it. I was then able to successfully `ping 8.8.8.8` to confirm the server was online.
## Installation
I remoted onto the server and installed Gitea with the following commands:
Ran `free -h` to check memory and `df -h` to check disk space. The recommendation was at least 8 GB of RAM and disk space of about 2.5× the repo size; these seem a bit much.
Depending on what distro you run, commands may vary, but I:
- Updated and upgraded the system
- Installed git, openssh-server, ufw and fail2ban
- Enabled ufw and allowed ssh and port 3000 (consolidated in the sketch below)
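A minimal consolidated sketch, assuming an apt-based distro (swap the package manager for yours):

```bash
# Mirrors the steps above: update, install the basics, open the firewall.
sudo apt update && sudo apt upgrade -y
sudo apt install -y git openssh-server ufw fail2ban
sudo ufw enable
sudo ufw allow ssh     # keep remote access open
sudo ufw allow 3000    # Gitea's default web port
```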
### SSH Hardening
SSH can be hardened to keys-only authentication; this should be done and will be a future addition. The change is made in `/etc/ssh/sshd_config`; run `sudo systemctl restart sshd` to apply it.
```bash
PasswordAuthentication no
PermitRootLogin no
```
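Before disabling password logins, make sure a key is already in place (a sketch; the host alias is an assumption):

```bash
# Generate a key locally and copy it to the (hypothetical) server alias.
ssh-keygen -t ed25519
ssh-copy-id user@gitea-server
```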

Binary file not shown.

View File

@@ -101,7 +101,6 @@ do
PYP="$PYP:$PYP_DEEPFACE"
;;
"regula") ;;
"traffic") ;;
"camera") ;;
*) echo "yoloserv does not implement backend $i. Edit /etc/ukdi.json::yolo_devices and try again."
exit 1

View File

@@ -80,9 +80,10 @@ class Deepfacex(FaceClass):
return json.dumps(verification)
def analyze(self,name):
result = DeepFace.analyze(self.imgs["name"], actions=['age', 'gender', 'emotion', 'race'], enforce_detection=False)
return result
def metadata(self,name):
f1 = "/tmp/%s.png" % (name)
metadata = DeepFace.analyze(img_path = f1, actions = ["age", "gender", "emotion", "race"])
return json.dumps(metadata)
#

View File

@@ -38,11 +38,8 @@ import sys
import os
import face_recognition
import json
import numpy as np
import urllib.request
from faceclass import FaceClass
from keras.models import load_model
import time
#
@@ -62,12 +59,8 @@ class FaceRecognition(FaceClass):
#def init():
#model = load_model("./emotion_detector_models/model.hdf5")
#def prep_detectors(self):
# @doc find all the faces in the named image.
# The detectors tend to return proprietary formats, so this "detect" method
# is going to be new for each implementation of FaceClass
# @doc find all the faces in the named image
def detect(self, name):
self.tree["detectms"] = time.time()
boxes = []
@@ -89,25 +82,24 @@ class FaceRecognition(FaceClass):
return '{ "status":0, "remark":"OK", "faces":%d, "boxes":%s, "time":%d }' % (len(self.boxes), json.dumps(self.boxes), self.tree["detectms"] )
# @doc find the landmarks of the given face (eyes, mouth, nose etc)
# @doc find the landmarks of the given face
def landmarks(self, name):
landmarks = face_recognition.face_landmarks(self.imgs[name])
return '{ "status":0, "remark":"OK", "landmarks":%s }' % json.dumps(landmarks)
# @doc find the metadata of the given face (emotion, age, gender, race)
def metadata(self, npimg):
model = load_model("models/emotion_model.hdf5")
print(time.time())
# @doc find the metadata of the given face
def metadata(self, name):
emotion_dict= {'Angry': 0, 'Sad': 5, 'Neutral': 4, 'Disgust': 1, 'Surprise': 6, 'Fear': 2, 'Happy': 3}
im = cv2.resize(npimg, (64, 64))
im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
im = np.reshape(im, [1, im.shape[0], im.shape[1], 1])
predicted_class = np.argmax(model.predict(im))
face_image = cv2.imread("..test_images/39.jpg")
face_image = cv2.resize(face_image, (48,48))
face_image = cv2.cvtColor(face_image, cv2.COLOR_BGR2GRAY)
face_image = np.reshape(face_image, [1, face_image.shape[0], face_image.shape[1], 1])
model = load_model("./emotion_detector_models/model.hdf5")
predicted_class = np.argmax(model.predict(face_image))
label_map = dict((v,k) for k,v in emotion_dict.items())
predicted_label = label_map[predicted_class]
print(time.time())
return predicted_label
return '{ "status":88324, "remark":"override this", "landmarks":%s }' % json.dumps(landmarks)
# @doc compare two named images, previously loaded
@@ -158,8 +150,8 @@ if __name__ == '__main__':
d = FaceRecognition()
if sys.argv[1]=="regula":
jsonstr = d.crowd_vs_govid("pic1", "/tmp/localcam.jpg", 0, "pic2", "/tmp/regula/Portrait_0.jpg", 0.25)
if sys.argv[1]=="kiosk":
jsonstr = d.crowd_vs_govid("pic1", "testimg/ox.jpg", 0, "pic2", "testimg/ox_govid.jpg", 0.25)
print(jsonstr)
if sys.argv[1]=="messi":
@@ -181,11 +173,8 @@ if __name__ == '__main__':
d.stats("messi2_crop")
d.compare("messi4_crop","messi2_rect")
if sys.argv[1]=="traffic":
jsonstr = d.traffic("traffic4", "testimg/messi4.jpg")
print(jsonstr)
if sys.argv[1]=="group":
if sys.argv[1]=="crowd":
jsonstr = d.crowd_vs_govid("pic1", "testimg/messi4.jpg", 0, "pic2", "testimg/messi2.jpg", 0)
print(jsonstr)
@@ -195,3 +184,17 @@ if __name__ == '__main__':
if sys.argv[1]=="match":
# lfw
n=0
print("LFW Matching")
for lfw in sorted(os.listdir("lfw/0001")):
d.load2("localcam","regula", "lfw/0001" + lfw, "lfw/0002/" + lfw)
d.get_faces()
d.compute_scores()
print(d.get_scores())
print(d.get_landmarks())
n+=1
if n > 1:
sys.exit(0)

View File

@@ -15,24 +15,13 @@ class FaceClass(object):
model = None
imgs = {}
encs = {}
crops = {}
faces = {}
visual = 0
files = {}
boxes = []
jsonx = ""
errstr = ""
vidcap = None
tree = { "device1":"NA", "device2":"NA", "threshold":380, "device1_qual":0.5, "device2_qual":0.5, "score":0, "detectms":0, "comparems":0 }
def json2obj(self,jsonx):
print(jsonx)
self.jsonx = jsonx
return json.loads(jsonx)
# # # # #####
# ## # # #
# # # # # #
@@ -93,14 +82,11 @@ class FaceClass(object):
return '{ "status":88241, "remark":"override this!" }'
# @doc find the biggest face in the named image
# Image has origin at top left and values are b = [ X1, Y1, X2, Y2 ]
def ideal(self, name, rectname, cropname):
found = -1
biggest = -1
self.crops[name] = []
print("** faceclass::ideal ... %s with %d boxes => %s + %s" % (name, len(self.boxes), rectname, cropname ))
# resize boxes
for i in range(len(self.boxes)):
@@ -115,7 +101,6 @@ class FaceClass(object):
b = self.boxes[found]
# extract crops and highlights - colours are BGR
self.imgs[cropname] = self.crop(name,b[0],b[1],b[2],b[3])
self.crops[name].append(self.imgs[cropname])
self.imgs[rectname] = deepcopy(self.imgs[name])
#print(self.imgs[name])
#print(self.imgs[rectname])
@@ -146,36 +131,36 @@ class FaceClass(object):
return '{ "status":88245, "remark":"override this!" }'
# @doc This does everything for you.
# If you are smartserv, "crowd" means cam and "govid" means regula pic
def crowd_vs_govid(self, name1,file1,scale1, name2,file2,scale2):
def crowd_vs_govid(self, name1,file1,scale1str, name2,file2,scale2str):
print("##1##")
if self.json2obj(self.load1(name1, file1))["status"] != 0:
return self.jsonx
scale1 = float(scale1str)
scale2 = float(scale2str)
self.load1(name1, file1)
if scale1 !=0:
self.shrink(name1,scale1)
if self.json2obj(self.detect(name1))["status"] != 0:
return self.jsonx
jsonstr = self.detect(name1)
if json.loads(jsonstr)["status"]!=0:
return jsonstr
self.boxscale(name1,0.3)
if self.json2obj(self.ideal(name1,name1+"_rect",name1+"_crop"))["status"] != 0:
return self.jsonx
self.ideal(name1,name1+"_rect",name1+"_crop")
self.save(name1,"/tmp")
self.save(name1+"_rect","/tmp")
self.save(name1+"_crop","/tmp")
print(self.imgs.keys())
print("##2##")
if self.json2obj(self.load1(name2, file2))["status"] != 0:
return self.jsonx
self.load1(name2, file2)
if scale2 !=0:
self.shrink(name2,scale2)
self.save(name2,"/tmp")
if self.json2obj(self.detect(name2))["status"]!=0:
return self.jsonx
jsonstr = self.detect(name2)
if json.loads(jsonstr)["status"]!=0:
return jsonstr
self.boxscale(name2,0.3)
if self.json2obj(self.ideal(name2,name2+"_rect",name2+"_crop"))["status"] != 0:
return self.jsonx
self.ideal(name2,name2+"_rect",name2+"_crop")
self.save(name2,"/tmp")
self.save(name2+"_rect","/tmp")
self.save(name2+"_crop","/tmp")
@@ -184,29 +169,6 @@ class FaceClass(object):
print("##R##")
jsonstr = self.compare(name1+"_crop",name2+"_crop")
print(jsonstr)
return jsonstr
# @doc This does demographic examination on a pic.
# If you are smartserv, "crowd" means cam and "govid" means regula pic
def traffic(self, name, file, scale=0):
print("##1##")
jsons = []
if self.json2obj(self.load1(name, file))["status"] != 0:
return self.jsonx
if scale !=0:
self.shrink(name,scale)
if self.json2obj(self.detect(name))["status"] != 0:
return self.jsonx
for i in range(len(self.boxes)):
b = self.boxes[i]
print(">>>>" , b)
analysis = self.metadata(self.imgs[name][b[0]:b[0]+b[1],b[2]:b[2]+b[3]])
jsons.append(analysis)
print(json.dumps(jsons))
return jsonx
###### ##### # #####
@@ -226,10 +188,10 @@ class FaceClass(object):
def rebox(self,x1,y1,x2,y2,shape,scale=0.2):
print("!!!!!!1 rebox with shape ",shape)
xx1 = x1 - int((x2-x1)*scale*0.8)
xx2 = x2 + int((x2-x1)*scale*0.8)
yy1 = y1 - int((y2-y1)*scale*1.3)
yy2 = y2 + int((y2-y1)*scale*1.3)
xx1 = x1 - int((x2-x1)*scale)
xx2 = x2 + int((x2-x1)*scale)
yy1 = y1 - int((y2-y1)*scale)
yy2 = y2 + int((y2-y1)*scale)
if xx1 < 0:
xx1 = 0
if yy1 < 0:
@@ -252,14 +214,11 @@ class FaceClass(object):
# @doc crop an image, allowing a gutter.
def shrink(self, name, skale=0.5):
print ("shrinking ",name,skale)
if skale == 0:
return
print ("shrinking ",name)
self.imgs[name] = cv2.resize(self.imgs[name],None,fx=skale,fy=skale)
##### ###### ##### # # ##### # #
# # # # # # # # ## #
# # ##### # # # # # # # #
@@ -267,7 +226,7 @@
# # # # # # # # # ##
# # ###### # #### # # # #
def scores(self):
def get_scores(self):
return json.dumps(self.tree)
# return a base64 version of the pic in memory

View File

@@ -1,207 +1,199 @@
from paravision.recognition.sdk import SDK
from paravision.recognition.types import Settings, ValidnessCheck
from paravision.recognition.exceptions import ParavisionException
from paravision.liveness2d import SDK as Liveness2DSDK
from paravision.liveness2d.types import (
Settings as Liveness2DSettings,
ValidnessSettings,
)
import paravision.recognition.utils as pru
#
# Paravision based face matcher
#
import json
import sys
import os
### export PYTHONPATH=/wherever/yoloserv/modules ... as long as "paravision/.../" is in there
from paravision.recognition.exceptions import ParavisionException
from paravision.recognition.engine import Engine
from paravision.recognition.sdk import SDK
import paravision.recognition.utils as pru
#from openvino.inference_engine import Engineq
#from deepface.basemodels import VGGFace, OpenFace, Facenet, FbDeepFace, DeepID
from faceclass import FaceClass
class Paravision(FaceClass):
models = {}
scores = {}
quality = {}
sdk = None
match_score = 0
def init(self):
def init(self,backend=None,model=None):
print("@@@ initialising paravision")
try:
print("INIT FROM PARAVISION")
settings = Settings()
# 1 is the default
settings.worker_count = 4
settings.detection_model = "default"
settings.validness_checks = ValidnessCheck.LIVENESS
self.sdk = SDK(settings=settings)
liveness2d_settings = Liveness2DSettings()
self.liveness2d_sdk = Liveness2DSDK(settings=liveness2d_settings)
except ParavisionException as e:
# error handling logic
print("Exception:", e)
self.sdk = SDK(engine=Engine.AUTO)
except ParavisionException:
pass
# @doc Load a pic using the device label
def load1(self, name, fname):
print(" Loading image '%s' from file %s" % (name, fname))
if not os.path.isfile(fname):
print(" * file not found: %s" % (fname))
return (
'{ "status":442565, "remark":"file name not found", "guilty_param":"fname", "guilty_value":"%s" }'
% (fname)
)
self.files[name] = fname
self.imgs[name] = pru.load_image(fname)
print(" Loaded %s from file %s" % (name, fname))
return '{ "status":0, "remark":"OK", "name":"%s", "fname":"%s" }' % (
name,
fname,
)
# @doc find all the faces in the named image
def detect(self, name):
boxes = []
self.boxes = []
print("** face_recognition::detect ... %s" % name)
try:
# Get all faces from images with qualities, landmarks, and embeddings
faces = self.sdk.get_faces(
[self.imgs[name]], qualities=True, landmarks=True, embeddings=True
)
print("HERE IS THE FACES %s" % faces)
inferences = faces.image_inferences
print("HERE IS THE INFERENCE %s" % inferences)
ix = inferences[0].most_prominent_face_index()
face = inferences[0].faces[ix]
print("HERE IS THE FACE %s" % face)
self.models[name] = inferences[0].faces[ix].embedding
self.quality[name] = round(1000 * inferences[0].faces[ix].quality)
self.boxes = [(0, 0, 0, 0)]
res = self.sdk.get_faces([self.imgs[name]], qualities=True, landmarks=True, embeddings=True)
boxes = res.faces
print(boxes)
for a in boxes:
print(a)
# box is somehow = y1 / x2 / y2 / x1 for face_recognition .
# =
# Thats crazy lol. We need to fix that to x1, y1, x2, y2 with origin at top left
for b in boxes:
self.boxes.append((b[3],b[0],b[1],b[2]))
print("found %d boxes for %s" % (len(self.boxes), name) )
except Exception as ex:
self.errstr = "image processing exception at get_faces: " + str(ex)
return (
'{ "status":222310, "remark":"image processing exception", "guilty_param":"error", "guilty_value":"%s" }'
% str(ex)
)
return '{ "status":0, "remark":"OK", "faces":%d, "boxes":%s }' % (
len(self.boxes),
json.dumps(self.boxes),
)
self.errstr = "image processing exception at get_faces: "+str(ex)
return '{ "status":222310, "remark":"image processing exception", "guilty_param":"error", "guilty_value":"%s" }' % str(ex)
return '{ "status":0, "remark":"OK", "faces":%d, "boxes":%s }' % (len(self.boxes), json.dumps(self.boxes))
def detect_liveness(self, name):
self.boxes = []
print("** face_recognition::detect ... %s" % name)
# Assess the face that was read in
def processiii(self):
# Get all faces metadata
print("Finding faces in %s" %(self.imgpath))
faces = self.sdk.get_faces([self.image], qualities=True, landmarks=True, embeddings=True)
print("Getting metadata")
inferences = faces.image_inferences
print("Getting best face")
ix = inferences[0].most_prominent_face_index()
print("Getting a mathematical mode of that best face")
self.model = inferences[0].faces[ix].embedding
print("Getting image quality scores..")
self.score = round(1000*inferences[0].faces[ix].quality)
print("Score was %d" %(self.score))
return self.score
# Compare to a face in another Facematch instance
def compare(self,other):
# Get face match score
return self.sdk.get_match_score(self.model, other.model)
#mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm
#
def load(self, dev1, dev2, id_image_filepath, photo_image_filepath):
print("## loading images", dev1, dev2)
self.dev1 = dev1
self.dev2 = dev2
try:
self.id_image = pru.load_image(id_image_filepath)
except Exception as e:
return "id image loading failed ", e
try:
self.photo_image = pru.load_image(photo_image_filepath)
except Exception as e:
return "client image loading failed ", e
return None
def get_faces(self):
try:
# Get all faces from images with qualities, landmarks, and embeddings
faces = self.sdk.get_faces(
[self.imgs[name]], qualities=True, landmarks=True, embeddings=True
)
print("HERE IS THE FACES %s" % faces)
inferences = faces.image_inferences
print("HERE IS THE INFERENCE %s" % inferences)
ix = inferences[0].most_prominent_face_index()
face = inferences[0].faces[ix]
print("HERE IS THE FACE %s" % face)
liveness_result = self.validness(face, self.sdk)
live_prob = getattr(liveness_result, "live_prob", None)
print("HERE IS THE LIVE PROB %s" % live_prob)
if live_prob < 0.5:
return (
'{ "status":222310, "remark":"image processing exception", "guilty_param":"error", "guilty_value":"%s" }'
% live_prob
)
self.models[name] = inferences[0].faces[ix].embedding
self.quality[name] = round(1000 * inferences[0].faces[ix].quality)
self.boxes = [(0, 0, 0, 0)]
print("Finding faces...")
self.inference_result = self.sdk.get_faces([self.id_image, self.photo_image], qualities=True, landmarks=True, embeddings=True)
print("Inferences...")
self.image_inference_result = self.inference_result.image_inferences
if len(self.image_inference_result)==0:
return "no inferences found"
# Get most prominent face
print("Most prominent...")
self.id_face = self.image_inference_result[0].most_prominent_face_index()
self.photo_face = self.image_inference_result[1].most_prominent_face_index()
if self.id_face<0:
return "no id face found"
if self.photo_face<0:
return "no live face found"
# Get numerical representation of faces (required for face match)
print("stats...")
if (len(self.image_inference_result)<2):
return "ID or human face could not be recognised"
self.id_emb = self.image_inference_result[0].faces[self.id_face].embedding
self.photo_emb = self.image_inference_result[1].faces[self.photo_face].embedding
except Exception as ex:
self.errstr = "image processing exception at get_faces: " + str(ex)
return (
'{ "status":222310, "remark":"image processing exception", "guilty_param":"error", "guilty_value":"%s" }'
% str(ex)
)
return '{ "status":0, "remark":"OK", "faces":%d, "boxes":%s }' % (
len(self.boxes),
json.dumps(self.boxes),
)
return "image processing exception "+str(ex)
# @doc This does everything for you.
# If you are smartserv, "crowd" means cam and "govid" means regula pic
def crowd_vs_govid(self, name1, file1, scale1, name2, file2, scale2):
print("##1## DETECTING FROM IMG")
if self.json2obj(self.load1(name1, file1))["status"] != 0:
return self.jsonx
if self.json2obj(self.detect_liveness(name1))["status"] != 0:
return self.jsonx
self.save(name1, "/tmp")
return None
print("##2## DETECTING FROM ID")
if self.json2obj(self.load1(name2, file2))["status"] != 0:
return self.jsonx
self.save(name2, "/tmp")
if self.json2obj(self.detect(name2))["status"] != 0:
return self.jsonx
self.save(name2, "/tmp")
print("##R##")
jsonstr = self.compare(name1, name2)
print(jsonstr)
return jsonstr
# @doc compare two named images, previously loaded
def compare(self, name1, name2):
print("** face_recognition::compare ... %s vs %s" % (name1, name2))
def compute_scores(self):
try:
res = self.sdk.get_match_score(self.models[name1], self.models[name2])
print("Match is ", res)
self.match_score = res
# Get image quality scores (how 'good' a face is)
self.id_qual = self.image_inference_result[0].faces[self.id_face].quality
self.photo_qual = self.image_inference_result[1].faces[self.photo_face].quality
self.id_qual = round(self.id_qual, 3)
self.photo_qual = round(self.photo_qual, 3)
# Get face match score
self.match_score = self.sdk.get_match_score(self.id_emb, self.photo_emb)
# Create .json
self.face_match_json = {"device1":self.dev1,
"device2":self.dev2,
"passmark":500,
"device1_qual":self.id_qual,
"device2_qual":self.photo_qual,
"match_score":self.match_score}
#return json.dumps(self.face_match_json)
#print(self.face_match_json)
# Send to core
#url = "%s/notify/%s/%s" % (self.conf["core"], self.conf["identity"], face_match_json)
#url = url.replace(" ", "%20") # Remove spaces
#buf = []
#req = urllib.request.Request( url )
#with urllib.request.urlopen(req) as response:
#print(response.read())
except Exception as ex:
print("** paravision::compare exception ... " + str(ex))
self.errstr = "image comparison exception at compute_scores: " + str(ex)
return '{ "status":332410, "remark":"%s" }' % self.errstr
return (
'{ "status":0, "threshold": 500, "device1_qual": 0.5, "device2_qual": 0.5, "remark":"OK", "score":%d }'
% self.match_score
)
def validness(self, face, sdk):
validness_settings = ValidnessSettings(face)
validness_result = self.liveness2d_sdk.check_validness(
face, validness_settings, sdk
)
print("HERE IS THE VALIDNESS RESULT %s" % validness_result)
liveness_result = self.liveness2d_sdk.get_liveness(face)
print("HERE IS THE LIVENESS RESULT %s" % liveness_result)
return liveness_result
def scores(self):
return (
'{ "status":0, "threshold": 500, "device1_qual": 0.5, "device2_qual": 0.5, "remark":"OK", "score":%d }'
% self.match_score
)
return str(ex)
if __name__ == "__main__":
def get_scores(self):
return json.dumps(self.face_match_json)
d = Paravisionox()
if __name__ == '__main__':
d = Paravision()
d.init()
if sys.argv[1] == "messia":
if sys.argv[1]=="messia":
jsonstr = d.load1("pic1", "testimg/messi4.jpg")
print(jsonstr)
jsonstr = d.detect("pic1")
print(jsonstr)
if sys.argv[1] == "test":
if sys.argv[1]=="test":
d.load1("pic1", "testimg/ox.jpg")
d.detect("pic1")
if sys.argv[1] == "kiosk":
jsonstr = d.crowd_vs_govid(
"pic1", "testimg/ox.jpg", 0, "pic2", "testimg/ox_govid.jpg", 0.25
)
if sys.argv[1]=="kiosk":
jsonstr = d.crowd_vs_govid("pic1", "testimg/ox.jpg", 0, "pic2", "testimg/ox_govid.jpg", 0.25)
print(jsonstr)
if sys.argv[1] == "messi":
jsonstr = d.crowd_vs_govid(
"pic1", "testimg/messi4.jpg", 0, "pic2", "testimg/messi2.jpg", 0
)
if sys.argv[1]=="messi":
jsonstr = d.crowd_vs_govid("pic1", "testimg/messi4.jpg", 0, "pic2", "testimg/messi2.jpg", 0)
print(jsonstr)
if sys.argv[1] == "maiden":
jsonstr = d.crowd_vs_govid(
"pic1", "testimg/ironmaiden.jpg", 0, "pic2", "testimg/davemurray.jpg", 0
)
if sys.argv[1]=="maiden":
jsonstr = d.crowd_vs_govid("pic1", "testimg/ironmaiden.jpg", 0, "pic2", "testimg/davemurray.jpg", 0)
print(jsonstr)

View File

@@ -32,7 +32,6 @@ class yoloserv(object):
palmdetector = None
facematcher = None
palmmatcher = None
traffic = None
ir_camera = None
devices = []
points = []
@@ -216,13 +215,6 @@ class yoloserv(object):
return self.facematcher.compare(name1,name2)
# Traffic analysis
@cherrypy.expose
def svc_traffic(self,infile=None):
return self.facematcher.traffic(infile)
@cherrypy.expose
def shutdown(self):
@@ -240,15 +232,15 @@ class yoloserv(object):
# Match faces together
@cherrypy.expose
def svc_match_faces(self,dev1,fil1,scl1s,dev2,fil2,scl2s):
scl1 = float(scl1s)
scl2 = float(scl2s)
jsonstr = self.facematcher.crowd_vs_govid(dev1,self.conf["yolo_indir"]+fil1,scl1, dev2,self.conf["yolo_indir"]+fil2,scl2)
def svc_match_faces(self,dev1,fil1,scl1,dev2,fil2,scl2):
jsonstr = self.facematcher.crowd_vs_govid(dev1,self.conf["yolo_indir"]+fil1,scl1, dev2,self.conf["yolo_outdir"]+fil2,scl2)
obj = self.json2obj(jsonstr)
return jsonstr
if obj.status > 0:
return jsonstr
def json2obj(self,jsonx):
return json.loads(jsonx)
return json.laods(jsonx)
# @doc put all the steps for a retail facematch into one convenient functions
@cherrypy.expose
@@ -276,7 +268,7 @@ class yoloserv(object):
jsonstr = self.facematcher.crowd_vs_govid(dev1,fil1,scl1, dev2,fil2,scl2)
obj = self.json2obj(jsonstr)
if obj["status"] > 0:
if obj.status > 0:
return jsonstr
jsonstr = self.facematcher.scores()
return '{ "status":0, "remark":"OK", "data": %s }' % (jsonstr)

Binary file not shown.

(Image file not shown; previous version was 71 KiB.)