Compare commits

..

10 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| ox | 3c4cbfd300 | added docs inital push for ox_Server | 2026-01-06 13:43:11 -04:00 |
| Ox | 3c73c8eb4c | paravision threshold update | 2024-10-23 16:28:15 -03:00 |
| Ox | e583f8add3 | paravision threshold update | 2024-10-23 16:24:19 -03:00 |
| Ox | b7448b2c76 | para editing to pass facematch | 2024-10-23 10:49:56 -03:00 |
| Ox | 46acafd738 | para editing to pass facematch | 2024-10-23 10:48:26 -03:00 |
| carl | 4266448f68 | get paraVISION working again | 2024-10-22 22:25:43 -03:00 |
| Ox | 241372ef00 | reverting changes | 2024-10-21 17:50:31 -03:00 |
| Ox | 0ea955674a | para test | 2024-10-21 17:48:09 -03:00 |
| carl | b69c4f9ecc | new dev towards ad traffic recognition | 2024-09-08 22:24:43 -03:00 |
| carl | 7f2f669e29 | some fettling to get it all to work on smrt08 | 2024-09-05 15:21:46 -03:00 |
10 changed files with 420 additions and 216 deletions

doc/fm_project2026.md Normal file

@@ -0,0 +1,113 @@
# Yoloserv 2026 Project
## Project Objectives
### Repo too large for GitLab / Streamline the repo for deployment
### Move camera_stream from core to YOLO
#### Problems with camera_stream
camera_stream is a process that provides a live video feed to the UI. It has improved the success rate of the face-matching process and allows for a more seamless user experience. Right now the camera_stream functionality lives in the CORE repo and runs continuously in the background. There are a few issues with this:
- It occupies and holds the camera, which causes issues with other applications that use core but not the camera_stream
- It uses a full CPU core to run, which is not maintainable as our applications scale.
#### Solution
The planned solution was to move the camera_stream functionality to the YOLO repo and gate it behind a UKDI variable that can be enabled or disabled. However, this failed: it added too much delay and lag when loading the stream, because frames had to move back and forth from yolo to core and back to yolo. The decision was made to keep camera_stream in core for now but init it from a UKDI variable, as sketched below.
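As a rough sketch, the init gate could look like the following, where `/etc/ukdi.json` is the real UKDI config file but the `camera_stream` key and its default are assumptions, not the shipped implementation:
```python
# Hedged sketch: gate camera_stream startup on a UKDI variable.
# "camera_stream" is a hypothetical key; /etc/ukdi.json is the real UKDI config.
import json

def camera_stream_enabled(path="/etc/ukdi.json"):
    with open(path) as f:
        conf = json.load(f)
    return bool(conf.get("camera_stream", False))  # assumed default: disabled

if camera_stream_enabled():
    print("camera_stream enabled")  # core would start the stream here
```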
#### Flow Diagram
The flow has been broken into two diagrams to show the path of the camera stream.
```mermaid
flowchart LR
UI[UI]
CORE[Core]
YOLO[YOLO]
CAM[Camera]
UI --> | Open Camera Stream | CORE
CORE --> | Open Camera Stream | YOLO
YOLO --> | Open Camera Stream | CAM
CAM --> | Frames | YOLO
YOLO --> | Frames | CORE
```
The flow below can be viewed within the method `cam_livefeed` in the file `yoloserv.py`.
```mermaid
flowchart LR
UI[UI]
CORE[CORE]
CAM[Camera]
UI --> | Open Camera Stream | CORE
CORE --> | Open Camera Stream | CAM
CAM --> | Frames | CORE
CORE --> | Camera Stream | UI
```
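For reference, here is a minimal sketch of what a `cam_livefeed` handler of this shape could look like. It assumes a CherryPy server (which yoloserv uses) and an MJPEG (`multipart/x-mixed-replace`) response; apart from the method name, everything here is an assumption rather than the actual implementation:
```python
# Hedged sketch of an MJPEG cam_livefeed endpoint; not the real implementation.
import cv2
import cherrypy

class Core(object):  # hypothetical stand-in for the serving class
    @cherrypy.expose
    def cam_livefeed(self):
        cherrypy.response.headers["Content-Type"] = "multipart/x-mixed-replace; boundary=frame"
        def stream():
            cap = cv2.VideoCapture(0)  # holds the camera for the life of the stream
            try:
                while True:
                    ok, frame = cap.read()
                    if not ok:
                        break
                    ok, jpg = cv2.imencode(".jpg", frame)
                    if ok:
                        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                               + jpg.tobytes() + b"\r\n")
            finally:
                cap.release()
        return stream()
    cam_livefeed._cp_config = {"response.stream": True}
```
Note how the generator holding `cv2.VideoCapture(0)` makes the first problem above concrete: whichever process serves the stream owns the camera until the client disconnects.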
### Yoloserv pipeline
```mermaid
sequenceDiagram
participant UI
participant CORE
participant CAM
participant YOLO
participant para
rect rgba(200, 220, 255, 0.35)
UI->>CORE: pic_still
CORE->>CAM: capture still
CAM-->>CORE: still image
CORE-->>UI: still image
end
rect rgba(220, 255, 220, 0.35)
UI->>CORE: facematch (regula)
CORE->>YOLO: facematch
YOLO->>para: facematch
para-->>YOLO: result
YOLO-->>CORE: result
CORE-->>UI: result
end
```
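Because CherryPy maps exposed methods to URL paths, the facematch leg of this pipeline can be exercised over plain HTTP. A hedged example follows: `svc_match_faces` and its parameter names come from `yoloserv.py`, while the host, port, and file names are assumptions:
```python
# Hedged sketch: drive the facematch leg of the pipeline over HTTP.
# Host/port and file names are assumptions; the endpoint and params are real.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "dev1": "localcam", "fil1": "localcam.jpg", "scl1s": "0",
    "dev2": "regula", "fil2": "Portrait_0.jpg", "scl2s": "0.25",
})
url = "http://localhost:8089/svc_match_faces?" + params  # port is an assumption
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode())  # JSON string with status/score fields
```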
### Implement 2D detection
Dispension faced the problem of needing a detection model in place for fully autonomous operation. To resolve this we implemented Paravision's 2D detection model, which detects faces in a video stream. Refer to para.md for more information on installing models.
Here are the results from a live user test:
```bash
<ValidnessResult (isValid=0, sharpness=-1.000000, quality=-1.000000, acceptability=-1.000000, frontality=-1.000000, mask=-1.000000, image_avg_pixel=115.375671, image_bright_pixel_pct=0.145549, image_dark_pixel_pct=0.030642, face_height_pct=0.318821, face_width_pct=0.208163, face_width=133.224304, face_height=153.033981, face_roll_angle=3.102657, face_positions=[0.377501,-0.001564,0.585664,0.317257])>
HERE IS THE LIVENESS RESULT <LivenessResult (liveProbability=0.999998, spoofProbability=0.000002)>
HERE IS THE VALIDNESS RESULT <ValidnessResult (isValid=0, sharpness=0.023801, quality=-1.000000, acceptability=-1.000000, frontality=97.000000, mask=0.008516, image_avg_pixel=58.403454, image_bright_pixel_pct=0.000000, image_dark_pixel_pct=0.000000, face_height_pct=0.551221, face_width_pct=0.579525, face_width=373.213989, face_height=446.488739, face_roll_angle=0.489146, face_positions=[0.182151,0.242310,0.761676,0.793531])>
HERE IS THE LIVENESS RESULT <LivenessResult (liveProbability=0.057056, spoofProbability=0.830750)>
```
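As a minimal sketch of how these raw results get gated: the matcher's `detect_liveness` (see the paravision diff below) rejects a frame when the liveness probability falls under 0.5, which a helper like the following mirrors (the helper name is ours; the `live_prob` attribute access follows the code):
```python
# Hedged sketch mirroring the live_prob < 0.5 cutoff used in detect_liveness.
def accept_liveness(liveness_result, cutoff=0.5):
    live_prob = getattr(liveness_result, "live_prob", None)
    return live_prob is not None and live_prob >= cutoff
```
Against the logs above, the first user frame (liveProbability=0.999998) passes, while the second (liveProbability=0.057056) is rejected as a spoof.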
## Improvements
Ideally we would use the first frame to complete the facematch process rather than for rendering. This would save a few seconds but is left for future work.

doc/gitea_setup.md Normal file

@@ -0,0 +1,37 @@
# Gitea Setup
## Table of Contents
- [Gitea Setup](#gitea-setup)
- [Table of Contents](#table-of-contents)
- [Installation](#installation)
- [Configuration](#configuration)
- [Usage](#usage)
- [Troubleshooting](#troubleshooting)
The problem we faced is that yoloserv was too big to be hosted on GitLab. GitLab would also suspend the repo if bills were not paid or user seats were maxed out. Obviously this was not a viable solution. I explored self-hosting without an interface, just to have bare repos, but when I read that Gitea did not need Docker and could be run as a system service, I tried it out.
## Setting up the Server
The first step was setting up the server to run Gitea. I scanned my local network with `arp-scan --localnet` to find the IP address of the server. I could not find my server on the network, so I had to reset the DHCP client: use `nmcli connection show` to find the connection name, then `nmcli connection down <connection name>` followed by `nmcli connection up <connection name>`. After that I was able to successfully `ping 8.8.8.8` to confirm the server was online.
## Installation
I remoted onto the server and installed Gitea. First, some checks and preparation:
I ran `free -h` to check memory and `df -h` to check disk space. The recommendation is at least 8GB of RAM and 2.5x the repo size in disk space, which seems a bit much.
Depending on what distro you run the commands may vary, but I:
- Updated and upgraded the system
- Installed git, openssh-server, ufw and fail2ban
- Enabled the firewall and opened the needed ports: `sudo ufw enable`, `sudo ufw allow ssh`, `sudo ufw allow 3000`
#### SSH Hardening
SSH can be hardened to allow key-based logins only; this should be done and will be a future addition. The settings below go in `/etc/ssh/sshd_config`; run `sudo systemctl restart sshd` to apply the changes.
```bash
PasswordAuthentication no
PermitRootLogin no
```

models/emotion_model.hdf5 Normal file

Binary file not shown.

@@ -101,6 +101,7 @@ do
         PYP="$PYP:$PYP_DEEPFACE"
         ;;
     "regula") ;;
+    "traffic") ;;
     "camera") ;;
     *) echo "yoloserv does not implement backend $i. Edit /etc/ukdi.json::yolo_devices and try again."
         exit 1

@@ -80,10 +80,9 @@ class Deepfacex(FaceClass):
         return json.dumps(verification)
-    def metadata(self,name):
-        f1 = "/tmp/%s.png" % (name)
-        metadata = DeepFace.analyze(img_path = f1, actions = ["age", "gender", "emotion", "race"])
-        return json.dumps(metadata)
+    def analyze(self,name):
+        result = DeepFace.analyze(self.imgs["name"], actions=['age', 'gender', 'emotion', 'race'], enforce_detection=False)
+        return result
 #

@@ -38,8 +38,11 @@ import sys
 import os
 import face_recognition
 import json
+import numpy as np
 import urllib.request
 from faceclass import FaceClass
+from keras.models import load_model
 import time
 #
@@ -59,8 +62,12 @@ class FaceRecognition(FaceClass):
     #def init():
     #model = load_model("./emotion_detector_models/model.hdf5")
-    # @doc find all the faces in the named image
+    #def prep_detectors(self):
+    # @doc find all the faces in the named image.
+    # The detectors tend to return propritery formats, so this "detect" method
+    # is going to be new for each implementation of FaceClass
     def detect(self, name):
         self.tree["detectms"] = time.time()
         boxes = []
@@ -82,24 +89,25 @@ class FaceRecognition(FaceClass):
         return '{ "status":0, "remark":"OK", "faces":%d, "boxes":%s, "time":%d }' % (len(self.boxes), json.dumps(self.boxes), self.tree["detectms"] )
-    # @doc find the landmarks of the given face
+    # @doc find the landmarks of the given face (eyes, mouth, nose etc)
     def landmarks(self, name):
         landmarks = face_recognition.face_landmarks(self.imgs[name])
         return '{ "status":0, "remark":"OK", "landmarks":%s }' % json.dumps(landmarks)
-    # @doc find the metadata of the given face
-    def metadata(self, name):
+    # @doc find the metadata of the given face (emotion, age, gender, race)
+    def metadata(self, npimg):
+        model = load_model("models/emotion_model.hdf5")
+        print(time.time())
         emotion_dict= {'Angry': 0, 'Sad': 5, 'Neutral': 4, 'Disgust': 1, 'Surprise': 6, 'Fear': 2, 'Happy': 3}
-        face_image = cv2.imread("..test_images/39.jpg")
-        face_image = cv2.resize(face_image, (48,48))
-        face_image = cv2.cvtColor(face_image, cv2.COLOR_BGR2GRAY)
-        face_image = np.reshape(face_image, [1, face_image.shape[0], face_image.shape[1], 1])
-        model = load_model("./emotion_detector_models/model.hdf5")
-        predicted_class = np.argmax(model.predict(face_image))
+        im = cv2.resize(npimg, (64, 64))
+        im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
+        im = np.reshape(im, [1, im.shape[0], im.shape[1], 1])
+        predicted_class = np.argmax(model.predict(im))
         label_map = dict((v,k) for k,v in emotion_dict.items())
         predicted_label = label_map[predicted_class]
-        return '{ "status":88324, "remark":"override this", "landmarks":%s }' % json.dumps(landmarks)
+        print(time.time())
+        return predicted_label
     # @doc compare two named images, previously loaded
@@ -150,8 +158,8 @@ if __name__ == '__main__':
     d = FaceRecognition()
-    if sys.argv[1]=="kiosk":
-        jsonstr = d.crowd_vs_govid("pic1", "testimg/ox.jpg", 0, "pic2", "testimg/ox_govid.jpg", 0.25)
+    if sys.argv[1]=="regula":
+        jsonstr = d.crowd_vs_govid("pic1", "/tmp/localcam.jpg", 0, "pic2", "/tmp/regula/Portrait_0.jpg", 0.25)
         print(jsonstr)
     if sys.argv[1]=="messi":
@@ -173,8 +181,11 @@ if __name__ == '__main__':
         d.stats("messi2_crop")
         d.compare("messi4_crop","messi2_rect")
-    if sys.argv[1]=="crowd":
+    if sys.argv[1]=="traffic":
+        jsonstr = d.traffic("traffic4", "testimg/messi4.jpg")
+        print(jsonstr)
+    if sys.argv[1]=="group":
         jsonstr = d.crowd_vs_govid("pic1", "testimg/messi4.jpg", 0, "pic2", "testimg/messi2.jpg", 0)
         print(jsonstr)
@@ -184,17 +195,3 @@ if __name__ == '__main__':
-    if sys.argv[1]=="match":
-        # lfw
-        n=0
-        print("LFW Matching")
-        for lfw in sorted(os.listdir("lfw/0001")):
-            d.load2("localcam","regula", "lfw/0001" + lfw, "lfw/0002/" + lfw)
-            d.get_faces()
-            d.compute_scores()
-            print(d.get_scores())
-            print(d.get_landmarks())
-            n+=1
-            if n > 1:
-                sys.exit(0)

@@ -15,12 +15,23 @@ class FaceClass(object):
     model = None
     imgs = {}
     encs = {}
-    faces = {}
+    crops = {}
     visual = 0
     files = {}
     boxes = []
+    jsonx = ""
     errstr = ""
+    vidcap = None
     tree = { "device1":"NA", "device2":"NA", "threshold":380, "device1_qual":0.5, "device2_qual":0.5, "score":0, "detectms":0, "comparems":0 }
+    def json2obj(self,jsonx):
+        print(jsonx)
+        self.jsonx = jsonx
+        return json.loads(jsonx)
     # # # # #####
     # ## # #
@@ -82,11 +93,14 @@ class FaceClass(object):
         return '{ "status":88241, "remark":"override this!" }'
     # @doc find the biggest face in the named image
     # Imge has origin at top left and value are b = [ X1, Y1, X2, Y2 ]
     def ideal(self, name, rectname, cropname):
         found = -1
         biggest = -1
+        self.crops[name] = []
         print("** faceclass::ideal ... %s with %d boxes => %s + %s" % (name, len(self.boxes), rectname, cropname ))
         # resize boxes
         for i in range(len(self.boxes)):
@@ -101,6 +115,7 @@ class FaceClass(object):
         b = self.boxes[found]
         # extract crops and highlights - colours are BGR
         self.imgs[cropname] = self.crop(name,b[0],b[1],b[2],b[3])
+        self.crops[name].append(self.imgs[cropname])
         self.imgs[rectname] = deepcopy(self.imgs[name])
         #print(self.imgs[name])
         #print(self.imgs[rectname])
@@ -131,36 +146,36 @@ class FaceClass(object):
         return '{ "status":88245, "remark":"override this!" }'
     # @doc This does everything for you.
     # If you are smartserv, "crowd" means cam and "govid" means regula pic
-    def crowd_vs_govid(self, name1,file1,scale1str, name2,file2,scale2str):
+    def crowd_vs_govid(self, name1,file1,scale1, name2,file2,scale2):
         print("##1##")
-        scale1 = float(scale1str)
-        scale2 = float(scale2str)
-        self.load1(name1, file1)
+        if self.json2obj(self.load1(name1, file1))["status"] != 0:
+            return self.jsonx
         if scale1 !=0:
             self.shrink(name1,scale1)
-        jsonstr = self.detect(name1)
-        if json.loads(jsonstr)["status"]!=0:
-            return jsonstr
+        if self.json2obj(self.detect(name1))["status"] != 0:
+            return self.jsonx
         self.boxscale(name1,0.3)
-        self.ideal(name1,name1+"_rect",name1+"_crop")
+        if self.json2obj(self.ideal(name1,name1+"_rect",name1+"_crop"))["status"] != 0:
+            return self.jsonx
         self.save(name1,"/tmp")
         self.save(name1+"_rect","/tmp")
         self.save(name1+"_crop","/tmp")
         print(self.imgs.keys())
         print("##2##")
-        self.load1(name2, file2)
+        if self.json2obj(self.load1(name2, file2))["status"] != 0:
+            return self.jsonx
         if scale2 !=0:
             self.shrink(name2,scale2)
         self.save(name2,"/tmp")
-        jsonstr = self.detect(name2)
-        if json.loads(jsonstr)["status"]!=0:
-            return jsonstr
+        if self.json2obj(self.detect(name2))["status"]!=0:
+            return self.jsonx
         self.boxscale(name2,0.3)
-        self.ideal(name2,name2+"_rect",name2+"_crop")
+        if self.json2obj(self.ideal(name2,name2+"_rect",name2+"_crop"))["status"] != 0:
+            return self.jsonx
         self.save(name2,"/tmp")
         self.save(name2+"_rect","/tmp")
         self.save(name2+"_crop","/tmp")
@@ -169,6 +184,29 @@ class FaceClass(object):
         print("##R##")
         jsonstr = self.compare(name1+"_crop",name2+"_crop")
         print(jsonstr)
         return jsonstr
+    # @doc This does deomgraphic examination on a pic.
+    # If you are smartserv, "crowd" means cam and "govid" means regula pic
+    def traffic(self, name, file, scale=0):
+        print("##1##")
+        jsons = []
+        if self.json2obj(self.load1(name, file))["status"] != 0:
+            return self.jsonx
+        if scale !=0:
+            self.shrink(name,scale)
+        if self.json2obj(self.detect(name))["status"] != 0:
+            return self.jsonx
+        for i in range(len(self.boxes)):
+            b = self.boxes[i]
+            print(">>>>" , b)
+            analysis = self.metadata(self.imgs[name][b[0]:b[0]+b[1],b[2]:b[2]+b[3]])
+            jsons.append(analysis)
+        print(json.dumps(jsons))
+        return jsonx
     ###### ##### # #####
@@ -188,10 +226,10 @@ class FaceClass(object):
     def rebox(self,x1,y1,x2,y2,shape,scale=0.2):
         print("!!!!!!1 rebox with shape ",shape)
-        xx1 = x1 - int((x2-x1)*scale)
-        xx2 = x2 + int((x2-x1)*scale)
-        yy1 = y1 - int((y2-y1)*scale)
-        yy2 = y2 + int((y2-y1)*scale)
+        xx1 = x1 - int((x2-x1)*scale*0.8)
+        xx2 = x2 + int((x2-x1)*scale*0.8)
+        yy1 = y1 - int((y2-y1)*scale*1.3)
+        yy2 = y2 + int((y2-y1)*scale*1.3)
         if xx1 < 0:
             xx1 = 0
         if yy1 < 0:
@@ -214,11 +252,14 @@ class FaceClass(object):
     # @doc crop an image, allowing a gutter.
     def shrink(self, name, skale=0.5):
-        print ("shrinking ",name)
+        print ("shrinking ",name,skale)
+        if skale == 0:
+            return
         self.imgs[name] = cv2.resize(self.imgs[name],None,fx=skale,fy=skale)
     ##### ###### ##### # # ##### # #
     # # # # # # # # ## #
@@ -226,7 +267,7 @@ class FaceClass(object):
     # # # # # # # # # ##
     # # ###### # #### # # # #
-    def get_scores(self):
+    def scores(self):
         return json.dumps(self.tree)
     # return a base64 version of the pic in memory

@@ -1,199 +1,207 @@
-#
-# Paravision based face matcher
-#
+from paravision.recognition.sdk import SDK
+from paravision.recognition.types import Settings, ValidnessCheck
+from paravision.recognition.exceptions import ParavisionException
+from paravision.liveness2d import SDK as Liveness2DSDK
+from paravision.liveness2d.types import (
+    Settings as Liveness2DSettings,
+    ValidnessSettings,
+)
+import paravision.recognition.utils as pru
 import json
 import sys
-### export PYTHONPATH=/wherever/yoloserv/modules ... as long as "paravision/.../" is in there
-from paravision.recognition.exceptions import ParavisionException
-from paravision.recognition.engine import Engine
-from paravision.recognition.sdk import SDK
-import paravision.recognition.utils as pru
-#from openvino.inference_engine import Engineq
-#from deepface.basemodels import VGGFace, OpenFace, Facenet, FbDeepFace, DeepID
+import os
 from faceclass import FaceClass
 class Paravision(FaceClass):
+    models = {}
+    scores = {}
+    quality = {}
+    sdk = None
+    match_score = 0
-    def init(self,backend=None,model=None):
-        print("@@@ initialising paravision")
+    def init(self):
        try:
-            self.sdk = SDK(engine=Engine.AUTO)
-        except ParavisionException:
-            pass
+            print("INIT FROM PARAVISION")
+            settings = Settings()
+            # 1 is the default
+            settings.worker_count = 4
+            settings.detection_model = "default"
+            settings.validness_checks = ValidnessCheck.LIVENESS
+            self.sdk = SDK(settings=settings)
+            liveness2d_settings = Liveness2DSettings()
+            self.liveness2d_sdk = Liveness2DSDK(settings=liveness2d_settings)
+        except ParavisionException as e:
+            # error handling logic
+            print("Exception:", e)
+    # @doc Load a pic using the device label
+    def load1(self, name, fname):
+        print(" Loading image '%s' from file %s" % (name, fname))
+        if not os.path.isfile(fname):
+            print(" * file not found: %s" % (fname))
+            return (
+                '{ "status":442565, "remark":"file name not found", "guilty_param":"fname", "guilty_value":"%s" }'
+                % (fname)
+            )
+        self.files[name] = fname
+        self.imgs[name] = pru.load_image(fname)
+        print(" Loaded %s from file %s" % (name, fname))
+        return '{ "status":0, "remark":"OK", "name":"%s", "fname":"%s" }' % (
+            name,
+            fname,
+        )
     # @doc find all the faces in the named image
     def detect(self, name):
-        boxes = []
         self.boxes = []
         print("** face_recognition::detect ... %s" % name)
         try:
             # Get all faces from images with qualities, landmarks, and embeddings
-            res = self.sdk.get_faces([self.imgs[name]], qualities=True, landmarks=True, embeddings=True)
-            boxes = res.faces
-            print(boxes)
-            for a in boxes:
-                print(box)
-            # box is somehow = y1 / x2 / y2 / x1 for face_recognition .
-            # =
-            # Thats crazy lol. We need to fix that to x1, y1, x2, y2 with origin at top left
-            for b in boxes:
-                self.boxes.append((b[3],b[0],b[1],b[2]))
-            print("found %d boxes for %s" % (len(self.boxes), name) )
+            faces = self.sdk.get_faces(
+                [self.imgs[name]], qualities=True, landmarks=True, embeddings=True
+            )
+            print("HERE IS THE FACES %s" % faces)
+            inferences = faces.image_inferences
+            print("HERE IS THE INFERENCE %s" % inferences)
+            ix = inferences[0].most_prominent_face_index()
+            face = inferences[0].faces[ix]
+            print("HERE IS THE FACE %s" % face)
+            self.models[name] = inferences[0].faces[ix].embedding
+            self.quality[name] = round(1000 * inferences[0].faces[ix].quality)
+            self.boxes = [(0, 0, 0, 0)]
         except Exception as ex:
-            self.errstr = "image processing exception at get_faces: "+str(ex)
-            return '{ "status":222310, "remark":"image processing exception", "guilty_param":"error", "guilty_value":"%s" }' % str(ex)
-        return '{ "status":0, "remark":"OK", "faces":%d, "boxes":%s }' % (len(self.boxes), json.dumps(self.boxes))
+            self.errstr = "image processing exception at get_faces: " + str(ex)
+            return (
+                '{ "status":222310, "remark":"image processing exception", "guilty_param":"error", "guilty_value":"%s" }'
+                % str(ex)
+            )
+        return '{ "status":0, "remark":"OK", "faces":%d, "boxes":%s }' % (
+            len(self.boxes),
+            json.dumps(self.boxes),
+        )
-    # Assess the face that was read in
-    def processiii(self):
-        # Get all faces metadata
-        print("Finding faces in %s" %(self.imgpath))
-        faces = self.sdk.get_faces([self.image], qualities=True, landmarks=True, embeddings=True)
-        print("Getting metadata")
-        inferences = faces.image_inferences
-        print("Getting best face")
-        ix = inferences[0].most_prominent_face_index()
-        print("Getting a mathematical mode of that best face")
-        self.model = inferences[0].faces[ix].embedding
-        print("Getting image quality scores..")
-        self.score = round(1000*inferences[0].faces[ix].quality)
-        print("Score was %d" %(self.score))
-        return self.score
-    # Compare to a face in another Facematch instance
-    def compare(self,other):
-        # Get face match score
-        return self.sdk.get_match_score(self.model, other.model)
-    #mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm
-    #
-    def load(self, dev1, dev2, id_image_filepath, photo_image_filepath):
-        print("## loading images", dev1, dev2)
-        self.dev1 = dev1
-        self.dev2 = dev2
-        try:
-            self.id_image = pru.load_image(id_image_filepath)
-        except Exception as e:
-            return "id image loading failed ", e
-        try:
-            self.photo_image = pru.load_image(photo_image_filepath)
-        except Exception as e:
-            return "client image loading failed ", e
-        return None
-    def get_faces(self):
+    def detect_liveness(self, name):
+        self.boxes = []
+        print("** face_recognition::detect ... %s" % name)
         try:
             # Get all faces from images with qualities, landmarks, and embeddings
-            print("Finding faces...")
-            self.inference_result = self.sdk.get_faces([self.id_image, self.photo_image], qualities=True, landmarks=True, embeddings=True)
-            print("Inferences...")
-            self.image_inference_result = self.inference_result.image_inferences
-            if len(self.image_inference_result)==0:
-                return "no inferences found"
-            # Get most prominent face
-            print("Most prominent...")
-            self.id_face = self.image_inference_result[0].most_prominent_face_index()
-            self.photo_face = self.image_inference_result[1].most_prominent_face_index()
-            if self.id_face<0:
-                return "no id face found"
-            if self.photo_face<0:
-                return "no live face found"
-            # Get numerical representation of faces (required for face match)
-            print("stats...")
-            if (len(self.image_inference_result)<2):
-                return "ID or human face could not be recognised"
-            self.id_emb = self.image_inference_result[0].faces[self.id_face].embedding
-            self.photo_emb = self.image_inference_result[1].faces[self.photo_face].embedding
+            faces = self.sdk.get_faces(
+                [self.imgs[name]], qualities=True, landmarks=True, embeddings=True
+            )
+            print("HERE IS THE FACES %s" % faces)
+            inferences = faces.image_inferences
+            print("HERE IS THE INFERENCE %s" % inferences)
+            ix = inferences[0].most_prominent_face_index()
+            face = inferences[0].faces[ix]
+            print("HERE IS THE FACE %s" % face)
+            liveness_result = self.validness(face, self.sdk)
+            live_prob = getattr(liveness_result, "live_prob", None)
+            print("HERE IS THE LIVE PROB %s" % live_prob)
+            if live_prob < 0.5:
+                return (
+                    '{ "status":222310, "remark":"image processing exception", "guilty_param":"error", "guilty_value":"%s" }'
+                    % live_prob
+                )
+            self.models[name] = inferences[0].faces[ix].embedding
+            self.quality[name] = round(1000 * inferences[0].faces[ix].quality)
+            self.boxes = [(0, 0, 0, 0)]
         except Exception as ex:
-            return "image processing exception "+str(ex)
-        return None
+            self.errstr = "image processing exception at get_faces: " + str(ex)
+            return (
+                '{ "status":222310, "remark":"image processing exception", "guilty_param":"error", "guilty_value":"%s" }'
+                % str(ex)
+            )
+        return '{ "status":0, "remark":"OK", "faces":%d, "boxes":%s }' % (
+            len(self.boxes),
+            json.dumps(self.boxes),
+        )
+    # @doc This does everything for you.
+    # If you are smartserv, "crowd" means cam and "govid" means regula pic
+    def crowd_vs_govid(self, name1, file1, scale1, name2, file2, scale2):
+        print("##1## DETECTING FROM IMG")
+        if self.json2obj(self.load1(name1, file1))["status"] != 0:
+            return self.jsonx
+        if self.json2obj(self.detect_liveness(name1))["status"] != 0:
+            return self.jsonx
+        self.save(name1, "/tmp")
+        print("##2## DETECTING FROM ID")
+        if self.json2obj(self.load1(name2, file2))["status"] != 0:
+            return self.jsonx
+        self.save(name2, "/tmp")
+        if self.json2obj(self.detect(name2))["status"] != 0:
+            return self.jsonx
+        self.save(name2, "/tmp")
+        print("##R##")
+        jsonstr = self.compare(name1, name2)
+        print(jsonstr)
+        return jsonstr
-    def compute_scores(self):
+    # @doc compare two named images, previously loaded
+    def compare(self, name1, name2):
+        print("** face_recognition::compare ... %s vs %s" % (name1, name2))
         try:
-            # Get image quality scores (how 'good' a face is)
-            self.id_qual = self.image_inference_result[0].faces[self.id_face].quality
-            self.photo_qual = self.image_inference_result[1].faces[self.photo_face].quality
-            self.id_qual = round(self.id_qual, 3)
-            self.photo_qual = round(self.photo_qual, 3)
-            # Get face match score
-            self.match_score = self.sdk.get_match_score(self.id_emb, self.photo_emb)
-            # Create .json
-            self.face_match_json = {"device1":self.dev1,
-                "device2":self.dev2,
-                "passmark":500,
-                "device1_qual":self.id_qual,
-                "device2_qual":self.photo_qual,
-                "match_score":self.match_score}
-            #return json.dumps(self.face_match_json)
-            #print(self.face_match_json)
-            # Send to core
-            #url = "%s/notify/%s/%s" % (self.conf["core"], self.conf["identity"], face_match_json)
-            #url = url.replace(" ", "%20") # Remove spaces
-            #buf = []
-            #req = urllib.request.Request( url )
-            #with urllib.request.urlopen(req) as response:
-            #print(response.read())
+            res = self.sdk.get_match_score(self.models[name1], self.models[name2])
+            print("Match is ", res)
+            self.match_score = res
         except Exception as ex:
-            return str(ex)
-    def get_scores(self):
-        return json.dumps(self.face_match_json)
-if __name__ == '__main__':
-    d = Paravision()
+            print("** paravision::compare exception ... " + str(ex))
+            self.errstr = "image comparison exception at compute_scores: " + str(ex)
+            return '{ "status":332410, "remark":"%s" }' % self.errstr
+        return (
+            '{ "status":0, "threshold": 500, "device1_qual": 0.5, "device2_qual": 0.5, "remark":"OK", "score":%d }'
+            % self.match_score
+        )
+    def validness(self, face, sdk):
+        validness_settings = ValidnessSettings(face)
+        validness_result = self.liveness2d_sdk.check_validness(
+            face, validness_settings, sdk
+        )
+        print("HERE IS THE VALIDNESS RESULT %s" % validness_result)
+        liveness_result = self.liveness2d_sdk.get_liveness(face)
+        print("HERE IS THE LIVENESS RESULT %s" % liveness_result)
+        return liveness_result
+    def scores(self):
+        return (
+            '{ "status":0, "threshold": 500, "device1_qual": 0.5, "device2_qual": 0.5, "remark":"OK", "score":%d }'
+            % self.match_score
+        )
+if __name__ == "__main__":
+    d = Paravisionox()
     d.init()
-    if sys.argv[1]=="messia":
+    if sys.argv[1] == "messia":
         jsonstr = d.load1("pic1", "testimg/messi4.jpg")
         print(jsonstr)
         jsonstr = d.detect("pic1")
         print(jsonstr)
-    if sys.argv[1]=="test":
+    if sys.argv[1] == "test":
         d.load1("pic1", "testimg/ox.jpg")
         d.detect("pic1")
-    if sys.argv[1]=="kiosk":
-        jsonstr = d.crowd_vs_govid("pic1", "testimg/ox.jpg", 0, "pic2", "testimg/ox_govid.jpg", 0.25)
+    if sys.argv[1] == "kiosk":
+        jsonstr = d.crowd_vs_govid(
+            "pic1", "testimg/ox.jpg", 0, "pic2", "testimg/ox_govid.jpg", 0.25
+        )
         print(jsonstr)
-    if sys.argv[1]=="messi":
-        jsonstr = d.crowd_vs_govid("pic1", "testimg/messi4.jpg", 0, "pic2", "testimg/messi2.jpg", 0)
+    if sys.argv[1] == "messi":
+        jsonstr = d.crowd_vs_govid(
+            "pic1", "testimg/messi4.jpg", 0, "pic2", "testimg/messi2.jpg", 0
+        )
         print(jsonstr)
-    if sys.argv[1]=="maiden":
-        jsonstr = d.crowd_vs_govid("pic1", "testimg/ironmaiden.jpg", 0, "pic2", "testimg/davemurray.jpg", 0)
+    if sys.argv[1] == "maiden":
+        jsonstr = d.crowd_vs_govid(
+            "pic1", "testimg/ironmaiden.jpg", 0, "pic2", "testimg/davemurray.jpg", 0
+        )
         print(jsonstr)
@@ -32,6 +32,7 @@ class yoloserv(object):
     palmdetector = None
     facematcher = None
     palmmatcher = None
+    traffic = None
    ir_camera = None
     devices = []
     points = []
@@ -215,6 +216,13 @@ class yoloserv(object):
         return self.facematcher.compare(name1,name2)
+    # Traffic analysis
+    @cherrypy.expose
+    def svc_traffic(self,infile=None):
+        return self.facematcher.traffic(infile)
     @cherrypy.expose
     def shutdown(self):
@@ -232,15 +240,15 @@ class yoloserv(object):
     # Match faces together
     @cherrypy.expose
-    def svc_match_faces(self,dev1,fil1,scl1,dev2,fil2,scl2):
-        jsonstr = self.facematcher.crowd_vs_govid(dev1,self.conf["yolo_indir"]+fil1,scl1, dev2,self.conf["yolo_outdir"]+fil2,scl2)
+    def svc_match_faces(self,dev1,fil1,scl1s,dev2,fil2,scl2s):
+        scl1 = float(scl1s)
+        scl2 = float(scl2s)
+        jsonstr = self.facematcher.crowd_vs_govid(dev1,self.conf["yolo_indir"]+fil1,scl1, dev2,self.conf["yolo_indir"]+fil2,scl2)
         obj = self.json2obj(jsonstr)
         return jsonstr
-        if obj.status > 0:
-            return jsonstr
     def json2obj(self,jsonx):
-        return json.laods(jsonx)
+        return json.loads(jsonx)
     # @doc put all the steps for a retail facematch into one convenient functions
     @cherrypy.expose
@@ -268,7 +276,7 @@ class yoloserv(object):
         jsonstr = self.facematcher.crowd_vs_govid(dev1,fil1,scl1, dev2,fil2,scl2)
         obj = self.json2obj(jsonstr)
-        if obj.status > 0:
+        if obj["status"] > 0:
             return jsonstr
         jsonstr = self.facematcher.scores()
         return '{ "status":0, "remark":"OK", "data": %s }' % (jsonstr)

testimg/crowd.jpg Normal file

Binary file not shown (71 KiB).