Tello Drone Python Programming, Face Tracking From Drone Camera! Using Python Module OpenCV and PyGame!
by BobWithTech
In this tutorial, I will show you how to program a face-tracking drone using the Python programming language and the OpenCV library.
Supplies
Gather The Required Material:
Hardware:
Options:
Tello Drone (1x Battery, 1x Drone, 8x Propeller, No Multiple Battery Charger)
Tello Drone Boost Combo Pack (3x Battery, 1x Drone, 8x Propeller, 1x Multiple Battery Charger)
Amazon:
- Tello Drone or Tello Drone Boost Combo Pack Amazon AU
Ebay:
- Tello Drone or Tello Drone Boost Combo Pack Ebay AU
Software:
- Any IDE that supports Python
- Python Programming Language
Required Python Version:
- Greater than 3.7 and lower than 3.10
- Flexible Python IDE
Required Python Library:
- djitellopy module
- pygame module
- opencv-python module
- numpy module
- opencv haarcascade xml files (attached files)
Downloads
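If you cannot use the attached XML files, note that the opencv-python package ships the same Haar cascade files. The snippet below is a minimal sketch that prints the folder they are stored in, so you can copy haarcascade_frontalface_default.xml into a "haarcascades" folder next to your scripts:
import cv2
print(cv2.data.haarcascades) #absolute path to the Haar cascade XML files bundled with opencv-python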
Create Python Virtual Environment (Optional)
Setting Up Virtual Environment On Python:
If you have already set up a virtual environment on your computer, skip this step and move on to step 2.
Windows:
For Windows, please refer to this link to set up the virtual environment.
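If the link is unavailable, the commands below are a minimal sketch of the same setup on Windows (run them in Command Prompt from your project folder):
python -m venv .venv
.venv\Scripts\activate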
Linux:
First, install the virtualenv module on the system (on Python 3 you can skip this, since the built-in venv module used below is sufficient).
pip install virtualenv
Create a folder for your project.
mkdir projectA
cd projectA
Set up the virtual environment with the Python version that you currently use.
Syntax:
python<version> -m venv <virtual-environment-name>
Example:
python3 -m venv .venv
Activate that virtual environment.
Syntax:
source <virtual-environment-name>/bin/activate
Example:
source .venv/bin/activate
Note!
To deactivate the virtual environment just enter the command:
deactivate
Install the Required Python Modules
Installing the Python Modules:
- opencv-python
pip install opencv-python
- pygame
pip install pygame
- numpy
pip install numpy
- djitellopy
pip install djitellopy
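Alternatively, you can install all four modules with a single command (run it inside the activated virtual environment if you created one):
pip install opencv-python pygame numpy djitellopy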
Programming
Firstly, create the drone controller Python script and name it "controller.py" or whatever name you want.
You can also refer to my first Tello tutorial.
Write these lines of code in "controller.py":
import pygame

def init():
    #initialize the pygame library
    pygame.init()
    #set the control display to a 400x400 pixel window
    windows = pygame.display.set_mode((400, 400))

def main():
    #pump the pygame event queue so the control window stays responsive
    for event in pygame.event.get():
        pass
    pygame.display.update()

if __name__ == '__main__':
    init()
    while True:
        main()
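To check that pygame is installed correctly, you can run the controller on its own; a blank 400x400 pixel window should appear:
python controller.py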
Secondly, create the facial recognition Python script and name it "main.py" or whatever name you want.
Write these lines of code in "main.py" (with explanations):
import cv2 #import opencv library
import numpy as np #import numpy library
from djitellopy import tello #import djitellopy library
import time #import time library
me = tello.Tello() #create the Tello object from the djitellopy module
me.connect() #establish wifi connection to the tello drone
print(me.get_battery()) #print the battery available on the tello drone
me.streamon() #start streaming the tello drone camera
w, h = 540, 360 #initialise the display dimension for the camera
MAX_STATE = 5 #number of motion states used for the up/down and left/right control
fbRange = [15 * 1000, 20 * 1000] #area range of the detected face in which no forward/backward movement is needed
#Threshold values of the detected face centre point on the y-axis
udRange = [(h/2)-30, (h/2)] #neutral band: no up/down movement
udMax = [(h/2), (h/2)+30] #face below the centre: move down (slow, fast)
udMin = [(h/2)-30, (h/2)-60] #face above the centre: move up (slow, fast)
udMotion = [0, -10, 10, -30, 30] #up/down speeds: neutral, down slow, up slow, down fast, up fast
#Threshold values of the detected face centre point on the x-axis
lrRange = [(w/2)-30, (w/2)+30] #neutral band: no left/right movement
lrMax = [(w/2)+30, (w/2)+60] #face right of the centre: move right (slow, fast)
lrMin = [(w/2)-30 , (w/2)-60] #face left of the centre: move left (slow, fast)
lrMotion = [0, 8, -8, 16, -16] #left/right speeds: neutral, right slow, left slow, right fast, left fast
#PD controller gains for the yaw movement: [proportional, derivative, integral (unused)]
pid = [0.5, 0.5, 0]
pError = 0 #initial error value of the PID controller
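#Worked example of the yaw calculation used in trackFace() below:
#if the face centre is 60 px to the right of the frame centre (error = 60)
#and the previous frame's error was 40 px, then
#speed = 0.5 * 60 + 0.5 * (60 - 40) = 40, which is then clipped to the range -100..100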
def findFace(img):
    faceCascade = cv2.CascadeClassifier("haarcascades/haarcascade_frontalface_default.xml") #the file location of the frontal face detection model
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #convert the frame to grayscale for detection
    faces = faceCascade.detectMultiScale(imgGray, 1.2, 5) #detect faces (scale factor 1.2, minimum 5 neighbours)
    myFaceListC = [] #centre points of the detected faces
    myFaceListArea = [] #areas of the detected faces
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2) #draw a box around the face
        cx = x + w // 2
        cy = y + h // 2
        area = w * h
        cv2.circle(img, (cx, cy), 5, (0, 255, 0), cv2.FILLED) #mark the face centre
        myFaceListC.append([cx, cy])
        myFaceListArea.append(area)
    if len(myFaceListArea) != 0:
        #track the largest (closest) detected face
        i = myFaceListArea.index(max(myFaceListArea))
        return img, [myFaceListC[i], myFaceListArea[i]]
    else:
        return img, [[0, 0], 0]
def trackFace(info, w, pid, pError):
    global x, y, area
    area = info[1] #area of the detected face
    x, y = info[0] #centre point of the detected face
    fb, ud, lr = 0, 0, 0 #forward/backward, up/down and left/right speeds
    #yaw control: PD controller on the horizontal error from the frame centre
    error = x - w // 2
    speed = pid[0] * error + pid[1] * (error - pError)
    speed = int(np.clip(speed, -100, 100))
    #which up/down and left/right state the face centre falls into
    udState = [y > udRange[0] and y < udRange[1], y >= udMax[0], y <= udMin[0] and area != 0, y >= udMax[1], y <= udMin[1] and area != 0]
    lrState = [x > lrRange[0] and x < lrRange[1], x >= lrMax[0], x <= lrMin[0] and area != 0, x >= lrMax[1], x <= lrMin[1] and area != 0]
    #forward/backward: keep the face area inside fbRange
    if area > fbRange[0] and area < fbRange[1]:
        fb = 0
    elif area >= fbRange[1]:
        fb = -20 #face too close, move backward
    elif area <= fbRange[0] and area != 0:
        fb = 20 #face too far, move forward
    for index in range(MAX_STATE): #up down
        if udState[index]:
            ud = udMotion[index]
    for index in range(MAX_STATE): #left right
        if lrState[index]:
            lr = lrMotion[index]
    if x == 0: #no face detected: stop rotating and reset the error
        speed = 0
        error = 0
    #print(speed, fb)
    me.send_rc_control(lr, fb, ud, speed)
    return error
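#Worked example of the state selection above: with h = 360 and a face centre at y = 200,
#udRange = [150, 180], udMax = [180, 210] and udMin = [150, 120], so only y >= udMax[0] is True
#and ud = udMotion[1] = -10, i.e. the drone descends slowly to bring the face back to the centre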
#cap = cv2.VideoCapture(1) #uncomment to test with a webcam instead of the drone
me.takeoff() #take off
me.send_rc_control(0, 0, 15, 0) #climb slowly to roughly face height
time.sleep(1.5)
while True:
    #_, img = cap.read() #uncomment to test with a webcam instead of the drone
    img = me.get_frame_read().frame #grab the latest frame from the drone camera
    img = cv2.resize(img, (w, h))
    img, info = findFace(img)
    pError = trackFace(info, w, pid, pError)
    #print("Center", info[0], "Area", info[1])
    cv2.imshow("Output", img)
    if cv2.waitKey(1) & 0xFF == ord('q'): #press q to land and stop
        me.land()
        me.streamoff()
        break
cv2.destroyAllWindows()
exit()
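To fly, power on the Tello, connect your computer to the drone's Wi-Fi network, and run the script (inside the virtual environment if you created one):
python main.py
The drone takes off, climbs for about 1.5 seconds, and then follows the largest detected face. Press the q key while the "Output" window is focused to land the drone and stop the video stream.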