MINISTRY OF EDUCATION AND TRAINING
VIETNAM ACADEMY OF SCIENCE AND TECHNOLOGY
GRADUATE UNIVERSITY OF SCIENCE AND TECHNOLOGY
INTRODUCTION
1 The urgency of the thesis
Nowadays, camera surveillance systems have become popular and are widely used in many fields. In traditional systems, video streams are watched by human observers in real time so that problems can be handled as soon as they are spotted. Processing video streams directly is a serious challenge: with a large number of cameras producing a huge amount of data, important scenes in a video stream can easily be missed.
For these reasons, automatic surveillance is the primary task of a video surveillance system. It supports observers in controlling and monitoring, and it reduces errors. Automatic systems can carry out surveillance work without human interaction, from low-level tasks (motion detection) to higher-level ones (event detection, behaviour detection). One of the problems to be solved in a multi-camera surveillance system is the appearance and disappearance of an object as it moves from one camera to another; this is called finding the forward camera. Finding the next camera is the most important step in continuously tracking an object in a multi-camera surveillance system.
Many projects address continuous tracking of an object as it travels past several cameras, and most of them focus on establishing the relation between an object seen in one camera and in the forward camera. In other words, most projects compare objects within the intersection of the cameras' observation zones in a 2D environment.
How should the hand-off time and the forward camera be determined so that tracking remains continuous? Researchers are still working on this question. Finding the forward camera involves several tasks: determining the time, determining the next camera, and transferring the object. To strengthen the performance of the system, the number of camera changes should be kept as small as possible; this is studied in detail in Chapter 2.
2 The objectives of the thesis
The thesis focuses on:
First: camera surveillance systems and related works;
Second: camera hand-off techniques in surveillance systems with multiple cameras;
Third: anomaly detection techniques in video surveillance.
3 The new contributions of the thesis
Main results of the thesis:
Propose a technique for partitioning the static observation regions of a camera surveillance system based on the geometric relationship between the cameras' observed regions. By reducing the number of polygon edges, the proposed technique shortens the camera-transfer time in an overlapping system. This technique was published in the Journal of Information Technology and Communications in 2014;
Propose a new camera hand-off technique based on virtual lines, which determines the right time to change cameras by computing the collision of the moving object with a virtual line in a 3D environment. The proposed technique was published in the Vietnamese Journal of Science and Technology in 2013;
Propose a technique for selecting the forward camera based on the movement of objects, with the aim of reducing the camera-transition time and improving the performance of the system. The proposed technique was presented and published at Fundamental and Applied Information Research (FAIR) 2013;
Propose an abnormality detection technique based on segmenting the criteria of each route. The results show that the proposed technique can detect an abnormality before the object has finished its trajectory, i.e. while the object is still in the video, which makes it genuinely useful for real-time surveillance. It was published in the Journal of Informatics and Communication in 2015.
4 Structure of the thesis
The thesis includes an introduction, a summary, and three main chapters. Chapter 1: Overview of camera hand-off and abnormality detection in camera surveillance systems; this chapter reviews camera surveillance systems and related works. Chapter 2: Some camera hand-off techniques; it proposes techniques to find the forward camera with the aim of reducing the time needed to choose the next camera when tracking an object. Chapter 3: Abnormality detection based on trajectories in video surveillance; this chapter briefly reviews approaches and techniques for detecting abnormalities in video surveillance and proposes a technique based on analyzing the moving trajectory of an object.
CHAPTER 1: OVERVIEW OF CAMERA HAND-OFF AND ABNORMALITY DETECTION IN CAMERA SURVEILLANCE SYSTEMS
1.1 Camera surveillance system
In this part, the thesis presents a general introduction to camera surveillance and its basic problems.
1.2 Camera hand-off and abnormality detection
This section presents approaches to two problems in camera surveillance: camera hand-off and abnormality detection in video surveillance.
1.3 Summary and research issues
In this chapter, the thesis has given an overview of camera surveillance systems and related works, together with an introduction to several approaches to tracking objects in multi-camera systems. The thesis concentrates on two important problems that are applied in many fields: camera hand-off and abnormality detection in surveillance systems.
CHAPTER 2: SOME TECHNIQUES FOR HANDLING OBSERVATION REGIONS IN CAMERA HAND-OFF
This chapter presents three proposals addressing two questions: when do we need to find the forward camera, and how does that camera take over the tracking job? The proposals aim to reduce the computation involved in choosing the forward camera and to strengthen the performance of the system.
2.2.2 Intersection of two polygons
Definition 2.1 [Observation polygon]
An observation polygon is the projection of a camera's observed area onto the 2D plane.
Definition 2.2 [Intersection point of two intersecting polygons]
A point is called an intersection point of two polygons A and B if it is the intersection of an edge of A with an edge of B and is not a vertex of either polygon.
Definition 2.3 [Single intersection]
Given two observation polygons A and B, their intersection is called a single intersection if it is convex and the remaining part of each of A and B is still a single polygon.
Fig. 2.2. Types of intersection between two polygons: a) no intersection; b) single intersection; c) intersection.
Proposition 2.1
If two observation polygons A and B have a single intersection, the number of intersection points cannot exceed 2.
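To make Definition 2.2 and Proposition 2.1 concrete, the following Python sketch (the representation and names are my own, not taken from the thesis) counts the intersection points between the edges of two polygons given as vertex lists; for two polygons with a single intersection, the count should not exceed 2.

def seg_intersection(p1, p2, q1, q2, eps=1e-9):
    # Proper intersection point of segments p1p2 and q1q2, or None.
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(d) < eps:                       # parallel or collinear edges
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    # strict inequalities exclude shared vertices, as required by Definition 2.2
    if eps < t < 1 - eps and eps < u < 1 - eps:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def intersection_points(A, B):
    # All intersection points between the edges of polygons A and B (vertex lists).
    points = []
    for i in range(len(A)):
        for j in range(len(B)):
            p = seg_intersection(A[i], A[(i + 1) % len(A)],
                                 B[j], B[(j + 1) % len(B)])
            if p is not None:
                points.append(p)
    return points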
2.2.3 Dividing the observation zone of a camera surveillance system
2.2.3.1 Dividing the intersection zone of two polygons
Proposition 2.2 [Dividing two polygons]
Let A and B be two observation polygons. Their intersection is single if there exist exactly 2 intersection points. These points form an intersection edge along which the polygons can be separated into polygons with the minimum number of edges (Fig. 2.4).
Fig. 2.4. Dividing the intersection between two polygons.
2.2.3.2 Division of the observed zone in a multi-camera surveillance system
Consider a working area observed by n static cameras whose observation zones are known; these observation polygons overlap and have single intersections. We divide the observation zone of the system into a set of observation polygons, one per camera, that do not intersect each other.
Function partitionTwoPolygon: partition two intersecting polygons so that the number of edges of each polygon after separation is minimal.
Input: A = (A[1], A[2], ..., A[n]); B = (B[1], B[2], ..., B[m]); with vertices A[i], B[j].
Output: polygons X and Y satisfying X ∪ Y = A ∪ B, in which X ∩ Y = ∅, X ⊆ A, Y ⊆ B.
Pseudocode:
partitionTwoPolygon (A, B: polygon)
{ Find the difference P = (P[1], P[2], ..., P[t]) = A \ B;
  Find the intersection points of the edges of A and B: vertices P[h], P[k] of P (h < k < t);
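As a rough illustration of the specification above, here is a minimal Python sketch using the shapely library (my assumption; the thesis summary gives only the pseudocode fragment, and this sketch simply keeps the whole overlap in A rather than applying the edge-minimization criterion).

from shapely.geometry import Polygon

def partition_two_polygons(A: Polygon, B: Polygon):
    # Split two overlapping observation polygons into parts with disjoint interiors:
    # X = A keeps its full polygon, Y = B \ A gives up the overlap, so that
    # X union Y = A union B, X is contained in A, Y is contained in B.
    return A, B.difference(A)

# Usage sketch with two overlapping rectangles.
A = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
B = Polygon([(3, 1), (7, 1), (7, 4), (3, 4)])
X, Y = partition_two_polygons(A, B)
print(abs(X.area + Y.area - A.union(B).area) < 1e-9)   # the two parts cover the union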
Algorithm to divide the observation zone of the system:
Input: observation zone P = {P[1], P[2], ..., P[n]} (n an integer), where P[i] = {V1, V2, ..., Vt} with vertices Vk(xk, yk) sorted clockwise.
Output: Q = (Q[1], Q[2], ..., Q[n]) satisfying Q[1] ∪ Q[2] ∪ ... ∪ Q[n] = P[1] ∪ P[2] ∪ ... ∪ P[n] and Q[i] ∩ Q[j] = ∅ for all i ≠ j.
Computational complexity:
For an observation zone of an n-camera system, at step i we have to call the partitionTwoPolygon function (i − 1) times. In total, the number of calls is 1 + 2 + ... + (n − 1) = n(n − 1)/2.
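A sketch of how the pairwise partition can drive the division of the whole zone (my own illustration, reusing the hypothetical partition_two_polygons helper above): the i-th polygon is partitioned against the i − 1 regions already fixed, which yields the n(n − 1)/2 calls counted above.

def divide_observation_zone(P):
    # Turn the observation polygons P[0..n-1] into regions with disjoint interiors.
    Q = []
    for poly in P:
        for k in range(len(Q)):            # i - 1 pairwise partitions at step i
            Q[k], poly = partition_two_polygons(Q[k], poly)
        Q.append(poly)                     # poly no longer overlaps any fixed region
    return Q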
The experiment divides the observation zone of the cameras into non-intersecting parts, each of which is assigned to one camera. With the proposed technique, coverage remains maximal while the observation zone is divided into non-intersecting parts, and finding the forward camera becomes more efficient when the technique is combined with virtual lines. This work was published in the Journal of Information and Communication in 2014.
Fig. 2.6. Division of the observation zone in a camera surveillance system: a) monitoring site plan; b) Yi Yao plan; c) overlapped zones and polygon edges in the Yi Yao plan; d) result of the proposed algorithm.
2.3 Finding the next camera based on virtual lines
2.3.1 Virtual line
In the intersection area of the cameras, virtual lines are built to delimit the observation zone of each camera. Whenever the object touches a virtual line, the camera change is started. To improve the accuracy of the hand-off time, the thesis computes the collision of the object with the virtual line in a 3D environment instead of 2D: the tracked object and the virtual line are modeled as 3D boxes.
Fig. 2.11. Moving object and virtual line in a 3D environment.
2.3.2 Calculating the collision of an object with a virtual line
The thesis presents the calculations needed to check the collision of an object with a virtual line in a 3D environment.
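Since the tracked object and the virtual line are modeled as 3D boxes, the collision test can be reduced to an overlap test between two axis-aligned boxes. A minimal sketch under that assumption (the box representation and the coordinates are mine, not the thesis's):

def boxes_collide(a_min, a_max, b_min, b_max):
    # Axis-aligned 3D box overlap test: each box is given by its minimum and
    # maximum corners (x, y, z); the boxes collide when they overlap on every axis.
    return all(a_min[k] <= b_max[k] and b_min[k] <= a_max[k] for k in range(3))

# The object's bounding box touching the thin box built around the virtual line
# signals that it is time to hand the object off to the next camera.
object_box = ((2.0, 1.0, 0.0), (2.6, 1.8, 1.7))
virtual_line_box = ((2.5, 0.0, 0.0), (2.6, 5.0, 3.0))
print(boxes_collide(*object_box, *virtual_line_box))   # True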
2.3.3 Proposed technique
2.3.3.1 System modeling
The system model is shown in Figure 2.16.
Fig. 2.16. System structure.
2.3.3.2 Algorithm
The algorithm is shown in Figure 2.17.
Figure 2.19 shows the hand-off between cameras: a person moves, collides with the virtual line (red line), and then appears in the view of the forwarded camera together with the object's index and the next camera.
Fig. 2.18. Camera site plan.
Fig. 2.19. Hand-off between two cameras: (a) hand-off between camera 1 and camera 2; (b) hand-off between camera 2 and camera 3.
The experimental results show that the accuracy of the calculation in a 3D environment is higher than in a 2D environment. The proposed technique was published in the Journal of Science and Technology, VAST, in 2013.
2.4 Finding the next camera based on the moving direction of an object
2.4.1 Predicting the position and moving direction of an object
In this part, the thesis uses a Kalman filter to predict the position and moving direction of an object.
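A minimal constant-velocity Kalman filter sketch in Python/NumPy (the state layout, noise values, and time step are my assumptions; the thesis summary does not give its filter parameters). The state is (x, y, vx, vy); the velocity components give the moving direction, and the prediction step gives the expected next position.

import numpy as np

dt = 1.0                                       # time between frames (assumed)
F = np.array([[1, 0, dt, 0],                   # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                    # only the position (x, y) is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                           # process noise covariance (assumed)
R = np.eye(2) * 1e-1                           # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    # One predict/update cycle; z is the measured (x, y) position.
    x_pred = F @ x                             # predicted state at the next time step
    P_pred = F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new, x_pred[:2]            # state, covariance, predicted position

x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.0, 1.0]), np.array([2.1, 1.9]), np.array([3.0, 3.1])]:
    x, P, predicted_xy = kalman_step(x, P, z)  # x[2:4] is the estimated moving direction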
2.4.2 Expressing the relation between the observation regions of the system
The thesis uses an adjacency list to express the relation between the observation regions of the system.
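For instance (the camera indices below are hypothetical), the relation between observation regions can be kept as a plain adjacency list mapping each camera to the cameras whose regions border its own:

# Ke[i] lists the cameras whose observation regions border camera i's region.
Ke = {
    1: [2, 4],
    2: [1, 3],
    3: [2, 6],
    4: [1, 5],
    5: [4, 6],
    6: [3, 5],
}

# When an object is about to leave camera i, only the cameras in Ke[i]
# need to be examined as hand-off candidates.
candidates = Ke[1]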
2.4.3 Algorithm to choose the camera based on moving direction
Let A and B be the object's positions in Cartesian coordinates at times t1 and t2: A(x_t1, y_t1) and B(x_t2, y_t2). To reduce the number of camera changes, our tactic is to maximise the time the object remains in one camera (its stay in the observation zone of one camera is the longest). The thesis proposes building the line through A and B and finding its intersection with the edges of each observation polygon. The intersection point C must lie on the side of B away from A. Let Dj be the length of BC in camera j; the selected camera is the one with the largest Dj.
Function findIntersectPolygon: find the intersection point C of line AB with an edge of polygon P.
findIntersectPolygon (P: polygon; A, B: point)
{ Create the equation of line AB;
  For i = 1 to n do {
    C = intersection point of AB with edge (P[i], P[i+1]);
    If (|AB| + |BC| = |AC|) return C; } }
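A Python sketch of findIntersectPolygon under my own assumptions (points as (x, y) tuples, the polygon as a vertex list): it intersects the line through A and B with every edge and returns a point C only when |AB| + |BC| = |AC|, i.e. when C lies beyond B as seen from A.

from math import dist, isclose

def find_intersect_polygon(P, A, B, eps=1e-9):
    # Intersection of the line through A and B with the edges of polygon P,
    # keeping only a point C on the far side of B (|AB| + |BC| = |AC|).
    ax, ay = A
    dx, dy = B[0] - A[0], B[1] - A[1]          # direction of line AB
    n = len(P)
    for i in range(n):
        (x1, y1), (x2, y2) = P[i], P[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1              # edge direction
        denom = dx * ey - dy * ex
        if abs(denom) < eps:                   # AB parallel to this edge
            continue
        t = ((x1 - ax) * ey - (y1 - ay) * ex) / denom   # position along AB
        u = ((x1 - ax) * dy - (y1 - ay) * dx) / denom   # position along the edge
        if u < -eps or u > 1 + eps:            # misses the edge segment
            continue
        C = (ax + t * dx, ay + t * dy)
        if isclose(dist(A, B) + dist(B, C), dist(A, C), abs_tol=1e-6):
            return C
    return None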
Proposed algorithm:
Input: Q = (Q[1], Q[2], ..., Q[n]): observation polygons;
the object's position at t1: A(x_t1, y_t1);
the predicted position at t2: B(x_t2, y_t2);
graph G = (V, E) given as an adjacency list Ke(i);
index i of the camera currently tracking the object.
Output: index t of the forwarded camera.
For each camera Ke(i)[k] in the adjacency list, the algorithm uses findIntersectPolygon to compute the intersection point C and the distance D = |BC|, and keeps the camera with the largest D: t = Ke(i)[k].
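Continuing the sketch (same assumptions as above, with Q mapping camera indices to observation polygons, Ke the adjacency list, and find_intersect_polygon from the previous snippet): the selection step keeps, among the cameras adjacent to the current one, the camera whose polygon gives the largest distance D = |BC| along the predicted direction, so the object stays in the chosen view as long as possible.

from math import dist

def select_forward_camera(Q, Ke, i, A, B):
    # Pick the index of the forwarded camera for an object leaving camera i.
    best_t, best_D = None, -1.0
    for j in Ke[i]:                            # only neighboring cameras are candidates
        C = find_intersect_polygon(Q[j], A, B)
        if C is None:
            continue
        D = dist(B, C)                         # how far the object can travel inside view j
        if D > best_D:
            best_t, best_D = j, D
    return best_t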
Evaluation of computational complexity
With an observation zone of n cameras, when the object is about to leave camera i we traverse the adjacency list Ke(i) to find a suitable camera, using the findIntersectPolygon function to find the intersections of AB with the edges of the observation polygons. The computational cost is therefore proportional to the number of adjacent cameras multiplied by the number of edges of their observation polygons.
2.4.4 Experiment
Figure 2.24 illustrates the result of the algorithm, with the observation zones of the cameras deployed by Eduardo Monari as input. The result shows that the time needed to change cameras in a system with overlapping zones is reduced.
Fig. 2.24. Result of camera selection.
The proposed technique was presented and published at Fundamental and Applied Information Research (FAIR) 2013.
2.5 Summary of chapter 2
To answer the questions “When do we need to change cameras?” and “Which camera will be the next?” in finding the forward camera, this chapter proposes three techniques. These techniques focus on reducing the computation needed to find the next camera, thereby strengthening the performance of the surveillance system.
First: propose a technique for partitioning the fixed surveillance cameras based on single intersections, dividing the system's observation zone into non-intersecting regions with the criterion of reducing the number of edges of the observation polygons after partitioning, thereby reducing the transition computation time when objects move in the overlapping zones of the cameras;
Second: propose a technique for determining the time to change cameras by computing the collision of the object with a virtual line in a 3D environment. The results show that the proposed technique increases the accuracy;
Third: propose a technique for finding the next camera based on the moving direction, which helps to reduce the number of camera changes when an object passes through the cameras' observation zones in the OVL (overlapping) system.
CHAPTER 3: ABNORMALITY DETECTION BASED ON OBJECT TRAJECTORY IN VIDEO SURVEILLANCE
In this chapter, the thesis presents some approaches to detecting abnormalities in video surveillance and then proposes a technique for detecting abnormalities based on the moving trajectory of an object.
3.1 Introduction
3.1.1 Approaches based on video-stream image analysis
Approaches in this group analyze the video stream using image processing, working with the motion information obtained from object detection and then combining it with probabilistic models, clustering, statistics, etc. to detect abnormalities.
3.1.2 Approaches based on trajectory analysis
Approaches based on clustering trajectories follow the workflow in Figure 3.1. Most of the proposed algorithms to detect