Smart Parking Management Integrated Deep Learning for Automatic Monitoring


DOCUMENT INFORMATION

Basic information

Title: Smart Parking Management Integrated Deep Learning For Automatic Monitoring
Authors: Pham Van Manh Hung, Nguyen Ngoc Thien
Supervisor: PhD. Le Vinh Thinh
University: Ho Chi Minh City University of Technology and Education
Major: Information Technology
Document type: Graduation thesis
Year: 2023
City: Ho Chi Minh City
Pages: 127
Size: 4.34 MB


Structure

  • CHAPTER 1: INTRODUCTION (17)
    • 1.1. Reasons for choosing the topic (17)
    • 1.2. Purpose of the project (17)
    • 1.3. Methods of implementation (17)
  • CHAPTER 2: STATUS SURVEY AND DETERMINATION OF REQUIREMENTS (18)
    • 2.1. Survey the status of vehicle management systems (18)
    • 2.2. Define requirements (18)
      • 2.2.1. Functional requirements (18)
      • 2.2.2. Non-functional requirements (21)
    • 2.3. Expected results achieved (22)
  • CHAPTER 3: ANALYSIS AND SYSTEM DESIGN (23)
    • 3.1. Design System (23)
    • 3.2. Database Specification (23)
      • 3.2.1. Kiosk (23)
      • 3.2.2. HLFXBaseKIOSKDB (24)
    • 3.3. Database Description (24)
      • 3.3.1. Table tblAdMgt (24)
      • 3.3.2. Table tblAdStoreMgt (25)
      • 3.3.3. Table tblClientSoundMgt (26)
      • 3.3.4. Table tblStoreDevice (27)
      • 3.3.5. Table tblStoreEnvironmentSetting (28)
      • 3.3.6. Table tblStoreMaster (29)
      • 3.3.7. Table tblTrack (30)
      • 3.3.8. Table tblUser (31)
      • 3.3.9. Table tblUserHistory (33)
      • 3.3.10. Table tblUserPhoto (34)
      • 3.3.11. Table tblVehicle (34)
      • 3.3.12. Table SYUserGroups (35)
      • 3.3.13. Table SYUsersInGroup (36)
      • 3.3.14. Table RolesToPermissions (37)
      • 3.3.15. Table Permissions (37)
      • 3.3.16. Table UserRoles (37)
      • 3.3.17. SYUser (38)
    • 3.4. Use Case Diagram (40)
      • 3.4.1. Building Client System (41)
      • 3.4.2. Parking Client System (41)
      • 3.4.3. Supervisor User & Vehicle (42)
    • 3.5. Use case specification and Sequence diagram (42)
      • 3.5.1. Real Time Tracking Vehicle History (42)
      • 3.5.2. Real Time Tracking Face History (44)
      • 3.5.3. User permission (46)
      • 3.5.4. Vehicle Info Management (47)
      • 3.5.5. Register User Phone (49)
      • 3.5.6. Register User Citizen ID (51)
      • 3.5.7. Sync client audio (53)
      • 3.5.8. Create or Update Sound (55)
      • 3.5.9. Create Or Update Parking site (57)
      • 3.5.10. Search Member in Parking site (58)
      • 3.5.11. Login to the management site (60)
    • 3.6. Class Diagram (62)
      • 3.6.1. Vehicle Detect client (62)
      • 3.6.2. Detect Face in Citizen Id function (62)
      • 3.6.3. Detect Face in Phone register function (63)
      • 3.6.4. Detect Citizen Id (64)
    • 3.7. Interface (64)
      • 3.7.1. Vehicle monitoring interface (64)
      • 3.7.2. Management website interface (65)
      • 3.7.3. User registration application interface (79)
  • CHAPTER 4: TECHNOLOGY KEY POINTS (91)
    • 4.1. FastAPI (91)
      • 4.1.2. Applicability to the project (91)
    • 4.3. MS SQL Server (92)
    • 4.4. Deep learning (93)
    • 4.5. Pytorch (93)
    • 4.6. Tensorflow (93)
  • CHAPTER 5: DEEP LEARNING AND APPLICATIONS (95)
    • 5.1. Vehicle license plate recognition (95)
      • 5.1.1. Dataset (95)
      • 5.1.2. Yolo Architecture (95)
      • 5.1.3. SSD Architecture (97)
      • 5.1.4. Experiment (99)
    • 5.2. Face Detection (102)
      • 5.2.1. P-Net (Proposal Network) (102)
      • 5.2.2. R-Net (Refine Network) (103)
      • 5.2.3. O-Net (Output Network) (103)
      • 5.2.4. MTCNN (104)
    • 5.3. Face Recognition (104)
      • 5.3.1. Dataset (104)
      • 5.3.2. Feature Extract (105)
      • 5.3.3. Calculate Face Distance in Vector Space (106)
      • 5.3.4. Vision Transformer Architecture (107)
      • 5.3.5. Inception Resnet Architecture (109)
      • 5.3.6. Inception-Resnet V1 Network (111)
      • 5.3.7. Mobilenet Architecture (111)
      • 5.3.8. Experiment (112)
    • 5.4. Optical Character Recognition (OCR) (114)
      • 5.4.1. Card detection (114)
      • 5.4.2. TransformerOCR (114)
  • CHAPTER 6: INSTALL AND TEST (117)
    • 6.1. Installation Steps (117)
      • 6.1.1. Python (117)
    • 6.2. Software Testing (118)
      • 6.2.1. Tracking function (120)
  • CHAPTER 7: CONCLUSION (124)
    • 7.1. Result achieved (124)
      • 7.1.1. Advantages (124)
      • 7.1.2. Disadvantages (124)
      • 7.1.3. Convenience (124)
      • 7.1.4. Difficulties (124)
    • 7.2. Oriented development (125)

Content


INTRODUCTION

Reasons for choosing the topic

Recent research on smart parking management in Vietnam has advanced rapidly, aligning with the country's trend of integrating information technology across various sectors. This project leverages cutting-edge technologies like the Internet of Things (IoT), artificial intelligence (AI), and data analytics to enhance decision-making and deliver superior services to customers. Additionally, it optimizes resource utilization, minimizes waste, and streamlines garage management, making it more flexible and efficient.

Through the Smart Garage Management project, we can enhance the competitiveness of garages, create a smart traffic environment, and optimize the movement of the public.

At the same time, the project also brings economic and social benefits, from improving customer experience to saving energy and reducing traffic congestion.

Purpose of the project

Automating the parking management process, replacing manual tasks, and providing intelligent solutions for large parking lots such as urban areas, supermarkets, and schools, both in management and in ensuring the security of the garage.

Methods of implementation

Theoretical research: synthetic analysis methods, model training, technology solutions for management website design, understanding of camera devices, and data science methodology.

This section discusses experimental research methods focused on exploring and surveying the status of parking lots. It outlines various data collection techniques and data processing methods, followed by model training and evaluation. Additionally, it emphasizes the importance of implementing the corrections needed to make the models more effective in practice.

STATUS SURVEY AND DETERMINATION OF REQUIREMENTS

Survey the status of vehicle management systems

The rise of technology and the growing number of vehicles have highlighted the need for advanced vehicle preservation and protection systems that incorporate intelligent solutions for enhanced security in parking areas. An effective management system must prioritize safety, convenience, and time efficiency for users.

In Vietnam, while many parking lots have adopted smart garage management systems to enhance efficiency and minimize the need for human resources, they still face significant challenges. These include issues such as low accuracy, insufficient multi-layer security, and a lack of integrated user warnings. Consequently, these drawbacks have hindered the goal of developing a fully automated parking system.

Our research group has identified significant security concerns within the school's garage management system and certain apartments, including issues related to vehicle theft and the loss of equipment and supplies belonging to owners.

Many parking lots across the country utilize magnetic card recognition systems to manage vehicle access. However, these systems can cause delays, resulting in congestion at entry and exit points. This inefficiency prevents the full utilization of advanced recognition technologies, such as facial and license plate recognition.

Define requirements

2.2.1.1 Professional functions

a) Entry and exit vehicle inspection department

No | Request | Type of request | Description/Binding/Formula
1 | Check-in vehicle | Storage, Access | Monitor check-in vehicles and add them to the database
2 | Check-out vehicle | Storage, Access | Monitor check-out vehicles and update the database
3 | Detect vehicle | Predict | Detect and localize objects, and predict the vehicle type and license plate
 | | Predict | Predict the characters on the license plate

Table 2.2.1 Table of requirements of the entry and exit vehicle inspection department

b) Building management department

No | Request | Type of request | Description/Binding/Formula
1 | Register users in and out of the apartment building by phone number | Storage, Predict | Use the phone number to authenticate registered users
2 | Register users in and out of the apartment building with their citizen IDs | Storage, Predict | Use the citizen ID to authenticate registered users
3 | Monitor faces entering the apartment | Storage, Predict | Recognize faces going in and out for enhanced security

Table 2.2.2 Table of requirements of the building management department

c) Supervisor department system

No | Request | Type of request | Description/Binding/Formula
1 | | Search | Show information about vehicles entering/leaving the parking lot
2 | Realtime monitoring face check-in | Search | Show information about users who checked in at the building
3 | Create audio | Storage | Create audio for the client system
4 | Update audio | Storage | Update audio for the client system
5 | Deploy audio | Storage | Synchronize audio when the admin has updated audio on the website
6 | | Storage | Initialize a site belonging to the organization's architecture
7 | | Storage | Update site information to match the current map
8 | Assign manager | Storage | Assign a manager for the department to supervise users in the system
9 | | Storage | Manage user information and set access roles in the system
10 | Show member | Search | Display members in the system
11 | Approval/Reject | Access | Allow or deny the user access to the department
12 | Add device | Storage | Add device information in the department
13 | Update device | Storage | Update device information in the department
14 | | Access | Allow the device to be active or not
15 | Delete device | Storage | Remove a device from the system when it is no longer used
16 | User permissions | Access | Set permissions for users accessing the website
17 | Group permissions | Access | Set permissions for groups accessing the website
18 | Display user | Search | Display all accounts in the system
19 | | Storage | Create or update advertisements displayed on the client's main form, and synchronize the information to the client system
20 | | Storage | Change menu positions or names dynamically on the website
21 | Site Setting | Storage | The user can change the UI dynamically (color, hidden elements, position)
22 | Profile | Storage | The user updates information related to their face, card, or vehicle to gain access to the department system

Table 2.2.3 Table of requirements of the system administrator department

No | Request | Description
1 | | Call a socket to the web to check the device's access role
2 | Send OTP code | The system sends a phone number confirmation code and requires completing the registration information
4 | Send SMS | Send account information when the user registers successfully at the system

No | Request | Type of request
1 | User-friendly interface, easy to use | Convenience
2 | Ability to expand, upgrade, and improve in the future | Evolution
3 | Stable page loading speed, quick response operations | Efficiency
5 | The code can be reused and applied to many different programs without major changes |

Expected results achieved

A complete parking management system that achieves the following important results:

● Fully meet the proposed functions

● Complete deep learning models and stable applications

● Teamwork skills and use of collaboration and project management tools such as Trello, GitHub, and GitLab

● Learn and gain knowledge about well-known deep learning models such as SSD, YOLO, ResNet, etc.

ANALYSIS AND SYSTEM DESIGN

Design System

Database Specification

Database Description

3.3.1. Table tblAdMgt

No | Attribute | Data type | Definition | Note
1 | AdNo | Int | Identity of the advertisement | Primary Key
2 | AdType | Nvarchar(6) | Type of advertisement (image or video) |
3 | AdName | Nvarchar(30) | Name of the advertisement |
4 | | Datetime | The day the advertisement can start appearing on the main screen (> 1 day) |
5 | | Datetime | The day the advertisement stops appearing on the main screen |
6 | | Datetime | The time of day from which it can be displayed |
7 | | Datetime | The time of day until which it can be displayed |
8 | AdStatus | Bit | Whether it can be displayed |
9 | | Datetime | The datetime the advertisement was created |
10 | ResitUser | Nvarchar(50) | The user who created the advertisement |
11 | | Bit | Position displayed on the screen |
12 | | Nvarchar(99) | The file path on the server |
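The fields above can be sketched as a minimal relational table. The snippet below is an illustrative sqlite3 sketch only (the thesis uses MS SQL Server), restricted to the columns the table names explicitly; the unnamed columns are omitted rather than guessed.

```python
import sqlite3

# Illustrative only: the thesis targets MS SQL Server; sqlite3 is used here
# so the sketch is runnable. Only the columns named in the table above are
# modeled; the unnamed Datetime/Bit columns are omitted rather than guessed.
def create_ad_table(conn: sqlite3.Connection) -> None:
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS tblAdMgt (
            AdNo      INTEGER PRIMARY KEY,                      -- identity of the advertisement
            AdType    TEXT CHECK (AdType IN ('image','video')), -- Nvarchar(6) in the thesis
            AdName    TEXT,                                     -- name of the advertisement
            AdStatus  INTEGER,                                  -- Bit: can be displayed or not
            ResitUser TEXT                                      -- user who created the advertisement
        )
        """
    )

conn = sqlite3.connect(":memory:")
create_ad_table(conn)
conn.execute(
    "INSERT INTO tblAdMgt (AdType, AdName, AdStatus, ResitUser) "
    "VALUES ('image', 'promo', 1, 'admin')"
)
row = conn.execute("SELECT AdName, AdType FROM tblAdMgt WHERE AdNo = 1").fetchone()
```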

3.3.2. Table tblAdStoreMgt

No | Attribute | Data type | Definition | Note
1 | AdNo | Int | Reference to tblAdMgt | Foreign Key
2 | StoreNo | Int | Reference to tblStoreMaster | Foreign Key

3.3.3. Table tblClientSoundMgt

No | Attribute | Data type | Definition | Note
1 | SoundNo | Int | Identity of the audio | Primary Key
2 | SoundName | Nvarchar(20) | Name of the audio |
3 | | | File path on the server |
4 | | | File path on the client |
5 | | Bit | Status of the audio when deployed to the client system |
6 | RegistDate | Datetime | The datetime the audio was created |
7 | ResitUser | Nvarchar(50) | The user who created the audio |
 | | Datetime | The datetime the audio was updated |
14 | SoundType | Nvarchar(50) | Type of audio |

3.3.4. Table tblStoreDevice

No | Attribute | Data type | Definition | Note
2 | StoreNo | Int | Reference to tblStoreMaster | Foreign Key
 | | Nvarchar(6) | Type of device (building or parking) |
 | | Nvarchar(50) | The user who created the device |
 | | Datetime | The datetime the device was created |

3.3.5. Table tblStoreEnvironmentSetting

No | Attribute | Data type | Definition | Note
2 | StoreNo | Int | Reference to tblStoreMaster | Foreign Key
 | | Int | Threshold of the face distance |
 | | Bit | Whether the camera can be opened |

3.3.6. Table tblStoreMaster

No | Attribute | Data type | Definition | Note
1 | StoreNo | Int | Identity | Primary Key
2 | Location | Nvarchar(6) | Location of the department |
11 | Status | Bit | Whether it can be used |
12 | Capacity | Int | Number of people/vehicles the site can hold |
 | | Datetime | The tracking start time |
 | | Datetime | The tracking end time |
 | | Int | The minimum age for registering an account at the building |
17 | SiteType | Nvarchar(6) | Type of site (parking/building) |

3.3.7. Table tblTrack

No | Attribute | Data type | Definition | Note
1 | Id | Int | Identity | Primary Key
3 | VehicleId | Int | Reference to tblVehicle | Foreign Key
4 | StartTime | Datetime | Check-in start time at the parking lot |
5 | EndTime | Datetime | Check-out end time at the parking lot |
7 | SiteId | Int | Reference to tblStoreMaster | Foreign Key
10 | UserId | Int | Reference to tblUser | Foreign Key
 | | Varchar(250) | Face image path at check-in |
 | | Varchar(250) | Face image path at check-out |
13 | PlateIn | Varchar(250) | Plate image path at check-in |
15 | Status | Nchar(10) | Status of the vehicle |
 | | Datetime | Current time of entry into the parking lot |

3.3.8. Table tblUser

No | Attribute | Data type | Definition | Note
1 | UserID | Nvarchar(20) | Identity | Primary Key
2 | UserType | Nvarchar(6) | Role of the user in the department |
 | | Nvarchar(15) | Contact phone number |
6 | Email | Nvarchar(30) | Contact email |
7 | Birthday | Datetime | User's date of birth |
 | | Bit | Whether the user can access the department |
11 | RegistDate | Datetime | Datetime the account was created |
 | | Varchar(6) | Code of the phone number |

3.3.9. Table tblUserHistory

No | Attribute | Data type | Definition | Note
2 | UserID | Nvarchar(20) | Reference to tblUser | Foreign Key
 | | Int | Threshold of the face distance |
 | | Bit | Whether the user can access the department |
 | | Datetime | Check-in time in the system |
7 | LoginIP | Nvarchar(20) | IP of the face device |

3.3.10. Table tblUserPhoto

No | Attribute | Data type | Definition | Note
2 | UserID | Nvarchar(20) | Reference to tblUser |
 | | Image | Image of the face taken at the client system |

3.3.11. Table tblVehicle

No | Attribute | Data type | Definition | Note
1 | Id | Int | Identity | Primary Key
3 | TypeTransport | nchar(10) | Type of vehicle |
4 | TypePlate | nchar(10) | Type of license plate |
6 | UserId | nvarchar(20) | Reference to tblUser |
7 | CreatedAt | datetime | Datetime the vehicle record was created |
8 | UpdatedAt | datetime | Datetime the vehicle record was updated |
9 | VehiclePhoto | image | Image of the vehicle |
10 | LicensePhoto | image | Image of the vehicle license |
11 | VehiclePhotoPath | varchar(200) | Path of the vehicle image |
12 | LicensePhotoPath | varchar(200) | Path of the vehicle license image |

3.3.12. Table SYUserGroups

No | Attribute | Data type | Definition | Note
 | | Int | Group id | Primary Key
 | | Varchar(255) | Description of the group |
5 | SITE_ID | Int | Parking site id |
 | | Datetime | User group creation date |
 | | Datetime | User group update date |

3.3.13. Table SYUsersInGroup

No | Attribute | Data type | Definition | Note
 | | Int | User group ID | Primary Key
3 | SITE_ID | Int | Parking site id |
 | | Datetime | User group creation date |
 | | Datetime | User group update date |

3.3.14. Table RolesToPermissions

No | Attribute | Data type | Definition | Note
1 | RoleCode | Varchar(50) | Role code corresponding to permissions |

3.3.15. Table Permissions

No | Attribute | Data type | Definition | Note
1 | Code | Varchar(50) | Permission code | Primary Key

3.3.16. Table UserRoles

No | Attribute | Data type | Definition | Note
1 | UserId | Int | User ID | Primary Key
2 | RoleCode | varchar(50) | The user's corresponding role information |

3.3.17. Table SYUser

No | Attribute | Data type | Definition | Note
1 | USER_ID | Varchar(128) | User's id | Primary Key
10 | SITE_ID | Int | User registration place |
 | | Bit | Whether the user has been blocked |
 | | Datetime | Datetime of the last password change |
 | | Varchar(100) | Type of user in the system |

Use Case Diagram

Figure 3.4.1 Use-case total diagram

Figure 3.4.2 Building client system use-case

Figure 3.4.3 Parking client system use-case

Figure 3.4.4 Supervisor user & vehicle use-case

Use case specification and Sequence diagram

3.5.1. Real Time Tracking Vehicle History

This use case describes the function of the vehicle tracking system, which tracks vehicles in and out of the parking lot by recognizing license plates and faces.

Actors: Users in and out of the parking lot, employees

Pre-Conditions: Software with a new MAC address can access the system

Post-Conditions: Software successfully accessed the system

Main Flow: (1) The application starts the worker

(2) The user moves the vehicle into the monitoring area

(3) The system detects the license plate and recognizes the characters on it

(4) The system detects and recognizes the face

(5) Compare the face and check whether the vehicle already exists in the data

(6) Return the result to the user

The user points the license plate at the camera and the system accepts the user into the parking lot

Parking is not accepted without a matching number plate and face

Table 3.5.1 Real Time Tracking Vehicle History

Figure 3.5.1 Real Time Tracking Vehicle History Sequence diagram
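The admission rule in steps (3)-(6), accepting a vehicle only when both the recognized plate and the recognized face match a registration, can be sketched as below. The inputs and the `registered` mapping are hypothetical stand-ins for the outputs of the real YOLO/SSD plate detector and the face recognition service.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class CheckInResult:
    accepted: bool
    reason: str

def check_in(plate_text: Optional[str],
             face_id: Optional[str],
             registered: Dict[str, str]) -> CheckInResult:
    """Admit a vehicle only when both the plate and the face match a registration.

    plate_text / face_id stand in for the recognizer outputs; `registered`
    maps a plate number to its owner's face id (a hypothetical data shape).
    """
    if plate_text is None:
        return CheckInResult(False, "no license plate detected")
    if face_id is None:
        return CheckInResult(False, "no face detected")
    if registered.get(plate_text) != face_id:
        # parking is not accepted without a matching number plate and face
        return CheckInResult(False, "plate and face do not match")
    return CheckInResult(True, "vehicle admitted")
```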

3.5.2. Real Time Tracking Face History

Use Case: Real Time Track History

The system monitors users entering and leaving the system by face check

Pre-Conditions: The user points his face at the camera to recognize and detect the face

Post-Conditions: User is approved to the system

Main Flow: 1) User points face at camera

2) The system detects faces and sends photos to FaceServices

3) Get the results and check in the data about the information of the newly recognized face

The user points his face at the camera then is approved to enter the system

If the system does not recognize the face, it notifies the user. If no face is detected within 5 s, the user is also notified.

Table 3.5.2 Real Time Tracking Face History

Figure 3.5.2 Real Time Tracking Face History Sequence diagram
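The face check above ultimately reduces to comparing embedding vectors against a distance threshold (cf. the face-distance threshold columns in tblStoreEnvironmentSetting and tblUserHistory). Below is a minimal Euclidean-distance sketch; the embedding extractor itself (the deep models of Chapter 5) is out of scope here, and the default threshold value is an assumption.

```python
import math
from typing import Sequence

def euclidean_distance(a: Sequence[float], b: Sequence[float]) -> float:
    """Distance between two face embeddings in vector space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_same_face(emb_a: Sequence[float],
                 emb_b: Sequence[float],
                 threshold: float = 1.0) -> bool:
    """Treat two faces as the same person when their embeddings are closer
    than the configured threshold (the default here is an assumed value)."""
    return euclidean_distance(emb_a, emb_b) < threshold
```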

Assign functional permissions to each user account

Pre-Conditions: Already have an account with the ability to decentralize

Post-Conditions: Successful authorization for users

Main Flow: 1) User logs in to the management site with admin account

2) The manager chooses the account to decentralize

3) Perform a permission update for the account and then click save

If the login account does not have permissions, the user is notified

Error message if permission change failed due to insufficient information provided

Figure 3.5.3 User permission Sequence diagram

Use Case: Vehicle Info Management

Update information of vehicles under your ownership

Pre-Conditions: Successfully registered an account

Post-Conditions: User's vehicle is updated successfully

Main Flow: 1) User uses authorized account to log in and access the vehicle information management function

2) Provide information to be updated

3) Click Save to confirm the update

4) The system will update the new vehicle information or create a new vehicle if it does not exist

5) The system displays a message and reloads the page to update the modified information

If the user exits the application while performing the function, the function ends and no information is modified

If the user enters invalid information, they will be notified and asked to re-enter the information

Figure 3.5.4 Vehicle Info Management Sequence diagram

Use Case: Register User Phone

This use case describes the user registration function by phone number and face photo

Pre-Conditions: The system must be deployed and operational

Office users have personal phone numbers

Post-Conditions: The user has successfully registered and received a message from the system to complete the login information update

Main Flow: 1) User accesses user registration function by phone number

3) User receives and enters the OTP sent from the system for confirmation

4) User enters information about date of birth and time

5) User enters their full name

6) User points face at screen to detect face

7) User checks information and clicks confirm to register

8) The system notifies the registration result, and the user ends the registration function

2a. If the user closes the application while performing the function, the user is notified and the application ends

3a. If the user did not receive the OTP, they can ask for it to be resent

6a. If the face is not detected after 5 s, the user is notified and asked to try again

2a. If the user enters an invalid phone number, the system reports an error and asks the user to re-enter it

3a. If the user enters an incorrect OTP confirmation code, they are asked to re-enter it or end the program

6a. If the system cannot authenticate the identity by face recognition, it notifies the user and asks them to repeat the facial recognition process or end the registration

Figure 3.5.5 Register User Phone Sequence diagram
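Steps 2-3 of the flow (sending and confirming the OTP) can be sketched as below. The six-digit format, the five-minute validity window, and the in-memory store are all assumptions of this sketch; the real system delivers the code by SMS.

```python
import secrets
import time
from typing import Dict, Tuple

OTP_TTL_SECONDS = 300                          # assumed 5-minute validity window
_pending: Dict[str, Tuple[str, float]] = {}    # phone -> (code, expiry timestamp)

def send_otp(phone: str) -> str:
    """Generate a six-digit code for the phone number (sent by SMS in the
    real system; returned here only so the sketch is testable)."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[phone] = (code, time.time() + OTP_TTL_SECONDS)
    return code

def verify_otp(phone: str, code: str) -> bool:
    """Confirm the code the user typed in (step 3 of the main flow)."""
    entry = _pending.get(phone)
    if entry is None:
        return False
    expected, expires_at = entry
    if time.time() > expires_at:
        del _pending[phone]   # expired codes must be re-requested (flow 3a)
        return False
    return secrets.compare_digest(expected, code)
```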

Use Case: Register User Citizen ID

This use case describes the function of user registration by citizen ID and face photo

Pre-Conditions: The system must be deployed and operational

The user owns a new version of the citizen's identity card

Post-Conditions: The user has successfully registered and received a system message to complete the login information update

Main Flow: 1) User accesses the user registration function by citizen ID

2) User enters citizen ID number

3) The user points the citizen ID at the screen to identify

4) User enters information about date of birth and time

5) User enters their full name

6) User points face at screen to detect face

7) User checks information and clicks confirm to register

8) The system notifies the registration result, and the user ends the registration function

3a. If the system does not detect the citizen ID, it asks the user to take the photo again

6a. If the face is not detected after 5 s, the user is notified and asked to try again

If the user inputs an invalid citizen ID number that is either too short or too long, they are prompted to re-enter the ID or exit the program. If the citizen ID number does not match the required format, the user must return to the step of entering the citizen ID number or taking the photo.

If the user's face is not detected, the user is notified or the application ends

Table 3.5.6 Register User Citizen Id

Figure 3.5.6 Register User Citizen ID Sequence diagram
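The length and format check described in the alternate flows can be sketched as a single validation function. The 12-digit rule is an assumption of this sketch, based on the format of the new Vietnamese citizen ID card mentioned in the pre-conditions.

```python
import re

# Assumed format: the new Vietnamese citizen ID card uses a 12-digit number.
CITIZEN_ID_RE = re.compile(r"\d{12}")

def validate_citizen_id(raw: str) -> bool:
    """Reject IDs that are too short, too long, or not purely numeric."""
    return CITIZEN_ID_RE.fullmatch(raw.strip()) is not None
```

A failed check corresponds to the alternate flow above: the user is asked to re-enter the number or exit.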

Use Case: Sync client audio

This use case describes the function of synchronizing the client's audio with the server in a multimedia application or system. The system allows the user to synchronize the client's audio with the audio content played from the server

Pre-Conditions: The multimedia system has been deployed and is operational, and the account has audio-sync permission

Post-Conditions: The client audio has been synchronized with the audio content played from the server

Main Flow: 1) The manager logs in to the system successfully

2) The manager accesses the page that performs the audio sync function

3) The manager selects the deployed sound and clicks confirm

4) The system displays the results on the screen

If the audio no longer exists, a message is shown on screen and the user is asked to repeat or end the program

Figure 3.5.7 Sync client audio Sequence diagram
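The sync decision itself can be sketched as a comparison between the server copy and the client copy of a sound (cf. the server and client file-path columns of tblClientSoundMgt). Hash-based comparison is an assumption of this sketch, not something the thesis specifies.

```python
import hashlib
from typing import Optional

def file_digest(data: bytes) -> str:
    """Fingerprint of an audio file's contents."""
    return hashlib.sha256(data).hexdigest()

def needs_sync(server_bytes: bytes, client_bytes: Optional[bytes]) -> bool:
    """The client re-downloads a sound when its copy is missing or stale."""
    if client_bytes is None:      # audio not yet deployed to the client
        return True
    return file_digest(server_bytes) != file_digest(client_bytes)
```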

Use Case: Create or Update Sound

This use case describes the function of creating or updating audio information in a multimedia system

Pre-Conditions: The multimedia system has been deployed and operational

The administrator or user is authenticated and has access to create new or update sounds

Post-Conditions: Audio information has been successfully created or updated in the system

Main Flow: 1) The administrator logs in and accesses the function of creating or updating sounds in the system

2) The administrator enters the audio information and provides the file

3) The system asks for confirmation of sound generation

4) The admin or user confirms sound creation

5) The system creates the sound successfully

5a) If a sound has already been created or exists in the system, that sound's information is updated

If the audio file is not valid, the user is asked to change the file or end

Table 3.5.8 Create or Update Sound

Figure 3.5.8 Create or Update Sound Sequence diagram

3.5.9. Create Or Update Parking site

Use Case: Create Or Update Parking site

This use case outlines the process for creating or updating parking site information within the system. Administrators can manage essential details, including the name, address, type, location, and various attributes related to parking facilities.

Pre-Conditions: The parking management system has been deployed and operational

The administrator is authenticated and has access to parking management functionality

Post-Conditions: Parking information has been successfully created or updated in the system

Main Flow: 1) The administrator accesses the parking management function in the system

2) The user enters the parking information to be created

3) The manager clicks confirm to perform the function

5) If it does not exist, create a new parking lot

5a If it already exists, update the parking lot

If the parking information is not valid, the administrator is notified

Table 3.5.9 Create Or Update Parking site

Figure 3.5.9 Create Or Update Parking Site Sequence diagram

3.5.10. Search Member in Parking site

Use Case: Search Member in Parking site

This use case describes the member search function in the parking system

Pre-Conditions: The parking management system has been deployed and operational

The administrator or user is authenticated and has access to the member search function

Post-Conditions: The administrator or user has successfully searched and accessed member information in the system

Main Flow: 1) The admin accesses the member search page on the Parking site

2) The system displays authorized members for the login account

3) User clicks the member search function

4) User enters information to search

5) The system searches for members according to the user-supplied keywords

6) The system displays the results to the admin

6a If no results are found, a message will be displayed on the screen

If the user enters an invalid keyword, they are asked to re-enter it or end the search function

Table 3.5.10 Search Member in Parking Site

Figure 3.5.10 Search Member in Parking site Sequence diagram

3.5.11 Login to the management site

Use Case: Login to the management site

Short Description: This use case covers logging in and checking the permissions of the user account

Pre-Conditions: The parking management system has been deployed and operational

The administrator or user has an account on the system

Post-Conditions: The administrator or user has successfully logged in and been directed to the pages their permissions allow

Main Flow: 1) User accesses the login page

2) User enters Username and password information

3) User clicks perform system login

4) The system checks the login information and performs the permission query

5) The system responds to user login and page navigation results

The system responds with a login error if the information is invalid or the user cannot be found

Table 3.5.11 Login to the management site

Figure 3.5.11 Login to the management site Sequence diagram
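The permission query of step 4, resolving which pages an account may open, can be sketched by joining the UserRoles and RolesToPermissions tables in memory. The data shapes below are hypothetical simplifications of those tables.

```python
from typing import Dict, List, Set

def permissions_for(user_id: int,
                    user_roles: Dict[int, List[str]],
                    role_perms: Dict[str, Set[str]]) -> Set[str]:
    """Resolve a user's permissions via UserRoles -> RolesToPermissions."""
    perms: Set[str] = set()
    for role in user_roles.get(user_id, []):
        perms |= role_perms.get(role, set())
    return perms

def can_access(user_id: int, required: str,
               user_roles: Dict[int, List[str]],
               role_perms: Dict[str, Set[str]]) -> bool:
    """Step 5's page navigation: open a page only if its permission is held."""
    return required in permissions_for(user_id, user_roles, role_perms)
```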

Class Diagram

Figure 3.6.1 Vehicle Detect Client Class Diagram

3.6.2. Detect Face in Citizen Id function

Figure 3.6.2 Detect Face in Citizen Id Function Class Diagram

3.6.3 Detect Face in Phone register function

Figure 3.6.3 Detect Face in Phone Register Function Class Diagram

Figure 3.6.4 Detect Citizen Id Class Diagram

Interface

3.7.1. Vehicle monitoring interface

No | Control | Type | Description
2 | License plate | Textbox | Displays detected license plates
3 | Accept In-Out | Checkbox | Shows whether vehicles are allowed in or out
 | | | Face monitoring screen to capture faces when tracking is needed
5 | Time | Textbox | Real-time display of the application
6 | Bar Item IP | Textbox | Shows whether the connection to the Web API succeeded or failed

3.7.2. Management website interface

No | Control | Type | Description
1 | Vertical menu | Group button | Select a detailed function
2 | Horizontal menu | Group button | Select the main function group
3 | User information | Link | Shows the username
4 | Logout | Button | Performs the user logout function
5 | Tag sub page | Textbox | Displays the currently used functions of the system
6 | Main child page | View | Displays the main working screen

No | Control | Type | Description
1 | Reload | Button | Reload the page
2 | Search | Button | Perform the search function
5 | Type by 30 Min | CheckBox | Search information within about 30 minutes
6 | Type by 60 Min | CheckBox | Search information within about 60 minutes
7 | Status Track | Combobox | Search by track status type
8 | List Site Parking | List Textbox | List of authorized parking lots
9 | Face In | Image | Displays the incoming face image
10 | Face Out | Image | Displays the outgoing face image
11 | Vehicle In | Image | Displays images of vehicles entering
12 | Vehicle Out | Image | Displays images of vehicles leaving

Figure 3.7.4 Profile Detail Member Interface

No | Control | Type | Description
1 | Face Image | Image | User's face image
2 | ID card Image | Image | Citizen ID photo
3 | Time check in | Textbox | Check-in time
8 | Save | Button | Save the checked-in member's information
9 | Delete | Button | Delete the checked-in member's information

Figure 3.7.5 Site Management Setting Interface

No | Control | Type | Description
1 | Reload | Button | Reload site data
2 | Create | Button | Create a new parking lot
4 | List parking site | Data table | List of parking lots in the system
 | | Button | Every parking lot needs system administrators
6 | Site type | Textbox | Adjust the site type when performing the parking or apartment check-in function

Figure 3.7.6 Voice File Deploy Interface

No | Control | Type | Description
5 | Voice File ID | Textbox | Files are generated with a corresponding code in the system
7 | Audio List | Data table | List of sounds used in the system
8 | Print | Button | Print a list of deployed audio

Figure 3.7.7 Audio File Information Detail Interface

No | Control | Type | Description
2 | Check duplicate | Button | Check whether the user-entered code is a duplicate
4 | Select file | Button | Select an audio file from the device
5 | Audio play | Audio | Lets the user listen to the audio again for testing
6 | Path audio | Textbox | The path where the audio file is saved in the system
7 | Sound type | Textbox | The sound type assigns the sound's task for each job
10 | Deploy | Button | Perform synchronization for client functions
11 | Delete | Button | Delete the selected sound
12 | Create | Button | Create a new sound
13 | Save | Button | Save the changes to the sound

No | Control | Type | Description
1 | Ad type | Textbox | Advertisement type
2 | Ad name | Textbox | Advertisement name
5 | Ad Status all | Checkbox | Search all advertisements
6 | Ad false Status | Checkbox | Search for "False" advertisements
7 | Ad true Status | Checkbox | Search for "True" advertisements

Figure 3.7.9 Advertising Management Information Interface

No | Control | Type | Description
1 | List of parking lots | Data table | List of parking lots
2 | Ad no | Textbox | Advertising code
3 | Ad type | Combobox | Type of advertisement posted
4 | Ad middle location | CheckBox | The ad display position is in the middle
5 | Ad top location | CheckBox | The ad display position is at the top
6 | Ad name | Textbox | Advertisement name
9 | Start time | Textbox | Display start time
10 | End time | Textbox | Display end time
11 | Register Date | Textbox | Registration time
14 | Path file | Textbox | File path in the system
15 | Creator | Textbox | The person who created the ad

7 List user Data table List of registered users on the system

1 Type visitor months Combobox Search for users by number entering the parking lot

2 Type visitor months Combobox Search for users by number entering the parking lot

3 List of parking lots Data table Select the parking lot to monitor and check the vehicle

4 List of registered users entering the system Data table Display user information; if the user is not registered in the system, some columns of registered user information will be left blank

Figure 3.7.12 Vehicle detail management Interface

1 Plate check-in Image Number plate of vehicles entering

2 Face check-in Image The face of the person entering the system

3 Plate check-out Image The license plate of the person leaving the system

4 Face check-out Image The face of the person going out of the system

5 Vehicle in and out history Data table Vehicle entry and exit information

6 Username Textbox Username of the vehicle owner

7 Gender Check box Vehicle owner's gender

8 Register date Textbox User registration date

9 Phone Number Textbox Vehicle owner's phone number

1 Default camera image Picture box System camera

2 Advertising program Video Advertising program from the system

3 Make a registration Button Implement the registration function

4 Connect Textbox Server connection notification

1 Verification by Card ID Picture box Select verification by card

2 Verification by Phone Picture box Select verification by phone

3 Button Back Button Back to Home Form

3.7.3.3 Register users by phone number

1 Type of phone number prefix Combo box Make a phone number prefix selection

2 All user phone numbers Textbox Show phone number

3 Button Number Button Enter phone number

4 Button next Button Next to OTP function

1 OTP number Textbox Show OTP

2 Resend OTP Button Resend confirmation code

3 Button Number Button Enter phone number

4 Button Rollback Button Return to the phone number input function

5 Button next Button Confirm phone number and redirect to input function

Enter your citizen identification number.

Figure 3.7.17 Enter your citizen identification number

1 All user citizen ID numbers

2 Button Number Button Enter citizen ID number

3 Button Exit Button Exit Form Register

4 Button next Button Next to OTP function

Take a photo and identify the citizen identification number

Figure 3.7.18 Take a photo and identify the citizen identification number

1 Camera citizen ID PictureBox Show citizen ID taken

2 Button Previous Button Go back to the citizen ID number function

3 Button Complete Button Complete citizenship identification and go to user information input

Figure 3.7.19 Enter user's personal information

1 Date of birth Textbox Show the entered date of birth

2 Gender Combo Box Select the user's gender

3 Number Button Enter numbers on the screen

4 Button Previous Button Go back to the citizen ID capture

5 Button Next Button Complete user information and go to full name input

Enter the user's full name

Figure 3.7.20 Enter the user's full name

1 Full name Textbox Show the entered full name

2 Number Group Button Enter the full name on the screen

3 Button Previous Button Go back to the user information form

4 Button Next Button Complete full name and go to face capture

Take a photo of your face for authentication using the SSD model

Figure 3.7.21 Take a photo of your face for authentication using the SSD model

1 Camera face detect Picture Box Show the face captured by the camera

2 Button Previous Button Go back to the citizen ID number function

3 Button Complete Button Complete face detection and go to user information input

4 Face Image Picturebox Show face captured from camera

5 ID VN Image Picturebox Show citizen ID detected

TECHNOLOGY KEY POINTS

FastAPI

FastAPI is a user-friendly framework that accelerates project development, leveraging OpenAPI (formerly Swagger) for its web components and utilizing Pydantic for data handling.

+ Fast: Very high performance, on par with NodeJS and Go

+ Fast to code: Increase feature development speed, building on Python's adaptable and easy-to-learn foundation

+ Fewer bugs: Reduce human (developer) induced errors by about 40%

+ Intuitive: Great editor support, completion everywhere, less time debugging

+ Easy: Designed to be easy to use and learn, less time reading docs

+ Short: Minimize code duplication, multiple features from each parameter declaration, fewer bugs

+ Robust: Get production-ready code, with automatic interactive documentation

+ Standards-based: Based on (and fully compatible with) the open standards for APIs: OpenAPI (previously known as Swagger) and JSON Schema

- Increase project completion speed in a short period of time

- Built-in swagger convenient for testing and bug fixing

- Deploy deep learning models faster and more efficiently with the PyTorch and TensorFlow libraries.

4.2 .NET Framework and ASP.NET Core

● ASP.NET Core is a cross-platform, open-source framework for building web applications

● It introduces various improvements, including enhanced performance, improved JSON serialization, simplified routing, and improved endpoint routing

● It offers built-in support for dependency injection, which helps manage application dependencies and promotes modularity and testability

● ASP.NET Core 3.1 includes Razor Pages, a lightweight alternative to MVC for building web applications, and SignalR, a library for building real-time web functionality

Windows Forms with .NET Framework 4.8:

● Windows Forms is a graphical user interface (GUI) framework for building Windows desktop applications

● Windows Forms allows developers to create rich, responsive, and event-driven desktop applications with a drag-and-drop design approach

● It provides a wide range of controls and components to create user interfaces, handle user input, and interact with data

● .NET Framework 4.8 includes various improvements and bug fixes, ensuring a stable and reliable development experience.

MS SQL Server

Microsoft SQL Server, or MS SQL Server, is a powerful relational database management system created by Microsoft. It offers extensive tools for efficiently storing, managing, and retrieving structured data. Notable features of MS SQL Server include its scalability, performance optimization, seamless integration with the Microsoft ecosystem, robust security and compliance measures, and advanced business intelligence capabilities.

Microsoft SQL Server offers high availability and disaster recovery features, along with support for advanced analytics and machine learning. Its integration with Microsoft Azure enhances hybrid and cloud capabilities, making it a popular choice for efficient data management in enterprise applications.

Deep learning

Deep learning, a subfield of machine learning, trains artificial neural networks with multiple layers to extract meaningful data representations. It provides benefits like automatic feature learning, scalability, flexibility, and top-tier performance, particularly in computer vision and natural language processing tasks. However, it also encounters challenges, including high data requirements, significant computational resources, interpretability issues, and the potential for overfitting. Despite these obstacles, deep learning has transformed numerous fields and remains a key driver of advancements in artificial intelligence.

PyTorch

PyTorch is a flexible and user-friendly open-source machine learning framework that features a dynamic computation graph, allowing for easy model modifications and debugging. Its Pythonic syntax and intuitive interface make it accessible to developers, researchers, and students alike. With a vibrant community and a rich ecosystem of libraries and pre-trained models, PyTorch accelerates development and prototyping. It also offers seamless GPU integration for enhanced training and inference. However, it presents a steeper learning curve than some other frameworks and may lack mature tools for large-scale production deployment, along with limited support for mobile and embedded platforms. Despite these challenges, PyTorch's power and flexibility make it a favored choice for deep learning applications.

TensorFlow

TensorFlow is an open-source machine learning framework that offers a comprehensive set of tools and resources for building and deploying deep learning models

It supports static computational graphs for efficient computation and offers scalability for distributed training and deployment. TensorFlow Hub provides pre-trained models, and TensorFlow Extended (TFX) facilitates the creation of comprehensive machine learning pipelines, while TensorFlow Serving and TensorFlow Lite are designed for model deployment in production and limited-resource settings. The integration of TensorFlow 2.0 with the Keras API enhances user-friendliness, and the platform's vibrant community and extensive ecosystem significantly contribute to its popularity, offering valuable resources for developers.

DEEP LEARNING AND APPLICATIONS

Vehicle license plate recognition

The data collection process utilized a variety of sources to gather license plate images, including public repositories like GitHub and Kaggle, as well as images from our school's parking slots. This comprehensive approach enabled us to compile a diverse array of license plates from different contexts, encompassing both public and private vehicles.

Incorporating license plate images from our school's parking areas allowed us to create a localized dataset, which is essential for training our model on the specific license plates prevalent in our vicinity. This localized data is expected to enhance the model's performance when implemented within our school grounds.

License plate detection dataset size: 20000 photos [2]

License plate recognition dataset size: 4,000 photos [3]

The YOLO (You Only Look Once) architecture is a widely recognized object detection algorithm celebrated for its impressive speed and accuracy. It processes an input image by dividing it into a grid of cells, with each cell tasked with predicting bounding boxes and class probabilities.

● The input image passes through a base convolutional network, such as DarkNet or Tiny DarkNet, to extract high-level features

● The network typically consists of multiple convolutional layers, max-pooling layers, and activation functions

● The output of the base network is a feature map that retains the spatial information of the input image

● The feature map is divided into a grid of cells

● Each cell in the grid is responsible for predicting bounding boxes and class probabilities for objects

● Prior knowledge about the object shapes and sizes is incorporated using predefined anchor boxes of different aspect ratios and scales

● For each grid cell, YOLO predicts bounding boxes based on the anchor boxes

● Each bounding box consists of four coordinates (x, y, width, height) relative to the grid cell

● YOLO also predicts the confidence score, which represents the likelihood of containing an object, and the class probabilities for different object categories

● To remove duplicate and overlapping bounding box predictions, YOLO performs non-maximum suppression (NMS)

● NMS selects the most confident bounding box among the overlapping ones based on a predefined threshold

● This process ensures that each object is detected only once

The final output of the YOLO algorithm is a list of bounding boxes, along with their confidence scores and class probabilities
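The IoU computation and the NMS step described above can be sketched in plain Python; boxes are (x1, y1, x2, y2) tuples, and the 0.5 IoU threshold is an illustrative choice:

```python
def iou(a, b):
    """Intersection over Union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the most confident box among overlapping ones."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Drop every remaining box that overlaps the kept one too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two heavily overlapping detections and one separate detection
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]
```

The second box is suppressed because its IoU with the first (more confident) box exceeds the threshold, so each object is reported only once.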

The SSD (Single Shot MultiBox Detector) architecture is a widely recognized object detection algorithm celebrated for its real-time performance and precision. It operates by predicting object bounding boxes and class probabilities in a single pass through a convolutional network.

○ Similar to YOLO, SSD starts with a base convolutional network, such as VGG-16 or ResNet, to extract high-level features from the input image

○ The network is typically pretrained on a large-scale image classification task, such as ImageNet, to learn general image representations

○ SSD generates a series of feature maps at multiple scales by adding extra convolutional layers to the base network

○ Each feature map has a different spatial resolution and captures features at a specific scale

○ SSD defines a set of default anchor boxes at different scales and aspect ratios on each feature map

○ The anchor boxes act as priors for predicting bounding boxes

○ Each anchor box represents a potential object location and has associated class predictions

○ For each anchor box, SSD predicts offsets for the coordinates (center x, center y, width, height) relative to the anchor box shape

○ SSD also predicts the class probabilities for each anchor box, representing the likelihood of containing different object categories

○ Predictions are made using a combination of convolutional layers with different kernel sizes

○ SSD matches the predicted bounding boxes to ground truth boxes based on the overlap criterion (e.g., Intersection over Union, IoU)

○ The matching process assigns positive and negative labels to anchor boxes for training

○ The loss is calculated using a combination of localization loss (e.g., Smooth L1 loss) and classification loss (e.g., softmax or focal loss)

The localization loss is the averaged Smooth L1 loss between the encoded offsets of positively matched localization boxes and their ground truths

Then, the confidence loss is simply the sum of the Cross Entropy losses among the positive and hard negative matches

The Multibox loss is the aggregate of the two losses, combined in a ratio α

In general, we needn't decide on a value for α; it could be a learnable parameter
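The Smooth L1 term used in the localization loss can be written out directly (β = 1 is the commonly used setting and is assumed here):

```python
def smooth_l1(x, beta=1.0):
    """Smooth L1: quadratic near zero, linear for large errors.
    This makes the loss less sensitive to outlier boxes than pure L2."""
    x = abs(x)
    if x < beta:
        return 0.5 * x * x / beta
    return x - 0.5 * beta

# Small errors fall in the quadratic region, large errors in the linear one
print(smooth_l1(0.5))  # 0.125
print(smooth_l1(2.0))  # 1.5
```

Averaging this term over the positively matched boxes gives the localization part of the MultiBox loss described above.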

○ After prediction, SSD performs non-maximum suppression (NMS) to eliminate redundant and overlapping bounding box detections

○ NMS selects the most confident bounding box among the overlapping ones based on a predefined threshold

○ The final output of SSD is a list of bounding boxes, along with their confidence scores and class probabilities

5.1.4.1 YOLO experiment

a) Vehicle detection by YOLO

Figure 5.1.4 Vehicle detection results by Yolo

Figure 5.1.5 Vehicle detection confusion matrix

b) Vehicle recognition by YOLO

Figure 5.1.6 Vehicle recognition results by Yolo

Figure 5.1.7 Vehicle recognition confusion matrix

5.1.4.2 SSD experiment

a) Vehicle detection by MB-SSD

Figure 5.1.8 Vehicle detection results by MB-SSD

b) Citizen detection by MB-SSD

Figure 5.1.9 Citizen detection results by MB-SSD

c) Citizen detection by VGG-SSD

Figure 5.1.10 Citizen detection results by VGG-SSD

Face Detection

P-Net is the first stage of MTCNN and its main goal is to generate candidate face regions in an image

It takes an input image and applies a series of convolutional layers to extract features. The output of P-Net consists of two main components:

○ Face classification probability: It predicts the probability of a region containing a face

○ Bounding box regression: It predicts the offset values to adjust the initial bounding box coordinates and better align them with the actual face regions

P-Net uses a non-maximum suppression technique to filter out overlapping and redundant candidate face regions

R-Net is the second stage of MTCNN and its purpose is to refine the candidate face regions generated by P-Net

It takes the candidate face regions as input and processes them through a similar series of convolutional layers

The output of R-Net consists of improved face classification probabilities and more accurate bounding box regressions

R-Net also uses non-maximum suppression to remove redundant and low-confidence face regions

O-Net is the final stage of MTCNN and it focuses on detecting facial landmarks within the refined face regions

It takes the refined face regions from R-Net as input and performs further processing

O-Net predicts the coordinates of several facial landmarks, such as the position of the eyes, nose, and mouth

Similar to the previous stages, O-Net applies non-maximum suppression to filter out overlapping and unreliable detections

MTCNN enhances face detection and facial landmark localization by cascading the P-Net, R-Net, and O-Net stages. This multi-stage approach enables iterative refinement, significantly improving the accuracy of both face and landmark detection.
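The bounding box regression in each stage shifts and rescales a candidate box by the predicted offsets. A sketch using the common (dx, dy, dw, dh) parameterization follows; this exact parameterization is an assumption for illustration, not taken from the thesis code:

```python
import math

def apply_offsets(box, offsets):
    """Refine a box (x1, y1, x2, y2) with predicted offsets (dx, dy, dw, dh)
    expressed relative to the box's center and size."""
    x1, y1, x2, y2 = box
    dx, dy, dw, dh = offsets
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + w / 2, y1 + h / 2
    # Shift the center proportionally to the box size,
    # and rescale width/height exponentially.
    cx, cy = cx + dx * w, cy + dy * h
    w, h = w * math.exp(dw), h * math.exp(dh)
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# A predicted dx of 0.1 shifts this 10-pixel-wide box right by 1 pixel
print(apply_offsets((0, 0, 10, 10), (0.1, 0.0, 0.0, 0.0)))
# -> (1.0, 0.0, 11.0, 10.0)
```

Each stage applies such a refinement to the candidate boxes before handing them to the next stage.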

Face Recognition

Size: The dataset contains many identities, with approximately 360,000 individuals represented and 17 million images, totaling about 120 GB

The dataset includes multiple images for each identity, showcasing variations in lighting, poses, expressions, and occlusions. This diversity enables a thorough assessment of face recognition models.

Diversity: The dataset encompasses a wide range of demographic factors such as age, gender, and ethnicity, providing a diverse set of faces for training and evaluation

The GLINT360K dataset comprises images sourced from the internet, offering a diverse range of image origins and quality. This variety effectively simulates real-world conditions, where facial images are captured in various environments and under different imaging circumstances.

5.3.1.2 Datasets for evaluation

- agedb_30: The exact size and volume of the AgeDB-30 dataset are not specified. It typically contains face image pairs with a 30-year age gap, resulting in a moderate-sized dataset.

- calfw: CALFW (Cross-Age LFW) consists of face images from different age groups. The dataset size is approximately 3,000 images.

- cfp_ff: The CFP-FF dataset (frontal-frontal protocol) contains face images captured with different devices, illuminations, and poses. It has approximately 500 images.

- cfp_fp: The CFP-FP dataset (frontal-profile protocol) includes face images captured in full poses. The dataset size is approximately 7,000 images.

- cplfw: CPLFW (Cross-Pose LFW) is a subset of the Labeled Faces in the Wild (LFW) dataset. It contains face images from the original LFW dataset, focusing on pose variations. The LFW dataset itself contains approximately 13,000 labeled images of over 5,700 individuals.

- lfw: The LFW (Labeled Faces in the Wild) dataset contains approximately 13,000 labeled images of over 5,700 individuals. Each image captures a face under unconstrained conditions, resulting in a diverse dataset.

- vgg2_fp: The VGG2-FP subset, which focuses on face-pair verification, consists of approximately 7,000 face pairs.

To improve the quality and consistency of face images, it is essential to perform preprocessing steps. Key techniques involve resizing the images to a uniform dimension, converting them to grayscale, and normalizing the pixel values.

Apply a feature extraction algorithm or deep learning model to capture discriminative features from the preprocessed face image

Popular methods for face feature extraction include:

○ Convolutional Neural Networks (CNN): Deep learning models that learn hierarchical representations of faces and can extract high-level features

○ MBF (MobileFaceNet), iResNet, or ViT: Pretrained models specifically designed for face recognition tasks

After feature extraction, you will obtain a vector or representation that encodes the extracted features

Normalize or scale the feature vector to ensure consistent comparisons across different faces

5.3.3 Calculate Face Distance in Vector Space

ArcFace is a specialized loss function designed for face recognition, aimed at enhancing the discriminative capabilities of deep learning models. By incorporating an angular margin between classes in the feature space, it significantly improves inter-class separation. This modification of the softmax loss with an angular margin term promotes greater angular distinction among class representations, leading to clearer decision boundaries and improved face recognition performance. Particularly effective in large-scale face recognition tasks, ArcFace generates more discriminative face embeddings, thereby boosting recognition accuracy.

- Choose an appropriate distance metric to measure the similarity or dissimilarity between face embeddings

- Common distance metrics used in face recognition include Euclidean distance, Cosine similarity, and L2 normalization distance

- The threshold value may vary depending on the specific application and desired trade-off between false positives and false negatives

- Calculate the Euclidean distance between two faces in vector space; if the distance is smaller than the threshold, the two faces belong to the same person

Figure 5.2.6 Calculate Face Distance in Vector Space
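The comparison step can be sketched in plain Python with toy embeddings; real models output, for example, 512-dimensional vectors, and the threshold of 1.0 is illustrative and would be tuned on a validation set:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(emb_a, emb_b, threshold=1.0):
    """Two embeddings belong to the same person if their distance
    falls below the decision threshold."""
    return euclidean(emb_a, emb_b) < threshold

# Toy 4-dimensional embeddings for illustration
registered = [0.1, 0.9, 0.2, 0.4]
checkin    = [0.12, 0.88, 0.22, 0.41]
stranger   = [0.9, 0.1, 0.8, 0.1]
print(same_person(registered, checkin))   # True
print(same_person(registered, stranger))  # False
```

Lowering the threshold reduces false accepts at the cost of more false rejects, which is the trade-off mentioned above.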

The ViT model architecture combines the Transformer architecture, originally proposed for natural language processing tasks, with self-attention mechanisms to capture global dependencies in an image [7]

- The input face image is divided into small patches of equal size, similar to a grid

- Each patch represents a local region of the face and contains pixel information

- Since the Vision Transformer lacks positional information, positional encoding is added to the patch embeddings

- Positional encoding helps the model understand the spatial relationship between different patches

- The patch embeddings with positional encoding are then passed through a stack of Transformer encoder layers

- Each encoder layer consists of self-attention mechanisms and feed-forward neural networks

- Self-attention allows the model to attend to relevant patches and capture global dependencies

- Feed-forward neural networks further process the embeddings to refine the learned representations

- Each patch is linearly transformed into a lower-dimensional vector called an embedding

- These patch embeddings capture local features and serve as input to the Transformer model

- The final embedding obtained from the Transformer encoder layers can be used for face recognition

- A classification head, typically consisting of fully connected layers, is added on top of the embeddings

- The classification head maps the embeddings to the desired output, such as identity labels for face recognition

For the face recognition task, however, we take the embedding layer, because it represents the extracted feature vector of the face
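The patch arithmetic above is straightforward; for example, with an assumed 112x112 RGB input and 16x16 patches (these sizes are illustrative, not taken from the thesis configuration):

```python
def patch_shapes(image_size=112, patch_size=16, channels=3):
    """Number of patches and the flattened dimension of each patch
    for a square image split into a grid of square patches."""
    per_side = image_size // patch_size
    num_patches = per_side * per_side
    patch_dim = patch_size * patch_size * channels
    return num_patches, patch_dim

num_patches, patch_dim = patch_shapes()
print(num_patches, patch_dim)  # 49 768
```

Each of the 49 flattened 768-dimensional patches is then linearly projected to the model's embedding size before positional encoding is added.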

Skip connections, or residual connections, are designed to enhance gradient flow in deep neural networks during both forward and backward propagation. They address the vanishing gradient problem, which occurs when gradients diminish across multiple layers, hindering earlier layers from learning effective representations. By implementing skip connections, gradients can effectively bypass certain layers and directly propagate to subsequent layers, allowing for smoother gradient flow and aiding in the training process.

To address the vanishing gradient problem associated with very deep networks, the architecture incorporates skip connections that bypass certain layers and link directly to the output. This design allows gradients to backpropagate through the skip connections, preventing performance degradation even when the main path is zero, as information continues to flow during forward propagation.

The Inception architecture addresses the computational expense of using a 5x5 filter on a 28x28x192 input volume by implementing a 1x1 convolution beforehand. This 1x1 convolution effectively reduces the number of input channels from 192 to a smaller value, like 32, which significantly decreases the multiplications needed for the following 5x5 convolution. By integrating the 1x1 convolution, the Inception architecture enhances computational efficiency while maintaining the ability to capture features at various scales.

To lower computational costs, we can enhance our architecture by incorporating 1x1 convolutions. These 1x1 filters reduce the number of weights, leading to fewer calculations and quicker inference times. The diagram below illustrates an Inception module, a fundamental component that the Inception network combines in multiples.
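The saving can be verified with the numbers from the text; 32 output channels are assumed for both paths in this example:

```python
H, W = 28, 28                   # spatial size of the input volume
c_in, c_mid, c_out = 192, 32, 32

# Direct 5x5 convolution: every output value needs 5*5*c_in multiplications
direct = H * W * c_out * 5 * 5 * c_in

# 1x1 bottleneck down to c_mid channels, then the 5x5 convolution
bottleneck = H * W * c_mid * c_in + H * W * c_out * 5 * 5 * c_mid

print(direct)                         # 120422400
print(bottleneck)                     # 24887296
print(round(direct / bottleneck, 1))  # 4.8
```

The bottleneck path needs roughly a fifth of the multiplications of the direct path, which is why the 1x1 "bottleneck" convolution is used throughout Inception.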

Inception Module -> Residual Connections -> Stem Block -> Inception-ResNet Blocks -> Global Average Pooling and Classifier:

- In depth-wise convolution, a separate convolutional filter is applied to each input channel individually

- The number of output channels remains the same as the number of input channels, but each output channel corresponds to a specific input channel

- Depth-wise convolution helps capture spatial information within each channel while reducing the number of parameters compared to a standard convolution

- It is performed using small filter sizes, such as 3x3 or 5x5, which slide across each input channel independently

- Point-wise convolution is a 1x1 convolution applied to the output of the depth-wise convolution

- It uses a 1x1 filter to combine the depth-wise outputs, transforming the feature maps by adjusting the number of channels

- The point-wise convolution is responsible for learning channel-wise interactions and creating new representations by linearly combining the depth-wise outputs

- It helps in capturing higher-level features and provides flexibility for the network to adjust the channel dimensions
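The parameter saving of this depth-wise separable factorization is easy to check; a 3x3 kernel with 32 input and 64 output channels is an illustrative configuration:

```python
k, c_in, c_out = 3, 32, 64

# Standard convolution: one k*k*c_in filter per output channel
standard = k * k * c_in * c_out

# Depth-wise (k*k per input channel) plus point-wise (1x1) convolution
separable = k * k * c_in + c_in * c_out

print(standard)                        # 18432
print(separable)                       # 2336
print(round(standard / separable, 1))  # 7.9
```

The separable form uses roughly an eighth of the parameters here, which is the main reason MobileNet-style backbones (such as the MB-SSD and MBF models above) run fast on limited hardware.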

Dataset    IR100    IR34     IR18     VIT      MBF
agedb_30   98.3%    98.20%   97.5%    97.55%   96.85%
calfw      96.08%   96.07%   95.78%   95.63%   95.22%
cfp_ff     99.8%    99.70%   99.66%   99.59%   99.53%
cfp_fp     99.07%   98.14%   96.39%   96.59%   95.61%

Table 5.2.1 ROC curve score table

Here's a summary chart comparing the accuracy of different models on the dataset

The analysis of the dataset reveals that while the IR100 model exhibits the highest ROC curve among the models, it requires significantly more resources for execution. In contrast, the MBF model operates more quickly, despite having lower performance than the other models.

Figure 5.2.14 Face recognition result with time comparison

Optical Character Recognition (OCR)

The card recognition model we use is similar to the one described in Section 5.1

Transformer OCR utilizes Transformer-based models for Optical Character Recognition (OCR), which involves extracting text from images or scanned documents. Originally developed for natural language processing (NLP), Transformer models have demonstrated significant effectiveness in sequence-to-sequence tasks, making them highly applicable to OCR applications.

○ The input image containing text is preprocessed and encoded into a suitable format for the Transformer model

○ Common approaches include converting the image to grayscale, resizing, and normalizing pixel values

○ Additional techniques like image enhancement, denoising, and rotation correction can also be employed to improve OCR accuracy

○ The preprocessed image is passed through a feature extraction network, such as a convolutional neural network (CNN)

○ The CNN extracts high-level features from the image, capturing important visual patterns and structures related to the text

○ In order to maintain positional information, positional encoding is added to the extracted features

○ Positional encoding helps the Transformer model understand the relative spatial arrangement of characters in the input image

○ The feature representations, along with positional encodings, are passed to the Transformer architecture

○ The Transformer model consists of encoder and decoder layers that perform attention-based computations

○ The encoder layers capture contextual information from the input features, while the decoder layers generate the output sequence of recognized characters

○ The core component of Transformer models is the self-attention mechanism

○ Self-attention allows the model to attend to different parts of the input sequence and capture dependencies between characters

○ This mechanism is effective in recognizing and aligning characters within the text

○ The Transformer model predicts the output sequence of characters, representing the recognized text

○ Post-processing techniques like beam search or language models can be applied to refine the output and improve accuracy

○ Finally, the recognized text is obtained from the predicted character sequence
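The sinusoidal positional encoding used in the pipeline above can be sketched as follows; d_model = 8 and max_len = 4 are illustrative sizes:

```python
import math

def positional_encoding(max_len, d_model):
    """Sinusoidal positional encoding from the original Transformer:
    sin on even dimensions, cos on odd dimensions."""
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            pe[pos][i + 1] = math.cos(angle)
    return pe

pe = positional_encoding(max_len=4, d_model=8)
print(pe[0][:4])  # position 0 -> [0.0, 1.0, 0.0, 1.0]
```

Adding these values to the patch or character features gives the model the spatial ordering information that the attention mechanism itself does not carry.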

Figure 5.4.2 Results of the VietOCR pretrained model

INSTALL AND TEST

Installation Steps

This guide offers detailed instructions for establishing a Python environment to utilize FastAPI alongside machine learning models. By adhering to these setup steps, you can develop a web API with FastAPI and seamlessly integrate it with your machine learning model.

Before you begin, ensure that you have the following prerequisites installed on your system:

Git clone source: https://github.com/ngocthien2306/ML.API.git

 Run pip install -r requirements.txt to install the necessary packages

Download the model files and place them into the designated folder in the source

Go to the src folder, then execute main.py or run 'python main.py' from the command line

Access the API documentation at http://localhost:8000/docs

Before starting, make sure your system has the following prerequisites: Visual Studio (version 2019 or later), the .NET SDK, DevExpress, and the necessary NuGet packages installed.

Git clone source Web: https://github.com/ngocthien2306/Parking.Website.git

Git clone source Client: https://github.com/ngocthien2306/ClientVehicleManagement.git

Software Testing

Purpose: Test some important functions of the system

1 Users enter the parking lot, employees

The system will check by recognizing the license plate and check if this vehicle can perform the function of entering the parking lot

2 User Register a new account by citizen id

The system will use user information entered including citizen identification, date of birth, name, face to register users to allow access to apartments

3 User Register a new account by citizen phone

The system will use user information entered including phone, date of birth, name, face to register users to allow access to apartments

Tracking User checks in to the system

PKT_01 The customer enters the system and points the license plate and face to the camera

Users are allowed to enter the parking lot

User checks out to the system

PKT_02 The user exits the system with a face matching the vehicle's license plate

Allow vehicles to go outside the system

PKT_03 The user exits the system with a face that doesn't match the vehicle's license plate

Display notification that vehicles are not allowed to go out and allow users to "Report" to the parking manager

Register a new account by citizen id

User needs to register the system

RES_01 Users enter all the information needed to register the system including citizen identification, date of birth, name, and face

Notice of successful user registration

Register a new account by citizen id

User needs to register the system

RES_02 Users enter all the information needed to register the system including phone, date of birth, name, and face

Notice of successful user registration

Test Case ID: PKT_01 Test Designed by: Pham Van Manh Hung

Post Condition: Notice to allow users to enter the parking lot

Step Test steps Test data Expected result Actual result

1 Vehicles move into the shooting area of the camera

Notice to allow users to enter the parking lot

Notice to allow users to enter the parking lot

Test Case ID: PKT_02 Test Designed by: Pham Van Manh Hung Precondition:

Post Condition: Notice to allow users to go out the parking lot

Step Test steps Test data Expected result Actual result

1 Vehicles move into the shooting area of the camera

Allow vehicles to go outside the system

Allow vehicles to go outside the system

Test Case ID: PKT_03 Test Designed by: Pham Van Manh Hung Precondition:

Post Condition: Display notification that vehicles are not allowed to go out and allow users to "Report" to the parking manager

Step Test steps Test data Expected result Actual result

1 Vehicles move into the shooting area of the camera

Display notification that vehicles are not allowed to go out and allow users to "Report" to the parking manager

Display notification that vehicles are not allowed to go out and allow users to "Report" to the parking manager

6.2.2.Register a new account by citizen id

Test Case ID: RES_01 Test Designed by: Pham Van Manh Hung

Post Condition: Notice of successful user registration

Step Test steps Test data Expected result

1 Enter the citizen ID number citizen ID "0927148099"

Notice of successful user registration

Notice of successful user registration

2 citizen Id taken citizen Id Image

6.2.3.Register a new account by Phone

Test Case ID: RES_02 Test Designed by: Pham Van Manh Hung

Post Condition: Notice of successful user registration

Step Test steps Test data Expected result

Notice of successful user registration

Notice of successful user registration

6 Enter full name Name= “Hung”

CONCLUSION

Posted: 05/12/2023, 10:03
