
DOCUMENT INFORMATION

Basic information

Title: Simple Operating System
Authors: Bui Trung Hai, La Vi Luong, Nguyen Viet Phuong
Advisor: Le Thanh Van
School: Vietnam National University, Ho Chi Minh City - University of Technology, Faculty of Computer Science and Engineering
Major: Computer Science and Engineering
Document type: Assignment
Year: 2022
City: Ho Chi Minh City
Pages: 28
File size: 4.32 MB


Contents


VIETNAM NATIONAL UNIVERSITY, HO CHI MINH CITY

UNIVERSITY OF TECHNOLOGY FACULTY OF COMPUTER SCIENCE AND ENGINEERING

Advisor: Le Thanh Van

Students: Bui Trung Hai - 2052082

La Vi Luong - 2052590

Nguyen Viet Phuong - 2052659

HO CHI MINH CITY, December 2022


In this assignment, we build a simple operating system that manages two "virtual" resources, CPU(s) and RAM, using two core components: the Scheduler (and Dispatcher) and the Virtual Memory Engine (VME). We focus mainly on implementing the functions of the OS and answering the questions included in the assignment.


2.1 Scheduler

The scheduler picks, from the processes in memory that are ready to execute, one process and allocates the CPU to it. CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated a CPU core. There are many different CPU scheduling algorithms; this OS uses a priority feedback queue to determine which process executes when a CPU becomes available. The scheduler design is based on the "multilevel feedback queue" algorithm used in the Linux kernel. In this part we implement this algorithm by completing the following functions:

— enqueue() and dequeue() (in queue.c): put a new PCB into the queue and get the PCB with the highest priority out of the queue

— get_proc() (in sched.c): gets the PCB of a process waiting in the ready queue. If that queue is empty when the function is called, all PCBs of processes waiting in the run queue must be moved back to the ready queue before a process is taken from the ready queue


2.1.1 enqueue()

FIGURE 2 - Snippet of enqueue()

Explanation: This function in file queue.c puts a new process at the last position of the queue when the current size of the queue is less than its capacity. After the insertion, we increment the size of the queue.
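The behavior described above can be sketched as follows. The struct layouts and the MAX_QUEUE_SIZE value are simplified stand-ins for the assignment's headers, not the actual definitions:

```c
#include <stddef.h>

#define MAX_QUEUE_SIZE 10   /* assumed capacity; the real value lives in the assignment's headers */

/* Simplified stand-ins for the assignment's PCB and queue types. */
struct pcb_t {
    int pid;
    unsigned long priority;
};

struct queue_t {
    struct pcb_t *proc[MAX_QUEUE_SIZE];
    int size;
};

/* Put the new PCB at the last position of the queue while the current
 * size is below capacity, then increment the size. */
void enqueue(struct queue_t *q, struct pcb_t *proc)
{
    if (q->size < MAX_QUEUE_SIZE) {
        q->proc[q->size] = proc;
        q->size++;
    }
}
```

A full queue is simply left unchanged in this sketch; how the real code signals that condition depends on the assignment's headers.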


2.1.2 dequeue()

FIGURE 3 - Snippet of dequeue()

Explanation: This function in queue.c returns the maximum-priority process in the queue. We return NULL if the queue does not contain any process. Otherwise, we traverse the queue to find the highest-priority process and return it. After the removal, we close the gap by moving the processes behind the one we found up one position and adjust the size of the queue.
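A minimal sketch of this traverse-and-shift scheme is below. The struct layouts are illustrative, and a larger priority number is assumed to mean higher priority; the assignment's actual convention may be the reverse:

```c
#include <stddef.h>

#define MAX_QUEUE_SIZE 10   /* assumed capacity */

/* Simplified stand-ins for the assignment's types; a LARGER
 * priority number is assumed to mean higher priority here. */
struct pcb_t {
    int pid;
    unsigned long priority;
};

struct queue_t {
    struct pcb_t *proc[MAX_QUEUE_SIZE];
    int size;
};

/* Return the highest-priority PCB, shift the entries behind it up one
 * position to close the gap, and shrink the queue. */
struct pcb_t *dequeue(struct queue_t *q)
{
    if (q->size == 0)
        return NULL;
    int best = 0;
    for (int i = 1; i < q->size; i++)
        if (q->proc[i]->priority > q->proc[best]->priority)
            best = i;
    struct pcb_t *found = q->proc[best];
    for (int i = best; i < q->size - 1; i++)
        q->proc[i] = q->proc[i + 1];
    q->size--;
    return found;
}
```

The linear scan keeps enqueue() O(1) at the cost of an O(n) dequeue, which is acceptable for the small fixed-capacity queues used here.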


2.1.3 get_proc()

FIGURE 4 - Snippet of get_proc()

Explanation: This function in sched.c returns a process from ready_queue. If ready_queue is empty, we first enqueue into it all the processes in run_queue and then get the highest-priority process from the updated ready_queue. The function returns NULL if both run_queue and ready_queue are empty.
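The refill-then-fetch logic can be sketched as below, with a small helper standing in for dequeue(); all type layouts and the priority convention (larger number = higher) are illustrative assumptions:

```c
#include <stddef.h>

#define MAX_QUEUE_SIZE 10   /* assumed capacity */

struct pcb_t { int pid; unsigned long priority; };               /* simplified stand-in */
struct queue_t { struct pcb_t *proc[MAX_QUEUE_SIZE]; int size; };

/* Pick the highest-priority entry (larger number assumed higher) and
 * close the gap -- a minimal version of dequeue() from queue.c. */
static struct pcb_t *pick_highest(struct queue_t *q)
{
    if (q->size == 0)
        return NULL;
    int best = 0;
    for (int i = 1; i < q->size; i++)
        if (q->proc[i]->priority > q->proc[best]->priority)
            best = i;
    struct pcb_t *found = q->proc[best];
    for (int i = best; i < q->size - 1; i++)
        q->proc[i] = q->proc[i + 1];
    q->size--;
    return found;
}

/* When the ready queue is empty, first move every PCB waiting in the
 * run queue back to the ready queue, then fetch from the ready queue. */
struct pcb_t *get_proc(struct queue_t *ready_queue, struct queue_t *run_queue)
{
    if (ready_queue->size == 0) {
        while (run_queue->size > 0) {
            run_queue->size--;
            ready_queue->proc[ready_queue->size] = run_queue->proc[run_queue->size];
            ready_queue->size++;
        }
    }
    return pick_highest(ready_queue);   /* NULL when both queues were empty */
}
```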


[Console log of the sched_1 run: across the time slots, each process is alternately put to the run queue and dispatched until processes 2, 3 and 4 finish and the CPUs stop, ending with a note to read output/sched_1 to verify the result.]

FIGURE 6 - Snippet of sched_1 result


Question: What is the advantage of using a priority queue in comparison with the other scheduling algorithms you have learned?

Solution:

In comparison with other CPU scheduling algorithms:

First-Come, First-Served Scheduling: Since FCFS is a non-preemptive scheduling algorithm, a process with a long CPU burst may arrive before shorter ones, causing the convoy effect: shorter processes wait for too long. In contrast, the Multilevel (Priority) Queue uses priority in the mlq_ready_queue, so a process with a shorter burst time can run before a longer one as long as it has a higher relative priority, minimizing such unwanted effects.

Shortest-Job-First Scheduling: Although SJF and its preemptive version SRTF are provably optimal for average waiting time, they have two major problems: starvation and the inability to correctly predict future burst lengths. The MLQ allocates only a specific time slice to each process, and it can still suffer from starvation; however, the MLQ does not depend on predicting the future, which makes the algorithm more practical.

Round-Robin Scheduling: Although Round-Robin is a good enough algorithm, it would be even better if the priority of the processes were taken into consideration; MLQ does support this because the mlq_ready_queue is, once again, a priority queue.

Priority Scheduling: With priority scheduling, both the preemptive and non-preemptive versions face the same challenge: starvation. The Multilevel Queue, as discussed above, also suffers from this situation.

Multi-level Feedback Queue: The Multi-level Feedback Queue is considered a really good scheduling algorithm. However, in the last (lowest) level it has to employ either FCFS or SJF to let the leftover processes run until they finish, which causes problems because FCFS and SJF themselves have disadvantages; it is really hard to compare these two.

Priority Feedback Queue: The Priority Feedback Queue is considered one of the best scheduling algorithms. It has three advantages compared to the others: shorter waiting time, prevention of starvation, and taking priority into consideration. Even though the MLQ has low scheduling overhead and takes priority into consideration, it still suffers from starvation.

In conclusion, the Multilevel Queue has two advantages: low scheduling overhead and taking the priority of the process into consideration. One disadvantage is that once a process is classified into a particular queue, no modification is allowed. This restriction can lead to starvation when lower-priority queues cannot run because higher-priority queues continuously receive new processes.


Test 1: Consists of 2 processes p1 and p2 in 23 time slots.

[Gantt chart for the scheduling test (time slice = 2, 2 processes), showing s0 and s1 with their arrival times, priorities and instruction counts.]

FIGURE 7 - Gantt chart of Test 1


Test 2: Consists of 4 processes p1, p2, p3 and p4 in 45 time slots.

[Gantt chart showing processes s0-s3 with their arrival times, instruction counts and priorities.]

FIGURE 8 - Gantt chart of Test 2


2.2 Memory Management

Virtual memory: a technique that allows the execution of processes that are not completely in memory. One major advantage of this scheme is that programs can be larger than physical memory. Further, virtual memory abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the programmer from physical memory. This technique frees programmers from the concerns of memory-storage limitations. Virtual memory also allows processes to share files and libraries and to implement shared memory. In addition, it provides an efficient mechanism for process creation.

Paging: a memory management scheme that permits a process's physical address space to be noncontiguous. Paging avoids external fragmentation and the associated need for compaction, two problems that plague contiguous memory allocation. Because it offers numerous advantages, paging in its various forms is used in most operating systems, from those for large servers to those for mobile devices. Paging is implemented through cooperation between the operating system and the computer hardware.

In this section, we need to implement the translation of a virtual address of a process into a physical one by completing the following functions:

— get_page_table() (in mem.c): find the page table given the segment index of a process

— translate() (in mem.c): use get_page_table() to translate a virtual address into a physical address

2.2.1 get_page_table()

FIGURE 9 - Snippet of get_page_table()

Explanation: Following segmentation with paging, seg_table is kept as a list of entries, each pairing a v_index (the search key) with a pointer to a page_table. Hence, we traverse the list and search using the given index.
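The first-level lookup can be sketched as below. The struct layouts, field names, and the first-level capacity are illustrative stand-ins for the assignment's headers; the second-level table is left opaque because only the pointer to it matters here:

```c
#include <stddef.h>

typedef unsigned int addr_t;

/* Illustrative stand-in: the second-level table's contents are not
 * needed for the lookup itself. */
struct page_table_t { int size; };

struct seg_entry {
    addr_t v_index;               /* first-level (segment) index: the search key */
    struct page_table_t *pages;   /* pointer to the second-level table */
};

struct seg_table_t {
    int size;
    struct seg_entry table[32];   /* assumed first-level capacity */
};

/* Traverse the first-level list and return the page table whose
 * v_index matches the requested segment index, or NULL if none does. */
struct page_table_t *get_page_table(addr_t index, struct seg_table_t *seg_table)
{
    if (seg_table == NULL)
        return NULL;
    for (int i = 0; i < seg_table->size; i++)
        if (seg_table->table[i].v_index == index)
            return seg_table->table[i].pages;
    return NULL;
}
```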


2.2.2 translate()

To translate a given addr_t virtual_addr into a physical_addr, we first search the first level of the process's segment table to get its page table, then continue searching in the second level to get its p_index. The physical_addr to return is formed from the 10-bit p_index followed by the 10 bits taken from the virtual_addr offset.
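The final bit arithmetic, concatenating the 10-bit p_index with the low 10 offset bits of the virtual address, can be sketched as follows; the helper name and the OFFSET_LEN macro are illustrative, not the assignment's actual identifiers:

```c
typedef unsigned int addr_t;

#define OFFSET_LEN 10                                  /* low 10 bits of an address = in-page offset */
#define OFFSET_MASK ((addr_t)((1u << OFFSET_LEN) - 1)) /* 0x3FF */

/* Concatenate the 10-bit physical page index found in the second-level
 * table with the offset bits copied from the virtual address. */
addr_t make_physical_addr(addr_t p_index, addr_t virtual_addr)
{
    return (p_index << OFFSET_LEN) | (virtual_addr & OFFSET_MASK);
}
```

Masking the virtual address first ensures that only the offset bits survive; any higher bits (the segment and page indices) have already been consumed by the two table lookups.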


2.2.3 alloc_mem()

[Snippet of alloc_mem(), declaring temporaries such as addr_t tmp_address and addr_t page_id; the rest of the screenshot is not recoverable.]


2.2.4 free_mem()

[Snippet of free_mem(): acquires mem_lock, looks up the physical address of the region being freed, then walks its page indices starting from the given address; the rest of the screenshot is not recoverable.]


In the Segmentation with Paging scheme:

— Pages are smaller than segments

— Each Segment has a page table which means every program has multiple page tables

— The logical address is represented as Segment Number (base address), Page Number and page offset

Advantages of Segmentation with Paging :

— The page table size is reduced as pages are present only for data of segments, hence reducing the memory requirements

— Gives a programmer's view along with the advantages of paging

— Reduces external fragmentation in comparison with segmentation

— Since the entire segment need not be swapped out, swapping out to virtual memory becomes easier

Disadvantages of Segmentation with Paging:

— Internal fragmentation still exists in pages

— Extra hardware is required

— Translation becomes more sequential, increasing the memory access time

— External fragmentation occurs because of varying sizes of page tables and varying sizes of segment tables in today’s systems


2.3 Put It All Together

Question: What will happen if synchronization is not handled in your simple OS? Illustrate by example the problem of your simple OS, if you have any.

Solution:

Process synchronization is the task of coordinating the execution of processes so that no two processes can access the same shared data and resources at the same time. It is especially needed in a multiprocess system, where multiple processes run together and more than one process tries to gain access to the same shared resource or data simultaneously. This can lead to inconsistency of the shared data, and in the simple OS we are creating, it will eventually lead to unwanted or faulty results.
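The memory snippets above already guard shared structures with a lock (mem_lock in free_mem()). The sketch below, using hypothetical names, shows the kind of lost-update race that appears when such a lock is omitted: two threads doing unprotected read-modify-write on a shared counter can overwrite each other's updates, while the mutex-protected version always ends at the expected total.

```c
#include <pthread.h>

static pthread_mutex_t mem_lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;   /* stands in for shared OS state (queue size, free-page list, ...) */

#define N_INCREMENTS 100000

/* Each worker performs N_INCREMENTS read-modify-write updates; the
 * mutex makes every update atomic with respect to the other worker. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < N_INCREMENTS; i++) {
        pthread_mutex_lock(&mem_lock);
        shared_counter++;
        pthread_mutex_unlock(&mem_lock);
    }
    return NULL;
}

/* Run two workers concurrently and report the final counter value.
 * With the lock this is always 2 * N_INCREMENTS; remove the lock/unlock
 * pair and updates can be lost, so the result may come out smaller. */
long run_two_workers(void)
{
    shared_counter = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_counter;
}
```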

We combine the scheduler and the Virtual Memory Engine to form a complete OS and obtain the following result.

NOTE: Read file output/m0 to verify your result

- MEMORY MANAGEMENT TEST 1

Loaded a process at input/proc/s0, PID: 1 PRIO: 0

CPU 0: Dispatched process 1

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1


CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

slot 10

slot 11

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

slot 12

slot 13

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

slot 14

slot 15

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

slot 16

CPU 0: Processed 1 has finished

CPU 0: Dispatched process 2

slot 17

slot 18

CPU 0: Put process 2 to run queue

CPU 0: Dispatched process 2

slot 19

slot 20

CPU 0: Put process 2 to run queue

CPU 0: Dispatched process 2

slot 21

slot 22

CPU 0: Put process 2 to run queue

CPU 0: Dispatched process 2



CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 2
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 2
CPU 0: Put process 2 to run queue
CPU 0: Dispatched process 2
CPU 0: Processed 2 has finished
CPU 0: Dispatched process 3
CPU 0: Put process 3 to run queue


CPU 0: Put process 3 to run queue

CPU 0: Dispatched process 3

Time slot 28

Time slot 29

CPU 0: Put process 3 to run queue

CPU 0: Dispatched process 3

Time slot 30

Time slot 31

CPU 0: Put process 3 to run queue

CPU 0: Dispatched process 3

Time slot 32

Time slot 33

CPU 0: Put process 3 to run queue

CPU 0: Dispatched process 3

Time slot 34

Time slot 35

CPU 0: Processed 3 has finished

CPU 0: Dispatched process 4

Time slot 36

Time slot 37

CPU 0: Put process 4 to run queue

CPU 0: Dispatched process 4

Time slot 38

Time slot 39

CPU 0: Put process 4 to run queue

CPU 0: Dispatched process 4

Time slot 40

Time slot 41

CPU 0: Put process 4 to run queue

CPU 0: Dispatched process 4

Time slot 42

Time slot 43

CPU 0: Put process 4 to run queue

CPU 0: Dispatched process 4

Time slot 44

Time slot 45

CPU 0: Put process 4 to run queue

CPU 0: Dispatched process 4
