Operating Systems (CO2018) Assignment: Simple Operating System


DOCUMENT INFORMATION

Title: Simple Operating System
Authors: Nguyen Ngoc Khoi, Nguyen QuangPhu, Nguyen Tien Hung, Thai Quang Phat, Nguyen Minh Khoi, Dinh Ba Khanh
Advisor: Nguyen Phuong Duy
University: Vietnam National University Ho Chi Minh City, University of Technology
Major: Computer Science and Engineering
Type: Assignment
Year: 2024
City: Ho Chi Minh City
Pages: 83
Size: 10.34 MB



FACULTY OF COMPUTER SCIENCE AND ENGINEERING

Advisor: Nguyen Phuong Duy

Students:
Nguyen Ngoc Khoi - 2252378 (CC07)
Nguyen QuangPhu - 2252621 (CC07)
Nguyen Tien Hung - 2252280 (CC07)
Thai Quang Phat - 2252606 (CC07)
Nguyen Minh Khoi - 2252377 (CC07)
Dinh Ba Khanh - 2252323 (CC07)

HO CHI MINH CITY, MAY 2024


Contents

1 Introduction

2 Background
  2.1 Scheduler
  2.2 Memory Management
  2.3 Put it all together

3 Answering Questions
  3.1 Priority Queue Advantage
  3.2 Multiple Segments Advantage
  3.3 Divide Address in Paging
  3.4 Paging Advantage & Disadvantage
  3.5 Multicore Problem
  3.6 Synchronization Problem

4 Implementation Progress
  4.1 Scheduler
  4.2 Memory
  4.3 Translation Lookaside Buffer

5 Interpretation
  5.1 Scheduler
  5.2 Memory Synchronization


1 Introduction

The assignment simulates a simple operating system that specializes in scheduling, synchronization, and memory management. The implementation of the important functions and the results from the tests are described in the following sections.

2 Background

2.1 Scheduler

In this assignment, the Multi-Level Queue (MLQ) algorithm is used to decide which process will be executed during scheduling.

The MLQ algorithm has a separate queue for each distinct priority, and priority scheduling simply schedules the process in the highest-priority queue.

The operating system also needs to schedule among the queues, and there are two common approaches:


- Fixed priority scheduling: serve from the highest-priority to the lowest-priority queue. However, it can lead to potential starvation.

- Time slice: each queue is allocated a CPU time slice and distributed among the processes in that queue during that time period. Each time unit is called a timeslot. In this assignment, the queues are executed using the Round-Robin scheduling algorithm with the number of allocations per run equal to max_priority minus priority.

In this assignment, we design the scheduling for the MLQ algorithm using the time-slice approach.
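To make the time-slice rule concrete, the sketch below shows one way a dispatcher could walk the priority queues, granting each queue MAX_PRIO - prio slots per round before moving on. It is only an illustration of the idea: MAX_PRIO, MAX_QUEUE_SIZE, struct queue_t, and the helper names are assumptions, not the contents of the assignment's queue.h or sched.c.

#include <stddef.h>

#define MAX_PRIO 140                /* assumed number of priority levels     */
#define MAX_QUEUE_SIZE 10

struct pcb_t;                       /* process control block, opaque here    */

struct queue_t {
    struct pcb_t *proc[MAX_QUEUE_SIZE];
    int size;
};

static struct queue_t mlq_ready_queue[MAX_PRIO];
static int curr_prio = 0;           /* queue currently being served          */
static int curr_slot = 0;           /* slots already used at curr_prio       */

/* Remove and return the head of a queue (simple array shift). */
static struct pcb_t *dequeue_head(struct queue_t *q)
{
    struct pcb_t *p = q->proc[0];
    for (int i = 0; i < q->size - 1; i++)
        q->proc[i] = q->proc[i + 1];
    q->size--;
    return p;
}

/* Pick the next process: serve the current priority level until its budget of
 * (MAX_PRIO - prio) slots is spent or its queue is empty, then rotate on. */
struct pcb_t *get_next_proc(void)
{
    int waiting = 0;
    for (int p = 0; p < MAX_PRIO; p++)
        waiting += mlq_ready_queue[p].size;
    if (waiting == 0)
        return NULL;                /* nothing to schedule */

    for (;;) {
        struct queue_t *q = &mlq_ready_queue[curr_prio];
        int budget = MAX_PRIO - curr_prio;

        if (q->size > 0 && curr_slot < budget) {
            curr_slot++;            /* consume one timeslot of this queue's budget */
            return dequeue_head(q);
        }
        curr_prio = (curr_prio + 1) % MAX_PRIO;   /* empty or budget spent: rotate */
        curr_slot = 0;
    }
}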

2.2 Memory Management

In this assignment, the Paging technique is utilized for memory management. The computer employs this technique to store and retrieve data from secondary memory into main memory for use.

The fundamental approach to implementing paging consists of dividing physical memory into fixed-sized units known as frames, and dividing logical memory into blocks of the same size referred to as pages. When a process needs to be executed, its pages are loaded into any available memory frames from their source, which could be a file system or the backing store. The backing store is further divided into fixed-sized units that match the size of memory frames or clusters of multiple frames. Despite its simplicity, this concept offers significant functionality and has extensive implications.

Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).

The page number serves as an index into a page table specific to each process. This table stores the base address of every frame in physical memory, while the offset represents the position within the referenced frame. Consequently, the base address of the frame is added to the page offset to determine the physical memory address.

Translating a logical address generated by the CPU to a physical address therefore consists of the following steps: retrieve the page number (p) and use it as an index into the page table; retrieve the corresponding frame number (f) from the page table; and substitute the page number (p) in the logical address with the frame number (f). As the offset (d) does not change, it is not replaced, and the frame number together with the offset forms the physical address.
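As a small worked example of these translation steps (a toy illustration with an assumed page size of 256 bytes and a made-up page table, not the assignment's actual constants):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 256u                     /* assumed page size for this example  */

/* page_table[p] holds the frame number f for page p (toy table). */
static const uint32_t page_table[] = { 5, 9, 2, 7 };

static uint32_t translate(uint32_t logical)
{
    uint32_t p = logical / PAGE_SIZE;      /* page number: index into page table  */
    uint32_t d = logical % PAGE_SIZE;      /* offset: copied through unchanged    */
    uint32_t f = page_table[p];            /* frame number from the page table    */
    return f * PAGE_SIZE + d;              /* physical address = frame base + d   */
}

int main(void)
{
    /* logical 0x103 -> page 1, offset 3 -> frame 9 -> physical 9*256 + 3 = 0x903 */
    printf("0x%x -> 0x%x\n", 0x103, translate(0x103));
    return 0;
}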

Each process has a page table to assist it in accessing the physical frames corresponding to its virtual pages. This table contains entries (Page Table Entries - PTE), each with a 32-bit value, defined in terms of data and structure as follows:


Figure 2: Paging hardware

Bits 0-4: Swap type if swapped

Bits 5-25: Swap offset if swapped

Bits 0-12: Page frame number (PFN) if present

Bits 13-14: Zero if present

Bits 15-27: User-defined numbering if present
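A hedged sketch of how those bit ranges could be extracted from a 32-bit PTE; the macro names are invented for this illustration, and the project's own header may differ (for example it may also carry present/swapped flag bits in the top bits that are not listed above):

#include <stdint.h>
#include <stdio.h>

#define FIELD_MASK(hi, lo)   ((((uint32_t)1 << ((hi) - (lo) + 1)) - 1) << (lo))
#define FIELD_GET(v, hi, lo) (((v) & FIELD_MASK(hi, lo)) >> (lo))

#define PTE_SWPTYP(pte)  FIELD_GET((pte), 4, 0)    /* bits 0-4  : swap type   */
#define PTE_SWPOFF(pte)  FIELD_GET((pte), 25, 5)   /* bits 5-25 : swap offset */
#define PTE_FPN(pte)     FIELD_GET((pte), 12, 0)   /* bits 0-12 : frame no.   */
#define PTE_USRNUM(pte)  FIELD_GET((pte), 27, 15)  /* bits 15-27: user number */

int main(void)
{
    uint32_t pte = 0x80000001;   /* a PTE value that also appears in the
                                    report's test logs (mapped to frame 1)  */
    printf("FPN = %u\n", (unsigned)PTE_FPN(pte));
    return 0;
}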

The operating system and memory management subsystems ensure that each process possesses its own page table. This table contains entries known as page table entries, which furnish the frame number. The main challenge lies in optimizing the access time for these entries, which involves twice the access time: reading the page table itself (which is typically stored in main memory, or MEMPHY) and then accessing the memory data in MEMPHY. With large process sizes resulting in significant overhead, the Translation Lookaside Buffer (TLB) is introduced to capitalize on cache capabilities due to its high-speed nature. Although the TLB is essentially MEMPHY, it functions as a high-speed cache for page table entries. However, memory cache is a high-cost component; therefore, it has limited capacity.

The fundamental workings of the TLB are as follows:

- TLB Accessing Method: the TLB is MEMPHY that supports a mapping mechanism to determine how the content is associated with the identifier information. In this work, we leverage the knowledge of the previous course about computer hardware, where the cache mapping techniques are direct-mapped, set-associative, or fully associative.

- TLB Setup: the TLB contains recently used page table entries. When the CPU generates a virtual address, it checks the TLB:

+ If a page table entry is present (a TLB hit), the corresponding frame number is retrieved.

+ If a page table entry is not found (a TLB miss), the page number is used as an index to access the page table in main memory. If the page is not in main memory, a page fault occurs, and the TLB is updated with the new page entry.

- TLB Hit:

+ CPU generates a virtual address

+ TLB is checked (entry present)

+ Corresponding frame number is retrieved

- TLB Miss:

+ CPU generates a virtual address

+ TLB is checked (entry not present)

+ Page number is matched to the page table in main memory

+ Corresponding frame number is retrieved
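The hit/miss flow above can be summarized in a few lines. The following is an illustrative direct-mapped lookup, not the team's cpu-tlbcache.c; the entry layout, capacity, and function names are assumptions:

#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64                       /* assumed TLB capacity */

struct tlb_entry {
    bool     valid;
    uint32_t pid;                            /* owning process       */
    uint32_t pgn;                            /* virtual page number  */
    uint32_t fpn;                            /* cached frame number  */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Returns true on a TLB hit and fills *fpn; on a miss the caller must walk
 * the page table in main memory (possibly handling a page fault) and refill. */
bool tlb_lookup(uint32_t pid, uint32_t pgn, uint32_t *fpn)
{
    struct tlb_entry *e = &tlb[pgn % TLB_ENTRIES];   /* direct-mapped index */
    if (e->valid && e->pid == pid && e->pgn == pgn) {
        *fpn = e->fpn;                               /* TLB hit             */
        return true;
    }
    return false;                                    /* TLB miss            */
}

/* Refill after a miss, once the frame number is known. */
void tlb_refill(uint32_t pid, uint32_t pgn, uint32_t fpn)
{
    struct tlb_entry *e = &tlb[pgn % TLB_ENTRIES];
    e->valid = true; e->pid = pid; e->pgn = pgn; e->fpn = fpn;
}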

2.3 Put it all together

After working through the two sections above, we have the complete organization of the OS memory management, which is presented in the figure below.

The final step remaining is synchronization. Given that the operating system operates across multiple processors, there is a potential for shared resources to be simultaneously accessed by multiple processes.

Figure 4: Simple Operating System (diagram: the CPU issues a logical address consisting of pid and page number, which is translated to a page frame number in the physical memory address space)

3 Answering Questions

3.1 Priority Queue Advantage

Answer: The scheduling strategy used in this assignment is a Priority Queue. Some advantages of using a Priority Queue over other scheduling algorithms include:

* Prioritizing important processes: with a priority queue, more critical processes are executed first, ensuring timely handling and correctness of the system.

* Ensuring priority: processes with higher priority are always allocated more CPU time, while those with lower priority still have a chance to be served, avoiding situations where they wait in the queue for too long (starvation).

* Improved response time: using a priority queue allows a flexible combination of scheduling algorithms for different queues. Specifically, in this assignment the round-robin algorithm is used. This ensures that each process is allocated a certain amount of CPU time, preventing one process from arriving first and occupying the entire CPU, even for the highest-priority process.

* CPU utilization optimization: by efficiently prioritizing important processes, the CPU can be utilized more effectively, thereby improving overall system performance.

* Flexibility and dynamism: priority queues can dynamically adjust priorities based on changing system conditions, emphasizing the responsiveness of the scheduling strategy to adapt to varying workload intensities, system states, or external factors in real time.

* Adaptability to tasks of different priorities: priority queues can handle tasks with varying priorities efficiently. The scheduling strategy can accommodate different types of tasks, each with its own priority level, ensuring that critical tasks are addressed promptly while maintaining fairness in resource allocation.


3.2 Multiple Segments Advantage

* Process isolation: each process operates within its own isolated address space. This prevents unintended interference between processes, enhancing security and stability.

* Memory protection: segmentation allows different access permissions for memory segments. Processes can only access authorized memory areas, preventing data corruption.

* Flexible allocation: the OS can allocate or de-allocate memory segments dynamically, adapting to resource needs. This optimizes memory utilization.

* Shared memory areas: processes can communicate via shared memory segments, facilitating efficient inter-process communication.

* Addressing efficiency: segmented memory simplifies addressing. Processes refer directly to specific segments, improving efficiency.

* Fragmentation handling: segmentation minimizes memory fragmentation, ensuring efficient memory usage.

* Enhanced security: isolation, protection, and controlled access contribute to system security.

3.3 Divide Address in Paging

Question 3: What will happen if we divide the address into more than 2 levels in the paging scheme?

Answer: If we divide the address into more than 2 levels in a paging memory management system, it can lead to both advantages and disadvantages:

Advantages:

* Scalability: the system becomes more scalable with a larger number of addresses.

* Efficient management: additional paging levels can help manage memory more efficiently, especially for systems with many entities or processes.
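For intuition, dividing the address into more levels simply means slicing more index fields out of it, each adding one more table lookup on a TLB miss. Below is a sketch with an assumed 12-bit offset and two 10-bit index levels (a classic split chosen only for illustration, not the assignment's layout):

#include <stdint.h>
#include <stdio.h>

/* 32-bit address = [ level-1 index | level-2 index | offset ]
 *                       10 bits         10 bits       12 bits                */
#define OFFSET_BITS 12
#define LEVEL_BITS  10

int main(void)
{
    uint32_t addr = 0x00ABCDEF;
    uint32_t off  = addr & ((1u << OFFSET_BITS) - 1);                 /* low 12 bits  */
    uint32_t l2   = (addr >> OFFSET_BITS) & ((1u << LEVEL_BITS) - 1); /* next 10 bits */
    uint32_t l1   = (addr >> (OFFSET_BITS + LEVEL_BITS));             /* top 10 bits  */
    /* Each extra level adds one more indexed lookup (one more memory access)
     * on a TLB miss, which is the access-time cost discussed in this section. */
    printf("l1=%u l2=%u off=0x%x\n", l1, l2, off);
    return 0;
}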

3.4 Paging Advantage & Disadvantage

* Segmented paging simplifies memory allocation processes. With fixed-size pages, managing memory becomes more straightforward.

Disadvantage:

* Each segment has its own page table, consuming more hardware resources.

* Page tables in segmented paging are contiguous in memory, and the translation is sequential, which leads to increased memory access time.

3.5 Multicore Problem

Question 5: What will happen if the multi-core system has each CPU core running in a different context, and each core has its own MMU and its part of the core (the TLB)? In modern CPUs, 2-level TLBs are common now; what is the impact of these new memory hardware configurations on our translation schemes?

Answer: In a multi-core system where each CPU core has its own Memory Management Unit (MMU) and Translation Lookaside Buffer (TLB), memory management becomes more efficient and secure due to better isolation and quicker context switching capabilities. The use of 2-level TLBs, with a faster L1 and a larger L2, enhances memory access speeds by improving TLB hit rates and reducing the frequency of slow page table walks. However, this setup requires sophisticated translation schemes and synchronization mechanisms to maintain coherence among the TLBs across different cores, especially when handling updates to shared memory regions. This complexity ensures that each core can independently manage its memory translations while still maintaining overall system consistency and performance.

3.6 Synchronization Problem

Question 6: What will happen if the synchronization is not handled in your simple OS? Illustrate the problem of your simple OS by example if you have any.

Answer: In a simple operating system without synchronization, concurrent access to shared resources can result in race conditions and data inconsistencies. For instance, if two processes simultaneously attempt to increment a shared counter without coordination, a race condition may occur, leading to unpredictable outcomes. Data inconsistency arises from interleaved execution, where processes read partially updated values, compromising the integrity of shared resources. Lost updates may happen if one process overwrites the changes made by another concurrently, resulting in an inaccurate representation of the resource. Without synchronization, the system's behavior becomes unpredictable and challenging to debug due to the non-deterministic nature of concurrent execution. To address these issues, synchronization mechanisms like locks and semaphores are essential in operating systems to coordinate and control access to shared resources, ensuring consistent and correct system behavior in a multi-process or multi-threaded environment.
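The shared-counter scenario described above is easy to reproduce outside the simulator. The snippet below (plain pthreads, not part of the assignment code) usually prints a total smaller than 200000 because the unsynchronized increments interleave; enabling the commented mutex calls makes the result deterministic:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        /* pthread_mutex_lock(&lock);    -- needed for a correct result */
        counter++;                       /* read-modify-write: not atomic */
        /* pthread_mutex_unlock(&lock); */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}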

4 Implementation Progress

4.1 Scheduler

4.1.1 Struct queue_t in queue.h

int timeslot; // add new param

Code 1: Code of struct queue_t in queue.h
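For context, a queue_t carrying the added timeslot parameter might look roughly like this; the surrounding fields are assumptions based on the course skeleton, not a copy of the team's queue.h:

/* Sketch of how queue.h might look with the added field (layout assumed). */
#define MAX_QUEUE_SIZE 10

struct pcb_t;                            /* defined elsewhere in the skeleton   */

struct queue_t {
    struct pcb_t *proc[MAX_QUEUE_SIZE];  /* waiting processes                   */
    int size;                            /* current number of processes         */
    int timeslot;                        /* added parameter: slots left in the
                                            current round for this queue        */
};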

4.1.2 Function enqueue() in queue.c

void enqueue(struct queue_t *q, struct pcb_t *proc)

/* TODO: put a new process to queue [q] */

Code 2: Code of enqueue() in queue.c
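The TODO above was presumably filled with an append-at-tail operation. A minimal sketch, using the queue_t layout assumed above and adding a bounds check (an illustration, not necessarily the team's exact code):

void enqueue(struct queue_t *q, struct pcb_t *proc)
{
    /* Illustrative implementation: append at the tail if there is room. */
    if (q == NULL || proc == NULL || q->size >= MAX_QUEUE_SIZE)
        return;
    q->proc[q->size] = proc;
    q->size++;
}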

4.1.3 Function dequeue() in queue.c

#ifdef MLQ_SCHED

struct pcb_t *dequeue(struct queue_t *q)

/* TODO: return a pcb whose priority is the highest

 * in the queue [q] and remember to remove it from q */


struct pcb_t *ret_proc = q->proc[ind];

for (int i = ind; i < q->size - 1; i++)

Code 3: Code of dequeue() in queue.c
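Putting the fragment together: under MLQ each queue holds a single priority, so the head can simply be taken and the remaining entries shifted. A plausible completed dequeue (an illustration, not necessarily the team's exact code):

struct pcb_t *dequeue(struct queue_t *q)
{
    /* Illustrative completion of the fragment above. */
    if (q == NULL || q->size == 0)
        return NULL;
    int ind = 0;                        /* head of the queue           */
    struct pcb_t *ret_proc = q->proc[ind];
    for (int i = ind; i < q->size - 1; i++)
        q->proc[i] = q->proc[i + 1];    /* shift the remaining entries */
    q->size--;
    return ret_proc;
}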

4.1.4 Function get_mlq_proc() in sched.c

Here we take a process out of the ready queue and put it on the CPU, following the MLQ mechanism. A mutex lock is used to lock the critical sections where the process is being executed on the CPU, and it is unlocked when the process is done or has exceeded its quantum time q.

/*

* Stateful design for routine calling

* based on the priority and our MLQ policy

* We implement stateful here using transition technique

* State representation: prio = 0..MAX_PRIO, curr_slot = 0..(MAX_PRIO - prio)

*/

struct pcb_t *proc = NULL;

// TODO: get a process from PRIORITY [ready_queue]. Remember to use lock to protect the queue.
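A rough reconstruction of what get_mlq_proc() could look like with the mutex protection described above, reusing MAX_PRIO, mlq_ready_queue, and dequeue() as sketched earlier; the lock name and the per-priority slot bookkeeping are assumptions, and the real sched.c may differ:

#include <pthread.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;  /* assumed lock name */

struct pcb_t *get_mlq_proc(void)
{
    struct pcb_t *proc = NULL;
    static int curr_slot[MAX_PRIO];      /* slots already used per priority level  */

    pthread_mutex_lock(&queue_lock);     /* protect the shared ready queues        */
    for (int prio = 0; prio < MAX_PRIO; prio++) {
        if (mlq_ready_queue[prio].size == 0)
            continue;                    /* nothing waiting at this level          */
        if (curr_slot[prio] >= MAX_PRIO - prio) {
            curr_slot[prio] = 0;         /* budget spent: give lower levels a turn */
            continue;
        }
        curr_slot[prio]++;
        proc = dequeue(&mlq_ready_queue[prio]);
        break;
    }
    pthread_mutex_unlock(&queue_lock);
    return proc;                         /* NULL if every queue was empty or spent */
}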

4.2 Memory

4.2.1 vmap_page_range() - Map a range of pages at an aligned address

This function maps a range of pages at an aligned address for the calling process. Its parameters include the calling process (caller), the aligned start address (addr), the number of pages to map (pgnum), a list of mapped frames (frames), and a pointer to a struct vm_rg_struct (ret_rg) to facilitate the return of the mapped region.

During the execution of this method, a comprehensive approach is adopted. A for loop is employed to systematically traverse the granted frames, synchronously integrating the mapped pages into a FIFO (First-In-First-Out) queue. This systematic traversal ensures that each frame is duly considered and incorporated into the mapping process, contributing to the integrity and efficiency of the virtual memory mapping operation.

/*
 * vmap_page_range - map a range of pages at an aligned address
 */

int vmap_page_range(struct pcb_t *caller,          // process call
                    int addr,                      // start address which is aligned to pagesz
                    int pgnum,                     // num of mapping pages
                    struct framephy_struct *frames, // list of the mapped frames
                    struct vm_rg_struct *ret_rg)   // return mapped region, the real mapped fp
{                                                  // no guarantee all given pages are mapped

// uint32_t *pte = malloc(sizeof(uint32_t));

struct framephy_struct *fpit = malloc(sizeof(struct framephy_struct));

/* TODO map range of frame to address space

* [addr to addr + pgnum*PAGING_PAGESZ]

* in page table caller->mm->pgd[]

*/

/* Tracking for later page replacement activities (if needed)

* Enqueue new usage page */

// Increase limit of rg_end

ret_rg->rg_end = addr + pgnum * PAGING_PAGESZ;

while (fpit != NULL && pgit < pgnum)
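{
  /* Sketch of a possible loop body (illustrative only; helper names such as
   * pte_set_fpn() and enlist_pgn_node() are assumed, not quoted from mm.c): */
  int pgn = PAGING_PGN(addr) + pgit;                 /* page being mapped      */
  pte_set_fpn(&caller->mm->pgd[pgn], fpit->fpn);     /* point the PTE at frame */
  enlist_pgn_node(&caller->mm->fifo_pgn, pgn);       /* remember page for FIFO */
  fpit = fpit->fp_next;                              /* advance to next frame  */
  pgit++;
}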


4.2.2 alloc_pages_range() in mm.c

This method is responsible for the allocation of a specified number of frames (req_pgnum) for a given process. It performs this function by managing the mapping of a range within the virtual memory area, ensuring that the necessary frames required by the process are assigned appropriately.

This role within the system's memory management is pivotal, as it guarantees that processes obtain the essential memory resources they need for execution. By effectively overseeing frame allocation, this method plays a crucial role in maintaining system functionality, contributing to its smooth operation and reliability.

struct framephy_struct *newfp_str;

struct framephy_struct *temp;

if ((caller->mram->maxsz / PAGING_PAGESZ) < req_pgnum)

vicpte = FIFO_find_vt_page_for_swap();

/*Get frame of victim page*/

int vicfpn = GETVAL(*vicpte, PAGING_PTE_FPN_MASK, PAGING_PTE_FPN_LOBIT);

#ifdef RAM_STATUS_DUMP

printf("[Page Replacement]\tPID #%d:\tVictim:%d\tPTE:%08x\n", caller->pid, vicfpn, *vicpte);

#endif

/*Swap from RAM to SWAP*/

Swap_cp_page(caller->mram, vicfpn, caller->active_mswp, swpfpn) ;

Code 6: Code of alloc_pages_range() in mm.c

4.2.3 MEMPHY_ dump() in mm-memphy.c

In our current task, we are tasked with implementing the MEMPHY_dump() method, specifically designated for debugging purposes. Our primary goal is to exhibit the contents of the physical memory struct comprehensively, adhering strictly to the debugging objective without incorporating any supplementary functionalities.

To accomplish this, our implementation should meticulously traverse the physical memory struct, documenting and presenting each component's details. It is crucial to ensure that the dump() method provides an exhaustive insight into the internal structure of the physical memory, aiding in the identification and resolution of potential issues or anomalies.

By adhering closely to the debugging-focused mandate and presenting a thorough overview of the physical memory struct's content, our implementation will serve as a valuable tool for diagnosing and troubleshooting any underlying concerns within the system's memory management framework.

int MEMPHY_dump(struct memphy_struct *mp)

{


/*TODO dump memphy contnt mp->storage

* for tracing the memory content

*/

printf("\n");

printf("Print content of RAM (only print nonzero value)\n");

for (int i = 0; i < mp->maxsz; i++)
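{
  /* Sketch of a possible loop body (illustrative): print only nonzero bytes. */
  if (mp->storage[i] != 0)
    printf("Addr %08x: %d\n", (unsigned)i, mp->storage[i]);
}
printf("\n");
return 0;
}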

Code 7: Code of MEMPHY_dump() in mm-memphy.c

4.2.4 Allocate memory - alloc() in mm-vm.c

This method, as its name implies, is designed to facilitate the allocation of a memory region within the virtual memory area for the invoking process. It follows a systematic procedure to accomplish this task. Initially, it endeavors to locate a free memory region of adequate size for the allocation. This search is conducted through the get_free_vmrg_area() function, which scrutinizes the virtual memory area to identify a suitable region. However, if such a region is not found, indicating a shortage of available memory space, the method initiates a procedure to expand the limit of the virtual memory area.

This expansion mechanism, executed through the inc_vma_limit() function, aims to augment the available memory regions by adjusting the limit of the virtual memory area. If this expansion operation succeeds, the method proceeds to update the symrgtbl (symbolic region table) with the pertinent information about the newly created memory region. This includes recording the starting and ending addresses of the region, ensuring comprehensive documentation and management of the virtual memory layout.

By adhering to this systematic approach, the method ensures that the calling process is provided with a suitable memory region for its allocation requirements. Moreover, the inclusion of mechanisms to handle potential memory scarcity situations enhances the robustness and reliability of the memory allocation process within the system.

/* alloc - allocate a region memory

*@caller: caller

*@vmaid: ID vm area to alloc memory region

*@rgid: memory region ID (used to identify variable in symbol table)

*@size: allocated size

*@alloc_addr: address of allocated memory region
*/

/*Allocate at the toproof */

if (rgid < 0 || rgid > PAGING_MAX_SYMTBL_SZ)
{
  printf("Process %d alloc error: Invalid region\n", caller->pid);
  return -1;
}
else if (caller->mm->symrgtbl[rgid].rg_start > caller->mm->symrgtbl[rgid].rg_end)
{
  printf("Process %d alloc error: Region was alloc before\n", caller->pid);
  return -1;
}

if (get_free_vmrg_area(caller, vmaid, size, &rgnode) == 0)

struct vm_area_struct *cur_vma = get_vma_by_num(caller->mm, vmaid);

printfi("VMA id 41d : start = %lu, end = %lu, sbrk = %lu\n", cur_vma->vm_id,

RAM_dump (caller ->mram) ;

struct vm_area_struct *cur_vma = get_vma_by_num(caller->mm, vmaid);

int inc_sz = PAGING_PAGE_ALIGNSZ (size) ;

// int inc_limit_ret


/* TODO INCREASE THE LIMIT

* inc_vma_limit(caller, vmaid, inc_sz)

*/

inc_vma_limit(caller, vmaid, inc_sz);

/*Successful increase limit */

caller->mm->symrgtbl[rgid].rg_start = old_sbrk;

caller->mm->symrgtbl[rgid].rg_end = old_sbrk + size;

// Add free rg to region list

struct vm_rg_struct *rgnode_temp = malloc(sizeof(struct vm_rg_struct));

printf("Process 4d Free Region list \n", caller->pid);

while (temp != NULL)

printfi("VMA id 41d : start = %lu, end = %lu, sbrk = %1lu\n", cur_vma->vm_id,

RAM_dump (caller ->mram) ;


4.2.5 Free up the unused memory - free() in mm-vm.c

The free() method entails a multi-step process aimed at efficiently reclaiming a process's virtual memory region and reincorporating it into the pool of available memory regions. This task is made relatively simple by the implementation of robust methods designed for memory reutilization.

Initially, the method involves identifying and retrieving the virtual memory region associated with the specific process. Once this region has been isolated, it undergoes a series of procedures to ensure its proper release and readiness for potential reuse.

Following the retrieval of the memory region, any associated resources or data structures linked to the region are appropriately deallocated or released. This step ensures that no lingering references or dependencies remain, allowing for a clean and thorough release of the memory region. Subsequently, the freed memory region is reintegrated into the list of available memory regions, effectively making it accessible for future allocation requests. This involves updating relevant data structures or memory management tables to reflect the availability of the reclaimed region.

Throughout this process, careful attention is paid to maintaining the integrity and coherence of the memory management system. Additionally, considerations may be made to optimize the reutilization of freed memory regions, such as implementing strategies for memory compaction or defragmentation to mitigate fragmentation issues and maximize available memory resources.

By adhering to these steps and leveraging well-developed methods for memory reutilization, the free() method effectively manages the release of process memory regions, ensuring efficient utilization of system resources and facilitating the seamless operation of memory management processes.

/*
 *@caller: caller
 *@vmaid: ID vm area to alloc memory region
 *@rgid: memory region ID (used to identify variable in symbol table)
 *@size: allocated size
 */
// Function Declaration
int free(struct pcb_t *caller, int vmaid, int rgid)

if (rgid < 0 || rgid > PAGING_MAX_SYMTBL_SZ)
{
  printf("Process %d free error: Invalid region\n", caller->pid);
  return -1;
}


/* TODO: Manage the collect freed region to freerg list */

rgnode = get_symrg_byid(caller->mm, rgid);

if (rgnode->rg_start == rgnode ->rg_end)

printf("Process %d FREE Error: Region wasn’t alloc or was freed before\n", caller->pid) ;

return -1;

}

struct vm_rg_struct *rgnode_temp = malloc(sizeof(struct vm_rg_struct));

// Clear content of region in RAM

BYTE value;

value = 0;

for (int i = rgnode->rg_start; i < rgnode->rg_end; i++)

pg_setval(caller->mm, i, value, caller);

// (caller ->mram ,rgnode->rg_start ,rgnode ->rg_end)

// Create new node for region

rgnode_temp->rg_start = rgnode->rg_start;

rgnode_temp ->rg_end = rgnode->rg_end;

rgnode->rg_start = rgnode->rg_end = 0;

/*enlist the obsoleted memory region */

#ifdef RAM_STATUS_DUMP

printf (" - \n");

printf("Process 4d FREE CALL | Region id %d after free: [4lu,%lu]\n", caller->pid,

rgid, rgnode->rg_start, rgnode->rg_end);

for (int it = 0; it < PAGING_MAX_SYMTBL_SZ; it++)

printf("Process %d Free Region list \n", caller->pid);

while (temp != NULL)


4.2.6 Memory area overlapping check - validate_overlap_vm_area() in mm-vm.c

The Validate Overlap VM Area method serves the purpose of determining whether a caller's virtual memory area overlaps with any existing regions. This situation commonly arises when attempting to expand a process's virtual memory area, potentially leading to overlaps with other memory regions belonging to different processes.

This method plays a crucial role in maintaining the integrity of the virtual memory space by detecting and preventing overlaps, which could result in conflicts and unpredictable behavior within the system. By thoroughly assessing the boundaries of the caller's virtual memory area and comparing them against existing regions, this method helps to ensure that memory allocations and expansions occur in a controlled and non-interfering manner, minimizing the risk of data corruption or access violations.

/*validate_overlap_vm_area

*@caller: caller

*@vmaid: ID vm area to alloc memory region

*@vmastart: vma start

*@vmaend: vma end

// struct vm_area_struct *vma = caller->mm->mmap;

/* TODO: validate the planned memory area is not overlapped */

while (vmit != NULL)

if ((vmastart < vmit->vm_start && vmaend > vmit->vm_start))

Code 10: Code of validate_overlap_vm_area in mm-vm.c

4.2.7 Choose a page to be replaced - find victim page() in mm-vm.c

In our page replacement algorithm, we incorporate a mechanism to determine which page to replace when a new page needs to be brought into the system. To make this decision, we have opted for the FIFO (First-In-First-Out) algorithm.

The process begins by initializing a pointer, pg, to point to the FIFO page-number list of the mm struct. This pointer serves as our reference for traversing the queue. By adhering to the FIFO principle as we traverse it, we ensure that the page that has been in memory the longest is the one selected for replacement, maintaining a fair and straightforward approach to page management within our system.

/*find_victim_page - find victim page

Code 11: Code of find_victim_page in mm-vm.c
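Since Code 11 above is only a stub, here is a hedged sketch of a FIFO victim pick over a singly linked list of page numbers; struct pgn_t, the field names, and the head-is-oldest convention are assumptions made for this illustration, not the skeleton's actual definitions:

#include <stdlib.h>

/* Assumed node type: a singly linked list of resident page numbers kept in
 * arrival order (head = oldest is the convention chosen for this sketch). */
struct pgn_t {
    int pgn;
    struct pgn_t *pg_next;
};

/* Detach and return the oldest page number, or -1 if the list is empty. */
int fifo_pick_victim(struct pgn_t **fifo_pgn)
{
    struct pgn_t *pg = *fifo_pgn;
    if (pg == NULL)
        return -1;
    *fifo_pgn = pg->pg_next;     /* unlink the oldest entry       */
    int victim = pg->pgn;
    free(pg);                    /* the node is no longer tracked */
    return victim;
}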

4.2.8 pg_getpage() - Get the page to MEMRAM, swap from MEMSWAP if needed

The pg_getpage() function plays a crucial role in managing the memory hierarchy by ensuring the availability of a specific page, denoted by its page number (pgn), in RAM. Its primary duty encompasses a comprehensive process to validate the presence of the specified page in primary memory. Should the page not already reside in RAM, the function initiates a swapping procedure to bring it into memory from secondary storage, thus facilitating its accessibility for subsequent operations. This intricate mechanism involves meticulous coordination with the memory management subsystem to orchestrate the seamless transfer of data between different tiers of memory. Through its diligent execution, pg_getpage() effectively optimizes memory utilization while maintaining data integrity, contributing to the overall efficiency and performance of the system's memory management operations.

int pg_getpage(struct mm_struct *mm, int pgn, int *fpn, struct pcb_t *caller)


Code 12: Code of pg_getpage in mm-vm.c

4.2.9 pg_getval() - Get the page to MEMRAM in mm-vm.c

The pg_getval() function is integral to the retrieval of data stored within a process's virtual memory space. It operates by first extracting the page number and offset from the provided memory address. Subsequently, it calls the pg_getpage() function to fetch the corresponding page from memory, ensuring its presence in RAM or swapping it in from secondary storage if necessary. Once the page frame number is obtained, the function calculates the physical address by combining the frame number with the offset. Using this physical address, the function then reads the data from memory via the MEMPHY_read() function and stores it in the designated buffer. Upon successful completion of these operations, the function returns 0, indicating a successful retrieval of the desired data.

/*pg_getval - read value at given offset

/* Get the page to MEMRAM, swap from MEMSWAP if needed */

if (pg_getpage(mm, pgn, &fpn, caller) != 0)

return -1; /* invalid page access */

int phyaddr = (fpn << PAGING_ADDR_FPN_LOBIT) + off;
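/* Sketch of the remaining steps (illustrative): read the byte at the physical
 * address into the caller's buffer via MEMPHY_read(), then report success.
 * The buffer parameter name (data) is assumed, since the signature is not shown. */
MEMPHY_read(caller->mram, phyaddr, data);

return 0;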


4.2.10 pg_setval() - Write a value at a given offset in mm-vm.c

/*pg_setval - write value to given offset
 */

/* Get the page to MEMRAM, swap from MEMSWAP if needed */
if (pg_getpage(mm, pgn, &fpn, caller) != 0)
  return -1; /* invalid page access */

int phyaddr = (fpn << PAGING_ADDR_FPN_LOBIT) + off;

MEMPHY_write(caller->mram, phyaddr, value);

return 0;

Code 14: Code of pg_setval() in mm-vm.c

4.2.11 get_free_vmrg_area() - Traverse on list of free vm regions to find a fit space in mm-vm.c

An adjustment has been made to incorporate a "break" statement when a suitable space region has been identified during the fitting process

/*get_free_vmrg_area - get a free vm region
 *@caller: caller

if (rgit == NULL)

return -1;

/* Probe unintialized newrg */

newrg->rg_start = newrg->rg_end = -1;

/* Traverse on list of free vm region to find a fit space */

while (rgit != NULL)

{

if (rgit->rg_start + size <= rgit->rg_end)

{ /* Current region has enough space */

newrg->rg_start = rgit->rg_start;

newrg->rg_end = rgit->rg_start + size;

/* Update left space in chosen region */

if (rgit->rg_start + size < rgit->rg_end)

rgit->rg_start = rgit->rg_start + size;

else

{ /*Use up all space, remove current node */

/*Clone next rg node */

struct vm_rg_struct *nextrg = rgit->rg_next;

{ /*End of free list */

rgit->rg_start = rgit->rg_end; // dummy, size 0 region

Trang 28

if (newrg->rg_start == -1) // new region not found

return -1;

return 0;

Code 15: Code of get_free_vmrg_area() in mm.c

4.2.12 enlist_vm_freerg_list - Add free region to free region list in mm-vm.c

A noteworthy refinement has been introduced to this method, involving the incorporation of

a semaphore to uphold synchronization within the virtual memory infrastructure. This addition serves as a pivotal enhancement, ensuring orderly coordination and management of operations within the virtual memory environment. By integrating this semaphore, the method orchestrates synchronized access and manipulation of virtual memory resources, fostering a harmonized and efficient system operation. This augmentation underscores a commitment to optimizing the integrity and reliability of the virtual memory subsystem, ultimately bolstering the overall performance and stability of the system.

Code 16: Code of enlist_vm_freerg_list() in mm-vm.c
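The listing for Code 16 did not survive the scan. Below is a minimal sketch of what a lock-guarded enlist might look like, with a mutex standing in for the semaphore the team describes; the lock name mmvm_lock and the vm_freerg_list field are assumptions made for illustration:

#include <pthread.h>

static pthread_mutex_t mmvm_lock = PTHREAD_MUTEX_INITIALIZER;   /* assumed lock */

int enlist_vm_freerg_list(struct mm_struct *mm, struct vm_rg_struct *rg_elmt)
{
    if (rg_elmt->rg_start >= rg_elmt->rg_end)
        return -1;                                  /* empty region: nothing to add */

    pthread_mutex_lock(&mmvm_lock);                 /* the synchronization described above */
    rg_elmt->rg_next = mm->mmap->vm_freerg_list;    /* push onto the free-region list       */
    mm->mmap->vm_freerg_list = rg_elmt;
    pthread_mutex_unlock(&mmvm_lock);
    return 0;
}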

4.2.13 inc_vma_limit - Increase the virtual memory area in mm-vm.c

This method has undergone a minor refinement, which includes the integration of a semaphore aimed at ensuring synchronization within the physical memory infrastructure. This enhancement represents a significant addition to the method's functionality, as it facilitates orderly coordination and management of operations pertaining to physical memory. By incorporating this semaphore, the method effectively regulates access and manipulation of physical memory resources, thereby promoting a synchronized and efficient operation of the memory management subsystem. This modification underscores a commitment to optimizing the integrity and reliability of physical memory handling, thereby contributing to the overall robustness and stability of the system.

/*
 *@caller: caller
 *@inc_sz: increment size
 */
int inc_vma_limit(struct pcb_t *caller, int vmaid, int inc_sz)
{
  struct vm_rg_struct *newrg = malloc(sizeof(struct vm_rg_struct));
  int inc_amt = PAGING_PAGE_ALIGNSZ(inc_sz);
  int incnumpage = inc_amt / PAGING_PAGESZ;

  /*Validate overlap of obtained region */
    return -1; /*Overlap and failed allocation */

  /* The obtained vm area (only)
   * now will be alloc real ram region */
    old_end, incnumpage, newrg) < 0)
    return -1; /* Map the memory to MEMRAM */


4.3 Translation Lookaside Buffer

In a multi-core system where each CPU core is equipped with its own Memory Management Unit (MMU) and Translation Lookaside Buffer (TLB), the architecture provides enhanced efficiency and security through improved isolation and faster context switching. The implementation of two-level TLBs, featuring a swift L1 and a more capacious L2, optimizes memory access by boosting TLB hit rates and minimizing the occurrence of slow page table walks. However, these benefits necessitate complex translation schemes and sophisticated synchronization protocols to ensure coherence among the TLBs of different cores, particularly when updates occur in shared memory areas. This complexity is vital for enabling each core to manage its memory translations autonomously while ensuring consistent performance and system integrity across all cores.

4.3.1 tlb_change_all_page_tables_of() - Update all page table directory info

int tlb_change_all_page_tables_of(struct pcb_t *proc, struct memphy_struct *mp)

// This function might be used to update all page table directory info

return 0;

Code 18: Code of tlb_change_all_page_tables_of() in mm-vm.c

4.3.2 tlb_flush_tlb_of() - Flush tlb cached

Code 19: Code of tlb_flush_tlb_of() in cpu-tlb.c

4.3.3 tlballoc() - CPU TLB-based allocate a region memory

{

int addr, val;

val = alloc(proc, 0, reg_index, size, &addr);

for (uint32_t i = 0; i < size; i++)

int pgn = PAGING_PGN((addr + i));

uint32_t page_entry = proc->mm->pgd[pgn] ;

page_entry = PAGING_FPN(page_entry) ;


tlb_cache_write(proc->tlb, proc->pid, pgn, page_entry);

return val;

Code 20: Code of tlballoc() in cpu-tlb.c

4.3.4 tlbfree_data() - CPU TLB-based free a region memory

{

if (reg_index < 0 || reg_index > PAGING_MAX_SYMTBL_SZ)

return -1;

int addr = proc->mm->symrgtbl [reg_index].rg_start;

uint32_t size = proc->mm->symrgtbl[reg_index].rg_end - proc->mm->symrgtbl[reg_index].rg_start;

for (uint32_t i = 0; i < size; i++)

Code 21: Code of tlbfree_data() in cpu-tlb.c

4.3.5 tlbread() - CPU TLB-based read a region memory

int phyaddr = (frmnum << PAGING_ADDR_FPN_LOBIT) + PAGING_OFFST (address) ;

*destination = (uint32_t)proc->mram->storage [phyaddr];

TLB_hit++;

}

else

{

int val = read(proc, 0, source, offset, &data);

*destination = (uint32_t) data;

uint32_t page_entry = proc->mm->pgd[pgn];

Code 22: Code of tlbread() in cpu-tlb.c

4.3.6 tlbwrite() - CPU TLB-based write a region memory

int tlbwrite(struct pcb_t *proc, BYTE data, uint32_t destination, uint32_t offset)

int val = write(proc, 0, destination, offset, data);

uint32_t page_entry = proc->mm->pgd[pgn];

Code 23: Code of tlbwrite() in cpu-tlb.c

4.3.7 tlb_cache_read() - Read TLB cache device

int tlb_cache_read(struct memphy_struct *mp, int pid, int pgnum, uint16_t *value)


4.3.8 tlb_cache_write() - Write TLB cache device

int tlb_cache_write(struct memphy_struct *mp, int pid, int pgnum, uint16_t value)

Code 25: Code of tlb_cache_write() in cpu-tlbcache.c

4.3.9 TLBMEMPHY_dump() - Dump memphy content mp->storage for tracing the memory

5 Interpretation

5.1 Scheduler

Figure 5: The Gantt chart of sched

Time slice: 4, Number of CPUs: 2, Number of processes: 3

Start Time: 0, Process: p1s, Priority: 0

Start Time: 1, Process: p2s, Priority: 0

Start Time: 2, Process: p3s, Priority: 0

Time slot 0

ld_routine

Loaded a process at input/proc/p1s, PID: 1 PRIO: 0

Time slot 1

CPU 0: Dispatched process 1

Process 1 executes calc

Loaded a process at input/proc/p2s, PID: 2 PRIO: 0

Time slot 2

Process 1 executes calc

CPU 1: Dispatched process 2



Process 2 executes calc

Loaded a process at input/proc/p3s, PID: 3 PRIO: 0

Time slot 3

Process 1 executes calc

Process 2 executes calc

Time slot 4

Process 1 executes calc

Process 2 executes calc

Time slot 5

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 3

Process 3 executes calc

Process 2 executes calc

Time slot 6

Process 3 executes calc

CPU 1: Put process 2 to run queue

CPU 1: Dispatched process 1

Process 1 executes calc

Time slot 7

Process 3 executes calc

Process 1 executes calc

Time slot 8

Process 3 executes calc

Process 1 executes calc

Process 1 executes calc

Time slot 9

CPU 0: Put process 3 to run queue

CPU 0: Dispatched process 2

Process 2 executes calc

CPU 1: Put process 1 to run queue

CPU 1: Dispatched process 3

Process 3 executes calc

Time slot 10

Process 2 executes calc

Process 3 executes calc

Time slot 11

Process 2 executes calc

Process 3 executes calc

Time slot 12

Process 2 executes calc

Process 3 executes calc

Time slot 13

CPU 0: Put process 2 to run queue

CPU 0: Dispatched process 1

Process 1 executes calc

CPU 1: Put process 3 to run queue

CPU 1: Dispatched process 2

Process 2 executes calc

Time slot 14

Process 1 executes calc

Process 2 executes calc

Time slot 15

CPU 0: Processed 1 has finished

CPU 0: Dispatched process 3

Process 3 executes calc

Process 2 executes calc

Time slot 16

Process 3 executes calc


Process 2 executes calc

Time slot 17

Process 3 executes calc

CPU 1: Processed 2 has finished

- At time slot 5, process p1s completes its execution on CPU 1 and is moved from CPU 1 to queue 1. As queue 0 has only been dispatched once, its process is then removed and loaded onto CPU 1 (as it is available), completing the 2/2 dispatch from queue 0. Subsequently, the scheduler returns to processing queue 1.

- At time slot 6, process p2s completes its execution on CPU 0 and is moved from CPU 0 to the run queue. As the scheduler's pointer is currently at queue 1, process p1s is removed from queue 1 and executed.

- This process continues until all processes have completed 10 seconds of work. Once all processes have finished, the algorithm stops and terminates.


Figure 6: The Gantt chart of sched_0

Time slice: 2, Number of CPUs: 1, Number of processes: 2

Start Time: 0, Process: s0, Priority: 3

Start Time: 4, Process: s1, Priority: 0

Time slot 0

ld_routine

Loaded a process at input/proc/s0, PID: 1 PRIO: 3

Time slot 1

CPU 0: Dispatched process 1

Process 1 executes calc

Time slot 2

Process 1 executes calc

Time slot 3

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

Process 1 executes calc

Time slot 4

Process 1 executes calc

Loaded a process at input/proc/s1, PID: 2 PRIO: 0

Time slot 5

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 2

Begin: Process 2 allocates 300 from RAM for register 0

Process 2 ALLOC CALL | SIZE = 300

[Page mapping] PID #2: Frame:0 PTE:80000000 PGN:0

[Page mapping] PID #2: Frame:1 PTE:80000001 PGN:1


Process 2 Free Region list

Start = 300, end = 512

start = 0, end = 512

RAM mapping status

Number of mapped frames: 2

Number of remaining frames:

CPU 0: Put process 2 to run queue

CPU 0: Dispatched process 2

Process 2 executes calc

CPU 0: Put process 2 to run queue

CPU 0: Dispatched process 2

Process 2 executes calc

Time slot 12

CPU 0: Processed

CPU 0: Dispatched process 1

Process 1 executes calc

Time slot 13

Process 1 executes calc

Put process 2 to run

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

Process 1 executes calc

Time slot 15

Process 1 executes calc

Time slot 16

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

Process 1 executes calc

Time slot 17

Process 1 executes calc

Time slot 18

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1


Process 1 executes calc

Time slot 19

Process 1 executes calc

Time slot 20

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

Process 1 executes calc

Time slot 21

Process 1 executes calc

Time slot 22

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

Process 1 executes calc

- At time slot 1, process s0 will execute for 2 time slots, as the time slice is 2.

- Then at time slot 3, because there is no other process waiting for the CPU except for process s0, the CPU takes process s0 again.

- Process s1 arrives at time slot 4, but at that time we have only one CPU and it is being used by process s0, so process s1 has to wait for process s0.

- Next, at time slot 5 when process s0 finishes, the CPU takes process s1 and executes it. This process continues until both processes have finished. The program ends.


Figure 7: The Gantt chart of sched_1

Time slice: 2, Number of CPUs: 1, Number of processes: 4

Start Time: 0, Process: s0, Priority: 4

Start Time: 4, Process: s1, Priority: 1

Start Time: 6, Process: s2, Priority: 0

Start Time: 7, Process: s3, Priority: 2

Time slot 0

ld_routine

Loaded a process at input/proc/s0, PID: 1 PRIO: 4

Time slot 1

CPU 0: Dispatched process 1

Process 1 executes calc

Time slot 2

Process 1 executes calc

Time slot 3

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 1

Process 1 executes calc

Time slot 4

Process 1 executes calc

Loaded a process at input/proc/s1, PID: 2 PRIO: 1

Time slot 5

CPU 0: Put process 1 to run queue

CPU 0: Dispatched process 2

Begin: Process 2 allocates 300 from RAM for register 0

Process 2 ALLOC CALL | SIZE = 300

[Page mapping] PID #2: Frame:0 PTE:80000000 PGN:0

[Page mapping] PID #2: Frame:1 PTE:80000001 PGN:1

Region id

Process 2 Free Region list

Start = 300, end = 512

VMA id 1 start = 0, end = 512, sbrk = 512

RAM mapping status
