Last updated on March 20th, 2025 at 03:26 am
This page collects Operating System questions from previous years' GATE and PSU exams, along with the Operating System syllabus for both exams.
1. Operating System in GATE CSE and PSU Exam
Operating System (OS) is a pivotal subject in the GATE Computer Science (CSE) exam. GATE questions in this domain test conceptual clarity and problem-solving skills across core OS functionalities, including process management, memory allocation, synchronization, and file systems. For instance, recurring topics involve CPU scheduling algorithms (e.g., Round Robin, Shortest Job First), deadlock handling (Banker’s Algorithm), semaphore operations, and paging mechanisms in virtual memory. Recent trends also emphasize multithreading, resource allocation graphs, and page replacement strategies like LRU and FIFO.
2. Operating System Syllabus
The Operating System questions asked in GATE and PSU exams focus on the following topics:
- Process Management: Processes, threads, inter-process communication, CPU scheduling, and concurrency control (semaphores, monitors).
- Memory Management: Paging, segmentation, virtual memory, and address translation.
- Deadlocks: Detection, prevention, avoidance (Banker’s Algorithm), and recovery.
- File Systems: Disk scheduling (SCAN, C-SCAN), file organization, and I/O management.
- Advanced Topics: Real-time OS basics, system calls, and synchronization mechanisms.
Operating System Question Paper: GATE & PSU Exam
1. A clustering index is defined on the fields which are of type
- non-key and ordering
- non-key and non-ordering
- key and ordering
- key and non-ordering
Answer
non-key and ordering
[2008]
2. A B-tree of order 4 is built from scratch by 10 successive insertions. What is the maximum number of node splitting operations that may take place?
- 3
- 4
- 5
- 6
Answer
5
[2008]
3. Consider a file of 16384 records. Each record is 32 bytes long and its key field is of size 6 bytes. The file is ordered on a non-key field, and the file organization is unspanned. The file is stored in a file system with block size 1024 bytes, and the size of a block pointer is 10 bytes. If the secondary index is built on the key field of the file, and a multilevel index scheme is used to store the secondary index, the number of first-level and second-level blocks in the multilevel index are respectively
- 8 and 0
- 128 and 6
- 256 and 4
- 512 and 5
Answer
256 and 4
[2008]
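The arithmetic behind this answer can be checked with a short script (a sketch; the variable names are ours, not the exam's). Each secondary index entry pairs the 6-byte key with a 10-byte block pointer, and a secondary index on a key field is dense, so there is one entry per record:

```python
BLOCK = 1024
records = 16384

# Each index entry = 6-byte key + 10-byte block pointer = 16 bytes.
entries_per_block = BLOCK // (6 + 10)                 # 64 entries per block

# Dense index: one first-level entry per record; each higher level
# indexes the blocks of the level below it.
first_level = -(-records // entries_per_block)        # ceil(16384 / 64) = 256
second_level = -(-first_level // entries_per_block)   # ceil(256 / 64)   = 4
print(first_level, second_level)                      # 256 4
```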
4. The following key values are inserted into a B+ tree in which the order of the internal nodes is 3, and that of the leaf nodes is 2, in the sequence given below. The order of internal nodes is the maximum number of tree pointers in each node, and the order of leaf nodes is the maximum number of data items that can be stored in it. The B+ tree is initially empty.
10, 3, 6, 8, 4, 2, 1
The maximum number of times leaf nodes would get split up as a result of these insertions is
- 2
- 3
- 4
- 5
Answer
4
[2009]
5. Consider a B+ tree in which the maximum number of keys in a node is 5. What is the minimum number of keys in any non-root node?
- 1
- 2
- 3
- 4
Answer
2
[2010]
6. An index is clustered if
- it is on a set of fields that form a candidate key.
- it is on a set of fields that include the primary key.
- the data records of the file are organized in the same order as the data entries of the index.
- the data records of the file are organized not in the same order as the data entries of the index.
Answer
the data records of the file are organized in the same order as the data entries of the index.
[2013]
7. A file is organized so that the ordering of data records is the same as or close to the ordering of data entries in some index. Then that index is called
- Dense
- Sparse
- Clustered
- Unclustered
Answer
Clustered
[2015]
8. With reference to the B+ tree index of order 1 shown below, the minimum number of nodes (including the Root node) that must be fetched in order to satisfy the following query: “Get all records with a search key greater than or equal to 7 and less than 15” is ___________ .
- 5
- 8
- 7
- 15
Answer
5
[2015]
9. Consider a B+ tree in which the search key is 12 bytes long, block size is 1024 bytes, record pointer is 10 bytes long and block pointer is 8 bytes long. The maximum number of keys that can be accommodated in each non-leaf node of the tree is _________.
- 44
- 50
- 64
- 72
Answer
50
[2015]
10. B+ Trees are considered BALANCED because
- The lengths of the paths from the root to all leaf nodes are all equal.
- The lengths of the paths from the root to all leaf nodes differ from each other by at most 1.
- The number of children of any two non – leaf sibling nodes differ by at most 1.
- The number of records in any two leaf nodes differ by at most 1.
Answer
The lengths of the paths from the root to all leaf nodes are all equal.
[2016]
11. In a B+ tree, if the search-key value is 8 bytes long, the block size is 512 bytes and the block pointer size is 2 bytes, then the maximum order of the B+ tree is __________.
- 46
- 52
- 58
- 66
Answer
52
[2017]
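Questions 9 and 11 both come down to the same capacity constraint: an internal node of order p holds p block pointers and p − 1 keys, and these must fit into one block. A small helper (an illustrative sketch; the function name is ours) finds the largest such p:

```python
def max_order(block_size, key_size, block_ptr_size):
    # An internal node of order p stores p block pointers and (p - 1) keys:
    #   p * block_ptr_size + (p - 1) * key_size <= block_size
    p = 1
    while (p + 1) * block_ptr_size + p * key_size <= block_size:
        p += 1
    return p

print(max_order(512, 8, 2))        # 52  (question 11: max order)
print(max_order(1024, 12, 8) - 1)  # 50  (question 9: max keys = order - 1)
```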
12. Consider a disk drive with the following specifications:
16 surfaces, 512 tracks/surface, 512 sectors/track, 1 KB/sector, rotation speed 3000 rpm. The disk is operated in cycle-stealing mode whereby whenever one 4-byte word is ready it is sent to memory; similarly, for writing, the disk interface reads a 4-byte word from memory in each DMA cycle. The memory cycle time is 40 nsec. The maximum percentage of time that the CPU gets blocked during DMA operation is:
- 10
- 25
- 40
- 50
Answer
25
[2005]
13. Consider a disk pack with 16 surfaces, 128 tracks per surface and 256 sectors per track. 512 bytes of data are stored in a bit serial manner in a sector. The capacity of the disk pack and the number of bits required to specify a particular sector in the disk are, respectively:
- 256 Mbyte, 19 bits
- 256 Mbyte, 28 bits
- 512 Mbyte, 20 bits
- 64 Gbyte, 28 bits
Answer
256 Mbyte, 19 bits
[2007]
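A quick sanity check of this answer (a sketch; variable names are illustrative): capacity is the product of the four geometry factors, and since every count is a power of two, the address needs log2 of each factor in bits:

```python
from math import log2

surfaces, tracks, sectors, sector_bytes = 16, 128, 256, 512

capacity = surfaces * tracks * sectors * sector_bytes
print(capacity // 2**20, "MB")   # 256 MB

# Bits to name one sector: log2(16) + log2(128) + log2(256) = 4 + 7 + 8
bits = int(log2(surfaces)) + int(log2(tracks)) + int(log2(sectors))
print(bits, "bits")              # 19 bits
```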
14. For a magnetic disk with concentric circular tracks, the seek latency is not linearly proportional to the seek distance due to
- non-uniform distribution of requests
- arm starting and stopping inertia
- higher capacity of tracks on the periphery of the platter
- use of unfair arm scheduling policies
Answer
higher capacity of tracks on the periphery of the platter
[2008]
15. Consider a file of 16384 records. Each record is 32 bytes long and its key field is of size 6 bytes. The file is ordered on a non-key field, and the file organization is unspanned. The file is stored in a file system with block size 1024 bytes, and the size of a block pointer is 10 bytes. If the secondary index is built on the key field of the file, and a multi-level index scheme is used to store the secondary index, the number of first level and second-level blocks in the multi-level index are, respectively
- 8 and 0
- 128 and 6
- 256 and 4
- 512 and 5
Answer
256 and 4
[2008]
16. Consider a disk system with 100 cylinders. The requests to access the cylinders occur in the following sequence:
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes 1 ms to move from one cylinder to an adjacent one and the shortest seek time first policy is used?
- 95 ms
- 119 ms
- 233 ms
- 276 ms
Answer
119 ms
[2009]
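SSTF can be simulated directly: repeatedly service the pending cylinder closest to the head. Since movement costs 1 ms per cylinder here, total movement equals total time. A minimal sketch (the function name is ours):

```python
def sstf_seek(start, requests):
    # Greedy SSTF: always service the pending request nearest the head.
    pending, head, total = set(requests), start, 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

# 1 ms per cylinder of movement, so 119 cylinders -> 119 ms.
print(sstf_seek(50, [4, 34, 10, 7, 19, 73, 2, 15, 6, 20]))  # 119
```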
17. A hard disk has 63 sectors per track, 10 platters each with 2 recording surfaces and 1000 cylinders. The address of a sector is given as a triple 〈c, h, s〉 , where c is the cylinder number, h is the surface number and s is the sector number. Thus, the 0th sector is addressed as 〈0, 0, 0〉 , the 1st sector as 〈0, 0, 1〉,and so on. The address <400, 16, 29> corresponds to the sector number:
- 505035
- 505036
- 505037
- 505038
Answer
505037
[2009]
18. A hard disk has 63 sectors per track, 10 platters each with 2 recording surfaces and 1000 cylinders. The address of a sector is given as a triple 〈c, h, s〉 , where c is the cylinder number, h is the surface number and s is the sector number. Thus, the 0th sector is addressed as 〈0, 0, 0〉 , the 1st sector as 〈0, 0, 1〉,and so on. The address of the 1039th sector is
- 〈0, 15, 31〉
- 〈0, 16, 30〉
- 〈0, 16, 31〉
- 〈0, 17, 31〉
Answer
〈0, 16, 31〉
[2009]
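Questions 17 and 18 are the two directions of the same CHS mapping: sectors are numbered track by track across the 20 surfaces (10 platters × 2) of a cylinder, then cylinder by cylinder. A sketch of both conversions (function names are ours):

```python
HEADS, SECTORS = 20, 63   # 10 platters x 2 surfaces; 63 sectors per track

def to_linear(c, h, s):
    # Linear sector number for address <c, h, s>.
    return (c * HEADS + h) * SECTORS + s

def to_chs(n):
    # Inverse mapping: peel off sector, then surface, then cylinder.
    n, s = divmod(n, SECTORS)
    c, h = divmod(n, HEADS)
    return c, h, s

print(to_linear(400, 16, 29))  # 505037     (question 17)
print(to_chs(1039))            # (0, 16, 31) (question 18)
```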
19. A file system with 300 GByte disk uses a file descriptor with 8 direct block addresses, 1 indirect block address and 1 doubly indirect block address. The size of each disk block is 128 bytes and the size of each disk block address is 8 bytes. The maximum possible file size in this file system is
- 3 Kbytes
- 35 Kbytes
- 280 Kbytes
- dependent on the size of the disk
Answer
35 Kbytes
[2012]
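The maximum file size follows from counting the data blocks reachable from the descriptor: 8 direct blocks, one indirect block of addresses, and one doubly indirect block of blocks of addresses. A quick check (variable names are illustrative):

```python
block = 128   # bytes per disk block
addr = 8      # bytes per disk block address

addrs_per_block = block // addr          # 16 addresses fit in one block

direct = 8                               # 8 data blocks
single_indirect = addrs_per_block        # 16 data blocks
double_indirect = addrs_per_block ** 2   # 16 * 16 = 256 data blocks

max_file = (direct + single_indirect + double_indirect) * block
print(max_file // 1024, "KB")            # 35 KB
```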
20. Consider a hard disk with 16 recording surfaces (0–15) having 16384 cylinders (0–16383) and each cylinder contains 64 sectors (0–63). Data storage capacity in each sector is 512 bytes. Data are organized cylinder-wise and the addressing format is <cylinder no., surface no., sector no.>. A file of size 42797 KB is stored in the disk and the starting disk location of the file is <1200, 9, 40>. What is the cylinder number of the last sector of the file, if it is stored in a contiguous manner?
- 1281
- 1282
- 1283
- 1284
Answer
1284
[2013]
21. A FAT (file allocation table)-based file system is being used, and the total overhead of each entry in the FAT is 4 bytes. Given a 100 × 10^6 byte disk on which the file system is stored, and a data block size of 10^3 bytes, the maximum size of a file that can be stored on this disk, in units of 10^6 bytes, is __?
- 98.6
- 98.9
- 99.6
- 99.2
Answer
99.6
[2014]
22. Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. The additional distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is used compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100 when it starts execution) is __ tracks.
- 45
- 90
- 10
- 25
Answer
10
[2015]
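Both schedules can be simulated. One caveat: GATE grades SCAN here LOOK-style, reversing at the farthest request rather than at track 99, which gives SCAN = 120 and SSTF = 130. A sketch under that assumption (function names are ours):

```python
def sstf(start, reqs):
    # Greedy: service the nearest pending track each time.
    pending, head, total = list(reqs), start, 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

def scan_up(start, reqs):
    # SCAN moving toward 100, graded LOOK-style: sweep up to the
    # farthest request, then reverse down to the lowest request.
    up = [r for r in reqs if r >= start]
    down = [r for r in reqs if r < start]
    total = max(up) - start if up else 0
    if down:
        total += (max(up) - min(down)) if up else (start - min(down))
    return total

reqs = [45, 20, 90, 10, 50, 60, 80, 25, 70]
print(sstf(50, reqs) - scan_up(50, reqs))   # 130 - 120 = 10
```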
23. Consider a typical disk that rotates at 15000 rotations per minute (RPM) and has a transfer rate of 50 × 10^6 bytes/sec. If the average seek time of the disk is twice the average rotational delay and the controller’s transfer time is 10 times the disk transfer time, the average time (in milliseconds) to read or write a 512-byte sector of the disk is __
- 4.0
- 7.6
- 8.4
- 6.1
Answer
6.1
[2015]
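The answer is the sum of four components: average seek, average rotational delay (half a revolution), disk transfer time for 512 bytes, and the controller's transfer time. A worked check (variable names are illustrative):

```python
rpm = 15000
rotation_ms = 60_000 / rpm               # 4 ms per revolution
avg_rotational_delay = rotation_ms / 2   # 2 ms (half a revolution on average)
avg_seek = 2 * avg_rotational_delay      # 4 ms (given: twice rotational delay)

transfer_ms = 512 / 50e6 * 1000          # ~0.0102 ms to move one 512-byte sector
controller_ms = 10 * transfer_ms         # ~0.1024 ms (given: 10x transfer time)

total = avg_seek + avg_rotational_delay + transfer_ms + controller_ms
print(round(total, 1))                   # 6.1
```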
24. Consider a disk queue with requests for I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10. The C-LOOK scheduling algorithm is used. The head is initially at cylinder number 63, moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests is _.
- 156
- 120
- 386
- 346
Answer
346
[2016]
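C-LOOK sweeps up to the largest request, makes one long seek back to the smallest pending request (which counts toward head movement), and continues upward. A sketch of the computation (the function name is ours):

```python
def clook(start, reqs):
    # C-LOOK: up to the largest request, jump back to the smallest
    # pending request, then sweep up again. The jump counts as movement.
    up = sorted(r for r in reqs if r >= start)
    low = sorted(r for r in reqs if r < start)
    total = up[-1] - start            # 63 -> 191 : 128 cylinders
    if low:
        total += up[-1] - low[0]      # 191 -> 10 : 181 cylinders
        total += low[-1] - low[0]     # 10 -> 47  : 37 cylinders
    return total

print(clook(63, [47, 38, 121, 191, 87, 11, 92, 10]))  # 346
```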
25. In a file allocation system, which of the following allocation scheme(s) can be used if no external fragmentation is allowed?
I. Contiguous
II. Linked
III. Indexed
- I and III only
- II only
- III only
- II and III only
Answer
II and III only
[2017]
26. Consider a storage disk with 4 platters (numbered as 0, 1, 2 and 3), 200 cylinders (numbered as 0, 1, …, 199), and 256 sectors per track (numbered as 0, 1, …, 255). The following 6 disk requests of the form [sector number, cylinder number, platter number] are received by the disk controller at the same time:
[120, 72, 2], [180, 134, 1], [60, 20, 0], [212, 86, 3], [56, 116, 2], [118, 16, 1]
Currently the head is positioned at sector number 100 of cylinder 80, and is moving towards higher cylinder numbers. The average power dissipation in moving the head over 100 cylinders is 20 milliwatts and for reversing the direction of the head movement once is 15 milliwatts. Power dissipation associated with rotational latency and switching of head between different platters is negligible.
The total power consumption in milliwatts to satisfy all of the above disk requests using the Shortest Seek Time First disk scheduling algorithm is _.
- 85
- 117
- 135
- 70
Answer
85
[2018]
27. Consider three processes (process id 0, 1, 2, respectively) with compute bursts 2, 4 and 8 time units. All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In LRTF ties are broken by giving priority to the process with the lowest process id. The average turnaround time is:
- 13 units
- 14 units
- 15 units
- 16 units
Answer
13 units
[2006]
28. Consider three processes, all arriving at zero, with total execution time of 10, 20 and 30 units, respectively. Each process spends the first 20% of execution time doing I/O, the next 70% of time doing computation, and the last 10% of time doing I/O again. The operating system uses the shortest remaining compute time first scheduling algorithm and schedules a new process either when the running process gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O operations can be overlapped as much as possible. For what percentage of time does the CPU remain idle?
- 0%
- 10.6%
- 30.0%
- 89.4%
Answer
10.6%
[2006]
29. A single processor system has three resource types X, Y, and Z, which are shared by three processes. There are five units of each resource type. Consider the following scenario, where the column alloc denotes the number of units of each resource type allocated to each process, and the column request denotes the number of units of each resource type requested by a process in order to complete execution. Which of these processes will finish last?
Process | Alloc X | Alloc Y | Alloc Z | Request X | Request Y | Request Z |
P0 | 1 | 2 | 1 | 1 | 0 | 3 |
P1 | 2 | 0 | 1 | 0 | 1 | 2 |
P2 | 2 | 2 | 1 | 1 | 2 | 0 |
- P0
- P1
- P2
- None of the above, since the system is in a deadlock.
Answer
P2
[2007]
30. Which of the following is not true of deadlock prevention and deadlock avoidance schemes?
- In deadlock prevention, the request for resources is always granted if the resulting state is safe
- In deadlock avoidance, the request for resources is always granted if the resulting state is safe
- Deadlock avoidance is less restrictive than deadlock prevention
- Deadlock avoidance requires knowledge of resource requirements a priori
Answer
In deadlock prevention, the request for resources is always granted if the resulting state is safe
[2008]
31. In the following process state transition diagram for a uniprocessor system, assume that there are always some processes in the ready state:
Now consider the following statements:
I. If a process makes a transition D, it would result in another process making transition A immediately.
II. A process P2 in a blocked state can make transition E, while another process P1 is in a running state.
III. The OS uses pre-emptive scheduling.
IV. The OS uses non-pre-emptive scheduling.
Which of the above statements are true?
- I and II
- I and III
- II and III
- II and IV
Answer
II and III
[2009]
32. Which of the following statements are true?
I. Shortest remaining time first scheduling may cause starvation
II. Pre-emptive scheduling may cause starvation
III. Round Robin is better than FCFS in terms of response time
- I only
- I and III only
- II and III only
- I, II and III
Answer
I, II and III
[2010]
33. A system has n resources R0, …, Rn–1, and k processes P0, …, Pk–1. The implementation of the resource request logic of each process Pi is as follows:
if (i % 2 == 0) {
    if (i < n) request Ri;
    if (i + 2 < n) request Ri+2;
}
else {
    if (i < n) request Rn-i;
    if (i + 2 < n) request Rn-i-2;
}
In which one of the following situations is a deadlock possible?
- n = 40, k = 26
- n = 21, k = 12
- n = 20, k = 10
- n = 41, k = 19
Answer
n = 21, k = 12
[2010]
34. Consider the following table of arrival time and burst time for three processes P0, P1, and P2.
Process | Arrival time | Burst time |
P0 | 0 ms | 9 ms |
P1 | 1 ms | 4 ms |
P2 | 2 ms | 9 ms |
The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at the arrival or completion of processes. What is the average waiting time for the three processes?
- 5.0 ms
- 4.33 ms
- 6.33 ms
- 7.33 ms
Answer
5.0 ms
[2011]
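Preemptive SJF (shortest remaining time first) with decisions only at arrivals and completions can be simulated event by event. A minimal sketch (the function name and process-tuple layout are ours): waiting time is finish − arrival − burst, giving (4 + 0 + 11) / 3 = 5.0 ms here.

```python
def srtf(procs):
    # procs: list of (name, arrival, burst); re-decide at each arrival/completion.
    rem = {n: b for n, a, b in procs}
    arrival = {n: a for n, a, b in procs}
    t, finish = 0, {}
    while rem:
        ready = [n for n in rem if arrival[n] <= t]
        if not ready:
            t = min(arrival[n] for n in rem)   # idle until next arrival
            continue
        cur = min(ready, key=lambda n: rem[n]) # shortest remaining time
        future = [arrival[n] - t for n in rem if arrival[n] > t]
        run = min([rem[cur]] + future)         # run until next event
        t += run
        rem[cur] -= run
        if rem[cur] == 0:
            finish[cur] = t
            del rem[cur]
    return finish

procs = [("P0", 0, 9), ("P1", 1, 4), ("P2", 2, 9)]
fin = srtf(procs)
waits = [fin[n] - a - b for n, a, b in procs]
print(sum(waits) / len(waits))   # 5.0
```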
35. Consider the three processes, P1, P2 and P3 as shown in the table.
Process | Arrival time | Time units required |
P1 | 0 | 5 |
P2 | 1 | 7 |
P3 | 3 | 4 |
The completion order of the three processes under the policies FCFS and RR2 (round-robin scheduling with CPU quantum of 2 time units) are
- FCFS: P1, P2, P3 RR2: P1, P2, P3
- FCFS: P1, P3, P2 RR2: P1, P3, P2
- FCFS: P1, P2, P3 RR2: P1, P3, P2
- FCFS: P1, P3, P2 RR2: P1, P2, P3
Answer
FCFS: P1, P2, P3 RR2: P1, P3, P2
[2012]
36. A scheduling algorithm assigns priority proportional to the waiting time of a process. Every process starts with priority zero (the lowest priority). The scheduler re-evaluates the process priorities every T time units and decides the next process to schedule. Which one of the following is TRUE if the processes have no I/O operations and all arrive at time zero?
- This algorithm is equivalent to the first-come first-serve algorithm.
- This algorithm is equivalent to the Round Robin algorithm.
- This algorithm is equivalent to the shortest-job-first algorithm.
- This algorithm is equivalent to the shortest-remaining-time-first algorithm.
Answer
This algorithm is equivalent to the Round Robin algorithm.
[2013]
37. An operating system uses the Banker’s algorithm for deadlock avoidance when managing the allocation of three resource types X, Y, and Z to three processes P0, P1, and P2. The table given below presents the current system state. Here, the allocation matrix shows the current number of resources of each type allocated to each process and the Max matrix shows the maximum number of resources of each type required by each process during its execution.
Process | Allocation X | Allocation Y | Allocation Z | Max X | Max Y | Max Z |
P0 | 0 | 0 | 1 | 8 | 4 | 3 |
P1 | 3 | 2 | 0 | 6 | 2 | 0 |
P2 | 2 | 1 | 1 | 3 | 3 | 3 |
There are three units of type X, two units of type Y and two units of type Z still available. The system is currently in a safe state. Consider the following independent requests for additional resources in the current state:
REQ1: P0 requests 0 units of X, 0 units of Y and 2 units of Z
REQ2: P1 requests 2 units of X, 0 units of Y and 0 units of Z
Which one of the following is true?
- Only REQ1 can be permitted
- Only REQ2 can be permitted
- Both REQ1 and REQ2 can be permitted
- Neither REQ1 nor REQ2 can be permitted
Answer
Only REQ2 can be permitted
[2014]
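The Banker's algorithm decides each request by pretending to grant it and then running the safety algorithm on the resulting state. A sketch of both steps (function names and the list layout are ours): REQ1 leaves a state where only P1 can finish, after which neither P0 nor P2 can, so it is denied; REQ2 leaves the safe sequence P1, P2, P0.

```python
def is_safe(avail, alloc, need):
    # Safety algorithm: repeatedly finish any process whose need fits
    # in the available vector, reclaiming its allocation.
    avail = list(avail)
    done = [False] * len(alloc)
    while True:
        for i, fin in enumerate(done):
            if not fin and all(n <= a for n, a in zip(need[i], avail)):
                avail = [a + x for a, x in zip(avail, alloc[i])]
                done[i] = True
                break
        else:
            return all(done)

def can_grant(req, pid, avail, alloc, maxm):
    need = [[m - a for m, a in zip(maxm[i], alloc[i])] for i in range(len(alloc))]
    if any(r > n for r, n in zip(req, need[pid])):
        return False                      # exceeds declared maximum
    if any(r > a for r, a in zip(req, avail)):
        return False                      # not enough resources right now
    # Pretend to grant, then check safety of the new state.
    avail2 = [a - r for a, r in zip(avail, req)]
    alloc2 = [row[:] for row in alloc]
    alloc2[pid] = [a + r for a, r in zip(alloc2[pid], req)]
    need2 = [[m - a for m, a in zip(maxm[i], alloc2[i])] for i in range(len(alloc2))]
    return is_safe(avail2, alloc2, need2)

alloc = [[0, 0, 1], [3, 2, 0], [2, 1, 1]]
maxm  = [[8, 4, 3], [6, 2, 0], [3, 3, 3]]
avail = [3, 2, 2]
print(can_grant([0, 0, 2], 0, avail, alloc, maxm))  # False (REQ1 denied)
print(can_grant([2, 0, 0], 1, avail, alloc, maxm))  # True  (REQ2 granted)
```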
38. Consider the following set of processes that need to be scheduled on a single CPU. All the times are given in milliseconds.
Process name | Arrival time | Execution time |
A | 0 | 6 |
B | 3 | 2 |
C | 5 | 4 |
D | 7 | 6 |
E | 10 | 3 |
Using the shortest remaining time first scheduling algorithm, the average process turnaround time (in msec) is ________.
- 5.0
- 7.2
- 6.4
- 9.1
Answer
7.2
[2014]
39. Three processes A, B and C each execute a loop of 100 iterations. In each iteration of the loop, a process performs a single computation that requires tc CPU milliseconds and then initiates a single I/O operation that lasts for tio milliseconds. It is assumed that the computer where the processes execute has a sufficient number of I/O devices and the OS of the computer assigns different I/O devices to each process. Also, the scheduling overhead of the OS is negligible. The processes have the following characteristics:
Process | tc | tio |
A | 100 ms | 500 ms |
B | 350 ms | 500 ms |
C | 200 ms | 500 ms |
The processes A, B, and C are started at times 0, 5, and 10 milliseconds, respectively, in a pure time-sharing system (round-robin scheduling) that uses a time slice of 50 milliseconds. The time in milliseconds at which process C would complete its first I/O operation is ________
- 950
- 1000
- 1250
- 700
Answer
1000
[2014]
40. A system contains three programs and each requires three tape units for its operation. The minimum number of tape units that the system must have such that deadlocks never arise is _______ .
- 7
- 11
- 17
- 5
Answer
7
[2014]
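This is an instance of a standard bound: with p processes each needing at most m units of one resource, deadlock is impossible once there are p·(m − 1) + 1 units, because in the worst case every process holds m − 1 units and one spare unit still lets some process finish. A one-line check (the function name is ours):

```python
def deadlock_free_min(processes, max_need_each):
    # Worst case: each process holds (max_need_each - 1) units and waits;
    # one extra unit guarantees at least one process can complete.
    return processes * (max_need_each - 1) + 1

print(deadlock_free_min(3, 3))   # 7 tape units (question 40)
# Question 52 below is the inverse: with 4 units and 3 processes,
# K = 2 is the largest per-process maximum, since 3*(2-1)+1 = 4 <= 4.
```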
41. An operating system uses the shortest remaining time first scheduling algorithm for pre-emptive scheduling of processes. Consider the following set of processes with their arrival times and CPU burst times (in milliseconds):
Process | Arrival time | Burst time |
P1 | 0 | 12 |
P2 | 2 | 4 |
P3 | 3 | 6 |
P4 | 8 | 5 |
The average waiting time (in milliseconds) of the processes is _.
- 6.9
- 4.2
- 5.5
- 7.8
Answer
5.5
[2014]
42. Consider a uniprocessor system executing three tasks T1, T2, and T3, each of which is composed of an infinite sequence of jobs (or instances) that arrive periodically at intervals of 3, 7, and 20 milliseconds, respectively. The priority of each task is the inverse of its period, and the available tasks are scheduled in order of priority, with the highest-priority task scheduled first. Each instance of T1, T2, and T3 requires an execution time of 1, 2, and 4 milliseconds, respectively. Given that all tasks initially arrive at the beginning of the 1st millisecond and task preemptions are allowed, the first instance of T3 completes its execution at the end of __ milliseconds.
- 8
- 12
- 16
- 26
Answer
12
[2015]
43. A system has 6 identical resources and N processes competing for them. Each process can request at most 2 resources. Which one of the following values of N could lead to a deadlock?
- 1
- 2
- 3
- none of the above
Answer
none of the above
[2015]
44. The maximum number of processes that can be in Ready state for a computer system with n CPUs is
- n
- n2
- 2n
- Independent of n
Answer
Independent of n
[2015]
45. Consider the following policies for preventing deadlock in a system with mutually exclusive resources.
(1) Processes should acquire all their resources at the beginning of execution. If any resource is not available, all resources acquired so far are released.
(2) The resources are numbered uniquely, and processes are allowed to request resources only in increasing resource numbers.
(3) The resources are numbered uniquely, and processes are allowed to request resources only in decreasing resource numbers.
(4) The resources are numbered uniquely. A process is allowed to request only for a resource with a resource number larger than its currently held resources.
Which of the above policies can be used to prevent deadlock?
- Any one of 1 and 3, but not 2 or 4
- Any one of 1, 3 and 4, but not 2
- Any one of 2 and 3, but not 1 or 4
- Any one of 1, 2, 3 and 4
Answer
Any one of 1, 2, 3 and 4
[2015]
46. For the processes listed in the following table, which of the following scheduling schemes will give the lowest average turnaround time?
Process | Arrival Time | Processing Time |
A | 0 | 3 |
B | 1 | 6 |
C | 4 | 4 |
D | 6 | 2 |
- First Come First Serve
- Non-preemptive Shortest Job First
- Shortest Remaining Time
- Round Robin with Quantum value two
Answer
Shortest Remaining Time
[2015]
47. Consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at the same time to a computer system. Which one of the following process scheduling algorithms would minimize the average waiting time in the ready queue?
- Shortest remaining time first
- Round-robin with time quantum less than the shortest CPU burst
- Uniform random
- Highest priority first with priority proportional to CPU burst length
Answer
Shortest remaining time first
[2016]
48. Consider the following processes, with the arrival time and the length of the CPU burst given in milliseconds. The scheduling algorithm used is preemptive shortest-remaining-time-first.
Process | Arrival Time | Burst Time |
P1 | 0 | 10 |
P2 | 3 | 6 |
P3 | 7 | 1 |
P4 | 8 | 3 |
The average turnaround time of these processes is __ milliseconds.
- 7.32
- 11.65
- 8.25
- 9.3
Answer
8.25
[2016]
49. Consider the following CPU processes with arrival times (in milliseconds) and length of CPU bursts (in milliseconds) as given below :
Process | Arrival Time | Burst Time |
P1 | 0 | 7 |
P2 | 3 | 3 |
P3 | 5 | 5 |
P4 | 6 | 2 |
If the pre-emptive shortest remaining time first scheduling algorithm is used to schedule the processes, then the average waiting time across all processes is __ milliseconds.
- 7
- 3
- 2
- 0
Answer
3
[2017]
50. A system shares 9 tape drives. The current allocation and maximum requirement of tape drives for three processes are shown below:
Process | Current Allocation | Maximum Requirement |
P1 | 3 | 7 |
P2 | 1 | 6 |
P3 | 3 | 5 |
Which of the following best describes the current state of the system?
- Safe, Deadlocked
- Safe, Not Deadlocked
- Not Safe, Deadlocked
- Not Safe, Not Deadlocked
Answer
Safe, Not Deadlocked
[2017]
51. Consider the set of processes with arrival time (in milliseconds), CPU burst time (in milliseconds), and priority (0 is the highest priority) shown below. None of the processes have I/O burst time.
Process | Arrival Time | Burst Time | Priority |
P1 | 0 | 11 | 2 |
P2 | 5 | 28 | 0 |
P3 | 12 | 2 | 3 |
P4 | 2 | 10 | 1 |
P5 | 9 | 16 | 4 |
The average waiting time (in milliseconds) of all the processes using preemptive priority scheduling algorithm is________.
- 29
- 11
- 6
- 15
Answer
29
[2017]
52. Consider a system with 3 processes that share 4 instances of the same resource type. Each process can request a maximum of K instances. Resource instances can be requested and released only one at a time. The largest value of K that will always avoid deadlock is __.
- 0
- 1
- 2
- 3
Answer
2
[2018]
53. In a system, there are three types of resources: E, F and G. Four processes P0, P1, P2 and P3 execute concurrently. At the outset, the processes have declared their maximum resource requirements using a matrix named Max, as given below. For example, Max[P2, F] is the maximum number of instances of F that P2 would require. The number of instances of the resources allocated to the various processes at any given state is given by a matrix named Allocation.
Consider a state of the system with the Allocation matrix as shown below, in which 3 instances of E and 3 instances of F are the only resources available.
Process | Allocation E | Allocation F | Allocation G |
P0 | 1 | 0 | 1 |
P1 | 1 | 1 | 2 |
P2 | 1 | 0 | 3 |
P3 | 2 | 0 | 0 |

Process | Max E | Max F | Max G |
P0 | 4 | 3 | 1 |
P1 | 2 | 1 | 4 |
P2 | 1 | 3 | 3 |
P3 | 5 | 4 | 1 |
From the perspective of deadlock avoidance, which one of the following is true?
- The system is in safe state.
- The system is not in safe state, but would be safe if one more instance of E were available.
- The system is not in safe state, but would be safe if one more instance of F were available.
- The system is not in safe state, but would be safe if one more instance of G were available.
Answer
The system is in safe state.
[2018]
54. Consider these two functions and two statements S1 and S2 about them:
int work1(int *a, int i, int j) {
    int x = a[i+2];
    a[j] = x + 1;
    return a[i+2] - 3;
}

int work2(int *a, int i, int j) {
    int t1 = i + 2;
    int t2 = a[t1];
    a[j] = t2 + 1;
    return t2 - 3;
}
S1: The transformation from work1 to work2 is valid, that is, for any program state and input arguments, work2 will compute the same output and have the same effect on the program state as work1.
S2: All the transformations applied to work1 to get work2 will always improve the performance (i.e., reduce CPU time) of work2 compared to work1.
- S1 is false and S2 is false
- S1 is false and S2 is true
- S1 is true and S2 is false
- S1 is true and S2 is true
Answer
S1 is false and S2 is false (if j equals i+2, work1 returns x - 2 while work2 returns x - 3 because of aliasing, so the transformation is not valid)
[2006]
55. The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x into y, without allowing any intervening access to the memory location x. Consider the following implementation of the P and V functions on a binary semaphore S.
void P(binary_semaphore *s) {
    unsigned y;
    unsigned *x = &(s->value);
    do {
        fetch-and-set x, y;
    } while (y);
}

void V(binary_semaphore *s) {
    s->value = 0;
}
Which one of the following is true?
- The implementation may not work if context switching is disabled in P
- Instead of using fetch-and-set, a pair of normal loads/stores can be used
- The implementation of V is wrong
- The code does not implement a binary semaphore
Answer
The implementation may not work if context switching is disabled in P
[2006]
56. The P and V operations on counting semaphores, where s is a counting semaphore, are defined as follows:
P(s) : s = s – 1;
if s < 0 then wait;
V(s) : s = s + 1;
if s <= 0 then wake up a process waiting on s;
Assume that Pb and Vb, the wait and signal operations on binary semaphores, are provided. Two binary semaphores xb and yb are used to implement the semaphore operations P(s) and V(s) as follows:
P(s): Pb(xb);
      s = s - 1;
      if (s < 0) {
          Vb(xb);
          Pb(yb);
      }
      else Vb(xb);

V(s): Pb(xb);
      s = s + 1;
      if (s <= 0) Vb(yb);
      Vb(xb);
The initial values of xb and yb are respectively
- 0 and 0
- 0 and 1
- 1 and 0
- 1 and 1
Answer
1 and 0
[2008]
57. Consider a system with four types of resources: R1 (3 units), R2 (2 units), R3 (3 units), R4 (2 units). A non-pre-emptive resource allocation policy is used. At any given instance, a request is not entertained if it cannot be completely satisfied. Three processes P1, P2 and P3 request the resources as follows, if executed independently.
Process P1 | Process P2 | Process P3 |
t = 0: requests 2 units of R2 | t = 0: requests 2 units of R3 | t = 0: requests 1 unit of R4 |
t = 1: requests 1 unit of R3 | t = 2: requests 1 unit of R4 | t = 2: requests 2 units of R1 |
t = 3: requests 2 units of R1 | t = 4: requests 1 unit of R1 | t = 5: releases 2 units of R1 |
t = 5: releases 1 unit of R2 and 1 unit of R1. | t = 6: releases 1 unit of R3 | t = 7: requests 1 unit of R2 |
t = 7: releases 1 unit of R3 | t = 8: Finishes | t = 8: requests 1 unit of R3 |
t = 8: requests 2 units of R4 | | t = 9: Finishes |
t = 10: Finishes | | |
Which one of the following statements is true if all three processes run concurrently starting at time t = 0?
- All processes will finish without any deadlock
- Only P1 and P2 will be in deadlock.
- Only P1 and P3 will be in a deadlock.
- All three processes will be in deadlock.
Answer
All processes will finish without any deadlock
[2009]
58. The enter_CS() and leave_CS() functions to implement a critical section of a process are realized using the test-and-set instruction as follows:
void enter_CS(X) {
    while (test-and-set(X));
}

void leave_CS(X) {
    X = 0;
}
In the above solution, X is a memory location associated with the CS and is initialized to 0. Now consider the following statements:
(i) The above solution to the CS problem is deadlock-free.
(ii) The solution is starvation free.
(iii) The processes enter CS in FIFO order.
(iv) More than one process can enter CS at the same time.
Which of the above statements is true?
- (i) only
- (i) and (ii)
- (ii) and (iii)
- (iv) only
Answer
(i) only
[2009]
59. Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of shared Boolean variables S1 and S2 are randomly assigned.
Method used by P1 | Method used by P2 |
Method used by P1:
while (S1 == S2);
Critical section
S1 = S2;

Method used by P2:
while (S1 != S2);
Critical section
S2 = not(S1);
Which one of the following statements describes the properties achieved?
- Mutual exclusion but not progress
- Progress but not mutual exclusion
- Neither mutual exclusion nor progress
- Both mutual exclusion and progress
Answer
Mutual exclusion but not progress
[2010]
60. The following program consists of three concurrent processes and three binary semaphores. The semaphores are initialized as S0 = 1, S1 = 0, S2 = 0.
Process P0 | Process P1 | Process P2 |
Process P0:
while (true) {
    wait(S0);
    print '0';
    release(S1);
    release(S2);
}

Process P1:
wait(S1);
release(S0);

Process P2:
wait(S2);
release(S0);
How many times will process P0 print ‘0’?
- At least twice
- Exactly twice
- Exactly thrice
- Exactly once
Answer
At least twice
[2010]
60. Fetch_And_Add(X, i) is an atomic Read-Modify-Write instruction that reads the value of memory location X, increments it by the value i, and returns the old value of X. It is used in the pseudocode shown below to implement a busy-wait lock. L is an unsigned integer shared variable initialized to 0. The value of 0 corresponds to the lock being available, while any non-zero value corresponds to the lock being not available.
AcquireLock(L) { while (Fetch_And_Add(L, 1)) L = 1; } ReleaseLock(L) { L = 0; }
This implementation
- fails as L can overflow
- fails as L can take on a non-zero value when the lock is actually available
- works correctly but may starve some processes
- works correctly without starvation
Answer
fails as L can take on a non-zero value when the lock is actually available
[2012]
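The failing interleaving behind this answer can be traced with a small Python sketch (the one-element list `L` stands in for the shared memory word, and `fetch_and_add` is a plain-Python stand-in for the atomic instruction):

```python
L = [0]  # shared lock word; 0 means "available"

def fetch_and_add(mem, i):
    # stand-in for the atomic Fetch_And_Add: return old value, then add i
    old = mem[0]
    mem[0] += i
    return old

# T1 calls AcquireLock: Fetch_And_Add returns 0, so T1 exits the while
# loop and holds the lock; L is now 1.
assert fetch_and_add(L, 1) == 0

# T2 calls AcquireLock: Fetch_And_Add returns 1 (non-zero), so T2 stays
# in the loop; L is now 2, and T2 is about to execute "L = 1".
assert fetch_and_add(L, 1) == 1

# T1 calls ReleaseLock: L = 0, the lock is genuinely free.
L[0] = 0

# T2 now runs the pending "L = 1" from its stale loop iteration.
L[0] = 1

# The lock is free, yet L is non-zero: every waiter spins forever.
assert L[0] != 0
```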
61. A shared variable x, initialized to 0, is operated on by four concurrent processes W, X, Y, Z as follows. Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory, and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory. Semaphore S is initialized to 2. What is the maximum possible value of x after all processes complete execution?
- –2
- –1
- 1
- 2
Answer
2
[2013]
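One schedule attaining the maximum (S = 2 lets two processes inside at once): W reads x = 0 and is preempted before storing; Y and Z complete their decrements; W then stores its stale increment, and X runs last. A Python trace of that schedule:

```python
x = 0

w = x          # W: P(S); reads x == 0, then is preempted inside the region
y = x          # Y: P(S); reads 0 ...
x = y - 2      # ... stores -2; V(S)
z = x          # Z: P(S); reads -2 ...
x = z - 2      # ... stores -4; V(S)
x = w + 1      # W resumes: stores its stale 0 + 1, erasing both decrements; V(S)
xv = x         # X: P(S); reads 1 ...
x = xv + 1     # ... stores 2; V(S)

assert x == 2  # the maximum possible final value
```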
62. A certain computation generates two arrays ‘a’ and ‘b’ such that a[i] = f(i) for 0 ≤ i < n and b[i] = g(a[i]) for 0 ≤ i < n. Suppose this computation is decomposed into two concurrent processes X and Y such that X computes the array ‘a’ and Y computes the array ‘b’. The processes employ two binary semaphores R and S, both initialized to zero. The array ‘a’ is shared by the two processes. The structures of the processes are shown below.
Process X: | Process Y: |
private i; for (i = 0; i<n; i++) { a[i] = f(i); ExitX(R, S); } | private i; for (i = 0; i<n; i++) { EntryY(R, S); b[i] = g(a[i]); } |
Which one of the following represents the correct implementations of ExitX and EntryY?
- ExitX(R, S) {
P(R);
V(S);
}
EntryY(R,S){
P(S);
V(R);
} - ExitX(R, S) {
V(R);
V(S);
}
EntryY(R,S){
P(R);
P(S);
} - ExitX (R, S){
P(S);
V(R);
}
EntryY(R,S){
V(S);
P(R);
} - ExitX (R, S){
V(R);
P(S);
}
EntryY(R,S){
V(S);
P(R);
}
Answer
ExitX (R, S){
P(S);
V(R);
}
EntryY(R,S){
V(S);
P(R);
}
[2013]
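The selected pair can be exercised with real semaphores. Below is a hedged sketch using Python `threading`, with illustrative functions `f` and `g` (not from the question): ExitX performs P(S); V(R) and EntryY performs V(S); P(R), so each iteration is a strict handshake and Y never reads a[i] before X has written it.

```python
import threading

n = 8
a = [0] * n
b = [0] * n
R = threading.Semaphore(0)   # both semaphores start at 0, as in the question
S = threading.Semaphore(0)

def f(i): return i * i       # illustrative computations only
def g(v): return v + 1

def process_x():
    for i in range(n):
        a[i] = f(i)
        S.acquire()          # ExitX: P(S)
        R.release()          #        V(R)

def process_y():
    for i in range(n):
        S.release()          # EntryY: V(S)
        R.acquire()          #         P(R)
        b[i] = g(a[i])

tx = threading.Thread(target=process_x)
ty = threading.Thread(target=process_y)
tx.start(); ty.start()
tx.join(); ty.join()

assert b == [g(f(i)) for i in range(n)]
```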
63. Consider the procedure below for the producer-consumer problem which uses semaphores:
semaphore n = 0; semaphore s = 1; void producer() { while (true) { produce(); semWait(s); addToBuffer(); semSignal(s); semSignal(n); } } void consumer() { while (true) { semWait(s); semWait(n); removeFromBuffer(); semSignal(s); consume(); } }
Which one of the following is true?
- The producer will be able to add an item to the buffer, but the consumer can never consume it.
- The consumer will remove no more than one item from the buffer.
- Deadlock occurs if the consumer succeeds in acquiring semaphores when the buffer is empty.
- The starting value for the semaphore n must be 1 and not 0 for deadlock-free operation.
Answer
Deadlock occurs if the consumer succeeds in acquiring semaphores when the buffer is empty.
[2014]
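The deadlock behind the answer, traced with plain counters standing in for the semaphore values (a sketch, not the semWait/semSignal implementation):

```python
s, n = 1, 0          # initial semaphore values from the question

# Consumer runs first, with the buffer empty:
s -= 1               # semWait(s) succeeds; the consumer now holds s
blocked_consumer = (n == 0)   # semWait(n): n is 0, so the consumer blocks
                              # WHILE STILL HOLDING s

# Producer now runs:
blocked_producer = (s == 0)   # semWait(s): s is 0, so the producer blocks too

# Neither process can ever proceed: deadlock.
assert blocked_consumer and blocked_producer
```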
64. The following two functions P1 and P2 that share a variable B with an initial value of 2 execute concurrently.
P1() { C = B – 1; B = 2 * C; } | P2() { D = 2 * B; B = D – 1; } |
The number of distinct values that B can possibly take after the execution is _
- 2
- 3
- 4
- 5
Answer
3
[2015]
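The count of 3 can be verified by brute-forcing every interleaving that preserves each function's program order, e.g. with this Python sketch:

```python
from itertools import permutations

def run(order):
    B, C, D = 2, None, None
    for step in order:
        if step == "C = B - 1": C = B - 1
        elif step == "B = 2 * C": B = 2 * C
        elif step == "D = 2 * B": D = 2 * B
        elif step == "B = D - 1": B = D - 1
    return B

p1 = ["C = B - 1", "B = 2 * C"]
p2 = ["D = 2 * B", "B = D - 1"]
finals = set()
for order in set(permutations(p1 + p2)):
    # keep only schedules where each process's statements stay in order
    if order.index(p1[0]) < order.index(p1[1]) and \
       order.index(p2[0]) < order.index(p2[1]):
        finals.add(run(order))

assert finals == {2, 3, 4}   # three distinct final values of B
```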
65. Two processes X and Y need to access a critical section. Consider the following synchronization construct used by both the processes
Process X | Process Y |
/* other code for process X */ while (true) { varP = true; while(varQ == true) { /* Critical Section */ varP = false; } } /* other code for process X */ | /* other code for process Y */ while (true) { varQ = true; while (varP == true) { /* Critical Section */ varQ = false; } } /* other code for process Y */ |
Here, varP and varQ are shared variables and both are initialized to false. Which one of the following statements is true?
- The proposed solution prevents deadlock but fails to guarantee mutual exclusion.
- The proposed solution guarantees mutual exclusion but fails to prevent deadlock.
- The proposed solution guarantees mutual exclusion and prevents deadlock.
- The proposed solution fails to prevent deadlock and fails to guarantee mutual exclusion.
Answer
The proposed solution prevents deadlock but fails to guarantee mutual exclusion.
[2015]
66. Consider the following proposed solution for the critical section problem. There are n processes: P0, …, Pn–1. In the code, function pmax returns an integer not smaller than any of its arguments. For all i, t[i] is initialized to zero.
do { c[i] = 1; t[i] = pmax(t[0], …, t[n–1]) + 1; c[i] = 0; for every j ≠ i in {0, …, n–1} { while (c[j]); while (t[j] != 0 && t[j] <= t[i]); } Critical Section; t[i] = 0; Remainder Section; } while (true);
Which one of the following is TRUE about the above solution?
- At most one process can be in the critical section at any time.
- The bounded wait condition is satisfied.
- The progress condition is satisfied.
- It cannot cause a deadlock.
Answer
At most one process can be in the critical section at any time.
[2016]
67. Consider the following two-process synchronization Solution.
Process 0 | Process 1 |
Entry: loop while (turn == 1); (Critical section) Exit: turn = 1; | Entry: loop while (turn == 0); (Critical section) Exit: turn = 0; |
The shared variable turn is initialized to zero. Which one of the following is TRUE?
- This is a correct two-process synchronization Solution.
- This Solution violates the mutual exclusion requirement.
- This Solution violates the progress requirement.
- This Solution violates the bounded wait requirement.
Answer
This Solution violates the progress requirement.
[2016]
68. Consider a non-negative counting semaphore S. The operation P(S) decrements S, and V(S) increments S. During an execution, 20 P(S) operations and 12 V(S) operations are issued in some order. The largest initial value of S for which at least one P(S) operation will remain blocked is _ .
- 7
- 12
- 17
- 11
Answer
7
[2016]
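The reasoning is a token count: the initial value S plus the 12 V(S) operations supply at most S + 12 completions, and each finished P(S) consumes one, so some P(S) must stay blocked exactly when S + 12 < 20. A one-line check:

```python
P_OPS, V_OPS = 20, 12

# largest initial value S for which supply S + V_OPS still falls short of P_OPS
S = max(s for s in range(P_OPS) if s + V_OPS < P_OPS)

assert S == 7
```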
69. A multithreaded program P executes with x number of threads and uses y number of locks for ensuring mutual exclusion while operating on shared memory locations. All locks in the program are non-reentrant, i.e., if a thread holds a lock l, then it cannot re-acquire lock l without releasing it. If a thread is unable to acquire a lock, it blocks until the lock becomes available. The minimum value of x and the minimum value of y together for which execution of P can result in a deadlock are:
- x = 1, y = 2
- x = 2, y = 1
- x = 2, y = 2
- x = 1, y = 1
Answer
x = 1, y = 1
[2017]
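Why x = 1 and y = 1 suffice: with non-reentrant locks, a single thread that acquires the same lock twice without releasing it blocks on itself forever. A minimal sketch:

```python
held = set()   # locks currently held by our single thread

def acquire(lock):
    # non-reentrant semantics: re-acquiring a held lock blocks forever
    if lock in held:
        return "blocks forever (self-deadlock)"
    held.add(lock)
    return "acquired"

assert acquire("l1") == "acquired"
assert acquire("l1") == "blocks forever (self-deadlock)"   # 1 thread, 1 lock
```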
70. Consider the following solution to the producer-consumer synchronization problem. The shared buffer size is N. Three semaphores empty, full and mutex are defined with respective initial values of 0, N and 1. Semaphore empty denotes the number of available slots in the buffer, for the consumer to read from. Semaphore full denotes the number of available slots in the buffer, for the producer to write to. The placeholder variables, denoted by P, Q, R, and S, in the code below can be assigned either empty or full. The valid semaphore operations are: wait () and signal ().
Producer | Consumer |
do { wait(P); wait (mutex); //Add item to buffer signal (mutex); signal (Q); } while (1); | do { wait(R); wait (mutex); //Consume item from buffer signal (mutex); signal (S); } while (1); |
Which one of the following assignments to P, Q, R and S will yield the correct solution?
- P: full, Q: full, R: empty, S: empty
- P: empty, Q: empty, R: full, S: full
- P: full, Q: empty, R: empty, S: full
- P: empty, Q: full, R: full, S: empty
Answer
P: full, Q: empty, R: empty, S: full
[2018]
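The chosen assignment can be sanity-checked with counters: full (initially N) counts slots the producer may still fill, and empty (initially 0) counts items the consumer may remove. A sketch with N = 2 (an illustrative buffer size):

```python
N = 2
full, empty = N, 0   # initial values from the question

def producer():
    global full, empty
    assert full > 0   # wait(P = full): a writable slot must exist
    full -= 1
    # ... add item to buffer under mutex ...
    empty += 1        # signal(Q = empty): one more item to read

def consumer():
    global full, empty
    assert empty > 0  # wait(R = empty): an item must exist
    empty -= 1
    # ... consume item from buffer under mutex ...
    full += 1         # signal(S = full): one more writable slot

producer(); producer()   # buffer fills: a third producer call would block
consumer(); consumer()   # buffer drains: a third consumer call would block
assert (full, empty) == (N, 0)
```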
71. Consider a small two-way set-associative cache memory, consisting of four blocks. For choosing the block to be replaced, use the least recently used (LRU) scheme. The number of cache misses for the following sequence of block addresses is 8, 12, 0, 12, 8
- 2
- 3
- 4
- 5
Answer
4
[2004]
72. Consider a direct mapped cache of size 32 KB with a block size of 32 bytes. The CPU generates 32-bit addresses. The number of bits needed for cache indexing and the number of tag bits are respectively.
- 10, 17
- 10, 22
- 15, 17
- 5, 17
Answer
10, 17
[2005]
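The arithmetic behind 10 and 17, sketched:

```python
cache_size = 32 * 1024   # bytes
block_size = 32          # bytes
addr_bits = 32

lines = cache_size // block_size              # 1024 lines, direct mapped
offset_bits = (block_size - 1).bit_length()   # 5 bits pick a byte in a block
index_bits = (lines - 1).bit_length()         # 10 bits pick a line
tag_bits = addr_bits - index_bits - offset_bits

assert (index_bits, tag_bits) == (10, 17)
```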
73. Consider two cache organizations: The first one is a 32 KB 2-way set associative with a 32-byte block size. The second one is of the same size but directly mapped. The size of an address is 32 bits in both cases. A 2-to-1 multiplexer has a latency of 0.6 ns while a k-bit comparator has a latency of k/10 ns. The hit latency of the set associative organization is h1 while that of the direct mapped one is h2. The value of h1 is:
- 2.4 ns
- 2.3 ns
- 1.8 ns
- 1.7 ns
Answer
2.4 ns
[2006]
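The 2.4 ns figure decomposes into comparator and multiplexer latency; a sketch of the arithmetic:

```python
addr_bits, offset_bits = 32, 5       # 32-byte blocks -> 5 offset bits

# 32 KB / (2 ways * 32 B) = 512 sets -> 9 index bits
tag_bits = addr_bits - 9 - offset_bits      # 18-bit tag comparator
h1 = tag_bits / 10 + 0.6                    # k/10 ns comparator + 0.6 ns mux

assert round(h1, 1) == 2.4
```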
74. Consider two cache organizations: The first one is a 32 KB 2-way set associative with a 32-byte block size. The second one is of the same size but direct mapped. The size of an address is 32 bits in both cases. A 2-to-1 multiplexer has a latency of 0.6 ns while a k-bit comparator has a latency of k/10 ns. The hit latency of the set associative organization is h1 while that of the direct mapped one is h2. The value of h2 is:
- 6 ns
- 1.7 ns
- 4.3 ns
- 7.7 ns
Answer
1.7 ns
[2006]
75. Consider a machine with a byte-addressable main memory of 2^16 bytes. Assume that a direct mapped data cache consisting of 32 lines of 64 bytes each is used in the system. A 50 × 50 two-dimensional array of bytes is stored in the main memory starting from memory location 1100 H. Assume that the data cache is initially empty. The complete array is accessed twice. Assume that the contents of the data cache do not change in between the two accesses. Which of the following lines of the data cache will be replaced by new blocks in accessing the array for the second time?
- line 4 to line 11
- line 4 to line 12
- line 0 to line 7
- line 0 to line 8
Answer
line 4 to line 11
[2007]
76. For inclusion to hold between two cache levels L1 and L2 in a multi-level cache hierarchy, which of the following are necessary?
(i) L1 must be a write-through cache
(ii) L2 must be a write-through cache
(iii) The associativity of L2 must be greater than that of L1
(iv) The L2 cache must be at least as large as the L1cache
- (iv) only
- (i) and (iv) only
- (i), (ii) and (iv) only
- (i), (ii), (iii) and (iv)
Answer
(i) and (iv) only
[2008]
77. Consider a machine with a 2-way set associative data cache of size 64Kbytes and block size 16-bytes. The cache is managed using 32-bit virtual addresses and the page size is 4Kbytes. A program to be run on this machine begins as follows:
double ARR[1024][1024]; int i, j; /* Initialize array ARR to 0.0 */ for (i = 0; i < 1024; i++) for (j = 0; j < 1024; j++) ARR[i][j] = 0.0;
The size of double is 8 Bytes. Array ARR is located in memory starting at the beginning of virtual page 0XFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR. The total size of the tags in the cache directory is
- 32K-bits
- 34K-bits
- 64K-bits
- 68K-bits
Answer
68K-bits
[2008]
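A sketch of the tag-storage arithmetic (the array layout is irrelevant to this part):

```python
cache_size = 64 * 1024
block_size = 16
ways = 2
addr_bits = 32

sets = cache_size // (block_size * ways)      # 2048 sets
index_bits = (sets - 1).bit_length()          # 11
offset_bits = (block_size - 1).bit_length()   # 4
tag_bits = addr_bits - index_bits - offset_bits   # 17 tag bits per block

blocks = cache_size // block_size             # 4096 blocks
assert blocks * tag_bits == 68 * 1024         # 68 K-bits of tag storage
```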
78. Consider a machine with a 2-way set associative data cache of size 64Kbytes and block size 16-bytes. The cache is managed using 32-bit virtual addresses and the page size is 4Kbytes. A program to be run on this machine begins as follows:
double ARR[1024][1024]; int i, j; /* Initialize array ARR to 0.0 */ for (i = 0; i < 1024; i++) for (j = 0; j < 1024; j++) ARR[i][j] = 0.0;
The size of double is 8 Bytes. Array ARR is located in memory starting at the beginning of virtual page 0XFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR. Which of the following array elements has the same cache index as ARR [0] [0]?
- ARR [0] [4]
- ARR [4] [0]
- ARR [0] [5]
- ARR [5] [0]
Answer
ARR [4] [0]
[2008]
79. Consider a machine with a 2-way set associative data cache of size 64Kbytes and block size 16-bytes. The cache is managed using 32-bit virtual addresses and the page size is 4Kbytes. A program to be run on this machine begins as follows:
double ARR [1024] [1024] ;
int i, j;
/* Initialize array ARR to 0.0 */
for (i = 0; i < 1024; i++)
for (j = 0; j < 1024; j++)
ARR [i] [j] = 0.0;
The size of double is 8 Bytes. Array ARR is located in memory starting at the beginning of virtual page 0XFF000 and stored in row major order. The cache is initially empty and no pre-fetching is done. The only data memory references made by the program are those to array ARR. The cache hit ratio for this initialization loop is
- 0%
- 25%
- 50%
- 75%
Answer
50%
[2008]
80. Consider a 4-way set associative cache (initially empty) with total 16 cache blocks. The main memory consists of 256 blocks and the request for memory blocks is in the following order:
0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155.
Which one of the following memory block will NOT be in cache if LRU replacement policy is used?
- 3
- 8
- 129
- 216
Answer
216
[2009]
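The answer can be reproduced by simulating the four sets (16 blocks, 4 ways; block b maps to set b mod 4) under LRU, for instance with Python's `OrderedDict`:

```python
from collections import OrderedDict

refs = [0, 255, 1, 4, 3, 8, 133, 159, 216, 129, 63, 8, 48, 32, 73, 92, 155]
sets = [OrderedDict() for _ in range(4)]   # 16 blocks / 4 ways = 4 sets

for b in refs:
    s = sets[b % 4]                        # block b maps to set b mod 4
    if b in s:
        s.move_to_end(b)                   # hit: make b most recently used
    else:
        if len(s) == 4:
            s.popitem(last=False)          # full set: evict the LRU block
        s[b] = True

cached = {b for s in sets for b in s}
assert 216 not in cached                   # 216 was evicted (by block 92)
assert {3, 8, 129} <= cached               # the other options stay cached
```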
81. A computer system has an L1 cache, an L2 cache, and a main memory unit connected as shown below. The block size in L1 cache is 4 words. The block size in L2 cache is 16 words. The memory access times are 2 nanoseconds, 20 nanoseconds and 200 nanoseconds for L1 cache, L2 cache and main memory unit respectively.
When there is a miss in L1 cache and a hit in L2, a block is transferred from L2 cache to L1 cache. What is the time taken for this transfer?
- 2 ns
- 20 ns
- 22 ns
- 88 ns
Answer
88 ns
[2010]
82. A computer system has an L1 cache, an L2 cache, and a main memory unit connected as shown below. The block size in L1 cache is 4 words. The block size in L2 cache is 16 words. The memory access times are 2 nanoseconds, 20 nanoseconds and 200 nanoseconds for L1 cache, L2 cache and main memory unit respectively.
When there is a miss in both L1 cache and L2 cache, first a block is transferred from main memory to L2 cache, and then a block is transferred from L2 cache to L1 cache. What is the total time taken for these transfers?
- 222 ns
- 888 ns
- 902 ns
- 968 ns
Answer
968 ns
[2010]
83. An 8 KB direct-mapped write-back cache is organized as multiple blocks, each of size 32 bytes. The processor generates 32-bit addresses. The cache controller maintains the tag information for each cache block comprising the following:
1 Valid bit
1 Modified bit
As many bits as the minimum needed to identify the memory block mapped in the cache. What is the total size of memory needed at the cache controller to store meta-data (tags) for the cache?
- 4864 bits
- 6144 bits
- 6656 bits
- 5376 bits
Answer
5376 bits
[2011]
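A sketch of the meta-data arithmetic:

```python
cache_size = 8 * 1024
block_size = 32
addr_bits = 32

blocks = cache_size // block_size                 # 256 blocks
offset_bits = (block_size - 1).bit_length()       # 5
index_bits = (blocks - 1).bit_length()            # 8
tag_bits = addr_bits - index_bits - offset_bits   # 19

meta_bits = blocks * (tag_bits + 1 + 1)           # + valid bit + modified bit
assert meta_bits == 5376
```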
84. A computer has a 256 KB, 4-way set associative, write back data cache with block size of 32 bytes. The processor sends 32 bit addresses to the cache controller. Each cache tag directory entry contains, in addition to address tag, 2 valid bits, 1 modified bit and 1 replacement bit. The number of bits in the tag field of an address is
- 11
- 14
- 16
- 27
Answer
16
[2012]
85. A computer has a 256 KB, 4-way set associative, write back data cache with block size of 32 bytes. The processor sends 32 bit addresses to the cache controller. Each cache tag directory entry contains, in addition to address tag, 2 valid bits, 1 modified bit and 1 replacement bit. The size of the cache tag directory is
- 160 K-bits
- 136 K-bits
- 40 K-bits
- 32 K-bits
Answer
160 K-bits
[2012]
86. In a k-way set associative cache, the cache is divided into v sets, each of which consists of k lines. The lines of a set are placed in sequence one after another. The lines in set s are sequenced before the lines in set (s + 1). The main memory blocks are numbered 0 onwards. The main memory block numbered j must be mapped to any one of the cache lines from
- ( j mod v) * k to (j mod v) * k + (k – 1)
- ( j mod v) to ( j mod v) + (k – 1)
- ( j mod k) to (j mod k) + (v – 1)
- ( j mod k) * v to ( j mod k) * v + ( v – 1)
Answer
( j mod v) * k to (j mod v) * k + (k – 1)
[2013]
87. An access sequence of cache block addresses is of length N and contains n unique block addresses. The number of unique block addresses between two consecutive accesses to the same block address is bounded above by K. What is the miss ratio if the access sequence is passed through a cache of associativity A ≥ K exercising least recently used replacement policy?
- n/N
- 1/N
- 1/A
- K/n
Answer
n/N
[2014]
88. A 4-way set-associative cache memory unit with a capacity of 16 KB is built using a block size of 8 words. The word length is 32 bits. The size of the physical address space is 4 GB. The number of bits for the TAG field is _.
- 12
- 18
- 20
- 24
Answer
20
[2014]
89. In designing a computer’s cache system, the cache block (or cache line) size is an important parameter. Which one of the following statements is correct in this context?
- A smaller block size implies better spatial locality.
- A smaller block size implies a smaller cache tag and hence lower cache tag overhead.
- A smaller block size implies a larger cache tag and hence lower cache hit time.
- A smaller block size incurs a lower cache miss penalty.
Answer
A smaller block size incurs a lower cache miss penalty.
[2014]
90. Consider a main memory system that consists of 8 memory modules attached to the system bus, which is one word wide. When a write request is made, the bus is occupied for 100 nanoseconds (ns) by the data, address, and control signals. During the same 100 ns, and for 500 ns thereafter, the addressed memory module executes one cycle accepting and storing the data. The (internal) operation of different memory modules may overlap in time, but only one request can be on the bus at any time. The maximum number of stores (of one word each) that can be initiated in 1 millisecond is _.
- 1000
- 5000
- 10000
- 9990
Answer
10000
[2014]
91. If the associativity of processor cache is doubled while keeping the capacity and block size unchanged, which one of the following is guaranteed to be NOT affected?
- Width of tag comparator
- Width of set index decoder
- Width of way selection multiplexer
- Width of processor to main memory data bus
Answer
Width of processor to main memory data bus
[2014]
92. The memory access time is 1 nanosecond for a read operation with a hit in cache, 5 nanoseconds for a read operation with a miss in cache, 2 nanoseconds for a write operation with a hit in cache and 10 nanoseconds for a write operation with a miss in cache. Execution of a sequence of instructions involves 100 instruction fetch operations, 60 memory operand read operations and 40 memory operand write operations. The cache-hit ratio is 0.9. The average memory access time (in nanoseconds) in executing the sequence of instructions is _.
- 1.20
- 1.56
- 1.68
- 2.34
Answer
1.68
[2014]
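The 1.68 ns average weights the 200 accesses (100 fetches + 60 operand reads = 160 reads, plus 40 writes) by the hit ratio; sketched:

```python
hit = 0.9
reads, writes = 100 + 60, 40

t_read = hit * 1 + (1 - hit) * 5      # 1.4 ns per read on average
t_write = hit * 2 + (1 - hit) * 10    # 2.8 ns per write on average

amat = (reads * t_read + writes * t_write) / (reads + writes)
assert round(amat, 2) == 1.68
```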
93. Assume that for a certain processor, a read request takes 50 nanoseconds on a cache miss and 5 nanoseconds on a cache hit. Suppose while running a program, it was observed that 80% of the processor’s read requests result in a cache hit. The average read access time in nanoseconds is ______
- 11
- 14
- 18
- 15
Answer
14
[2015]
94. A computer system implements a 40-bit virtual address, page size of 8 kilobytes, and a 128-entry translation look-aside buffer (TLB) organized into 32 sets each having four ways. Assume that the TLB tag does not store any process id. The minimum length of the TLB tag in bits is __
- 18
- 22
- 27
- 31
Answer
22
[2015]
95. Consider a machine with a byte addressable main memory of 2^20 bytes, block size of 16 bytes and a direct mapped cache having 2^12 cache lines. Let the addresses of two consecutive bytes in main memory be (E201F)16 and (E2020)16. What are the tag and cache line address (in hex) for main memory address (E201F)16?
- E, 201
- F, 201
- E, E20
- 2, 01F
Answer
E, 201
[2015]
96. The width of the physical address on a machine is 40 bits. The width of the tag field in a 512KB 8-way set associative cache is __ bits.
- 8
- 16
- 24
- 32
Answer
24
[2016]
97. A file system uses an in-memory cache to cache disk blocks. The miss rate of the cache is shown in the figure. The latency to read a block from the cache is 1 ms and to read a block from the disk is 10ms. Assume that the cost of checking whether a block exists in the cache is negligible. Available cache sizes are in multiples of 10MB.
The smallest cache size required to ensure an average read latency of less than 6 ms is __ MB.
- 24
- 15
- 30
- 32
Answer
30
[2016]
98. Consider a two-level cache hierarchy with L1 and L2 caches. An application incurs 1.4 memory accesses per instruction on average. For this application, the miss rate of L1 cache is 0.1; the L2 cache experiences on average 7 misses per 1000 instructions. The miss rate of L2 expressed correct to two decimal places is __.
- 0
- 0.01
- 0.05
- 0.5
Answer
0.05
[2017]
99. Consider a 2-way set associative cache with 256 blocks and uses LRU replacement. Initially, the cache is empty. Conflict misses are those misses that occur due to the contention of multiple blocks for the same cache set. Compulsory misses occur due to first time access to the block. The following sequence of accesses to memory blocks
(0, 128, 256, 128, 0, 128, 256, 128, 1, 129, 257, 129, 1, 129, 257, 129)
is repeated 10 times. The number of conflict misses experienced by the cache is __.
- 64
- 73
- 76
- 92
Answer
76
[2017]
100. A cache memory unit with capacity of N words and block size of B words is to be designed. If it is designed as a direct mapped cache, the length of the TAG field is 10 bits. If the cache unit is now designed as a 16-way set-associative cache, the length of the TAG field is _ bits.
- 6
- 14
- 18
- 10
Answer
14
[2017]
101. In a two-level cache system, the access times of L1 and L2 caches are 1 and 8 clock cycles, respectively. The miss penalty from the L2 cache to main memory is 18 clock cycles. The miss rate of L1 cache is twice that of L2. The average memory access time (AMAT) of this cache system is 2 cycles. The miss rates of L1 and L2 respectively are:
- 0.111 and 0.056
- 0.056 and 0.111
- 0.0892 and 0.1784
- 0.1784 and 0.0892
Answer
0.111 and 0.056
[2017]
102. The read access times and the hit ratios for different caches in a memory hierarchy are as given below.
Cache | Read access time (in nanoseconds) | Hit ratio |
I-cache | 2 | 0.8 |
D-cache | 2 | 0.9 |
L2-cache | 8 | 0.9 |
The read access time of main memory is 90 nanoseconds. Assume that the caches use the referred-word-first read policy and the write-back policy. Assume that all the caches are direct-mapped caches. Assume that the dirty bit is always 0 for all the blocks in the caches. In execution of a program, 60% of memory reads are for instruction fetch and 40% are for memory operand fetch. The average read access time in nanoseconds (up to 2 decimal places) is __.
- 2.7
- 5.32
- 4.72
- 5.86
Answer
4.72
[2017]
103. Consider a machine with a byte-addressable main memory of 2^32 bytes divided into blocks of size 32 bytes. Assume that a direct mapped cache having 512 cache lines is used with this machine. The size of the tag field in bits is _______.
- 18
- 12
- 24
- 28
Answer
18
[2017]
104. The size of the physical address space of a processor is 2^P bytes. The word length is 2^W bytes. The capacity of cache memory is 2^N bytes. The size of each cache block is 2^M words. For a K-way set-associative cache memory, the length (in number of bits) of the tag field is:
- P – N – log2 K
- P – N + log2 K
- P – N – M – W – log2 K
- P – N – M – W + log2 K
Answer
P – N + log2 K
[2018]
105. A processor uses 36-bit physical addresses and 32-bit virtual addresses, with a page frame size of 4 K bytes. Each page table entry is of size 4 bytes. A three-level page table is used for virtual to physical address translation, where the virtual address is used as follows:
1. Bits 30–31 are used to index into the first-level page table
2. Bits 21–29 are used to index into the second-level page table
3. Bits 12–20 are used to index into the third-level page table, and
4. Bits 0–11 are used as offsets within the page
The number of bits required for addressing the next level page table (or page frame) in the page table entry of the first, second and third level page tables are, respectively,
- 20, 20 and 20
- 24, 24 and 24
- 24, 24 and 20
- 25, 25 and 24
Answer
25, 25 and 24
[2008]
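The answer follows from the object each level points to: second- and third-level tables hold 2^9 entries of 4 bytes each (2 KB, hence 36 − 11 = 25 address bits for a table), while a 4 KB page frame needs 36 − 12 = 24 bits. Sketched:

```python
phys_bits = 36

# third-level PTE points to a 4 KB page frame (12 offset bits)
frame_bits = phys_bits - 12                 # 24

# first- and second-level entries point to the next-level table:
# 2^9 entries * 4 bytes = 2 KB per table (11 alignment bits)
table_bits = phys_bits - 11                 # 25

assert (table_bits, table_bits, frame_bits) == (25, 25, 24)
```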
106. How many 32 K × 1 RAM chips are needed to provide a memory capacity of 256 K bytes?
- 8
- 32
- 64
- 128
Answer
64
[2009]
107. In which one of the following page replacement policies, Belady’s anomaly may occur?
- FIFO
- Optimal
- LRU
- MRU
Answer
FIFO
[2009]
108. The essential content(s) in each entry of a page table is/are
- Virtual page number
- Page frame number
- Both virtual page number and page frame number
- Access right information
Answer
Page frame number
[2009]
109. A multilevel page table is preferred in comparison to a single-level page table for translating virtual address to physical address because
- It reduces the memory access time to read or write a memory location.
- It helps to reduce the size of the page table needed to implement the virtual address space of a process.
- It is required by the translation look-aside buffer.
- It helps to reduce the number of page faults in page replacement algorithms.
Answer
It helps to reduce the size of the page table needed to implement the virtual address space of a process.
[2009]
110. A system uses FIFO policy for page replacement. It has four-page frames with no pages loaded to begin with. The system first accesses 100 distinct pages in some order and then accesses the same 100 pages but now in the reverse order. How many page faults will occur?
- 196
- 192
- 197
- 195
Answer
196
[2010]
111. Let the page fault service time be 10 ms in a computer with average memory access time being 20 ns. If one page fault is generated for every 10^6 memory accesses, what is the effective access time for the memory?
- 21 ns
- 30 ns
- 23 ns
- 35 ns
Answer
30 ns
[2011]
112. Consider the virtual page reference string
1, 2, 3, 2, 4, 1, 3, 2, 4, 1
on a demand-paged virtual memory system running on a computer system that has main memory size of three-page frames which are initially empty. Let LRU, FIFO, and OPTIMAL denote the number of page faults under the corresponding page replacement policy. Then
- OPTIMAL< LRU < FIFO
- OPTIMAL < FIFO < LRU
- OPTIMAL = LRU
- OPTIMAL = FIFO
Answer
OPTIMAL < FIFO < LRU
[2012]
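The ordering OPTIMAL &lt; FIFO &lt; LRU can be checked by simulating all three policies on the reference string (a compact sketch; OPT evicts the page whose next use lies farthest in the future):

```python
def faults(refs, frames, policy):
    mem, order, miss = [], [], 0
    for i, p in enumerate(refs):
        if p in mem:
            if policy == "LRU":              # refresh recency on a hit
                order.remove(p); order.append(p)
            continue
        miss += 1
        if len(mem) == frames:
            if policy == "OPT":              # farthest next use (or never)
                future = refs[i + 1:]
                victim = max(mem, key=lambda q:
                             future.index(q) if q in future else len(future))
            else:                            # FIFO and LRU both evict order[0]
                victim = order[0]
            mem.remove(victim)
            order.remove(victim)
        mem.append(p)
        order.append(p)
    return miss

refs = [1, 2, 3, 2, 4, 1, 3, 2, 4, 1]
opt  = faults(refs, 3, "OPT")    # 5 faults
fifo = faults(refs, 3, "FIFO")   # 6 faults
lru  = faults(refs, 3, "LRU")    # 9 faults
assert opt < fifo < lru
```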
113. A RAM chip has a capacity of 1024 words of 8 bits each (1 K × 8). The number of 2 × 4 decoders with enable line needed to construct a 16 K × 16 RAM from 1 K × 8 RAM is
- 4
- 5
- 6
- 7
Answer
5
[2013]
114. A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32-bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes. What is the size of a page in KB in this computer?
- 2
- 4
- 8
- 13
Answer
8
[2013]
115. A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table organization. The page table base register stores the base address of the first-level table (T1), which occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table (T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3 stores a page table entry (PTE). The PTE is 32-bits in size. The processor used in the computer has a 1 MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes. What is the minimum number of page colours needed to guarantee that no two synonyms map to different sets in the processor cache of this computer?
- 2
- 4
- 8
- 16
Answer
8
[2013]
116. Assume that there are three page frames which are initially empty. If the page reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6, the number of page faults using the optimal replacement policy is ______ .
- 1
- 3
- 6
- 7
Answer
7
[2014]
117. A computer has 20 physical page frames which contain pages numbered 101 through 120. Now a program accesses the pages numbered 1, 2, …, 100 in that order, and repeats the access sequence THRICE. Which one of the following page replacement policies experiences the same number of page faults as the optimal page replacement policy for this program?
- Least-recently used
- First-in-first-out
- Last-in-first-out
- Most-recently-used
Answer
Most-recently-used
[2014]
118. A system uses three page frames for storing process pages in main memory. It uses the Least Recently Used (LRU) page replacement policy. Assume that all the page frames are initially empty. What is the total number of page faults that will occur while processing the page reference string given below?
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
- 1
- 7
- 4
- 6
Answer
6
[2014]
119. Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is ___________ .
- 156
- 122
- 136
- 112
Answer
122
[2014]
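The computation behind 122 ms (a TLB miss costs one extra physical-memory access, to read the page table itself):

```python
tlb, mem, hit_ratio = 10, 80, 0.6   # times in milliseconds, as stated

eat = hit_ratio * (tlb + mem) + (1 - hit_ratio) * (tlb + mem + mem)
assert round(eat) == 122
```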
120. Consider a system with byte-addressable memory, 32-bit logical addresses, 4 kilobyte page size and page table entries of 4 bytes each. The size of the page table in the system in megabytes is _.
- 4
- 8
- 16
- 32
Answer
4
[2015]
121. Consider a main memory with five page frames and the following sequence of page references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Which one of the following is true with respect to page replacement policies First In First Out (FIFO) and Least Recently Used (LRU)?
- Both incur the same number of page faults
- FIFO incurs 2 more page faults than LRU
- LRU incurs 2 more pages faults than FIFO
- FIFO incurs 1 more page faults than LRU
Answer
Both incur the same number of page faults
[2015]
122. Consider six memory partitions of sizes 200 KB, 400 KB, 600 KB, 500 KB, 300 KB and 250 KB, where KB refers to kilobyte. These partitions need to be allotted to four processes of sizes 357 KB, 210 KB, 468 KB and 491 KB in that order. If the best fit algorithm is used, which partitions are NOT allotted to any process?
- 200 KB and 300 KB
- 200 KB and 250 KB
- 250 KB and 300 KB
- 300 KB and 400 KB
Answer
200 KB and 300 KB
[2015]
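Best fit places each request in the smallest partition that can hold it; a quick simulation:

```python
free = [200, 400, 600, 500, 300, 250]   # partition sizes in KB
for proc in [357, 210, 468, 491]:       # requests, in order
    candidates = [s for s in free if s >= proc]
    free.remove(min(candidates))        # best fit: smallest adequate partition

assert sorted(free) == [200, 300]       # these partitions stay unallotted
```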
123. A computer system implements 8 kilobyte pages and a 32-bit physical address space. Each page table entry contains a valid bit, a dirty bit, three permission bits, and the translation. If the maximum size of the page table of a process is 24 megabytes, the length of the virtual address supported by the system is _ bits.
- 24
- 18
- 36
- 30
Answer
36
[2015]
124. Consider the following two C code segments. Y and X are one and two dimensional arrays of size n and n × n respectively, where 2 ≤ n ≤ 10. Assume that in both code segments, elements of Y are initialized to 0 and each element X[i][j] of array X is initialized to i + j. Further assume that when stored in main memory all elements of X are in the same main memory page frame.
Code segment 1:

```c
/* elements of Y initialized to 0; X[i][j] initialized to i + j */
for (i = 0; i < n; i++)
    Y[i] += X[0][i];
```

Code segment 2:

```c
/* elements of Y initialized to 0; X[i][j] initialized to i + j */
for (i = 0; i < n; i++)
    Y[i] += X[i][0];
```
Which of the following statements is/are correct?
S1 : Final contents of array Y will be same in both code segments
S2 : Elements of array X accessed inside the for loop shown in code segment 1 are contiguous in main memory
S3 : Elements of array X accessed inside the for loop shown in code segment 2 are contiguous in main memory.
- Only S2 is correct
- Only S3 is correct
- Only S1 and S2 are correct
- Only S1 and S3 are correct
Answer
Only S1 and S2 are correct
[2015]
125. Consider a computer system with 40-bit virtual addressing and a page size of sixteen kilobytes. If the computer system has a one-level page table per process and each page table entry requires 48 bits, then the size of the per-process page table is __ megabytes.
- 254
- 186
- 384
- 348
Answer
384
[2016]
126. Consider a computer system with ten physical page frames. The system is provided with an access sequence (a1, a2, …, a20, a1, a2, …, a20), where each ai is a distinct virtual page number. The difference in the number of page faults between the last-in-first-out page replacement policy and the optimal page replacement policy is __.
- 0
- 1
- 2
- 3
Answer
1
[2016]
127. In which one of the following page replacement algorithms is it possible for the page fault rate to increase even when the number of allocated frames increases?
- LRU (Least Recently Used)
- OPT (Optimal Page Replacement)
- MRU (Most Recently Used)
- FIFO (First In First Out)
Answer
FIFO (First In First Out)
[2016]
128. Recall that Belady’s anomaly is that the page-fault rate may increase as the number of allocated frames increases. Now, consider the following statements:
S1 : Random page replacement algorithm (where a page chosen at random is replaced) suffers from Belady’s anomaly
S2 : LRU page replacement algorithm suffers from Belady’s anomaly
Which of the following is CORRECT?
- S1 is true, S2 is true
- S1 is true, S2 is false
- S1 is false, S2 is true
- S1 is false, S2 is false
Answer
S1 is true, S2 is false
[2017]
129. Consider a process executing on an operating system that uses demand paging. The average time for a memory access in the system is M units if the corresponding memory page is available in memory and D units if the memory access causes a page fault. It has been experimentally measured that the average time taken for a memory access in the process is X units. Which one of the following is the correct expression for the page fault rate experienced by the process?
- (D – M)/(X – M)
- (X – M)/(D – M)
- (D – X)/(D – M)
- (X – M)/(D – X)
Answer
(X – M)/(D – M)
[2018]
130. Consider the following statements with respect to User-level threads and Kernel-supported threads.
(1) Context switch is faster with Kernel-supported threads
(2) For user-level threads, a system call can block the entire process
(3) Kernel supported threads can be scheduled independently
(4) User level threads are transparent to the Kernel
Which of the above statements are true?
- 2, 3 and 4 only
- 2 and 3 only
- 1 and 3 only
- 1 and 2 only
Answer
2, 3 and 4 only
[2004]
131. Which one of the following is true for a CPU having a single interrupt request line and a single interrupt grant line?
- Neither vectored interrupt nor multiple interrupting devices are possible
- Vectored interrupts are not possible but multiple interrupting devices are possible
- Vectored interrupts and multiple interrupting devices are both possible
- Vectored interrupt is possible but multiple interrupting devices are not possible
Answer
Vectored interrupts and multiple interrupting devices are both possible
[2005]
132. Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs having explicit I/O instructions, such I/O protection is ensured by having the I/O instructions privileged. In a CPU with memory-mapped I/O, there is no explicit I/O instruction. Which one of the following is true for a CPU with memory mapped I/O?
- I/O protection is ensured by operating system routine(s)
- I/O protection is ensured by a hardware trap
- I/O protection is ensured during system configuration
- I/O protection is not possible
Answer
I/O protection is ensured by operating system routine(s)
[2005]
133. Consider the following code fragment:

```c
if (fork() == 0) {
    a = a + 5;
    printf("%d, %d\n", a, &a);
} else {
    a = a - 5;
    printf("%d, %d\n", a, &a);
}
```
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process. Which one of the following is true?
- u = x + 10 and v = y
- u = x + 10 and v ≠ y
- u + 10 = x and v = y
- u + 10 = x and v ≠ y
Answer
u + 10 = x and v = y
[2005]
134. Consider the following statements about user-level threads and Kernel-level threads. Which one of the following statements is false?
- Context switch time is longer for Kernel-level threads than for user-level threads.
- User-level threads do not need any hardware support.
- Related Kernel-level threads can be scheduled on different processors in a multi-processor system.
- Blocking one Kernel-level thread blocks all related threads.
Answer
Blocking one Kernel-level thread blocks all related threads.
[2007]
135. Which of the following statements about synchronous and asynchronous I/O is NOT true?
- An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
- In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after completion of the I/O
- A process making a synchronous I/O call waits until the I/O is complete, but a process making an asynchronous I/O call does not wait for the completion of the I/O
- In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR that is invoked after the completion of I/O
Answer
An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
[2008]
136. A process executes the following code:

```c
for (i = 0; i < n; i++)
    fork();
```
The total number of child processes created is
- n
- 2^n − 1
- 2^n
- 2^(n+1) − 1
Answer
2^n − 1
[2008]
137. A CPU generally handles an interrupt by executing an interrupt service routine
- As soon as an interrupt is raised.
- By checking the interrupt register at the end of the fetch cycle.
- By checking the interrupt register after finishing the execution of the current instruction.
- By checking the interrupt register at fixed time intervals.
Answer
By checking the interrupt register after finishing the execution of the current instruction.
[2009]
138. A thread is usually defined as ‘light weight process’ because an operating system (OS) maintains smaller data structures for a thread than for a process. In relation to this, which of the following is true?
- On per-thread basis, the OS maintains only CPU register state
- The OS does not maintain a separate stack for each thread
- On per thread basis, the OS does not maintain virtual memory state.
- On per thread basis, the OS maintains only scheduling and accounting information.
Answer
On per thread basis, the OS does not maintain virtual memory state.
[2011]
139. Let the time taken to switch between user and kernel modes of execution be t1 while the time taken to switch between two processes be t2. Which of the following is true?
- t1 > t2
- t1 = t2
- t1 < t2
- Nothing can be said about the relation between t1 and t2
Answer
t1 < t2
[2011]
140. A process executes the code:

```c
fork();
fork();
fork();
```
The total number of child processes created is
- 3
- 4
- 7
- 8
Answer
7
[2012]
141. Which one of the following is FALSE?
- User-level threads are not scheduled by the Kernel.
- When a user-level thread is blocked, all other threads of its process are blocked.
- Context switching between user-level threads is faster than context switching between kernel-level threads.
- Kernel-level threads cannot share the code segment.
Answer
Kernel-level threads cannot share the code segment.
[2014]
142. Threads of a process share
- global variables but not heap.
- heap but not global variables.
- neither global variables nor heap.
- both heap and global variables.
Answer
both heap and global variables.
[2017]
143. Which of the following is/are shared by all the threads in a process?
I. Program counter
II. Stack
III. Address space
IV. Registers
- I and II only
- III only
- IV only
- III and IV only
Answer
III only
[2017]
144. Consider a disk pack with a seek time of 4 milliseconds and a rotational speed of 10000 rotations per minute (RPM). It has 600 sectors per track and each sector can store 512 bytes of data. Consider a file stored on the disk. The file contains 2000 sectors. Assume that every sector access necessitates a seek, and the average rotational latency for accessing each sector is half of the time for one complete rotation. The total time (in milliseconds) needed to read the entire file is __.
- 1360
- 9840
- 14020
- 34890
Answer
14020
[2015]
145. Consider these two functions and two statements S1 and S2 about them:
```c
int work1(int *a, int i, int j) {
    int x = a[i + 2];
    a[j] = x + 1;
    return a[i + 2] - 3;
}

int work2(int *a, int i, int j) {
    int t1 = i + 2;
    int t2 = a[t1];
    a[j] = t2 + 1;
    return t2 - 3;
}
```
S1 : The transformation from work1 to work2 is valid, that is, for any program state and input arguments, work2 will compute the same output and have the same effect on the program state as work1
S2 : All the transformations applied to work1 to get work2 will always improve the performance (i.e., reduce CPU time) of work2 compared to work1
- S1 is false and S2 is false
- S1 is false and S2 is true
- S1 is true and S2 is false
- S1 is true and S2 is true
Answer
S1 is false and S2 is false
[2006]