A fundamental requirement for a good operating system is the ability to complete several tasks seemingly simultaneously, to maximise the utilisation of CPU time, to allocate resources, and to support inter-process communication. These requirements are best met by an operating system that supports multiple processes and threads. Processing small time slices of each task gives the user the sense of simultaneous processing. It also maximises CPU utilisation by ensuring that the CPU is not left idle waiting for slower devices used by one specific process.
Processes & Threads
Process

- Also called a task or job
- Execution of an individual program
- “Owner” of resources allocated for program execution
- Encompasses one or more threads
Process information is stored in a Process Control Block (PCB), which holds information for memory management, file-system state and process management.
Thread

- Unit of execution
- Can be traced (a trace lists the sequence of instructions that execute)
- Belongs to a process
Process Model
The image above shows four independent processes, each containing a single thread of execution. Part C demonstrates how processes are given time slices.
Process States

Running

The thread/process is currently being executed by the CPU.

Ready

The thread/process is ready to be executed by the CPU. It is waiting for the scheduler to grant it a time slice in which to execute.

Blocked

The thread/process is waiting for a resource to become available before it can continue to execute, e.g. waiting for disk or network I/O. While in this state the process will not be scheduled for execution.
Thread Model
Using threads rather than a separate process for every subtask allows common information to be shared between the threads in a process, without having to duplicate memory.
| Per Process Items | Per Thread Items |
| --- | --- |
| Address space | Program counter |
| Global variables | Registers |
| Open files | Stack |
| Child processes | State |
| Signals and signal handlers | |
| Accounting information | |
Why Threads
- Simpler to program than a state machine
- Less resources are associated with them than a complete process
- Cheaper to create and destroy
- Shares resources (especially memory) between them
- Performance: threads waiting for I/O can be overlapped with computing threads
  - Note: if all threads are compute bound, there is no performance improvement (on a uniprocessor)
- Threads can take advantage of the parallelism available on machines with more than one CPU (multiprocessor)
User-Level Threads
- A user-level Thread Control Block (TCB), ready queue, blocked queue and dispatcher manage the threads of execution in user mode.
- Kernel has no knowledge of the threads (it only sees a single process).
- If a thread blocks waiting for a resource, the whole process blocks.
- Thread management (create, exit, yield, wait) is implemented in a run-time support library.
- Thread management and switching at user level is much faster than doing it at kernel level. There is no need to trap into the kernel (take a syscall exception) and back in order to switch.
- Dispatcher algorithm can be tuned to the application.
- Can be implemented on any OS (thread or non-thread aware).
- Can easily support massive numbers of threads on a per-application basis.
- Uses normal application virtual memory.
- Kernel memory is more constrained; it is difficult to efficiently support wildly differing numbers of threads for different applications.
- Threads have to yield() manually (no timer interrupt delivery at user level): co-operative multithreading.
- A single poorly designed/implemented thread can monopolise the available CPU time.
- Does not take advantage of multiple CPUs (in reality, we still have a single threaded process as far as the kernel is concerned).
- If a thread makes a blocking system call (or takes a page fault), the process (and all its threads) blocks.
- Can’t overlap I/O with computation
- Can use wrappers as a workaround
- Example: wrap the read() call. Use select() to test whether the read system call would block
- select() then read()
- Only call read() if it won't block
- Otherwise schedule another thread
- The wrapper requires 2 system calls instead of one
- Wrappers are needed for environments doing lots of blocking system calls – exactly when efficiency matters!
Kernel-Level Threads
Implementation: threads are implemented in the kernel. TCBs are stored in the kernel and contain a subset of the information in a traditional PCB. All TCBs have a PCB associated with them, which holds the resources associated with the group of threads (the process). Thread management calls are implemented as system calls.
- Parallelism - Can overlap blocking I/O with computation.
- Can take advantage of a multiprocessor.
- Thread creation and destruction, and blocking and unblocking threads, require kernel entry and exit
  - More expensive than the user-level equivalent