Threads: Definition, Types and Management, Multithreading Models

September 13, 2021

Thread definition

  • A thread is a single sequential flow of execution of the tasks of a process.
  • A thread is a lightweight process and the smallest unit of CPU utilization. Thus a thread is like a little miniprocess.
  • Each thread has a thread id, a program counter, a register set and a stack.
  • A thread undergoes different states such as new, ready, running, waiting and terminated similar to that of a process.
  • However, a thread is not a program as it cannot run on its own. It runs within a program.



  • A process can have single thread of control or multiple threads of control.
  • If a process has a single thread of control, it can perform only one task at a time. For example, when a process runs a word-processor program, a single thread of instructions is being executed, so the user cannot simultaneously type in characters and run the spell checker within the same process.
  • Many modern operating systems extend the process concept to allow a process to have multiple threads of execution, so that the process can perform multiple tasks at the same time. This concept is known as multithreading.
  • For example, the tasks in a web browser are divided into multiple threads: downloading the images, downloading the text, and displaying images and text. While one thread is busy downloading the images, another thread displays the text.
  • Operating systems that implement multithreading include Windows NT 4.0, Windows 95, Windows 98, Windows 2000 and UNIX.
  • In multithreading, a thread shares its code section, data section and operating system resources, such as open files and signals, with the other threads of the same process.
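This sharing can be sketched with Python's threading module (a minimal illustration; the worker function, thread count and shared list are our own choices, not part of any particular operating system):

```python
import threading

# All threads of a process share the same address space,
# so this list is visible to every worker thread.
shared_results = []
lock = threading.Lock()          # guards concurrent appends

def worker(task_id):
    # Each thread has its own stack and program counter,
    # but it writes into the shared data section of the process.
    with lock:
        shared_results.append(f"task-{task_id} done")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for all threads to terminate

print(len(shared_results))       # all four threads updated the same list
```

Note that the lock is needed precisely because the data section is shared: without it, concurrent appends from several threads could interleave unsafely.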

Advantages of multithreading

The various advantages of multithreading are:

1. Responsiveness:

  • Multithreading keeps a process highly responsive to the user.
  • In interactive applications, even if part of the program is blocked or performing a lengthy operation, multithreading allows the rest of the program to continue running.
  • For example, a multithreaded web browser can still allow user interaction in one thread while an image is being loaded in another thread.
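A rough sketch of the browser example, with `time.sleep` standing in for the slow image download (the function names and timings here are illustrative only):

```python
import threading
import time

done = []

def download_image():
    # Stand-in for a slow network transfer (hypothetical timing).
    time.sleep(0.3)
    done.append("image ready")

t = threading.Thread(target=download_image)
t.start()

# The main "user interaction" thread keeps running while the other blocks.
keystrokes = []
while t.is_alive():
    keystrokes.append("key")     # stand-in for handling one keystroke
    time.sleep(0.01)
t.join()

print(len(keystrokes) > 0, done)
```

The main thread handled many "keystrokes" while the download thread was blocked, which is exactly the responsiveness benefit described above.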

2. Resource sharing:

All the threads of a process share memory as well as the resources.

Single-threaded and multithreaded processes

3. Economy:

Threads are easier to create and maintain than processes: creating a thread takes less time, and context switching between threads is cheaper than between processes.

For example, in Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching between processes is about five times slower than between threads.
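The exact ratios above are specific to Solaris 2, but the general effect is easy to measure. The sketch below times thread creation against process creation in Python (the function names and counts are our own; absolute numbers will vary by system):

```python
import threading
import multiprocessing
import time

def noop():
    pass                          # minimal work: we only measure creation cost

def time_threads(n):
    """Create, start and join n threads; return elapsed seconds."""
    start = time.perf_counter()
    workers = [threading.Thread(target=noop) for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

def time_processes(n):
    """Create, start and join n processes; return elapsed seconds."""
    start = time.perf_counter()
    workers = [multiprocessing.Process(target=noop) for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Creating a process duplicates far more state than creating a thread,
    # so the second figure is normally much larger.
    print(f"threads:   {time_threads(20):.4f}s")
    print(f"processes: {time_processes(20):.4f}s")
```

Running this typically shows process creation taking an order of magnitude longer than thread creation, echoing the Solaris 2 figures.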

4. Utilization of multiprocessor architectures:

The benefits of multithreading increase greatly on a multiprocessor architecture, where each thread may run in parallel on a different processor. In this way concurrency is increased, as multiple threads run on multiple processors.

Types of Threads

  • Threads in a process may be managed by the operating system or by a user-level library.
  • Based on this, there are three types of threads:
  1. Kernel level threads
  2. User level threads
  3. Hybrid threads

Types of threads

1. Kernel level threads

  • Threads defined and managed by the operating system itself are called kernel threads.
  • For these threads, the kernel performs thread creation, scheduling and management in kernel space.
  • Kernel threads are used for the internal workings of the operating system, such as scheduling the user threads.
  • Kernel threads are slower to create and manage because the operating system manages them.
  • If one thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
  • In a multiprocessor environment, the kernel can schedule threads on different processors.
  • Operating systems that support kernel level threads include Windows NT, Windows 2000 and Solaris 2.

Advantages of kernel level threads

  1. The operating system can schedule multiple threads for the same process on multiple processors.
  2. The operating system is aware of the threads in each process; therefore, even if one thread of a process blocks, the operating system chooses the next thread to run, either from the same process or from a different process. Hence kernel threads suit applications that frequently block.
  3. Kernel routines themselves can be multithreaded.

Disadvantages of kernel level threads

  1. Kernel level threads are slower to create and manage than user level threads.
  2. Switching between threads is time consuming, as the kernel performs the switching (via an interrupt).
  3. Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread. As a result, there is significant overhead and increased kernel complexity.

2. User level threads

  • Threads of user application processes are called user threads.
  • They are implemented entirely in the user space of main memory.
  • They are supported above the kernel and are implemented in user level libraries rather than via system calls.
  • Here, a user level library (containing functions to manipulate user threads) handles thread creation, scheduling and management without any support from the kernel.
  • As the kernel's intervention is not required for thread creation and scheduling, user level threads are fast to create and manage.
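User-level threading can be mimicked in pure Python with generators: the "threads" below voluntarily yield control back to a tiny round-robin scheduler that lives entirely in user space, while the operating system sees only a single thread of execution. This is a toy sketch under our own naming, not a real thread library:

```python
from collections import deque

def user_thread(name, steps, log):
    # yield = voluntarily hand the CPU back to the user-space scheduler
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield

def run_scheduler(threads):
    # Round-robin scheduling with no kernel involvement at all.
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)              # resume the thread until its next yield
            ready.append(t)      # still runnable: back of the ready queue
        except StopIteration:
            pass                 # thread terminated

log = []
run_scheduler([user_thread("A", 2, log), user_thread("B", 2, log)])
print(log)                       # A and B alternate: A:0, B:0, A:1, B:1
```

Because the scheduler runs in user space, switching between these "threads" is just a function call, which is why user level threads are so cheap; a blocking system call inside any of them would, however, stall the whole process.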

Advantages of user level threads

  1. User level threads are fast in creation as kernel intervention is not required.
  2. Switching among user level threads is fast, as it can be done independently of the operating system.
  3. They perform better than kernel threads because no system calls are needed for thread creation.
  4. Threads scheduling can be application specific.
  5. User level threads can run on any operating system.

Disadvantages of user level threads

  1. When a user level thread executes a blocking system call, not only that thread but all threads within the process are blocked. This is because the operating system is unaware of the threads and knows only about the process containing them.
  2. Multithreaded applications using user level threads cannot take advantage of multiprocessing, because the kernel assigns one process to only one processor at a time. Again, this is because the operating system is unaware of the threads and schedules processes, not threads.

3. Hybrid Approach

  • In the hybrid approach, both kernel level threads and user level threads are implemented.
  • For example, Solaris 2 (a version of UNIX) uses this approach.

Multithreading Models

Depending on the support for user and kernel threads, there are three multithreading models:

  1. Many-to-one model
  2. One-to-one model
  3. Many-to-many model

1. Many-to-one model

  • In many-to-one model, many user level threads are mapped to one kernel level thread.
  • Thread management is done in user space.
  • If any of the threads makes a blocking system call, the entire process will be blocked.
  • Multiple threads are unable to run in parallel on multiprocessors, as only one thread can access the kernel at a time.
  • For example, Green threads, the thread library of Solaris 2, uses this model.

Many-to-one Multithreading Models

2. One-to-one model

  • In this model, each user thread is mapped to one kernel thread, hence the name one-to-one model.
  • In this model, when one thread makes a blocking system call, the other threads continue to run. Therefore it provides more concurrency than many-to-one model.
  • It also allows the multiple threads to run on multiprocessors.

One-to-one Multithreading Model

  • The drawback of this model is that each user thread requires a corresponding kernel thread, and the overhead of creating kernel threads burdens the performance of an application.
  • Operating systems that implement this model include Windows NT, Windows 2000 and OS/2.
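On Linux and Windows, CPython's threading module also follows the one-to-one model: each `Thread` gets its own kernel thread, which can be seen from the distinct native thread IDs reported by `threading.get_native_id()` (available since Python 3.8). The barrier below is our own addition, used to keep all three threads alive at once so the kernel cannot reuse an ID:

```python
import threading

native_ids = []
lock = threading.Lock()
barrier = threading.Barrier(3)   # hold all three threads alive simultaneously

def record_id():
    barrier.wait()               # wait until all three threads exist
    # get_native_id() returns the ID assigned by the kernel, confirming
    # that each user thread is backed by its own kernel thread.
    with lock:
        native_ids.append(threading.get_native_id())

threads = [threading.Thread(target=record_id) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(native_ids)))      # prints 3: three distinct kernel threads
```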

3. Many-to-many model

  • Many-to-many model multiplexes many user level threads to a smaller or equal number of kernel threads.
  • The number of kernel threads created depends on a particular application or a particular machine.
  • Because many user threads are multiplexed onto multiple kernel threads, the kernel can run several of a process's threads in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.
  • In this model, developer can create as many user threads as necessary.
  • This model is supported by Solaris 2 operating system.

Many-to-many Multithreading Model

Thread Management

  • In multithreaded processes, the CPU must be switched among the various threads of a process, in the same way the CPU is switched between different processes in a multiprogramming environment.
  • This switching of the CPU among the various threads of a process is called thread switching.
  • Thread switching can be implemented in user space or kernel space or both.

Thread Library

  • In some systems, thread switching is implemented in a system library. In these systems, thread switching is done in user space, not in kernel space.
  • Operating system services are not required for thread switching. The library manages thread-specific register sets.
  • The operating system is not aware of the presence of multiple threads in application. It sees only processes.
  • If any thread executes a system call, the system treats it as a call from the owning process, and the entire process must wait until the system call returns.
  • In such systems, the operating system has no thread management capabilities.
  • The library implements the APIs related to thread handling.

Thread interaction with kernel via library


Multithread kernel

  • In some other systems, the kernel itself performs the thread management activities: thread creation, destruction and scheduling are done by the operating system.
  • System calls are used for thread handling.
  • Every thread is represented by a thread descriptor in the kernel; thus, the kernel is aware of all the threads of a process.
  • Thread switching is performed by the kernel. As this switching is done in kernel space, it is more time consuming than switching threads in user space. It involves saving the register set of one thread and reloading that of another.
  • The various threads of a process are scheduled for execution independently of one another.
  • Even if one thread is blocked, the others can continue executing.
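Since CPython threads are kernel threads, one thread blocking in a system call does not stop the others. A sketch (the function names and timings are illustrative):

```python
import threading
import time

events = []

def blocker():
    time.sleep(0.2)              # blocks inside a sleep system call
    events.append("blocker woke up")

def worker():
    events.append("worker ran while blocker was blocked")

b = threading.Thread(target=blocker)
w = threading.Thread(target=worker)
b.start()
time.sleep(0.05)                 # let the blocker enter its system call
w.start()
b.join()
w.join()

print(events)                    # the worker's entry appears first
```

The kernel scheduled the worker thread while the blocker was still asleep, which is exactly the independent scheduling described above.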

Thread interaction with kernel via system call


Mixed System

  • Some systems use a mix of user level and kernel level threads. This technique is also known as the hybrid approach; for example, the Solaris 2 operating system uses this method of thread handling.
  • In these systems, kernel threads and user threads are connected through an intermediate abstraction called a lightweight process (LWP).
  • The thread operations (creation, scheduling, destruction) for user threads are implemented in a system library in user space.
  • The operating system has no knowledge of user threads; it supports system calls for the management of LWPs.
Lightweight process (LWP)

  • The operating system sees LWPs and kernel threads; application processes see both LWPs and user threads.
  • In such systems, user thread is a unit of work assignment in a process, the kernel thread is the unit of CPU allocation by operating system and the process is the unit of resource allocation in the system.
  • Each process contains at least one LWP. For example, a process might have three LWPs, where the LWP on the left has two user threads, the LWP in the middle has three user threads, and the one on the right has one user thread. If needed, these threads can be switched from one LWP to another.

There are three different methods used to attach a user thread to an LWP:

  1. One to one (1:1). In this, there is one-to-one correspondence between a thread and its peer LWP.
  2. Many to one (M:1). In this, a number of user threads are linked to one LWP, with one thread attached at a time.
  3. Many to many (M:M). In this, any user thread can be attached to any LWP of the process.
