
Detailed explanation of Linux driver technology (5)_Device blocking/non-blocking reading and writing


When writing Linux drivers, blocking and non-blocking device reads and writes are an essential technique: they enable efficient data transfer and event handling and improve system performance and responsiveness. In this article we look at how blocking and non-blocking device I/O is implemented in a driver and at the related kernel mechanisms.


The wait queue is an important kernel data structure used in process scheduling. Its job is to maintain a linked list in which each node refers to a PCB (process control block); the kernel puts all processes queued on a wait queue to sleep until some wake-up condition occurs. I have already discussed the use of blocking and non-blocking I/O at the application layer in the article on Linux I/O multiplexing; this article focuses on how a driver implements blocking and non-blocking device I/O, and the wait queue is exactly the mechanism needed for the blocking part. The kernel source quoted in this article is from version 3.14.0.

Implementation of device blocking IO

When we read or write a device file, the corresponding interface in the driver is eventually called back, and those interfaces run in the (kernel-space) context of the process doing the read or write. If the condition required by the operation is not satisfied, the interface puts that process to sleep, which means the user process reading or writing the device sleeps as well; this is what we usually call blocking. In short, blocking reads and writes on a device file are implemented by the driver itself. The procedure can be summarized as follows:

1. Define and initialize the wait queue head

//define the wait queue head
wait_queue_head_t waitq_h;
//initialize the wait queue head
init_waitqueue_head(&waitq_h);
//or
//define and initialize the wait queue head in one step
DECLARE_WAIT_QUEUE_HEAD(waitq_name);

Of the choices above, the last one defines and initializes the wait queue head in a single step; however, it creates the variable right where the macro is used, which is less convenient when the head has to be embedded in a structure or passed around inside the module. Which one to use depends on the requirements.
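As a quick illustration (a minimal sketch; the demo_waitq name and the demo_dev structure are made up for this example and are not from the original text), the two styles look like this:

#include <linux/wait.h>

//style 1: define and initialize a file-scope wait queue head in one step
static DECLARE_WAIT_QUEUE_HEAD(demo_waitq);

//style 2: embed the head in a per-device structure and initialize it explicitly,
//which is handier when the head has to travel with the device data
struct demo_dev {
    wait_queue_head_t waitq;
    int data_ready;
};

static void demo_dev_setup(struct demo_dev *dev)
{
    init_waitqueue_head(&dev->waitq);
    dev->data_ready = 0;
}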
We can trace the kernel source to see what the wait queue head and its initialization macro actually look like:

//include/linux/wait.h 
 35 struct __wait_queue_head { 
 36         spinlock_t              lock;
 37         struct list_head        task_list;
 38 };
 39 typedef struct __wait_queue_head wait_queue_head_t;

wait_queue_head_t
–36–>the spin lock that protects this queue
–37–>the list head that "strings" the whole queue together

Then let’s take a look at the initialization macro:

 55 #define __WAIT_QUEUE_HEAD_INITIALIZER(name) {                           \
 56         .lock           = __SPIN_LOCK_UNLOCKED(name.lock),              \
 57         .task_list      = { &(name).task_list, &(name).task_list } }
 58 
 59 #define DECLARE_WAIT_QUEUE_HEAD(name) \
 60         wait_queue_head_t name = __WAIT_QUEUE_HEAD_INITIALIZER(name)

DECLARE_WAIT_QUEUE_HEAD()
–60–>defines a wait queue head named after the name token passed in
–57–>initializes the task_list field above by hand, oddly without using the kernel's standard list-initialization macro

2. Add this process to the waiting queue

Waiting on the queue means the process goes to sleep and the call does not return until condition becomes true. The _interruptible variants indicate that the sleep can be interrupted, and the _timeout variants return once the timeout expires; this naming convention appears throughout the kernel API.

/* these are macros: wq is the wait queue head itself (not a pointer),
 * condition is an expression, timeout is in jiffies */
wait_event(wq, condition);
wait_event_interruptible(wq, condition);
wait_event_timeout(wq, condition, timeout);
wait_event_interruptible_timeout(wq, condition, timeout);
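A minimal sketch of how the return values are usually checked in a read handler (the demo_waitq, data_ready and demo_read names are only illustrative, and the HZ timeout is an arbitrary one second):

#include <linux/fs.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_waitq);
static int data_ready;    //set somewhere else, e.g. in an interrupt handler

static ssize_t demo_read(struct file *filp, char __user *buf, size_t size, loff_t *offset)
{
    //interruptible sleep: a non-zero return (-ERESTARTSYS) means a signal woke us up
    if (wait_event_interruptible(demo_waitq, data_ready))
        return -ERESTARTSYS;

    //the _timeout variants take an extra jiffies argument, e.g.
    //  long t = wait_event_interruptible_timeout(demo_waitq, data_ready, HZ);
    //and return 0 on timeout, or the remaining jiffies once the condition is true

    /* ... copy the data to user space and return the byte count ... */
    return 0;
}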

This is the core of the wait queue mechanism, so let's take a closer look at the call chain:

wait_event
└── __wait_event
    └── ___wait_event
        ├── prepare_to_wait_event
        ├── ___wait_is_interruptible
        ├── abort_exclusive_wait
        └── finish_wait

244 #define wait_event(wq, condition)                                       \
245 do {                                                                    \
246         if (condition)                                                  \
247                 break;                                                  \
248         __wait_event(wq, condition);                                    \ 
249 } while (0)

wait_event
–246–>if condition is already true, return immediately
–248–>otherwise call __wait_event

194 #define ___wait_event(wq, condition, state, exclusive, ret, cmd)        \       
195 ({                                                                      \
206         for (;;) {                                                      \
207                 long __int = prepare_to_wait_event(&wq, &__wait, state);\
208                                                                         \  
209                 if (condition)                                          \       
210                         break;                                          \
212                 if (___wait_is_interruptible(state) && __int) {         \
213                         __ret = __int;                                  \
214                         if (exclusive) {                                \
215                                 abort_exclusive_wait(&wq, &__wait,      \
216                                                      state, NULL);      \
217                                 goto __out;                             \
218                         }                                               \
219                         break;                                          \
220                 }                                                       \
222                 cmd;                                                    \
223         }                                                               \
224         finish_wait(&wq, &__wait);                                      \
225 __out:  __ret;                                                          \
226 })

___wait_event
–206–>poll in an endless loop
–209–>if the condition is true, break out of the loop and run finish_wait(); the process has been woken up
–212–>if the process sleeps in interruptible mode, an incoming signal also wakes it up (via abort_exclusive_wait() for exclusive waiters)
–222–>if neither of the above applies, the cmd passed in (i.e. schedule()) is executed and the process keeps sleeping

Template

static wait_queue_head_t xj_waitq_h;
static int condition;    //the wake-up condition; it can be set, e.g., in an interrupt handler

static ssize_t demo_read(struct file *filp, char __user *buf, size_t size, loff_t *offset)
{
    if (!condition)
        wait_event_interruptible(xj_waitq_h, condition);
    /* ... copy the data to user space and return the byte count ... */
    return 0;
}

static struct file_operations fops = {
    .read = demo_read,
};

static int __init demo_init(void)
{
    init_waitqueue_head(&xj_waitq_h);
    return 0;
}
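The template above only shows the side that sleeps. Below is a minimal sketch of the wake-up side, assuming the condition is set in an interrupt handler as the comment suggests (the handler name and the use of an interrupt are illustrative; xj_waitq_h and condition are meant to be the same objects used by demo_read() above):

#include <linux/interrupt.h>
#include <linux/wait.h>

static irqreturn_t demo_irq_handler(int irq, void *dev_id)
{
    //data has arrived: satisfy the condition, then wake every process
    //sleeping on the wait queue so it can re-test the condition
    condition = 1;
    wake_up_interruptible(&xj_waitq_h);
    return IRQ_HANDLED;
}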

Implementation of IO multiplexing

For ordinary non-blocking IO, all we need to do is avoid the blocking mechanism in the read/write interfaces registered by the driver. What I want to discuss here is IO multiplexing: when the driver's read/write does not itself block, how do we use kernel mechanisms to support IO multiplexing in the driver? The following is the API we need:

unsigned int (*poll)(struct file *filp, struct poll_table_struct *wait);
void poll_wait(struct file *filp, wait_queue_head_t *wait_address, poll_table *p);

When the application layer uses the select/poll/epoll mechanisms, the kernel iterates over the monitored files and calls back the poll interface in each file's driver; the return value of each driver's poll tells the kernel whether the corresponding IO event has occurred on that file. As we know, the core difference between these three multiplexing mechanisms lies in how the kernel manages the set of monitored files, namely arrays and linked lists (and, for epoll, a red-black tree), but for every driver the callback is the same poll interface.

Template

static wait_queue_head_t xj_waitq_h;
static int counter;    //e.g. incremented whenever data arrives

static unsigned int demo_poll(struct file *filp, struct poll_table_struct *pts)
{
    unsigned int mask = 0;
    poll_wait(filp, &xj_waitq_h, pts);
    if (counter) {
        mask = (POLLIN | POLLRDNORM);
    }
    return mask;
}

static struct file_operations fops = {
    .owner  = THIS_MODULE,
    .poll   = demo_poll,
};
static int __init demo_init(void)
{
    init_waitqueue_head(&xj_waitq_h);
    return 0;
}
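From user space, this poll method is what select/poll/epoll end up exercising. Below is a minimal sketch of the application side (the /dev/demo device node name is illustrative):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd;
    int fd = open("/dev/demo", O_RDONLY | O_NONBLOCK);

    if (fd < 0)
        return 1;

    pfd.fd = fd;
    pfd.events = POLLIN;

    //wait up to 1000 ms; the kernel calls the driver's .poll on our behalf
    if (poll(&pfd, 1, 1000) > 0 && (pfd.revents & POLLIN))
        printf("device is readable\n");

    close(fd);
    return 0;
}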

Other APIs

We have just seen how to use wait queues to implement blocking IO and to support non-blocking IO multiplexing. The kernel actually provides a number of other wait-queue APIs for related operations; here are a few of them:

//sleep on a wait queue
void sleep_on(wait_queue_head_t *wqueue_h);
void interruptible_sleep_on(wait_queue_head_t *wqueue_h);

//wake up the processes waiting on a queue
void wake_up(wait_queue_head_t *wqueue_h);
void wake_up_interruptible(wait_queue_head_t *wqueue_h);
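Note that sleep_on() and interruptible_sleep_on() are inherently racy: the wake-up condition can become true between testing it and going to sleep, so the wake-up may be missed. They have long been deprecated and were removed from the kernel not long after the 3.14 series used here, so new code should stick to the wait_event family.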

In short, blocking and non-blocking device reads and writes are an indispensable part of writing Linux drivers: they enable efficient data transfer and event handling and improve system performance and responsiveness. I hope this article helps readers better understand how blocking and non-blocking device IO is implemented and the techniques involved.


Statement:
This article is reproduced from lxlinux.net. If there is any infringement, please contact admin@php.cn for removal.