Adding a custom calling convention can throw off IDA's argument recognition. Unlike a middle-end pass, a custom calling convention is not a self-contained change: it touches both the LLVM back end and the middle end. The back-end code lives under **/llvm/lib/Target**. Here we add the convention to the X86 target; the steps are the same for other architectures.
The LLVM back end describes each architecture (its registers, calling conventions, and so on) in the TableGen language. The X86 calling conventions are defined in **/llvm/lib/Target/X86/X86CallingConv.td**, which declares every convention the X86 target supports in the format shown by the abbreviated CC_X86_64_C excerpt below.
TableGen is essentially a rule-description language: a calling convention is a list of If-style rules. The excerpt reads as follows: if an argument has type i32, assign it to EDI, ESI, EDX, ECX, R8D, R9D; if it has type i64, assign it to RDI, RSI, RDX, RCX, R8, R9; and if neither rule can be satisfied (for example, the registers are exhausted), fall through to the third rule, which pushes the remaining arguments onto the stack.
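As a quick illustration, this is where the integer arguments of an ordinary function land under those rules (the standard SysV assignment that CC_X86_64_C implements):

```cpp
// Illustration only: argument locations under the stock CC_X86_64_C rules.
long sum(int a, long b, int c, long d);
// a -> EDI   (i32 rule; allocating EDI also marks the aliasing RDI as used)
// b -> RSI   (i64 rule)
// c -> EDX
// d -> RCX
// A seventh integer argument would find no free register and would fall
// through to the CCAssignToStack<8, 8> rule, i.e. it is passed on the stack.
```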
To keep things simple, we want a new convention that passes arguments in RAX and RBX, so we simply copy an existing convention and edit it. We duplicated CC_X86_64_C and replaced the argument-passing registers with RAX and RBX, which gives us a new rule set, CC_X86_64_Obfu1; its full definition is reproduced below.
Having the definition alone is not enough, though: nothing ever selects it. We have to register the rule set under the Root Argument Calling Conventions, the section that implements the calling-convention selection logic for X86. There we find CC_X86_64, the root convention for X86-64: depending on the value of CallingConv it dispatches (via CCDelegateTo) to the individual sub-conventions. We therefore add a new calling-convention ID named X86_64_Obfu1 and delegate it to the CC_X86_64_Obfu1 rules defined above.
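A minimal sketch of that registration (the neighbouring entries are abbreviated and vary between LLVM versions):

```
def CC_X86_64 : CallingConv<[
  // ... existing CCIfCC<...> dispatch entries ...

  // Dispatch our new ID to the RAX/RBX rule set defined above.
  CCIfCC<"CallingConv::X86_64_Obfu1", CCDelegateTo<CC_X86_64_Obfu1>>,

  // Otherwise, drop to the normal X86-64 C convention.
  CCDelegateTo<CC_X86_64_C>
]>;
```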
Next we switch to the middle end and register the new convention in **/llvm/include/llvm/IR/CallingConv.h**, so that it can be referenced from a pass.
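A minimal sketch of that addition, assuming a free enum value (the number 200 below is a placeholder; pick any ID that is not already taken and stays below MaxID, and keep the name identical to the string used in the .td file):

```cpp
// llvm/include/llvm/IR/CallingConv.h (excerpt)
namespace llvm {
namespace CallingConv {
  enum {
    C = 0,
    // ... existing conventions ...

    /// Obfuscated convention: integer arguments go in RAX/RBX.
    /// 200 is a placeholder value; use any unused ID.
    X86_64_Obfu1 = 200,

    /// The highest possible ID. Must be some 2^k - 1.
    MaxID = 1023
  };
} // namespace CallingConv
} // namespace llvm
```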
Rebuild LLVM and the new calling convention is ready to use.
Finally, a middle-end pass rewrites every function's calling convention to the new one.
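The actual pass ships with the repository linked at the end of this article; as a stand-alone sketch (new pass manager; the class name ObfuCallConvPass and the skipped-function heuristics are my own), it could look roughly like this:

```cpp
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

namespace {
// Sketch: force locally defined functions and their direct call sites onto
// the custom CallingConv::X86_64_Obfu1 registered above.
struct ObfuCallConvPass : PassInfoMixin<ObfuCallConvPass> {
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &) {
    for (Function &F : M) {
      // Skip declarations, varargs, and entry points called from outside
      // (e.g. main), which must keep the default ABI.
      if (F.isDeclaration() || F.isVarArg() || F.getName() == "main")
        continue;
      F.setCallingConv(CallingConv::X86_64_Obfu1);
      // Caller and callee must agree, so rewrite direct call sites as well.
      // Indirect calls through escaped function pointers need extra care.
      for (User *U : F.users())
        if (auto *CB = dyn_cast<CallBase>(U))
          if (CB->getCalledFunction() == &F)
            CB->setCallingConv(CallingConv::X86_64_Obfu1);
    }
    return PreservedAnalyses::none();
  }
};
} // namespace
```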
Here is the effect at a call site:

And here is the callee; the arguments arrive in rax and rbx:

Adding a large number of custom calling conventions like this can break IDA's decompilation output (the effect of a single convention is modest and can be undone by hand), but if every call uses a different convention, recovery becomes considerably more tedious.
The complete code, together with several other obfuscations, is available at: https://github.com/za233/Polaris-Obfuscator
It can be built and used directly; enable the feature with the ccc option and add the corresponding annotations.
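I have not verified the exact invocation against the repository's README; annotation-driven passes in LLVM-based obfuscators are usually triggered through Clang's annotate attribute, so under that assumption (the spelling below is a guess, only the string "ccc" comes from the article) usage might look like:

```cpp
// Hypothetical usage sketch: mark a function for the custom-calling-convention
// pass via Clang's annotate attribute. Check the repository's README for the
// authoritative flag name and annotation syntax.
__attribute__((annotate("ccc")))
int secret_add(int a, int b) {
  return a + b;
}
```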
```
def CC_X86_64_C : CallingConv<[
  CCIfType<[i32], CCAssignToReg<[EDI, ESI, EDX, ECX, R8D, R9D]>>,
  CCIfType<[i64], CCAssignToReg<[RDI, RSI, RDX, RCX, R8 , R9 ]>>,
  CCIfType<[i32, i64, f16, f32, f64], CCAssignToStack<8, 8>>,
  ..... // many rules
]>
```
```
def CC_X86_64_Obfu1 : CallingConv<[
  // Handles byval parameters.
  CCIfByVal<CCPassByVal<8, 8>>,

  // Promote i1/i8/i16/v1i1 arguments to i32.
  CCIfType<[i1, i8, i16, v1i1], CCPromoteToType<i32>>,

  // The 'nest' parameter, if any, is passed in R10.
  CCIfNest<CCIfSubtarget<"isTarget64BitILP32()", CCAssignToReg<[R10D]>>>,
  CCIfNest<CCAssignToReg<[R10]>>,

  // Pass SwiftSelf in a callee saved register.
  CCIfSwiftSelf<CCIfType<[i64], CCAssignToReg<[R13]>>>,

  // A SwiftError is passed in R12.
  CCIfSwiftError<CCIfType<[i64], CCAssignToReg<[R12]>>>,

  // Pass SwiftAsync in an otherwise callee saved register so that calls to
  // normal functions don't need to save it somewhere.
  CCIfSwiftAsync<CCIfType<[i64], CCAssignToReg<[R14]>>>,

  // For Swift Calling Conventions, pass sret in %rax.
  CCIfCC<"CallingConv::Swift",
    CCIfSRet<CCIfType<[i64], CCAssignToReg<[RAX]>>>>,
  CCIfCC<"CallingConv::SwiftTail",
    CCIfSRet<CCIfType<[i64], CCAssignToReg<[RAX]>>>>,

  // Pointers are always passed in full 64-bit registers.
  CCIfPtr<CCCustom<"CC_X86_64_Pointer">>,

  // The first 6 integer arguments are passed in integer registers.
  CCIfType<[i32], CCAssignToReg<[EAX, EBX]>>, // NOTE: modified here!
  CCIfType<[i64], CCAssignToReg<[RAX, RBX]>>, // NOTE: modified here!

  // The first 8 MMX vector arguments are passed in XMM registers on Darwin.
  CCIfType<[x86mmx],
    CCIfSubtarget<"isTargetDarwin()",
      CCIfSubtarget<"hasSSE2()",
        CCPromoteToType<v2i64>>>>,

  // Boolean vectors of AVX-512 are passed in SIMD registers.
  // The call from AVX to AVX-512 function should work,
  // since the boolean types in AVX/AVX2 are promoted by default.
  CCIfType<[v2i1],  CCPromoteToType<v2i64>>,
  CCIfType<[v4i1],  CCPromoteToType<v4i32>>,
  CCIfType<[v8i1],  CCPromoteToType<v8i16>>,
  CCIfType<[v16i1], CCPromoteToType<v16i8>>,
  CCIfType<[v32i1], CCPromoteToType<v32i8>>,
  CCIfType<[v64i1], CCPromoteToType<v64i8>>,

  // The first 8 FP/Vector arguments are passed in XMM registers.
  CCIfType<[f16, f32, f64, f128, v16i8, v8i16, v4i32, v2i64, v8f16, v4f32, v2f64],
    CCIfSubtarget<"hasSSE1()",
      CCAssignToReg<[XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6, XMM7]>>>,

  // The first 8 256-bit vector arguments are passed in YMM registers, unless
  // this is a vararg function.
  // FIXME: This isn't precisely correct; the x86-64 ABI document says that
  // fixed arguments to vararg functions are supposed to be passed in
  // registers. Actually modeling that would be a lot of work, though.
  CCIfNotVarArg<CCIfType<[v32i8, v16i16, v8i32, v4i64, v16f16, v8f32, v4f64],
    CCIfSubtarget<"hasAVX()",
      CCAssignToReg<[YMM0, YMM1, YMM2, YMM3,
                     YMM4, YMM5, YMM6, YMM7]>>>>,

  // The first 8 512-bit vector arguments are passed in ZMM registers.
  CCIfNotVarArg<CCIfType<[v64i8, v32i16, v16i32, v8i64, v32f16, v16f32, v8f64],
    CCIfSubtarget<"hasAVX512()",
      CCAssignToReg<[ZMM0, ZMM1, ZMM2, ZMM3, ZMM4, ZMM5, ZMM6, ZMM7]>>>>,

  // Integer/FP values get stored in stack slots that are 8 bytes in size and
  // 8-byte aligned if there are no more registers to hold them.
  CCIfType<[i32, i64, f16, f32, f64], CCAssignToStack<8, 8>>,

  // Long doubles get stack slots whose size and alignment depends on the
  // subtarget.
  CCIfType<[f80, f128], CCAssignToStack<0, 0>>,

  // Vectors get 16-byte stack slots that are 16-byte aligned.
  CCIfType<[v16i8, v8i16, v4i32, v2i64, v8f16, v4f32, v2f64], CCAssignToStack<16, 16>>,

  // 256-bit vectors get 32-byte stack slots that are 32-byte aligned.
  CCIfType<[v32i8, v16i16, v8i32, v4i64, v16f16, v8f32, v4f64],
           CCAssignToStack<32, 32>>,

  // 512-bit vectors get 64-byte stack slots that are 64-byte aligned.
  CCIfType<[v64i8, v32i16, v16i32, v8i64, v32f16, v16f32, v8f64],
           CCAssignToStack<64, 64>>
]>;
```