From e14001381a8a8ee300b35b072541fb488e548e1a Mon Sep 17 00:00:00 2001
From: Rot127
Date: Thu, 24 Aug 2023 18:30:31 -0500
Subject: [PATCH] Make AArch64/ARM analysis and asm plugins Capstone v6 compatible.

This commit refactors the AArch64 plugin, and partially the ARM plugin, to make them Capstone v6 compatible. Capstone v6 comes with big API changes, so several adjustments had to be made. Because we still need to be compatible with Capstone v4 and v5, many version guards were added as well.

Overview of changes done:

**ARM**

- Instruction aliases were introduced. This leads to different decoding and analysis paths being taken for certain instructions. Some aliases now get their IL code generated like the real instruction (no special handling needed anymore). This change is responsible for many of the changes you'll encounter.
- The operand details of each instruction are now always those of the real instruction, also for aliases. For example, if `MOV <Wd>, #<imm>` is an alias for `ORR <Wd>, WZR, #<imm>`, the details provided by CS hold all three operands of ORR. Before, they held only the two of MOV. A sketch of how this is consumed is shown below.
- Several bugs in variable and argument generation were fixed. In particular, the default variable width for ARM Thumb was changed to 32 bit instead of 16 bit.

**AArch64/ARM64**

The changes listed above for ARM also apply to AArch64. Additionally:

- Capstone v6 renamed ARM64 to AArch64 everywhere. To stay compatible with Capstone v4/v5, AArch64 names must be wrapped into macros which resolve the name depending on the CS version used.
- Capstone v6 is now more consistent about real and alias register names. From now on we use the register aliases by default.

**List of squashed commit messages:**

[AArch64 CS v6 BEGIN]
Change subproject config to use cs-auto-sync-aarch64 branch
Replace ARM64 with version-sensitive macros.
Exclude alias if CS version >= 6
Update access to writeback member
Exclude instr alias from inclusion
Update memory operand printing to json.
Enable real instr. detail only for AArch64
Set correct arch name in meson.build for CS
Fix U/SBFM instructions and their alias.
Mark parameters with RZ_OUT/RZ_BORROW
Optimize register extension to skip some, if the width already matches.
Adapt width and lsb of U/SBFM alias instructions (ImmR and ImmS are from U/SBFM).
Fix tests with correct semantics but bad syntax
Pass alias MOV instructions to mov()
Handle CSET and CSETM alias
Fix lsl, lsr and asr by handling them as alias.
Fix mov alias.
Handle TST alias
Fix CNEG, CINV alias
Fix bfi and bfxil alias.
Fix sign extensions.
Fix compare instructions.
Fix NEG, NGC, NGCS, NEGS, MVN
Fix CINC
Fix multiply instructions.
Fix ROR
Run clang-format
Handle CMP for ESIL
Handle new position of memory displacements of post-index operands.
Fix post-index operations.
Add missing writeback checks for post- and pre-index
Handle UBFM and SBFM alias
Handle BFM alias
Handle CMP, CSET and CINC alias
Update meson file for cs-aarch64 branch
Fix asm tests. Use reg alias now.
Fix condition confusion and incorrect operand usage.
Fix plf test.
Run clang-format
Use register alias in tests
Add support for fp and lr reg alias assembly.
Use reg alias in test
Rename cond translate functions r2 -> rz
Fix condition checks which assume 0 == invalid.
Fix issues introduced by rebase
Set CS commit to current next branch.
Rename ARM64 -> AArch64
Add missing source file to meson.build
Remove DisassemblerExtension.c file for CS v5
Update to newest capstone next branch
Bump up CS version
REVERT ME: Get Capstone v4/v5 via git clone until new tars are released.
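For illustration, a minimal sketch (not part of the patch itself) of how the operand-detail change described above is consumed. `CS_aarch64_` and `CS_AARCH64()` are the compatibility macros added by this patch; the alias enum value `AArch64_INS_ALIAS_MOV` and the exact operand layout are assumptions taken from the Capstone v6 headers:

```c
#if CS_NEXT_VERSION >= 6
/* MOV <Wd>, #<imm> is decoded as its real instruction ORR <Wd>, WZR, #<imm>,
 * with only insn->alias_id marking the alias. The detail therefore holds all
 * three ORR operands, so the immediate sits at index 2 (in v4/v5 it was the
 * second of MOV's two operands, i.e. index 1). */
static ut64 mov_alias_imm(const cs_insn *insn) {
	if (insn->alias_id == AArch64_INS_ALIAS_MOV &&
		insn->detail->CS_aarch64_.operands[2].type == CS_AARCH64(_OP_IMM)) {
		return (ut64)insn->detail->CS_aarch64_.operands[2].imm;
	}
	return 0;
}
#endif
```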
Wrap setting of CS_DETAIL_REAL into CS version check
Add maybe-uninitialized to Capstone C args.
Fix CS pre v6 build by adding guards.
Use reg alias now printed by default by CS.
Bump CS version to most recent next.
Fix build errors due to stricter alias handling in ARM.
Fix RzIL test failures introduced by the alias introduction to ARM.
Fix ESIL bugs introduced with the ARM alias introduction.
- stackptr hasn't been set for POP and PUSH
Add support again for Thumb1 pop/push
Handle PUSHW and POPW alias
Update test case
Add more POP and PUSH aliases and enrich detail for other versions of them.
Fix incorrect mem access width guesses for ARM Thumb.
Set POP return info if it writes to PC
Fix tests about default var size and POP mem write direction.
Bump CS version to newest next.
Fix incorrect tests.
- TriCore: Functions were in ro section.
- Default arg width in ARM Thumb is 32 bit.
Revert check for a set stackptr. stackptr is used in different ways:
1. Saves the offset from the stack frame base.
2. Is interpreted as something else for x86, and I cannot find out what in a reasonable time.
Hence we cannot use it here consistently.
Remove check for non-existing ARM_GRP_RET in CS v5
Fix incorrect stack offsets of variables. `push` instructions for which the second register was the FP reset the stackptr variable to 0. This led to wrong bp offsets in the variable names; in this case it was +0xc.
Bump CS version.
Add copy of meta-programming macros for capstone-sys build.
Update capstone-next.wrap
Use bracket-less meta-programming macro to fix Windows build warnings.
Update wrap files for Capstone with branch names
Add new meta-programming macro
Add workaround for MSVC pre-processor bug.
---
 librz/analysis/analysis.c | 17 +
 librz/analysis/arch/arm/arm_accessors64.h | 31 +-
 librz/analysis/arch/arm/arm_cs.h | 3 +-
 librz/analysis/arch/arm/arm_esil32.c | 4 +
 librz/analysis/arch/arm/arm_esil64.c | 627 +++--
 librz/analysis/arch/arm/arm_il32.c | 49 +-
 librz/analysis/arch/arm/arm_il64.c | 2202 ++++++++++-------
 librz/analysis/fcn.c | 2 +-
 librz/analysis/op.c | 3 +
 librz/analysis/p/analysis_arm_cs.c | 827 ++++---
 librz/analysis/var.c | 6 +-
 librz/asm/arch/arm/aarch64_meta_macros.h | 69 +
 librz/asm/arch/arm/armass64.c | 23 +-
 librz/asm/arch/arm/asm-arm.h | 1 +
 librz/asm/p/asm_arm_cs.c | 2 +-
 librz/core/canalysis.c | 2 +-
 librz/core/cmd/cmd_search.c | 2 +-
 librz/include/rz_analysis.h | 5 +
 meson.build | 7 +-
 subprojects/capstone-next.wrap | 2 +-
 subprojects/capstone-v4.wrap | 7 +-
 subprojects/capstone-v5.wrap | 7 +-
 .../capstone-auto-sync-aarch64/meson.build | 98 +
 .../packagefiles/capstone-next/meson.build | 7 +-
 test/db/analysis/arm | 51 +-
 test/db/analysis/arm64 | 8 +-
 test/db/analysis/tricore | 2 -
 test/db/analysis/vars | 5 +-
 test/db/archos/darwin-arm64/dbg | 8 +-
 test/db/asm/arm_16 | 4 +-
 test/db/asm/arm_32 | 2 +-
 test/db/asm/arm_64 | 44 +-
 test/db/cmd/cmd_plf | 144 +-
 test/db/cmd/dwarf | 20 +-
 test/db/cmd/types | 32 +-
 test/db/formats/mach0/fatmach0 | 4 +-
 test/db/formats/mach0/objc | 6 +-
 37 files changed, 2670 insertions(+), 1663 deletions(-)
 create mode 100644 librz/asm/arch/arm/aarch64_meta_macros.h
 create mode 100644 subprojects/packagefiles/capstone-auto-sync-aarch64/meson.build
diff --git a/librz/analysis/analysis.c b/librz/analysis/analysis.c index 5d0d18e4ff9..51e79ffe060 100644 --- a/librz/analysis/analysis.c +++ b/librz/analysis/analysis.c @@ -13,6 +13,23 @@ RZ_LIB_VERSION(rz_analysis); static RzAnalysisPlugin *analysis_static_plugins[] = { RZ_ANALYSIS_STATIC_PLUGINS }; +/** + * \brief Returns the default
byte width of memory access operations. + * The size is just a best guess. + * + * \param analysis The current RzAnalysis in use. + * + * \return The default width of a memory access in bytes. + */ +RZ_API ut32 rz_analysis_guessed_mem_access_width(RZ_NONNULL const RzAnalysis *analysis) { + if (analysis->bits == 16 && RZ_STR_EQ(analysis->cur->arch, "arm")) { + // Thumb access is usually 4 bytes of memory by default. + return 4; + } + // Best guess for variable size. + return analysis->bits / 8; +} + RZ_API void rz_analysis_set_limits(RzAnalysis *analysis, ut64 from, ut64 to) { free(analysis->limit); analysis->limit = RZ_NEW0(RzAnalysisRange); diff --git a/librz/analysis/arch/arm/arm_accessors64.h b/librz/analysis/arch/arm/arm_accessors64.h index 213ebc70dd2..4d8b765679d 100644 --- a/librz/analysis/arch/arm/arm_accessors64.h +++ b/librz/analysis/arch/arm/arm_accessors64.h @@ -9,21 +9,30 @@ #include -#define IMM64(x) (ut64)(insn->detail->arm64.operands[x].imm) -#define INSOP64(x) insn->detail->arm64.operands[x] +#define IMM64(x) (ut64)(insn->detail->CS_aarch64_.operands[x].imm) +#define INSOP64(x) insn->detail->CS_aarch64_.operands[x] -#define REGID64(x) insn->detail->arm64.operands[x].reg -#define REGBASE64(x) insn->detail->arm64.operands[x].mem.base +#define REGID64(x) insn->detail->CS_aarch64_.operands[x].reg +#define REGBASE64(x) insn->detail->CS_aarch64_.operands[x].mem.base // s/index/base|reg/ -#define HASMEMINDEX64(x) (insn->detail->arm64.operands[x].mem.index != ARM64_REG_INVALID) -#define MEMDISP64(x) (ut64) insn->detail->arm64.operands[x].mem.disp -#define ISIMM64(x) (insn->detail->arm64.operands[x].type == ARM64_OP_IMM) -#define ISREG64(x) (insn->detail->arm64.operands[x].type == ARM64_OP_REG) -#define ISMEM64(x) (insn->detail->arm64.operands[x].type == ARM64_OP_MEM) +#define HASMEMINDEX64(x) (insn->detail->CS_aarch64_.operands[x].mem.index != CS_AARCH64(_REG_INVALID)) +#define MEMDISP64(x) (ut64) insn->detail->CS_aarch64_.operands[x].mem.disp +#define ISIMM64(x) (insn->detail->CS_aarch64_.operands[x].type == CS_AARCH64(_OP_IMM)) +#define ISREG64(x) (insn->detail->CS_aarch64_.operands[x].type == CS_AARCH64(_OP_REG)) +#define ISMEM64(x) (insn->detail->CS_aarch64_.operands[x].type == CS_AARCH64(_OP_MEM)) -#define LSHIFT2_64(x) insn->detail->arm64.operands[x].shift.value -#define OPCOUNT64() insn->detail->arm64.op_count +#define LSHIFT2_64(x) insn->detail->CS_aarch64_.operands[x].shift.value +#define OPCOUNT64() insn->detail->CS_aarch64_.op_count +#if CS_NEXT_VERSION < 6 #define ISWRITEBACK64() (insn->detail->arm64.writeback == true) +#else +#define ISWRITEBACK64() (insn->detail->writeback == true) +#endif +#if CS_NEXT_VERSION < 6 #define ISPREINDEX64() (((OPCOUNT64() == 2) && (ISMEM64(1)) && (ISWRITEBACK64())) || ((OPCOUNT64() == 3) && (ISMEM64(2)) && (ISWRITEBACK64()))) #define ISPOSTINDEX64() (((OPCOUNT64() == 3) && (ISIMM64(2)) && (ISWRITEBACK64())) || ((OPCOUNT64() == 4) && (ISIMM64(3)) && (ISWRITEBACK64()))) +#else +#define ISPREINDEX64() (!insn->detail->CS_aarch64_.post_index && ISWRITEBACK64()) +#define ISPOSTINDEX64() (insn->detail->CS_aarch64_.post_index && ISWRITEBACK64()) +#endif \ No newline at end of file diff --git a/librz/analysis/arch/arm/arm_cs.h b/librz/analysis/arch/arm/arm_cs.h index 25bd301ffac..44159459fa0 100644 --- a/librz/analysis/arch/arm/arm_cs.h +++ b/librz/analysis/arch/arm/arm_cs.h @@ -6,6 +6,7 @@ #include #include +#include "../../asm/arch/arm/aarch64_meta_macros.h" RZ_IPI int rz_arm_cs_analysis_op_32_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 addr, 
const ut8 *buf, int len, csh *handle, cs_insn *insn, bool thumb); RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 addr, const ut8 *buf, int len, csh *handle, cs_insn *insn); @@ -18,7 +19,7 @@ RZ_IPI const char *rz_arm32_cs_esil_prefix_cond(RzAnalysisOp *op, ARMCC_CondCode #else RZ_IPI const char *rz_arm32_cs_esil_prefix_cond(RzAnalysisOp *op, arm_cc cond_type); #endif -RZ_IPI const char *rz_arm64_cs_esil_prefix_cond(RzAnalysisOp *op, arm64_cc cond_type); +RZ_IPI const char *rz_arm64_cs_esil_prefix_cond(RzAnalysisOp *op, CS_aarch64_cc() cond_type); RZ_IPI RzILOpEffect *rz_arm_cs_32_il(csh *handle, cs_insn *insn, bool thumb); RZ_IPI RzAnalysisILConfig *rz_arm_cs_32_il_config(bool big_endian); diff --git a/librz/analysis/arch/arm/arm_esil32.c b/librz/analysis/arch/arm/arm_esil32.c index c11cb3958eb..8313a3ab221 100644 --- a/librz/analysis/arch/arm/arm_esil32.c +++ b/librz/analysis/arch/arm/arm_esil32.c @@ -272,7 +272,11 @@ RZ_IPI int rz_arm_cs_analysis_op_32_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a case ARM_INS_BKPT: rz_strbuf_setf(&op->esil, "%d,%d,TRAP", IMM(0), IMM(0)); break; +#if CS_NEXT_VERSION < 6 case ARM_INS_NOP: +#else + case ARM_INS_HINT: +#endif rz_strbuf_setf(&op->esil, ","); break; case ARM_INS_BL: diff --git a/librz/analysis/arch/arm/arm_esil64.c b/librz/analysis/arch/arm/arm_esil64.c index eea43f5bdba..4760b64177b 100644 --- a/librz/analysis/arch/arm/arm_esil64.c +++ b/librz/analysis/arch/arm/arm_esil64.c @@ -7,75 +7,75 @@ #include "arm_cs.h" #include "arm_accessors64.h" -#define REG64(x) rz_str_get_null(cs_reg_name(*handle, insn->detail->arm64.operands[x].reg)) -#define MEMBASE64(x) rz_str_get_null(cs_reg_name(*handle, insn->detail->arm64.operands[x].mem.base)) -#define MEMINDEX64(x) rz_str_get_null(cs_reg_name(*handle, insn->detail->arm64.operands[x].mem.index)) +#define REG64(x) rz_str_get_null(cs_reg_name(*handle, insn->detail->CS_aarch64_.operands[x].reg)) +#define MEMBASE64(x) rz_str_get_null(cs_reg_name(*handle, insn->detail->CS_aarch64_.operands[x].mem.base)) +#define MEMINDEX64(x) rz_str_get_null(cs_reg_name(*handle, insn->detail->CS_aarch64_.operands[x].mem.index)) -RZ_IPI const char *rz_arm64_cs_esil_prefix_cond(RzAnalysisOp *op, arm64_cc cond_type) { +RZ_IPI const char *rz_arm64_cs_esil_prefix_cond(RzAnalysisOp *op, CS_aarch64_cc() cond_type) { const char *close_cond[2]; close_cond[0] = ""; close_cond[1] = ",}"; int close_type = 0; switch (cond_type) { - case ARM64_CC_EQ: + case CS_AARCH64CC(_EQ): close_type = 1; rz_strbuf_setf(&op->esil, "zf,?{,"); break; - case ARM64_CC_NE: + case CS_AARCH64CC(_NE): close_type = 1; rz_strbuf_setf(&op->esil, "zf,!,?{,"); break; - case ARM64_CC_HS: + case CS_AARCH64CC(_HS): close_type = 1; rz_strbuf_setf(&op->esil, "cf,?{,"); break; - case ARM64_CC_LO: + case CS_AARCH64CC(_LO): close_type = 1; rz_strbuf_setf(&op->esil, "cf,!,?{,"); break; - case ARM64_CC_MI: + case CS_AARCH64CC(_MI): close_type = 1; rz_strbuf_setf(&op->esil, "nf,?{,"); break; - case ARM64_CC_PL: + case CS_AARCH64CC(_PL): close_type = 1; rz_strbuf_setf(&op->esil, "nf,!,?{,"); break; - case ARM64_CC_VS: + case CS_AARCH64CC(_VS): close_type = 1; rz_strbuf_setf(&op->esil, "vf,?{,"); break; - case ARM64_CC_VC: + case CS_AARCH64CC(_VC): close_type = 1; rz_strbuf_setf(&op->esil, "vf,!,?{,"); break; - case ARM64_CC_HI: + case CS_AARCH64CC(_HI): close_type = 1; rz_strbuf_setf(&op->esil, "cf,zf,!,&,?{,"); break; - case ARM64_CC_LS: + case CS_AARCH64CC(_LS): close_type = 1; rz_strbuf_setf(&op->esil, "cf,!,zf,|,?{,"); break; - case 
ARM64_CC_GE: + case CS_AARCH64CC(_GE): close_type = 1; rz_strbuf_setf(&op->esil, "nf,vf,^,!,?{,"); break; - case ARM64_CC_LT: + case CS_AARCH64CC(_LT): close_type = 1; rz_strbuf_setf(&op->esil, "nf,vf,^,?{,"); break; - case ARM64_CC_GT: + case CS_AARCH64CC(_GT): // zf == 0 && nf == vf close_type = 1; rz_strbuf_setf(&op->esil, "zf,!,nf,vf,^,!,&,?{,"); break; - case ARM64_CC_LE: + case CS_AARCH64CC(_LE): // zf == 1 || nf != vf close_type = 1; rz_strbuf_setf(&op->esil, "zf,nf,vf,^,|,?{,"); break; - case ARM64_CC_AL: + case CS_AARCH64CC(_AL): // always executed break; default: @@ -86,37 +86,37 @@ RZ_IPI const char *rz_arm64_cs_esil_prefix_cond(RzAnalysisOp *op, arm64_cc cond_ static int arm64_reg_width(int reg) { switch (reg) { - case ARM64_REG_W0: - case ARM64_REG_W1: - case ARM64_REG_W2: - case ARM64_REG_W3: - case ARM64_REG_W4: - case ARM64_REG_W5: - case ARM64_REG_W6: - case ARM64_REG_W7: - case ARM64_REG_W8: - case ARM64_REG_W9: - case ARM64_REG_W10: - case ARM64_REG_W11: - case ARM64_REG_W12: - case ARM64_REG_W13: - case ARM64_REG_W14: - case ARM64_REG_W15: - case ARM64_REG_W16: - case ARM64_REG_W17: - case ARM64_REG_W18: - case ARM64_REG_W19: - case ARM64_REG_W20: - case ARM64_REG_W21: - case ARM64_REG_W22: - case ARM64_REG_W23: - case ARM64_REG_W24: - case ARM64_REG_W25: - case ARM64_REG_W26: - case ARM64_REG_W27: - case ARM64_REG_W28: - case ARM64_REG_W29: - case ARM64_REG_W30: + case CS_AARCH64(_REG_W0): + case CS_AARCH64(_REG_W1): + case CS_AARCH64(_REG_W2): + case CS_AARCH64(_REG_W3): + case CS_AARCH64(_REG_W4): + case CS_AARCH64(_REG_W5): + case CS_AARCH64(_REG_W6): + case CS_AARCH64(_REG_W7): + case CS_AARCH64(_REG_W8): + case CS_AARCH64(_REG_W9): + case CS_AARCH64(_REG_W10): + case CS_AARCH64(_REG_W11): + case CS_AARCH64(_REG_W12): + case CS_AARCH64(_REG_W13): + case CS_AARCH64(_REG_W14): + case CS_AARCH64(_REG_W15): + case CS_AARCH64(_REG_W16): + case CS_AARCH64(_REG_W17): + case CS_AARCH64(_REG_W18): + case CS_AARCH64(_REG_W19): + case CS_AARCH64(_REG_W20): + case CS_AARCH64(_REG_W21): + case CS_AARCH64(_REG_W22): + case CS_AARCH64(_REG_W23): + case CS_AARCH64(_REG_W24): + case CS_AARCH64(_REG_W25): + case CS_AARCH64(_REG_W26): + case CS_AARCH64(_REG_W27): + case CS_AARCH64(_REG_W28): + case CS_AARCH64(_REG_W29): + case CS_AARCH64(_REG_W30): return 32; break; default: @@ -125,20 +125,20 @@ static int arm64_reg_width(int reg) { return 64; } -static int decode_sign_ext(arm64_extender extender) { +static int decode_sign_ext(CS_aarch64_extender() extender) { switch (extender) { - case ARM64_EXT_UXTB: - case ARM64_EXT_UXTH: - case ARM64_EXT_UXTW: - case ARM64_EXT_UXTX: + case CS_AARCH64(_EXT_UXTB): + case CS_AARCH64(_EXT_UXTH): + case CS_AARCH64(_EXT_UXTW): + case CS_AARCH64(_EXT_UXTX): return 0; // nothing needs to be done for unsigned - case ARM64_EXT_SXTB: + case CS_AARCH64(_EXT_SXTB): return 8; - case ARM64_EXT_SXTH: + case CS_AARCH64(_EXT_SXTH): return 16; - case ARM64_EXT_SXTW: + case CS_AARCH64(_EXT_SXTW): return 32; - case ARM64_EXT_SXTX: + case CS_AARCH64(_EXT_SXTX): return 64; default: break; @@ -147,24 +147,24 @@ static int decode_sign_ext(arm64_extender extender) { return 0; } -#define EXT64(x) decode_sign_ext(insn->detail->arm64.operands[x].ext) +#define EXT64(x) decode_sign_ext(insn->detail->CS_aarch64_.operands[x].ext) -static const char *decode_shift_64(arm64_shifter shift) { +static const char *decode_shift_64(CS_aarch64_shifter() shift) { const char *E_OP_SR = ">>"; const char *E_OP_SL = "<<"; const char *E_OP_RR = ">>>"; const char *E_OP_VOID = ""; switch (shift) 
{ - case ARM64_SFT_ASR: - case ARM64_SFT_LSR: + case CS_AARCH64(_SFT_ASR): + case CS_AARCH64(_SFT_LSR): return E_OP_SR; - case ARM64_SFT_LSL: - case ARM64_SFT_MSL: + case CS_AARCH64(_SFT_LSL): + case CS_AARCH64(_SFT_MSL): return E_OP_SL; - case ARM64_SFT_ROR: + case CS_AARCH64(_SFT_ROR): return E_OP_RR; default: @@ -173,22 +173,22 @@ static const char *decode_shift_64(arm64_shifter shift) { return E_OP_VOID; } -#define DECODE_SHIFT64(x) decode_shift_64(insn->detail->arm64.operands[x].shift.type) +#define DECODE_SHIFT64(x) decode_shift_64(insn->detail->CS_aarch64_.operands[x].shift.type) static int regsize64(cs_insn *insn, int n) { - unsigned int reg = insn->detail->arm64.operands[n].reg; - if ((reg >= ARM64_REG_S0 && reg <= ARM64_REG_S31) || - (reg >= ARM64_REG_W0 && reg <= ARM64_REG_W30) || - reg == ARM64_REG_WZR) { + unsigned int reg = insn->detail->CS_aarch64_.operands[n].reg; + if ((reg >= CS_AARCH64(_REG_S0) && reg <= CS_AARCH64(_REG_S31)) || + (reg >= CS_AARCH64(_REG_W0) && reg <= CS_AARCH64(_REG_W30)) || + reg == CS_AARCH64(_REG_WZR)) { return 4; } - if (reg >= ARM64_REG_B0 && reg <= ARM64_REG_B31) { + if (reg >= CS_AARCH64(_REG_B0) && reg <= CS_AARCH64(_REG_B31)) { return 1; } - if (reg >= ARM64_REG_H0 && reg <= ARM64_REG_H31) { + if (reg >= CS_AARCH64(_REG_H0) && reg <= CS_AARCH64(_REG_H31)) { return 2; } - if (reg >= ARM64_REG_Q0 && reg <= ARM64_REG_Q31) { + if (reg >= CS_AARCH64(_REG_Q0) && reg <= CS_AARCH64(_REG_Q31)) { return 16; } return 8; @@ -210,7 +210,7 @@ static void shifted_reg64_append(RzStrBuf *sb, csh *handle, cs_insn *insn, int n } if (LSHIFT2_64(n)) { - if (insn->detail->arm64.operands[n].shift.type != ARM64_SFT_ASR) { + if (insn->detail->CS_aarch64_.operands[n].shift.type != CS_AARCH64(_SFT_ASR)) { if (signext) { rz_strbuf_appendf(sb, "%d,%d,%s,~,%s", LSHIFT2_64(n), signext, rn, DECODE_SHIFT64(n)); } else { @@ -272,16 +272,132 @@ static void arm64math(RzAnalysis *a, RzAnalysisOp *op, ut64 addr, const ut8 *buf } } +#if CS_NEXT_VERSION >= 6 +static void cmp(RzAnalysisOp *op, csh *handle, cs_insn *insn) { + // update esil, cpu flags + int bits = arm64_reg_width(REGID64(1)); + if (ISIMM64(2)) { + rz_strbuf_setf(&op->esil, "%" PFMT64d ",%s,==,$z,zf,:=,%d,$s,nf,:=,%d,$b,!,cf,:=,%d,$o,vf,:=", IMM64(2) << LSHIFT2_64(2), REG64(1), bits - 1, bits, bits - 1); + } else { + // cmp w10, w11 + SHIFTED_REG64_APPEND(&op->esil, 2); + rz_strbuf_appendf(&op->esil, ",%s,==,$z,zf,:=,%d,$s,nf,:=,%d,$b,!,cf,:=,%d,$o,vf,:=", REG64(1), bits - 1, bits, bits - 1); + } +} + +static void bfm(RzAnalysisOp *op, csh *handle, cs_insn *insn) { + ut64 lsb = IMM64(2); + ut64 width = IMM64(3); + switch (insn->alias_id) { + default: + return; + case AArch64_INS_ALIAS_BFI: // bfi w8, w8, 2, 1 + width += 1; + // TODO Mod depends on (sf && N) bits + lsb = -lsb % 32; + break; + case AArch64_INS_ALIAS_BFXIL: + width = width - lsb + 1; + break; + } + ut64 mask = rz_num_bitmask((ut8)width); + ut64 shift = lsb; + ut64 notmask = ~(mask << shift); + // notmask,dst,&,lsb,mask,src,&,<<,|,dst,= + rz_strbuf_setf(&op->esil, "%" PFMT64u ",%s,&,%" PFMT64u ",%" PFMT64u ",%s,&,<<,|,%s,=", + notmask, REG64(0), shift, mask, REG64(1), REG64(0)); +} + +static void subfm(RzAnalysisOp *op, csh *handle, cs_insn *insn) { + ut64 lsb = IMM64(2); + ut64 width = IMM64(3); + if (insn->alias_id == AArch64_INS_ALIAS_SBFIZ) { + width += 1; + lsb = -lsb % 64; + rz_strbuf_appendf(&op->esil, "%" PFMT64d ",%" PFMT64d ",%s,%" PFMT64u ",&,~,<<,%s,=", + lsb, IMM64(3), REG64(1), rz_num_bitmask((ut8)width), REG64(0)); + } else if (insn->alias_id 
== AArch64_INS_ALIAS_UBFIZ) { + width += 1; + lsb = -lsb % 64; + rz_strbuf_appendf(&op->esil, "%" PFMT64d ",%s,%" PFMT64u ",&,<<,%s,=", + lsb, REG64(1), rz_num_bitmask((ut8)width), REG64(0)); + } else if (insn->alias_id == AArch64_INS_ALIAS_SBFX) { + width = width - lsb + 1; + rz_strbuf_appendf(&op->esil, "%" PFMT64d ",%" PFMT64d ",%s,%" PFMT64d ",%" PFMT64u ",<<,&,>>,~,%s,=", + IMM64(3), IMM64(2), REG64(1), IMM64(2), rz_num_bitmask((ut8)IMM64(3)), REG64(0)); + } else if (insn->alias_id == AArch64_INS_ALIAS_UBFX) { + width = width - lsb + 1; + rz_strbuf_appendf(&op->esil, "%" PFMT64d ",%s,%" PFMT64d ",%" PFMT64u ",<<,&,>>,%s,=", + lsb, REG64(1), lsb, rz_num_bitmask((ut8)width), REG64(0)); + } else if (insn->alias_id == AArch64_INS_ALIAS_LSL) { + // imms != 0x1f => mod 32 + // imms != 0x3f => mod 64 + ut32 m = IMM64(3) != 0x1f ? 32 : 64; + const char *r0 = REG64(0); + const char *r1 = REG64(1); + const int size = REGSIZE64(0) * 8; + + if (ISREG64(2)) { + if (LSHIFT2_64(2) || EXT64(2)) { + SHIFTED_REG64_APPEND(&op->esil, 2); + rz_strbuf_appendf(&op->esil, ",%d,%%,%s,<<,%s,=", size, r1, r0); + } else { + const char *r2 = REG64(2); + rz_strbuf_setf(&op->esil, "%d,%s,%%,%s,<<,%s,=", size, r2, r1, r0); + } + } else { + ut64 i2 = IMM64(2) % m; + rz_strbuf_setf(&op->esil, "%" PFMT64d ",%s,<<,%s,=", i2 % (ut64)size, r1, r0); + } + } else if (insn->alias_id == AArch64_INS_ALIAS_LSR) { + const char *r0 = REG64(0); + const char *r1 = REG64(1); + const int size = REGSIZE64(0) * 8; + + if (ISREG64(2)) { + if (LSHIFT2_64(2) || EXT64(2)) { + SHIFTED_REG64_APPEND(&op->esil, 2); + rz_strbuf_appendf(&op->esil, ",%d,%%,%s,>>,%s,=", size, r1, r0); + } else { + const char *r2 = REG64(2); + rz_strbuf_setf(&op->esil, "%d,%s,%%,%s,>>,%s,=", size, r2, r1, r0); + } + } else { + ut64 i2 = IMM64(2); + rz_strbuf_setf(&op->esil, "%" PFMT64d ",%s,>>,%s,=", i2 % (ut64)size, r1, r0); + } + } else if (insn->alias_id == AArch64_INS_ALIAS_ASR) { + const char *r0 = REG64(0); + const char *r1 = REG64(1); + const int size = REGSIZE64(0) * 8; + + if (ISREG64(2)) { + if (LSHIFT2_64(2)) { + SHIFTED_REG64_APPEND(&op->esil, 2); + rz_strbuf_appendf(&op->esil, ",%d,%%,%s,>>>>,%s,=", size, r1, r0); + } else { + const char *r2 = REG64(2); + rz_strbuf_setf(&op->esil, "%d,%s,%%,%s,>>>>,%s,=", size, r2, r1, r0); + } + } else { + ut64 i2 = IMM64(2); + rz_strbuf_setf(&op->esil, "%" PFMT64d ",%s,>>>>,%s,=", i2 % (ut64)size, r1, r0); + } + } + return; +} +#endif + RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 addr, const ut8 *buf, int len, csh *handle, cs_insn *insn) { const char *postfix = NULL; rz_strbuf_init(&op->esil); rz_strbuf_set(&op->esil, ""); - postfix = rz_arm64_cs_esil_prefix_cond(op, insn->detail->arm64.cc); + postfix = rz_arm64_cs_esil_prefix_cond(op, insn->detail->CS_aarch64_.cc); switch (insn->id) { - case ARM64_INS_REV: + case CS_AARCH64(_INS_REV): // these REV* instructions were almost right, except in the cases like rev x0, x0 // where the use of |= caused copies of the value to be erroneously present { @@ -322,7 +438,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_REV32: { + case CS_AARCH64(_INS_REV32): { const char *r0 = REG64(0); const char *r1 = REG64(1); rz_strbuf_setf(&op->esil, @@ -333,7 +449,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a r1, r1, r1, r1, r0); break; } - case ARM64_INS_REV16: { + case CS_AARCH64(_INS_REV16): { const char *r0 = REG64(0); const char *r1 = REG64(1); 
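// REV16 swaps the bytes within each 16-bit halfword of the source register.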
rz_strbuf_setf(&op->esil, @@ -342,69 +458,71 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a r1, r1, r0); break; } - case ARM64_INS_ADR: + case CS_AARCH64(_INS_ADR): // TODO: must be 21bit signed rz_strbuf_setf(&op->esil, "%" PFMT64d ",%s,=", IMM64(1), REG64(0)); break; - case ARM64_INS_SMADDL: { + case CS_AARCH64(_INS_SMADDL): { int size = REGSIZE64(1) * 8; rz_strbuf_setf(&op->esil, "%d,%s,~,%d,%s,~,*,%s,+,%s,=", size, REG64(2), size, REG64(1), REG64(3), REG64(0)); break; } - case ARM64_INS_UMADDL: - case ARM64_INS_FMADD: - case ARM64_INS_MADD: + case CS_AARCH64(_INS_UMADDL): + case CS_AARCH64(_INS_FMADD): + case CS_AARCH64(_INS_MADD): rz_strbuf_setf(&op->esil, "%s,%s,*,%s,+,%s,=", REG64(2), REG64(1), REG64(3), REG64(0)); break; - case ARM64_INS_MSUB: + case CS_AARCH64(_INS_MSUB): rz_strbuf_setf(&op->esil, "%s,%s,*,%s,-,%s,=", REG64(2), REG64(1), REG64(3), REG64(0)); break; - case ARM64_INS_MNEG: +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_MNEG): rz_strbuf_setf(&op->esil, "%s,%s,*,0,-,%s,=", REG64(2), REG64(1), REG64(0)); break; - case ARM64_INS_ADD: - case ARM64_INS_ADC: // Add with carry. - // case ARM64_INS_ADCS: // Add with carry. +#endif + case CS_AARCH64(_INS_ADD): + case CS_AARCH64(_INS_ADC): // Add with carry. + // case CS_AARCH64(_INS_ADCS): // Add with carry. OPCALL("+"); break; - case ARM64_INS_SUB: + case CS_AARCH64(_INS_SUB): OPCALL("-"); break; - case ARM64_INS_SBC: + case CS_AARCH64(_INS_SBC): // TODO have to check this more, VEX does not work rz_strbuf_setf(&op->esil, "%s,cf,+,%s,-,%s,=", REG64(2), REG64(1), REG64(0)); break; - case ARM64_INS_SMULL: { + case CS_AARCH64(_INS_SMULL): { int size = REGSIZE64(1) * 8; rz_strbuf_setf(&op->esil, "%d,%s,~,%d,%s,~,*,%s,=", size, REG64(2), size, REG64(1), REG64(0)); break; } - case ARM64_INS_MUL: + case CS_AARCH64(_INS_MUL): OPCALL("*"); break; - case ARM64_INS_AND: + case CS_AARCH64(_INS_AND): OPCALL("&"); break; - case ARM64_INS_ORR: + case CS_AARCH64(_INS_ORR): OPCALL("|"); break; - case ARM64_INS_EOR: + case CS_AARCH64(_INS_EOR): OPCALL("^"); break; - case ARM64_INS_ORN: + case CS_AARCH64(_INS_ORN): OPCALL_NEG("|"); break; - case ARM64_INS_EON: + case CS_AARCH64(_INS_EON): OPCALL_NEG("^"); break; - case ARM64_INS_LSR: { + case CS_AARCH64(_INS_LSR): { const char *r0 = REG64(0); const char *r1 = REG64(1); const int size = REGSIZE64(0) * 8; @@ -423,7 +541,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_LSL: { + case CS_AARCH64(_INS_LSL): { const char *r0 = REG64(0); const char *r1 = REG64(1); const int size = REGSIZE64(0) * 8; @@ -442,15 +560,18 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_ROR: + case CS_AARCH64(_INS_ROR): OPCALL(">>>"); break; - case ARM64_INS_NOP: + case CS_AARCH64(_INS_HINT): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_NOP): rz_strbuf_setf(&op->esil, ","); break; - case ARM64_INS_FDIV: +#endif + case CS_AARCH64(_INS_FDIV): break; - case ARM64_INS_SDIV: { + case CS_AARCH64(_INS_SDIV): { /* TODO: support WZR XZR to specify 32, 64bit op */ int size = REGSIZE64(1) * 8; if (ISREG64(2)) { @@ -460,7 +581,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_UDIV: + case CS_AARCH64(_INS_UDIV): /* TODO: support WZR XZR to specify 32, 64bit op */ if ISREG64 (2) { rz_strbuf_setf(&op->esil, "%s,%s,/,%s,=", REG64(2), REG64(1), REG64(0)); @@ -468,20 +589,20 @@ RZ_IPI int 
rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a rz_strbuf_setf(&op->esil, "%s,%s,/=", REG64(1), REG64(0)); } break; - case ARM64_INS_BR: + case CS_AARCH64(_INS_BR): rz_strbuf_setf(&op->esil, "%s,pc,=", REG64(0)); break; - case ARM64_INS_B: + case CS_AARCH64(_INS_B): /* capstone precompute resulting address, using PC + IMM */ rz_strbuf_appendf(&op->esil, "%" PFMT64d ",pc,=", IMM64(0)); break; - case ARM64_INS_BL: + case CS_AARCH64(_INS_BL): rz_strbuf_setf(&op->esil, "pc,lr,=,%" PFMT64d ",pc,=", IMM64(0)); break; - case ARM64_INS_BLR: + case CS_AARCH64(_INS_BLR): rz_strbuf_setf(&op->esil, "pc,lr,=,%s,pc,=", REG64(0)); break; - case ARM64_INS_CLZ:; + case CS_AARCH64(_INS_CLZ):; int size = 8 * REGSIZE64(0); // expression is much more concise with GOTO, but GOTOs should be minimized @@ -528,43 +649,43 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; - case ARM64_INS_LDRH: - case ARM64_INS_LDUR: - case ARM64_INS_LDURB: - case ARM64_INS_LDURH: - case ARM64_INS_LDR: - // case ARM64_INS_LDRSB: - // case ARM64_INS_LDRSH: - case ARM64_INS_LDRB: - // case ARM64_INS_LDRSW: - // case ARM64_INS_LDURSW: - case ARM64_INS_LDXR: - case ARM64_INS_LDXRB: - case ARM64_INS_LDXRH: - case ARM64_INS_LDAXR: - case ARM64_INS_LDAXRB: - case ARM64_INS_LDAXRH: - case ARM64_INS_LDAR: - case ARM64_INS_LDARB: - case ARM64_INS_LDARH: { + case CS_AARCH64(_INS_LDRH): + case CS_AARCH64(_INS_LDUR): + case CS_AARCH64(_INS_LDURB): + case CS_AARCH64(_INS_LDURH): + case CS_AARCH64(_INS_LDR): + // case CS_AARCH64(_INS_LDRSB): + // case CS_AARCH64(_INS_LDRSH): + case CS_AARCH64(_INS_LDRB): + // case CS_AARCH64(_INS_LDRSW): + // case CS_AARCH64(_INS_LDURSW): + case CS_AARCH64(_INS_LDXR): + case CS_AARCH64(_INS_LDXRB): + case CS_AARCH64(_INS_LDXRH): + case CS_AARCH64(_INS_LDAXR): + case CS_AARCH64(_INS_LDAXRB): + case CS_AARCH64(_INS_LDAXRH): + case CS_AARCH64(_INS_LDAR): + case CS_AARCH64(_INS_LDARB): + case CS_AARCH64(_INS_LDARH): { int size = REGSIZE64(0); switch (insn->id) { - case ARM64_INS_LDRB: - case ARM64_INS_LDARB: - case ARM64_INS_LDAXRB: - case ARM64_INS_LDXRB: - case ARM64_INS_LDURB: + case CS_AARCH64(_INS_LDRB): + case CS_AARCH64(_INS_LDARB): + case CS_AARCH64(_INS_LDAXRB): + case CS_AARCH64(_INS_LDXRB): + case CS_AARCH64(_INS_LDURB): size = 1; break; - case ARM64_INS_LDRH: - case ARM64_INS_LDARH: - case ARM64_INS_LDXRH: - case ARM64_INS_LDAXRH: - case ARM64_INS_LDURH: + case CS_AARCH64(_INS_LDRH): + case CS_AARCH64(_INS_LDARH): + case CS_AARCH64(_INS_LDXRH): + case CS_AARCH64(_INS_LDAXRH): + case CS_AARCH64(_INS_LDURH): size = 2; break; - case ARM64_INS_LDRSW: - case ARM64_INS_LDURSW: + case CS_AARCH64(_INS_LDRSW): + case CS_AARCH64(_INS_LDURSW): size = 4; break; default: @@ -607,7 +728,11 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a if (ISREG64(2)) { // not sure if register valued post indexing exists? rz_strbuf_appendf(&op->esil, ",tmp,%s,+,%s,=", REG64(2), REG64(1)); } else { +#if CS_NEXT_VERSION < 6 rz_strbuf_appendf(&op->esil, ",tmp,%" PFMT64d ",+,%s,=", IMM64(2), REG64(1)); +#else + rz_strbuf_appendf(&op->esil, ",tmp,%" PFMT64d ",+,%s,=", MEMDISP64(1), MEMBASE64(1)); +#endif } } } @@ -623,7 +748,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a instructions like ldr x16, [x13, x9] ldrb w2, [x19, x23] - are not detected as ARM64_OP_MEM type and + are not detected as CS_AARCH64(_OP_MEM) type and fall in this case instead. 
*/ if (ISREG64(2)) { @@ -638,25 +763,25 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_LDRSB: - case ARM64_INS_LDRSH: - case ARM64_INS_LDRSW: - case ARM64_INS_LDURSB: - case ARM64_INS_LDURSH: - case ARM64_INS_LDURSW: { + case CS_AARCH64(_INS_LDRSB): + case CS_AARCH64(_INS_LDRSH): + case CS_AARCH64(_INS_LDRSW): + case CS_AARCH64(_INS_LDURSB): + case CS_AARCH64(_INS_LDURSH): + case CS_AARCH64(_INS_LDURSW): { // handle the sign extended instrs here int size = REGSIZE64(0); switch (insn->id) { - case ARM64_INS_LDRSB: - case ARM64_INS_LDURSB: + case CS_AARCH64(_INS_LDRSB): + case CS_AARCH64(_INS_LDURSB): size = 1; break; - case ARM64_INS_LDRSH: - case ARM64_INS_LDURSH: + case CS_AARCH64(_INS_LDRSH): + case CS_AARCH64(_INS_LDURSH): size = 2; break; - case ARM64_INS_LDRSW: - case ARM64_INS_LDURSW: + case CS_AARCH64(_INS_LDRSW): + case CS_AARCH64(_INS_LDURSW): size = 4; break; default: @@ -699,7 +824,11 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a if (ISREG64(2)) { // not sure if register valued post indexing exists? rz_strbuf_appendf(&op->esil, ",tmp,%s,+,%s,=", REG64(2), REG64(1)); } else { +#if CS_NEXT_VERSION < 6 rz_strbuf_appendf(&op->esil, ",tmp,%" PFMT64d ",+,%s,=", IMM64(2), REG64(1)); +#else + rz_strbuf_appendf(&op->esil, ",tmp,%" PFMT64d ",+,%s,=", MEMDISP64(1), MEMBASE64(1)); +#endif } } } @@ -715,7 +844,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a instructions like ldr x16, [x13, x9] ldrb w2, [x19, x23] - are not detected as ARM64_OP_MEM type and + are not detected as CS_AARCH64(_OP_MEM) type and fall in this case instead. */ if (ISREG64(2)) { @@ -730,12 +859,14 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_FCMP: - case ARM64_INS_CCMP: - case ARM64_INS_CCMN: - case ARM64_INS_TST: // cmp w8, 0xd - case ARM64_INS_CMP: // cmp w8, 0xd - case ARM64_INS_CMN: // cmp w8, 0xd + case CS_AARCH64(_INS_FCMP): + case CS_AARCH64(_INS_CCMP): + case CS_AARCH64(_INS_CCMN): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_TST): // cmp w8, 0xd + case CS_AARCH64(_INS_CMP): // cmp w8, 0xd + case CS_AARCH64(_INS_CMN): // cmp w8, 0xd +#endif { // update esil, cpu flags int bits = arm64_reg_width(REGID64(0)); @@ -748,47 +879,89 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_FCSEL: - case ARM64_INS_CSEL: // csel Wd, Wn, Wm --> Wd := (cond) ? Wn : Wm +#if CS_NEXT_VERSION >= 6 + case AArch64_INS_SUBS: + if (insn->alias_id != AArch64_INS_ALIAS_CMP && + insn->alias_id != AArch64_INS_ALIAS_CMN) { + cmp(op, handle, insn); + break; + } + // update esil, cpu flags + int bits = arm64_reg_width(REGID64(1)); + if (ISIMM64(2)) { + rz_strbuf_setf(&op->esil, "%" PFMT64d ",%s,==,$z,zf,:=,%d,$s,nf,:=,%d,$b,!,cf,:=,%d,$o,vf,:=", IMM64(2) << LSHIFT2_64(2), REG64(1), bits - 1, bits, bits - 1); + } else { + // cmp w10, w11 + SHIFTED_REG64_APPEND(&op->esil, 2); + rz_strbuf_appendf(&op->esil, ",%s,==,$z,zf,:=,%d,$s,nf,:=,%d,$b,!,cf,:=,%d,$o,vf,:=", REG64(1), bits - 1, bits, bits - 1); + } + break; +#endif + case CS_AARCH64(_INS_FCSEL): + case CS_AARCH64(_INS_CSEL): // csel Wd, Wn, Wm --> Wd := (cond) ? Wn : Wm rz_strbuf_appendf(&op->esil, "%s,}{,%s,},%s,=", REG64(1), REG64(2), REG64(0)); postfix = ""; break; - case ARM64_INS_CSET: // cset Wd --> Wd := (cond) ? 1 : 0 +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_CSET): // cset Wd --> Wd := (cond) ? 
1 : 0 rz_strbuf_appendf(&op->esil, "1,}{,0,},%s,=", REG64(0)); postfix = ""; break; - case ARM64_INS_CINC: // cinc Wd, Wn --> Wd := (cond) ? (Wn+1) : Wn + case CS_AARCH64(_INS_CINC): // cinc Wd, Wn --> Wd := (cond) ? (Wn+1) : Wn rz_strbuf_appendf(&op->esil, "1,%s,+,}{,%s,},%s,=", REG64(1), REG64(1), REG64(0)); postfix = ""; break; - case ARM64_INS_CSINC: // csinc Wd, Wn, Wm --> Wd := (cond) ? Wn : (Wm+1) + case CS_AARCH64(_INS_CSINC): // csinc Wd, Wn, Wm --> Wd := (cond) ? Wn : (Wm+1) rz_strbuf_appendf(&op->esil, "%s,}{,1,%s,+,},%s,=", REG64(1), REG64(2), REG64(0)); postfix = ""; break; - case ARM64_INS_STXRB: - case ARM64_INS_STXRH: - case ARM64_INS_STXR: { +#else + case CS_AARCH64(_INS_CSINC): + switch (insn->alias_id) { + default: + // csinc Wd, Wn, Wm --> Wd := (cond) ? Wn : (Wm+1) + rz_strbuf_appendf(&op->esil, "%s,}{,1,%s,+,},%s,=", REG64(1), REG64(2), REG64(0)); + postfix = ""; + break; + case AArch64_INS_ALIAS_CSET: // cset Wd --> Wd := (cond) ? 1 : 0 + rz_strbuf_drain_nofree(&op->esil); + rz_arm64_cs_esil_prefix_cond(op, AArch64CC_getInvertedCondCode(insn->detail->CS_aarch64_.cc)); + rz_strbuf_appendf(&op->esil, "1,}{,0,},%s,=", REG64(0)); + postfix = ""; + break; + case AArch64_INS_ALIAS_CINC: // cinc Wd, Wn --> Wd := (cond) ? (Wn+1) : Wn + rz_strbuf_drain_nofree(&op->esil); + rz_arm64_cs_esil_prefix_cond(op, AArch64CC_getInvertedCondCode(insn->detail->CS_aarch64_.cc)); + rz_strbuf_appendf(&op->esil, "1,%s,+,}{,%s,},%s,=", REG64(1), REG64(1), REG64(0)); + postfix = ""; + break; + } + break; +#endif + case CS_AARCH64(_INS_STXRB): + case CS_AARCH64(_INS_STXRH): + case CS_AARCH64(_INS_STXR): { int size = REGSIZE64(1); - if (insn->id == ARM64_INS_STXRB) { + if (insn->id == CS_AARCH64(_INS_STXRB)) { size = 1; - } else if (insn->id == ARM64_INS_STXRH) { + } else if (insn->id == CS_AARCH64(_INS_STXRH)) { size = 2; } rz_strbuf_setf(&op->esil, "0,%s,=,%s,%s,%" PFMT64d ",+,=[%d]", REG64(0), REG64(1), MEMBASE64(1), MEMDISP64(1), size); break; } - case ARM64_INS_STRB: - case ARM64_INS_STRH: - case ARM64_INS_STUR: - case ARM64_INS_STURB: - case ARM64_INS_STURH: - case ARM64_INS_STR: // str x6, [x6,0xf90] + case CS_AARCH64(_INS_STRB): + case CS_AARCH64(_INS_STRH): + case CS_AARCH64(_INS_STUR): + case CS_AARCH64(_INS_STURB): + case CS_AARCH64(_INS_STURH): + case CS_AARCH64(_INS_STR): // str x6, [x6,0xf90] { int size = REGSIZE64(0); - if (insn->id == ARM64_INS_STRB || insn->id == ARM64_INS_STURB) { + if (insn->id == CS_AARCH64(_INS_STRB) || insn->id == CS_AARCH64(_INS_STURB)) { size = 1; - } else if (insn->id == ARM64_INS_STRH || insn->id == ARM64_INS_STURH) { + } else if (insn->id == CS_AARCH64(_INS_STRH) || insn->id == CS_AARCH64(_INS_STURH)) { size = 2; } if (ISMEM64(1)) { @@ -827,7 +1000,11 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a if (ISREG64(2)) { // not sure if register valued post indexing exists? rz_strbuf_appendf(&op->esil, ",tmp,%s,+,%s,=", REG64(2), REG64(1)); } else { +#if CS_NEXT_VERSION < 6 rz_strbuf_appendf(&op->esil, ",tmp,%" PFMT64d ",+,%s,=", IMM64(2), REG64(1)); +#else + rz_strbuf_appendf(&op->esil, ",tmp,%" PFMT64d ",+,%s,=", MEMDISP64(1), MEMBASE64(1)); +#endif } } } @@ -843,7 +1020,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a instructions like ldr x16, [x13, x9] ldrb w2, [x19, x23] - are not detected as ARM64_OP_MEM type and + are not detected as CS_AARCH64(_OP_MEM) type and fall in this case instead. 
*/ if (ISREG64(2)) { @@ -858,7 +1035,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_BIC: + case CS_AARCH64(_INS_BIC): if (OPCOUNT64() == 2) { if (REGSIZE64(0) == 4) { rz_strbuf_appendf(&op->esil, "%s,0xffffffff,^,%s,&=", REG64(1), REG64(0)); @@ -873,28 +1050,28 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } } break; - case ARM64_INS_CBZ: + case CS_AARCH64(_INS_CBZ): rz_strbuf_setf(&op->esil, "%s,!,?{,%" PFMT64d ",pc,=,}", REG64(0), IMM64(1)); break; - case ARM64_INS_CBNZ: + case CS_AARCH64(_INS_CBNZ): rz_strbuf_setf(&op->esil, "%s,?{,%" PFMT64d ",pc,=,}", REG64(0), IMM64(1)); break; - case ARM64_INS_TBZ: + case CS_AARCH64(_INS_TBZ): // tbnz x0, 4, label // if ((1<<4) & x0) goto label; rz_strbuf_setf(&op->esil, "%" PFMT64d ",1,<<,%s,&,!,?{,%" PFMT64d ",pc,=,}", IMM64(1), REG64(0), IMM64(2)); break; - case ARM64_INS_TBNZ: + case CS_AARCH64(_INS_TBNZ): // tbnz x0, 4, label // if ((1<<4) & x0) goto label; rz_strbuf_setf(&op->esil, "%" PFMT64d ",1,<<,%s,&,?{,%" PFMT64d ",pc,=,}", IMM64(1), REG64(0), IMM64(2)); break; - case ARM64_INS_STNP: - case ARM64_INS_STP: // stp x6, x7, [x6,0xf90] + case CS_AARCH64(_INS_STNP): + case CS_AARCH64(_INS_STP): // stp x6, x7, [x6,0xf90] { int disp = (int)MEMDISP64(2); char sign = disp >= 0 ? '+' : '-'; @@ -911,7 +1088,11 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a REG64(1), MEMBASE64(2), size, size); // Post-index case } else if (ISPOSTINDEX64()) { +#if CS_NEXT_VERSION < 6 int val = IMM64(3); +#else + int val = MEMDISP64(2); +#endif sign = val >= 0 ? '+' : '-'; abs = val >= 0 ? val : -val; // "stp x4, x5, [x8], 0x10" @@ -930,7 +1111,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a REG64(1), MEMBASE64(2), abs, sign, size, size); } } break; - case ARM64_INS_LDP: // ldp x29, x30, [sp], 0x10 + case CS_AARCH64(_INS_LDP): // ldp x29, x30, [sp], 0x10 { int disp = (int)MEMDISP64(2); char sign = disp >= 0 ? '+' : '-'; @@ -950,7 +1131,11 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a size, MEMBASE64(2), size, REG64(1)); // Post-index case } else if (ISPOSTINDEX64()) { +#if CS_NEXT_VERSION < 6 int val = IMM64(3); +#else + int val = MEMDISP64(2); +#endif sign = val >= 0 ? '+' : '-'; abs = val >= 0 ? val : -val; // ldp x4, x5, [x8], -0x10 @@ -970,18 +1155,18 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a size, abs, MEMBASE64(2), sign, size, REG64(1)); } } break; - case ARM64_INS_ADRP: + case CS_AARCH64(_INS_ADRP): rz_strbuf_setf(&op->esil, "%" PFMT64d ",%s,=", IMM64(1), REG64(0)); break; - case ARM64_INS_MOV: + case CS_AARCH64(_INS_MOV): if (ISREG64(1)) { rz_strbuf_setf(&op->esil, "%s,%s,=", REG64(1), REG64(0)); } else { rz_strbuf_setf(&op->esil, "%" PFMT64d ",%s,=", IMM64(1), REG64(0)); } break; - case ARM64_INS_EXTR: + case CS_AARCH64(_INS_EXTR): // from VEX /* 01 | t0 = GET:I64(x4) @@ -994,21 +1179,23 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a rz_strbuf_setf(&op->esil, "%" PFMT64d ",%s,>>,%" PFMT64d ",%s,<<,|,%s,=", IMM64(3), REG64(2), IMM64(3), REG64(1), REG64(0)); break; - case ARM64_INS_RBIT: + case CS_AARCH64(_INS_RBIT): // this expression reverses the bits. it does. do not scroll right. 
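// It is the classic logarithmic bit reversal: swap adjacent bits (the 0xaaaa... masks), then 2-bit pairs (0xcccc...), nibbles, bytes, halfwords, and finally the two 32-bit halves.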
// Derived from VEX rz_strbuf_setf(&op->esil, "0xffffffff00000000,0x20,0xffff0000ffff0000,0x10,0xff00ff00ff00ff00,0x8,0xf0f0f0f0f0f0f0f0,0x4,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,<<,&,0x4,0xf0f0f0f0f0f0f0f0,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,&,>>,|,<<,&,0x8,0xff00ff00ff00ff00,0xf0f0f0f0f0f0f0f0,0x4,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,<<,&,0x4,0xf0f0f0f0f0f0f0f0,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,&,>>,|,&,>>,|,<<,&,0x10,0xffff0000ffff0000,0xff00ff00ff00ff00,0x8,0xf0f0f0f0f0f0f0f0,0x4,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,<<,&,0x4,0xf0f0f0f0f0f0f0f0,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,&,>>,|,<<,&,0x8,0xff00ff00ff00ff00,0xf0f0f0f0f0f0f0f0,0x4,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,<<,&,0x4,0xf0f0f0f0f0f0f0f0,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,&,>>,|,&,>>,|,&,>>,|,<<,&,0x20,0xffffffff00000000,0xffff0000ffff0000,0x10,0xff00ff00ff00ff00,0x8,0xf0f0f0f0f0f0f0f0,0x4,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,<<,&,0x4,0xf0f0f0f0f0f0f0f0,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,&,>>,|,<<,&,0x8,0xff00ff00ff00ff00,0xf0f0f0f0f0f0f0f0,0x4,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,<<,&,0x4,0xf0f0f0f0f0f0f0f0,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,&,>>,|,&,>>,|,<<,&,0x10,0xffff0000ffff0000,0xff00ff00ff00ff00,0x8,0xf0f0f0f0f0f0f0f0,0x4,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,<<,&,0x4,0xf0f0f0f0f0f0f0f0,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,&,>>,|,<<,&,0x8,0xff00ff00ff00
ff00,0xf0f0f0f0f0f0f0f0,0x4,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,<<,&,0x4,0xf0f0f0f0f0f0f0f0,0xcccccccccccccccc,0x2,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,<<,&,0x2,0xcccccccccccccccc,0xaaaaaaaaaaaaaaaa,0x1,%1$s,<<,&,0x1,0xaaaaaaaaaaaaaaaa,%1$s,&,>>,|,&,>>,|,&,>>,|,&,>>,|,&,>>,|,&,>>,|,%2$s,=", REG64(1), REG64(0)); break; - case ARM64_INS_MVN: - case ARM64_INS_MOVN: +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_MVN): +#endif + case CS_AARCH64(_INS_MOVN): if (ISREG64(1)) { rz_strbuf_setf(&op->esil, "%d,%s,-1,^,<<,%s,=", LSHIFT2_64(1), REG64(1), REG64(0)); } else { rz_strbuf_setf(&op->esil, "%d,%" PFMT64d ",<<,-1,^,%s,=", LSHIFT2_64(1), IMM64(1), REG64(0)); } break; - case ARM64_INS_MOVK: // movk w8, 0x1290 + case CS_AARCH64(_INS_MOVK): // movk w8, 0x1290 { ut64 shift = LSHIFT2_64(1); if (shift < 0) { @@ -1027,13 +1214,13 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a break; } - case ARM64_INS_MOVZ: + case CS_AARCH64(_INS_MOVZ): rz_strbuf_setf(&op->esil, "%" PFMT64u ",%s,=", IMM64(1) << LSHIFT2_64(1), REG64(0)); break; /* ASR, SXTB, SXTH and SXTW are alias for SBFM */ - case ARM64_INS_ASR: { + case CS_AARCH64(_INS_ASR): { // OPCALL(">>>>"); const char *r0 = REG64(0); const char *r1 = REG64(1); @@ -1053,7 +1240,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_SXTB: + case CS_AARCH64(_INS_SXTB): if (arm64_reg_width(REGID64(0)) == 32) { rz_strbuf_setf(&op->esil, "0xffffffff,8,0xff,%s,&,~,&,%s,=", REG64(1), REG64(0)); @@ -1062,7 +1249,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a REG64(1), REG64(0)); } break; - case ARM64_INS_SXTH: /* halfword */ + case CS_AARCH64(_INS_SXTH): /* halfword */ if (arm64_reg_width(REGID64(0)) == 32) { rz_strbuf_setf(&op->esil, "0xffffffff,16,0xffff,%s,&,~,&,%s,=", REG64(1), REG64(0)); @@ -1071,27 +1258,28 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a REG64(1), REG64(0)); } break; - case ARM64_INS_SXTW: /* word */ + case CS_AARCH64(_INS_SXTW): /* word */ rz_strbuf_setf(&op->esil, "32,0xffffffff,%s,&,~,%s,=", REG64(1), REG64(0)); break; - case ARM64_INS_UXTB: + case CS_AARCH64(_INS_UXTB): rz_strbuf_setf(&op->esil, "%s,0xff,&,%s,=", REG64(1), REG64(0)); break; - case ARM64_INS_UMULL: + case CS_AARCH64(_INS_UMULL): rz_strbuf_setf(&op->esil, "%s,%s,*,%s,=", REG64(1), REG64(2), REG64(0)); break; - case ARM64_INS_UXTH: + case CS_AARCH64(_INS_UXTH): rz_strbuf_setf(&op->esil, "%s,0xffff,&,%s,=", REG64(1), REG64(0)); break; - case ARM64_INS_RET: + case CS_AARCH64(_INS_RET): rz_strbuf_setf(&op->esil, "lr,pc,="); break; - case ARM64_INS_ERET: + case CS_AARCH64(_INS_ERET): rz_strbuf_setf(&op->esil, "lr,pc,="); break; - case ARM64_INS_BFI: // bfi w8, w8, 2, 1 - case ARM64_INS_BFXIL: { +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_BFI): // bfi w8, w8, 2, 1 + case CS_AARCH64(_INS_BFXIL): { if (OPCOUNT64() >= 3 && ISIMM64(3) && IMM64(3) > 0) { ut64 mask = rz_num_bitmask((ut8)IMM64(3)); ut64 shift = IMM64(2); @@ -1102,32 +1290,43 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } break; } - case ARM64_INS_SBFIZ: + case CS_AARCH64(_INS_SBFIZ): if (IMM64(3) > 0 && IMM64(3) <= 64 - IMM64(2)) { rz_strbuf_appendf(&op->esil, "%" PFMT64d ",%" PFMT64d ",%s,%" PFMT64u ",&,~,<<,%s,=", IMM64(2), 
IMM64(3), REG64(1), rz_num_bitmask((ut8)IMM64(3)), REG64(0)); } break; - case ARM64_INS_UBFIZ: + case CS_AARCH64(_INS_UBFIZ): if (IMM64(3) > 0 && IMM64(3) <= 64 - IMM64(2)) { rz_strbuf_appendf(&op->esil, "%" PFMT64d ",%s,%" PFMT64u ",&,<<,%s,=", IMM64(2), REG64(1), rz_num_bitmask((ut8)IMM64(3)), REG64(0)); } break; - case ARM64_INS_SBFX: + case CS_AARCH64(_INS_SBFX): if (IMM64(3) > 0 && IMM64(3) <= 64 - IMM64(2)) { rz_strbuf_appendf(&op->esil, "%" PFMT64d ",%" PFMT64d ",%s,%" PFMT64d ",%" PFMT64u ",<<,&,>>,~,%s,=", IMM64(3), IMM64(2), REG64(1), IMM64(2), rz_num_bitmask((ut8)IMM64(3)), REG64(0)); } break; - case ARM64_INS_UBFX: + case CS_AARCH64(_INS_UBFX): if (IMM64(3) > 0 && IMM64(3) <= 64 - IMM64(2)) { rz_strbuf_appendf(&op->esil, "%" PFMT64d ",%s,%" PFMT64d ",%" PFMT64u ",<<,&,>>,%s,=", IMM64(2), REG64(1), IMM64(2), rz_num_bitmask((ut8)IMM64(3)), REG64(0)); } break; - case ARM64_INS_NEG: - case ARM64_INS_NEGS: +#else + case AArch64_INS_BFM: + bfm(op, handle, insn); + break; + case AArch64_INS_UBFM: + case AArch64_INS_SBFM: + subfm(op, handle, insn); + break; +#endif + case CS_AARCH64(_INS_NEG): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_NEGS): +#endif if (LSHIFT2_64(1)) { SHIFTED_REG64_APPEND(&op->esil, 1); } else { @@ -1135,7 +1334,7 @@ RZ_IPI int rz_arm_cs_analysis_op_64_esil(RzAnalysis *a, RzAnalysisOp *op, ut64 a } rz_strbuf_appendf(&op->esil, ",0,-,%s,=", REG64(0)); break; - case ARM64_INS_SVC: + case CS_AARCH64(_INS_SVC): rz_strbuf_setf(&op->esil, "%" PFMT64u ",$", IMM64(0)); break; } diff --git a/librz/analysis/arch/arm/arm_il32.c b/librz/analysis/arch/arm/arm_il32.c index 7cfac9769d5..d9626787be4 100644 --- a/librz/analysis/arch/arm/arm_il32.c +++ b/librz/analysis/arch/arm/arm_il32.c @@ -1201,10 +1201,22 @@ static RzILOpEffect *stm(cs_insn *insn, bool is_thumb) { size_t op_first; arm_reg ptr_reg; bool writeback; +#if CS_NEXT_VERSION < 6 if (insn->id == ARM_INS_PUSH || insn->id == ARM_INS_VPUSH) { op_first = 0; ptr_reg = ARM_REG_SP; writeback = true; +#else + if (insn->alias_id == ARM_INS_ALIAS_PUSH || insn->alias_id == ARM_INS_ALIAS_VPUSH) { + op_first = 1; + ptr_reg = ARM_REG_SP; + writeback = true; + } else if (insn->id == ARM_INS_PUSH) { + // Thumb1 PUSH instructions. Have no alias defined in the ISA. 
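+ // Their register list therefore starts at operand 0; SP is the implicit base and is written back.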
+ op_first = 0; + ptr_reg = ARM_REG_SP; + writeback = true; +#endif } else { // ARM_INS_STMDB.* if (!ISREG(0)) { return NULL; @@ -1221,10 +1233,14 @@ static RzILOpEffect *stm(cs_insn *insn, bool is_thumb) { if (!ptr) { return NULL; } - bool decrement = insn->id == ARM_INS_STMDA || insn->id == ARM_INS_STMDB || insn->id == ARM_INS_PUSH || - insn->id == ARM_INS_VSTMDB || insn->id == ARM_INS_VPUSH; - bool before = insn->id == ARM_INS_STMDB || insn->id == ARM_INS_PUSH || insn->id == ARM_INS_VSTMDB || - insn->id == ARM_INS_STMIB || insn->id == ARM_INS_VPUSH; + bool decrement = insn->id == ARM_INS_PUSH || insn->id == ARM_INS_STMDA || insn->id == ARM_INS_STMDB || insn->id == ARM_INS_VSTMDB; +#if CS_NEXT_VERSION < 6 + decrement |= insn->id == ARM_INS_VPUSH; +#endif + bool before = insn->id == ARM_INS_PUSH || insn->id == ARM_INS_STMDB || insn->id == ARM_INS_VSTMDB || insn->id == ARM_INS_STMIB; +#if CS_NEXT_VERSION < 6 + before |= insn->id == ARM_INS_VPUSH; +#endif ut32 regsize = reg_bits(REGID(op_first)) / 8; RzILOpEffect *eff = NULL; // build up in reverse order so the result recurses in the second arg of seq (for tail-call optimization) @@ -1262,10 +1278,22 @@ static RzILOpEffect *ldm(cs_insn *insn, bool is_thumb) { size_t op_first; arm_reg ptr_reg; bool writeback; +#if CS_NEXT_VERSION < 6 if (insn->id == ARM_INS_POP || insn->id == ARM_INS_VPOP) { op_first = 0; ptr_reg = ARM_REG_SP; writeback = true; +#else + if (insn->alias_id == ARM_INS_ALIAS_POP || insn->alias_id == ARM_INS_ALIAS_VPOP) { + op_first = 1; + ptr_reg = ARM_REG_SP; + writeback = true; + } else if (insn->id == ARM_INS_POP) { + // Thumb1 POP instructions. Have no alias defined in the ISA. + op_first = 0; + ptr_reg = ARM_REG_SP; + writeback = true; +#endif } else { // ARM_INS_LDM.* if (!ISREG(0)) { return NULL; @@ -1293,6 +1321,9 @@ static RzILOpEffect *ldm(cs_insn *insn, bool is_thumb) { } bool decrement = insn->id == ARM_INS_LDMDA || insn->id == ARM_INS_LDMDB || insn->id == ARM_INS_VLDMDB; bool before = insn->id == ARM_INS_LDMDB || insn->id == ARM_INS_LDMIB || insn->id == ARM_INS_VLDMIA; +#if CS_NEXT_VERSION >= 6 + before &= !(insn->alias_id == ARM_INS_ALIAS_POP || insn->alias_id == ARM_INS_ALIAS_VPOP); +#endif ut32 regsize = reg_bits(REGID(op_first)) / 8; if (writeback) { RzILOpEffect *wb = write_reg(ptr_reg, @@ -4085,7 +4116,11 @@ static RzILOpEffect *il_unconditional(csh *handle, cs_insn *insn, bool is_thumb) // -- // Base Instruction Set case ARM_INS_DBG: +#if CS_NEXT_VERSION < 6 case ARM_INS_NOP: +#else + case ARM_INS_HINT: +#endif case ARM_INS_PLD: case ARM_INS_PLDW: case ARM_INS_PLI: @@ -4200,11 +4235,15 @@ static RzILOpEffect *il_unconditional(csh *handle, cs_insn *insn, bool is_thumb) case ARM_INS_STMDA: case ARM_INS_STMDB: case ARM_INS_PUSH: +#if CS_NEXT_VERSION < 6 case ARM_INS_VPUSH: +#endif case ARM_INS_STMIB: return stm(insn, is_thumb); - case ARM_INS_POP: +#if CS_NEXT_VERSION < 6 case ARM_INS_VPOP: +#endif + case ARM_INS_POP: case ARM_INS_LDM: case ARM_INS_LDMDA: case ARM_INS_LDMDB: diff --git a/librz/analysis/arch/arm/arm_il64.c b/librz/analysis/arch/arm/arm_il64.c index 645bc6d1104..9ebc3c87ed3 100644 --- a/librz/analysis/arch/arm/arm_il64.c +++ b/librz/analysis/arch/arm/arm_il64.c @@ -15,7 +15,7 @@ #define ISMEM ISMEM64 #define OPCOUNT OPCOUNT64 #undef MEMDISP64 // the original one casts to ut64 which we don't want here -#define MEMDISP(x) insn->detail->arm64.operands[x].mem.disp +#define MEMDISP(x) insn->detail->CS_aarch64_.operands[x].mem.disp #include @@ -35,144 +35,144 @@ static const char *regs_bound[] = { * IL 
for arm64 condition * unconditional is returned as NULL (rather than true), for simpler code */ -static RzILOpBool *cond(arm64_cc c) { +static RzILOpBool *cond(CS_aarch64_cc() c) { switch (c) { - case ARM64_CC_EQ: + case CS_AARCH64CC(_EQ): return VARG("zf"); - case ARM64_CC_NE: + case CS_AARCH64CC(_NE): return INV(VARG("zf")); - case ARM64_CC_HS: + case CS_AARCH64CC(_HS): return VARG("cf"); - case ARM64_CC_LO: + case CS_AARCH64CC(_LO): return INV(VARG("cf")); - case ARM64_CC_MI: + case CS_AARCH64CC(_MI): return VARG("nf"); - case ARM64_CC_PL: + case CS_AARCH64CC(_PL): return INV(VARG("nf")); - case ARM64_CC_VS: + case CS_AARCH64CC(_VS): return VARG("vf"); - case ARM64_CC_VC: + case CS_AARCH64CC(_VC): return INV(VARG("vf")); - case ARM64_CC_HI: + case CS_AARCH64CC(_HI): return AND(VARG("cf"), INV(VARG("zf"))); - case ARM64_CC_LS: + case CS_AARCH64CC(_LS): return OR(INV(VARG("cf")), VARG("zf")); - case ARM64_CC_GE: + case CS_AARCH64CC(_GE): return INV(XOR(VARG("nf"), VARG("vf"))); - case ARM64_CC_LT: + case CS_AARCH64CC(_LT): return XOR(VARG("nf"), VARG("vf")); - case ARM64_CC_GT: + case CS_AARCH64CC(_GT): return INV(OR(XOR(VARG("nf"), VARG("vf")), VARG("zf"))); - case ARM64_CC_LE: + case CS_AARCH64CC(_LE): return OR(XOR(VARG("nf"), VARG("vf")), VARG("zf")); default: return NULL; } } -static arm64_reg xreg(ut8 idx) { - // for some reason, the ARM64_REG_X0...ARM64_REG_X30 enum values are not contiguous, +static CS_aarch64_reg() xreg(ut8 idx) { + // for some reason, the CS_AARCH64(_REG_X0)...CS_AARCH64(_REG_X30) enum values are not contiguous, // so use switch here and let the compiler optimize: switch (idx) { - case 0: return ARM64_REG_X0; - case 1: return ARM64_REG_X1; - case 2: return ARM64_REG_X2; - case 3: return ARM64_REG_X3; - case 4: return ARM64_REG_X4; - case 5: return ARM64_REG_X5; - case 6: return ARM64_REG_X6; - case 7: return ARM64_REG_X7; - case 8: return ARM64_REG_X8; - case 9: return ARM64_REG_X9; - case 10: return ARM64_REG_X10; - case 11: return ARM64_REG_X11; - case 12: return ARM64_REG_X12; - case 13: return ARM64_REG_X13; - case 14: return ARM64_REG_X14; - case 15: return ARM64_REG_X15; - case 16: return ARM64_REG_X16; - case 17: return ARM64_REG_X17; - case 18: return ARM64_REG_X18; - case 19: return ARM64_REG_X19; - case 20: return ARM64_REG_X20; - case 21: return ARM64_REG_X21; - case 22: return ARM64_REG_X22; - case 23: return ARM64_REG_X23; - case 24: return ARM64_REG_X24; - case 25: return ARM64_REG_X25; - case 26: return ARM64_REG_X26; - case 27: return ARM64_REG_X27; - case 28: return ARM64_REG_X28; - case 29: return ARM64_REG_X29; - case 30: return ARM64_REG_X30; - case 31: return ARM64_REG_SP; - case 32: return ARM64_REG_XZR; + case 0: return CS_AARCH64(_REG_X0); + case 1: return CS_AARCH64(_REG_X1); + case 2: return CS_AARCH64(_REG_X2); + case 3: return CS_AARCH64(_REG_X3); + case 4: return CS_AARCH64(_REG_X4); + case 5: return CS_AARCH64(_REG_X5); + case 6: return CS_AARCH64(_REG_X6); + case 7: return CS_AARCH64(_REG_X7); + case 8: return CS_AARCH64(_REG_X8); + case 9: return CS_AARCH64(_REG_X9); + case 10: return CS_AARCH64(_REG_X10); + case 11: return CS_AARCH64(_REG_X11); + case 12: return CS_AARCH64(_REG_X12); + case 13: return CS_AARCH64(_REG_X13); + case 14: return CS_AARCH64(_REG_X14); + case 15: return CS_AARCH64(_REG_X15); + case 16: return CS_AARCH64(_REG_X16); + case 17: return CS_AARCH64(_REG_X17); + case 18: return CS_AARCH64(_REG_X18); + case 19: return CS_AARCH64(_REG_X19); + case 20: return CS_AARCH64(_REG_X20); + case 21: return 
CS_AARCH64(_REG_X21); + case 22: return CS_AARCH64(_REG_X22); + case 23: return CS_AARCH64(_REG_X23); + case 24: return CS_AARCH64(_REG_X24); + case 25: return CS_AARCH64(_REG_X25); + case 26: return CS_AARCH64(_REG_X26); + case 27: return CS_AARCH64(_REG_X27); + case 28: return CS_AARCH64(_REG_X28); + case 29: return CS_AARCH64(_REG_X29); + case 30: return CS_AARCH64(_REG_X30); + case 31: return CS_AARCH64(_REG_SP); + case 32: return CS_AARCH64(_REG_XZR); default: rz_warn_if_reached(); - return ARM64_REG_INVALID; + return CS_AARCH64(_REG_INVALID); } } -static bool is_xreg(arm64_reg reg) { +static bool is_xreg(CS_aarch64_reg() reg) { switch (reg) { - case ARM64_REG_X0: - case ARM64_REG_X1: - case ARM64_REG_X2: - case ARM64_REG_X3: - case ARM64_REG_X4: - case ARM64_REG_X5: - case ARM64_REG_X6: - case ARM64_REG_X7: - case ARM64_REG_X8: - case ARM64_REG_X9: - case ARM64_REG_X10: - case ARM64_REG_X11: - case ARM64_REG_X12: - case ARM64_REG_X13: - case ARM64_REG_X14: - case ARM64_REG_X15: - case ARM64_REG_X16: - case ARM64_REG_X17: - case ARM64_REG_X18: - case ARM64_REG_X19: - case ARM64_REG_X20: - case ARM64_REG_X21: - case ARM64_REG_X22: - case ARM64_REG_X23: - case ARM64_REG_X24: - case ARM64_REG_X25: - case ARM64_REG_X26: - case ARM64_REG_X27: - case ARM64_REG_X28: - case ARM64_REG_X29: - case ARM64_REG_X30: - case ARM64_REG_SP: - case ARM64_REG_XZR: + case CS_AARCH64(_REG_X0): + case CS_AARCH64(_REG_X1): + case CS_AARCH64(_REG_X2): + case CS_AARCH64(_REG_X3): + case CS_AARCH64(_REG_X4): + case CS_AARCH64(_REG_X5): + case CS_AARCH64(_REG_X6): + case CS_AARCH64(_REG_X7): + case CS_AARCH64(_REG_X8): + case CS_AARCH64(_REG_X9): + case CS_AARCH64(_REG_X10): + case CS_AARCH64(_REG_X11): + case CS_AARCH64(_REG_X12): + case CS_AARCH64(_REG_X13): + case CS_AARCH64(_REG_X14): + case CS_AARCH64(_REG_X15): + case CS_AARCH64(_REG_X16): + case CS_AARCH64(_REG_X17): + case CS_AARCH64(_REG_X18): + case CS_AARCH64(_REG_X19): + case CS_AARCH64(_REG_X20): + case CS_AARCH64(_REG_X21): + case CS_AARCH64(_REG_X22): + case CS_AARCH64(_REG_X23): + case CS_AARCH64(_REG_X24): + case CS_AARCH64(_REG_X25): + case CS_AARCH64(_REG_X26): + case CS_AARCH64(_REG_X27): + case CS_AARCH64(_REG_X28): + case CS_AARCH64(_REG_X29): + case CS_AARCH64(_REG_X30): + case CS_AARCH64(_REG_SP): + case CS_AARCH64(_REG_XZR): return true; default: return false; } } -static ut8 wreg_idx(arm64_reg reg) { - if (reg >= ARM64_REG_W0 && reg <= ARM64_REG_W30) { - return reg - ARM64_REG_W0; +static ut8 wreg_idx(CS_aarch64_reg() reg) { + if (reg >= CS_AARCH64(_REG_W0) && reg <= CS_AARCH64(_REG_W30)) { + return reg - CS_AARCH64(_REG_W0); } - if (reg == ARM64_REG_WSP) { + if (reg == CS_AARCH64(_REG_WSP)) { return 31; } - if (reg == ARM64_REG_WZR) { + if (reg == CS_AARCH64(_REG_WZR)) { return 32; } rz_warn_if_reached(); return 0; } -static bool is_wreg(arm64_reg reg) { - return (reg >= ARM64_REG_W0 && reg <= ARM64_REG_W30) || reg == ARM64_REG_WSP || reg == ARM64_REG_WZR; +static bool is_wreg(CS_aarch64_reg() reg) { + return (reg >= CS_AARCH64(_REG_W0) && reg <= CS_AARCH64(_REG_W30)) || reg == CS_AARCH64(_REG_WSP) || reg == CS_AARCH64(_REG_WZR); } -static arm64_reg xreg_of_reg(arm64_reg reg) { +static CS_aarch64_reg() xreg_of_reg(CS_aarch64_reg() reg) { if (is_wreg(reg)) { return xreg(wreg_idx(reg)); } @@ -182,41 +182,41 @@ static arm64_reg xreg_of_reg(arm64_reg reg) { /** * Variable name for a register given by cs */ -static const char *reg_var_name(arm64_reg reg) { +static const char *reg_var_name(CS_aarch64_reg() reg) { reg = xreg_of_reg(reg); 
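+	// Normalize W to X first: a W register is the lower 32-bit alias of its X
+	// counterpart (e.g. CS_AARCH64(_REG_W5) maps to CS_AARCH64(_REG_X5), and
+	// WSP/WZR map to SP/XZR), so both widths share one IL global below.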
switch (reg) { - case ARM64_REG_X0: return "x0"; - case ARM64_REG_X1: return "x1"; - case ARM64_REG_X2: return "x2"; - case ARM64_REG_X3: return "x3"; - case ARM64_REG_X4: return "x4"; - case ARM64_REG_X5: return "x5"; - case ARM64_REG_X6: return "x6"; - case ARM64_REG_X7: return "x7"; - case ARM64_REG_X8: return "x8"; - case ARM64_REG_X9: return "x9"; - case ARM64_REG_X10: return "x10"; - case ARM64_REG_X11: return "x11"; - case ARM64_REG_X12: return "x12"; - case ARM64_REG_X13: return "x13"; - case ARM64_REG_X14: return "x14"; - case ARM64_REG_X15: return "x15"; - case ARM64_REG_X16: return "x16"; - case ARM64_REG_X17: return "x17"; - case ARM64_REG_X18: return "x18"; - case ARM64_REG_X19: return "x19"; - case ARM64_REG_X20: return "x20"; - case ARM64_REG_X21: return "x21"; - case ARM64_REG_X22: return "x22"; - case ARM64_REG_X23: return "x23"; - case ARM64_REG_X24: return "x24"; - case ARM64_REG_X25: return "x25"; - case ARM64_REG_X26: return "x26"; - case ARM64_REG_X27: return "x27"; - case ARM64_REG_X28: return "x28"; - case ARM64_REG_X29: return "x29"; - case ARM64_REG_X30: return "x30"; - case ARM64_REG_SP: return "sp"; + case CS_AARCH64(_REG_X0): return "x0"; + case CS_AARCH64(_REG_X1): return "x1"; + case CS_AARCH64(_REG_X2): return "x2"; + case CS_AARCH64(_REG_X3): return "x3"; + case CS_AARCH64(_REG_X4): return "x4"; + case CS_AARCH64(_REG_X5): return "x5"; + case CS_AARCH64(_REG_X6): return "x6"; + case CS_AARCH64(_REG_X7): return "x7"; + case CS_AARCH64(_REG_X8): return "x8"; + case CS_AARCH64(_REG_X9): return "x9"; + case CS_AARCH64(_REG_X10): return "x10"; + case CS_AARCH64(_REG_X11): return "x11"; + case CS_AARCH64(_REG_X12): return "x12"; + case CS_AARCH64(_REG_X13): return "x13"; + case CS_AARCH64(_REG_X14): return "x14"; + case CS_AARCH64(_REG_X15): return "x15"; + case CS_AARCH64(_REG_X16): return "x16"; + case CS_AARCH64(_REG_X17): return "x17"; + case CS_AARCH64(_REG_X18): return "x18"; + case CS_AARCH64(_REG_X19): return "x19"; + case CS_AARCH64(_REG_X20): return "x20"; + case CS_AARCH64(_REG_X21): return "x21"; + case CS_AARCH64(_REG_X22): return "x22"; + case CS_AARCH64(_REG_X23): return "x23"; + case CS_AARCH64(_REG_X24): return "x24"; + case CS_AARCH64(_REG_X25): return "x25"; + case CS_AARCH64(_REG_X26): return "x26"; + case CS_AARCH64(_REG_X27): return "x27"; + case CS_AARCH64(_REG_X28): return "x28"; + case CS_AARCH64(_REG_X29): return "x29"; + case CS_AARCH64(_REG_X30): return "x30"; + case CS_AARCH64(_REG_SP): return "sp"; default: return NULL; } } @@ -224,11 +224,11 @@ static const char *reg_var_name(arm64_reg reg) { /** * Get the bits of the given register or 0, if it is not known (e.g. 
not implemented yet) */ -static ut32 reg_bits(arm64_reg reg) { - if (is_xreg(reg) || reg == ARM64_REG_XZR) { +static ut32 reg_bits(CS_aarch64_reg() reg) { + if (is_xreg(reg) || reg == CS_AARCH64(_REG_XZR)) { return 64; } - if (is_wreg(reg) || reg == ARM64_REG_WZR) { + if (is_wreg(reg) || reg == CS_AARCH64(_REG_WZR)) { return 32; } return 0; @@ -237,11 +237,11 @@ static ut32 reg_bits(arm64_reg reg) { /** * IL to read the given capstone reg */ -static RzILOpBitVector *read_reg(arm64_reg reg) { - if (reg == ARM64_REG_XZR) { +static RzILOpBitVector *read_reg(CS_aarch64_reg() reg) { + if (reg == CS_AARCH64(_REG_XZR)) { return U64(0); } - if (reg == ARM64_REG_WZR) { + if (reg == CS_AARCH64(_REG_WZR)) { return U32(0); } const char *var = reg_var_name(reg); @@ -267,60 +267,69 @@ static RzILOpBitVector *adjust_unsigned(ut32 bits, RZ_OWN RzILOpBitVector *v) { return v; } -static RzILOpBitVector *extend(ut32 dst_bits, arm64_extender ext, RZ_OWN RzILOpBitVector *v, ut32 v_bits) { +static RzILOpBitVector *reg_extend(ut32 dst_bits, CS_aarch64_extender() ext, RZ_OWN RzILOpBitVector *reg, ut32 v_bits) { bool is_signed = false; - ut32 src_bits; + ut32 src_bits = v_bits; switch (ext) { - case ARM64_EXT_SXTB: + case CS_AARCH64(_EXT_SXTB): is_signed = true; // fallthrough - case ARM64_EXT_UXTB: + case CS_AARCH64(_EXT_UXTB): src_bits = 8; break; - case ARM64_EXT_SXTH: + case CS_AARCH64(_EXT_SXTH): is_signed = true; // fallthrough - case ARM64_EXT_UXTH: + case CS_AARCH64(_EXT_UXTH): src_bits = 16; break; - case ARM64_EXT_SXTW: + case CS_AARCH64(_EXT_SXTW): is_signed = true; // fallthrough - case ARM64_EXT_UXTW: + case CS_AARCH64(_EXT_UXTW): src_bits = 32; break; - case ARM64_EXT_SXTX: + case CS_AARCH64(_EXT_SXTX): is_signed = true; // fallthrough - case ARM64_EXT_UXTX: + case CS_AARCH64(_EXT_UXTX): src_bits = 64; break; default: - if (dst_bits == v_bits) { - return v; - } else { - return adjust_unsigned(dst_bits, v); + break; + } + if (dst_bits < src_bits && src_bits <= v_bits) { + // Just cast it down once. + if (reg->code == RZ_IL_OP_CAST) { + // Already a casted down register. Set new width. + reg->op.cast.length = dst_bits; + return reg; } + return UNSIGNED(dst_bits, reg); } - - v = adjust_unsigned(src_bits, v); - return is_signed ? SIGNED(dst_bits, v) : UNSIGNED(dst_bits, v); + if (src_bits != v_bits) { + reg = adjust_unsigned(src_bits, reg); + } + if (dst_bits != src_bits) { + return is_signed ? SIGNED(dst_bits, reg) : UNSIGNED(dst_bits, reg); + } + return is_signed ? 
SIGNED(dst_bits, reg) : reg; } -static RzILOpBitVector *apply_shift(arm64_shifter sft, ut32 dist, RZ_OWN RzILOpBitVector *v) { +static RzILOpBitVector *apply_shift(CS_aarch64_shifter() sft, ut32 dist, RZ_OWN RzILOpBitVector *v) { if (!dist) { return v; } switch (sft) { - case ARM64_SFT_LSL: + case CS_AARCH64(_SFT_LSL): return SHIFTL0(v, UN(6, dist)); - case ARM64_SFT_LSR: + case CS_AARCH64(_SFT_LSR): return SHIFTR0(v, UN(6, dist)); - case ARM64_SFT_ASR: + case CS_AARCH64(_SFT_ASR): return SHIFTRA(v, UN(6, dist)); default: return v; @@ -329,13 +338,13 @@ static RzILOpBitVector *apply_shift(arm64_shifter sft, ut32 dist, RZ_OWN RzILOpB #define REG(n) read_reg(REGID(n)) #define REGBITS(n) reg_bits(REGID(n)) -#define MEMBASEID(x) insn->detail->arm64.operands[x].mem.base +#define MEMBASEID(x) insn->detail->CS_aarch64_.operands[x].mem.base #define MEMBASE(x) read_reg(MEMBASEID(x)) /** * IL to write a value to the given capstone reg */ -static RzILOpEffect *write_reg(arm64_reg reg, RZ_OWN RZ_NONNULL RzILOpBitVector *v) { +static RzILOpEffect *write_reg(CS_aarch64_reg() reg, RZ_OWN RZ_NONNULL RzILOpBitVector *v) { rz_return_val_if_fail(v, NULL); const char *var = reg_var_name(reg); if (!var) { @@ -348,12 +357,12 @@ static RzILOpEffect *write_reg(arm64_reg reg, RZ_OWN RZ_NONNULL RzILOpBitVector return SETG(var, v); } -static RzILOpBitVector *arg_mem(RzILOpBitVector *base_plus_disp, cs_arm64_op *op) { - if (op->mem.index == ARM64_REG_INVALID) { +static RzILOpBitVector *arg_mem(RzILOpBitVector *base_plus_disp, CS_aarch64_op() * op) { + if (op->mem.index == CS_AARCH64(_REG_INVALID)) { return base_plus_disp; } RzILOpBitVector *index = read_reg(op->mem.index); - index = extend(64, op->ext, index, reg_bits(op->mem.index)); + index = reg_extend(64, op->ext, index, reg_bits(op->mem.index)); index = apply_shift(op->shift.type, op->shift.value, index); return ADD(base_plus_disp, index); } @@ -364,11 +373,11 @@ static RzILOpBitVector *arg_mem(RzILOpBitVector *base_plus_disp, cs_arm64_op *op * This is necessary for immediate operands for example. * In any case, if a value is returned, its bitness is written back into this storage. */ -static RzILOpBitVector *arg(cs_insn *insn, size_t n, ut32 *bits_inout) { +static RzILOpBitVector *arg(RZ_BORROW cs_insn *insn, size_t n, RZ_OUT ut32 *bits_inout) { ut32 bits_requested = bits_inout ? 
*bits_inout : 0; - cs_arm64_op *op = &insn->detail->arm64.operands[n]; + CS_aarch64_op() *op = &insn->detail->CS_aarch64_.operands[n]; switch (op->type) { - case ARM64_OP_REG: { + case CS_AARCH64(_OP_REG): { if (!bits_requested) { bits_requested = REGBITS(n); if (!bits_requested) { @@ -382,27 +391,32 @@ static RzILOpBitVector *arg(cs_insn *insn, size_t n, ut32 *bits_inout) { if (!r) { return NULL; } - return apply_shift(op->shift.type, op->shift.value, extend(bits_requested, op->ext, r, REGBITS(n))); + return apply_shift(op->shift.type, op->shift.value, reg_extend(bits_requested, op->ext, r, REGBITS(n))); } - case ARM64_OP_IMM: { + case CS_AARCH64(_OP_IMM): { if (!bits_requested) { return NULL; } ut64 val = IMM(n); - if (op->shift.type == ARM64_SFT_LSL) { + if (op->shift.type == CS_AARCH64(_SFT_LSL)) { val <<= op->shift.value; } return UN(bits_requested, val); } - case ARM64_OP_MEM: { + case CS_AARCH64(_OP_MEM): { RzILOpBitVector *addr = MEMBASE(n); +#if CS_NEXT_VERSION >= 6 + if (ISPOSTINDEX64()) { + return addr; + } +#endif st64 disp = MEMDISP(n); if (disp > 0) { addr = ADD(addr, U64(disp)); } else if (disp < 0) { addr = SUB(addr, U64(-disp)); } - return arg_mem(addr, &insn->detail->arm64.operands[n]); + return arg_mem(addr, &insn->detail->CS_aarch64_.operands[n]); } default: break; @@ -436,16 +450,16 @@ static RzILOpEffect *update_flags_zn00(RzILOpBitVector *v) { } /** - * Capstone: ARM64_INS_ADD, ARM64_INS_ADC, ARM64_INS_SUB, ARM64_INS_SBC + * Capstone: CS_AARCH64(_INS_ADD), CS_AARCH64(_INS_ADC), CS_AARCH64(_INS_SUB), CS_AARCH64(_INS_SBC) * ARM: add, adds, adc, adcs, sub, subs, sbc, sbcs */ static RzILOpEffect *add_sub(cs_insn *insn) { if (!ISREG(0)) { return NULL; } - bool is_sub = insn->id == ARM64_INS_SUB || insn->id == ARM64_INS_SBC + bool is_sub = insn->id == CS_AARCH64(_INS_SUB) || insn->id == CS_AARCH64(_INS_SBC) #if CS_API_MAJOR > 4 - || insn->id == ARM64_INS_SUBS || insn->id == ARM64_INS_SBCS + || insn->id == CS_AARCH64(_INS_SUBS) || insn->id == CS_AARCH64(_INS_SBCS) #endif ; ut32 bits = REGBITS(0); @@ -461,23 +475,23 @@ static RzILOpEffect *add_sub(cs_insn *insn) { } RzILOpBitVector *res = is_sub ? 
SUB(a, b) : ADD(a, b); bool with_carry = false; - if (insn->id == ARM64_INS_ADC + if (insn->id == CS_AARCH64(_INS_ADC) #if CS_API_MAJOR > 4 - || insn->id == ARM64_INS_ADCS + || insn->id == CS_AARCH64(_INS_ADCS) #endif ) { res = ADD(res, ITE(VARG("cf"), UN(bits, 1), UN(bits, 0))); with_carry = true; - } else if (insn->id == ARM64_INS_SBC + } else if (insn->id == CS_AARCH64(_INS_SBC) #if CS_API_MAJOR > 4 - || insn->id == ARM64_INS_SBCS + || insn->id == CS_AARCH64(_INS_SBCS) #endif ) { res = SUB(res, ITE(VARG("cf"), UN(bits, 0), UN(bits, 1))); with_carry = true; } RzILOpEffect *set = write_reg(REGID(0), res); - bool update_flags = insn->detail->arm64.update_flags; + bool update_flags = insn->detail->CS_aarch64_.update_flags; if (update_flags) { return SEQ6( SETL("a", DUP(a)), @@ -491,7 +505,7 @@ } /** - * Capstone: ARM64_INS_ADR, ARM64_INS_ADRP + * Capstone: CS_AARCH64(_INS_ADR), CS_AARCH64(_INS_ADRP) * ARM: adr, adrp */ static RzILOpEffect *adr(cs_insn *insn) { @@ -502,7 +516,7 @@ } /** - * Capstone: ARM64_INS_AND, ARM64_INS_EON, ARM64_INS_EOR, ARM64_INS_ORN, ARM64_INS_AORR + * Capstone: CS_AARCH64(_INS_AND), CS_AARCH64(_INS_EON), CS_AARCH64(_INS_EOR), CS_AARCH64(_INS_ORN), CS_AARCH64(_INS_ORR) * ARM: and, eon, eor, orn, orr */ static RzILOpEffect *bitwise(cs_insn *insn) { @@ -522,19 +536,19 @@ } RzILOpBitVector *res; switch (insn->id) { - case ARM64_INS_EOR: + case CS_AARCH64(_INS_EOR): res = LOGXOR(a, b); break; - case ARM64_INS_EON: + case CS_AARCH64(_INS_EON): res = LOGXOR(a, LOGNOT(b)); break; - case ARM64_INS_ORN: + case CS_AARCH64(_INS_ORN): res = LOGOR(a, LOGNOT(b)); break; - case ARM64_INS_ORR: + case CS_AARCH64(_INS_ORR): res = LOGOR(a, b); break; - default: // ARM64_INS_AND + default: // CS_AARCH64(_INS_AND) res = LOGAND(a, b); break; } @@ -542,14 +556,14 @@ if (!eff) { return NULL; } - if (insn->detail->arm64.update_flags) { + if (insn->detail->CS_aarch64_.update_flags) { return SEQ2(eff, update_flags_zn00(REG(0))); } return eff; } /** - * Capstone: ARM64_INS_ASR, ARM64_INS_LSL, ARM64_INS_LSR, ARM64_INS_ROR + * Capstone: CS_AARCH64(_INS_ASR), CS_AARCH64(_INS_LSL), CS_AARCH64(_INS_LSR), CS_AARCH64(_INS_ROR) * ARM: asr, asrv, lsl, lslv, lsr, lsrv, ror, rorv */ static RzILOpEffect *shift(cs_insn *insn) { @@ -572,16 +586,25 @@ } RzILOpBitVector *res; switch (insn->id) { - case ARM64_INS_ASR: + case CS_AARCH64(_INS_ASR): res = SHIFTRA(a, b); break; - case ARM64_INS_LSR: + case CS_AARCH64(_INS_LSR): res = SHIFTR0(a, b); break; - case ARM64_INS_ROR: + case CS_AARCH64(_INS_ROR): res = LOGOR(SHIFTR0(a, b), SHIFTL0(DUP(a), NEG(DUP(b)))); break; - default: // ARM64_INS_LSL +#if CS_NEXT_VERSION >= 6 + case AArch64_INS_EXTR: + if (insn->alias_id != AArch64_INS_ALIAS_ROR) { + return NULL; + } + b = ARG(3, &bits); + res = LOGOR(SHIFTR0(a, b), SHIFTL0(DUP(a), NEG(DUP(b)))); + break; +#endif + default: // CS_AARCH64(_INS_LSL) res = SHIFTL0(a, b); break; } @@ -589,14 +612,14 @@ } /** - * Capstone: ARM64_INS_B, ARM64_INS_RET, ARM64_INS_RETAA, ARM64_INS_RETAB + * Capstone: CS_AARCH64(_INS_B), CS_AARCH64(_INS_RET), CS_AARCH64(_INS_RETAA), CS_AARCH64(_INS_RETAB) * ARM: b, b.cond, ret, retaa, retab */ static RzILOpEffect *branch(cs_insn *insn) { RzILOpBitVector *a; if (OPCOUNT() == 0) { - // for ARM64_INS_RET and similar - a = 
read_reg(ARM64_REG_LR); + // for CS_AARCH64(_INS_RET) and similar + a = read_reg(CS_AARCH64(_REG_LR)); } else { ut32 bits = 64; a = ARG(0, &bits); @@ -604,7 +627,7 @@ static RzILOpEffect *branch(cs_insn *insn) { if (!a) { return NULL; } - RzILOpBool *c = cond(insn->detail->arm64.cc); + RzILOpBool *c = cond(insn->detail->CS_aarch64_.cc); if (c) { return BRANCH(c, JMP(a), NOP()); } @@ -612,7 +635,7 @@ static RzILOpEffect *branch(cs_insn *insn) { } /** - * Capstone: ARM64_INS_BL, ARM64_INS_BLR, ARM64_INS_BLRAA, ARM64_INS_BLRAAZ, ARM64_INS_BLRAB, ARM64_INS_BLRABZ + * Capstone: CS_AARCH64(_INS_BL), CS_AARCH64(_INS_BLR), CS_AARCH64(_INS_BLRAA), CS_AARCH64(_INS_BLRAAZ), CS_AARCH64(_INS_BLRAB), CS_AARCH64(_INS_BLRABZ) * ARM: bl, blr, blraa, blraaz, blrab, blrabz */ static RzILOpEffect *bl(cs_insn *insn) { @@ -627,7 +650,7 @@ static RzILOpEffect *bl(cs_insn *insn) { } /** - * Capstone: ARM64_INS_BFM, ARM64_INS_BFI, ARM64_INS_BFXIL + * Capstone: CS_AARCH64(_INS_BFM), CS_AARCH64(_INS_BFI), CS_AARCH64(_INS_BFXIL) * ARM: bfm, bfc, bfi, bfxil */ static RzILOpEffect *bfm(cs_insn *insn) { @@ -648,17 +671,36 @@ static RzILOpEffect *bfm(cs_insn *insn) { if (!b) { return NULL; } +#if CS_NEXT_VERSION < 6 ut64 mask_base = rz_num_bitmask(IMM(3)); ut64 mask = mask_base << RZ_MIN(63, IMM(2)); - if (insn->id == ARM64_INS_BFI) { + if (insn->id == CS_AARCH64(_INS_BFI)) { return write_reg(REGID(0), LOGOR(LOGAND(a, UN(bits, ~mask)), SHIFTL0(LOGAND(b, UN(bits, mask_base)), UN(6, IMM(2))))); } - // insn->id == ARM64_INS_BFXIL + // insn->id == CS_AARCH64(_INS_BFXIL) return write_reg(REGID(0), LOGOR(LOGAND(a, UN(bits, ~mask_base)), SHIFTR0(LOGAND(b, UN(bits, mask)), UN(6, IMM(2))))); +#else + ut64 lsb = IMM(2); + ut64 width = IMM(3); + if (insn->alias_id == AArch64_INS_ALIAS_BFI) { + width += 1; + // TODO Mod depends on (sf && N) bits + lsb = -lsb % 32; + ut64 mask_base = rz_num_bitmask(width); + ut64 mask = mask_base << RZ_MIN(63, lsb); + return write_reg(REGID(0), LOGOR(LOGAND(a, UN(bits, ~mask)), SHIFTL0(LOGAND(b, UN(bits, mask_base)), UN(6, lsb)))); + } else if (insn->alias_id == AArch64_INS_ALIAS_BFXIL) { + width = width - lsb + 1; + ut64 mask_base = rz_num_bitmask(width); + ut64 mask = mask_base << RZ_MIN(63, lsb); + return write_reg(REGID(0), LOGOR(LOGAND(a, UN(bits, ~mask_base)), SHIFTR0(LOGAND(b, UN(bits, mask)), UN(6, lsb)))); + } + return NULL; +#endif } /** - * Capstone: ARM64_INS_BIC, ARM64_INS_BICS + * Capstone: CS_AARCH64(_INS_BIC), CS_AARCH64(_INS_BICS) * ARM: bic, bics */ static RzILOpEffect *bic(cs_insn *insn) { @@ -678,14 +720,14 @@ static RzILOpEffect *bic(cs_insn *insn) { } RzILOpBitVector *res = LOGAND(a, LOGNOT(b)); RzILOpEffect *eff = NULL; - if (REGID(0) != ARM64_REG_XZR && REGID(0) != ARM64_REG_WZR) { + if (REGID(0) != CS_AARCH64(_REG_XZR) && REGID(0) != CS_AARCH64(_REG_WZR)) { eff = write_reg(REGID(0), res); if (!eff) { return NULL; } res = NULL; } - if (insn->detail->arm64.update_flags) { + if (insn->detail->CS_aarch64_.update_flags) { RzILOpEffect *eff1 = update_flags_zn00(res ? res : REG(0)); return eff ? 
SEQ2(eff, eff1) : eff1; } @@ -697,9 +739,9 @@ static RzILOpEffect *bic(cs_insn *insn) { #if CS_API_MAJOR > 4 /** - * Capstone: ARM64_INS_CAS, ARM64_INS_CASA, ARM64_INS_CASAL, ARM64_INS_CASL, - * ARM64_INS_CASB, ARM64_INS_CASAB, ARM64_INS_CASALB, ARM64_INS_CASLB, - * ARM64_INS_CASH, ARM64_INS_CASAH, ARM64_INS_CASALH, ARM64_INS_CASLH: + * Capstone: CS_AARCH64(_INS_CAS), CS_AARCH64(_INS_CASA), CS_AARCH64(_INS_CASAL), CS_AARCH64(_INS_CASL), + * CS_AARCH64(_INS_CASB), CS_AARCH64(_INS_CASAB), CS_AARCH64(_INS_CASALB), CS_AARCH64(_INS_CASLB), + * CS_AARCH64(_INS_CASH), CS_AARCH64(_INS_CASAH), CS_AARCH64(_INS_CASALH), CS_AARCH64(_INS_CASLH): * ARM: cas, casa, casal, casl, casb, casab, casalb, caslb, cash, casah, casalh, caslh */ static RzILOpEffect *cas(cs_insn *insn) { @@ -711,16 +753,16 @@ static RzILOpEffect *cas(cs_insn *insn) { return NULL; } switch (insn->id) { - case ARM64_INS_CASB: - case ARM64_INS_CASAB: - case ARM64_INS_CASALB: - case ARM64_INS_CASLB: + case CS_AARCH64(_INS_CASB): + case CS_AARCH64(_INS_CASAB): + case CS_AARCH64(_INS_CASALB): + case CS_AARCH64(_INS_CASLB): bits = 8; break; - case ARM64_INS_CASH: - case ARM64_INS_CASAH: - case ARM64_INS_CASALH: - case ARM64_INS_CASLH: + case CS_AARCH64(_INS_CASH): + case CS_AARCH64(_INS_CASAH): + case CS_AARCH64(_INS_CASALH): + case CS_AARCH64(_INS_CASLH): bits = 16; break; default: @@ -744,7 +786,7 @@ static RzILOpEffect *cas(cs_insn *insn) { } /** - * Capstone: ARM64_INS_CASP, ARM64_INS_CASPA, ARM64_INS_CASPAL, ARM64_INS_CASPL + * Capstone: CS_AARCH64(_INS_CASP), CS_AARCH64(_INS_CASPA), CS_AARCH64(_INS_CASPAL), CS_AARCH64(_INS_CASPL) * ARM: casp, caspa, caspal, caspl */ static RzILOpEffect *casp(cs_insn *insn) { @@ -783,7 +825,7 @@ static RzILOpEffect *casp(cs_insn *insn) { #endif /** - * Capstone: ARM64_INS_CBZ, ARM64_INS_CBNZ + * Capstone: CS_AARCH64(_INS_CBZ), CS_AARCH64(_INS_CBNZ) * ARM: cbz, cbnz */ static RzILOpEffect *cbz(cs_insn *insn) { @@ -795,23 +837,42 @@ static RzILOpEffect *cbz(cs_insn *insn) { rz_il_op_pure_free(tgt); return NULL; } - return BRANCH(insn->id == ARM64_INS_CBNZ ? INV(IS_ZERO(v)) : IS_ZERO(v), JMP(tgt), NULL); + return BRANCH(insn->id == CS_AARCH64(_INS_CBNZ) ? INV(IS_ZERO(v)) : IS_ZERO(v), JMP(tgt), NULL); } /** - * Capstone: ARM64_INS_CMP, ARM64_INS_CMN, ARM64_INS_CCMP, ARM64_INS_CCMN + * Capstone: CS_AARCH64(_INS_CMP), CS_AARCH64(_INS_CMN), CS_AARCH64(_INS_CCMP), CS_AARCH64(_INS_CCMN) * ARM: cmp, cmn, ccmp, ccmn */ static RzILOpEffect *cmp(cs_insn *insn) { ut32 bits = 0; +#if CS_NEXT_VERSION < 6 RzILOpBitVector *a = ARG(0, &bits); RzILOpBitVector *b = ARG(1, &bits); + +#else + RzILOpBitVector *a; + RzILOpBitVector *b; + if (insn->alias_id == AArch64_INS_ALIAS_CMP || + insn->alias_id == AArch64_INS_ALIAS_CMN) { + // Reg at 0 is zero register + a = ARG(1, &bits); + b = ARG(2, &bits); + } else { + a = ARG(0, &bits); + b = ARG(1, &bits); + } +#endif if (!a || !b) { rz_il_op_pure_free(a); rz_il_op_pure_free(b); return NULL; } - bool is_neg = insn->id == ARM64_INS_CMN || insn->id == ARM64_INS_CCMN; +#if CS_NEXT_VERSION < 6 + bool is_neg = insn->id == CS_AARCH64(_INS_CMN) || insn->id == CS_AARCH64(_INS_CCMN); +#else + bool is_neg = insn->alias_id == AArch64_INS_ALIAS_CMN || insn->id == CS_AARCH64(_INS_CCMN); +#endif RzILOpEffect *eff = SEQ6( SETL("a", a), SETL("b", b), @@ -819,7 +880,7 @@ static RzILOpEffect *cmp(cs_insn *insn) { SETG("cf", (is_neg ? add_carry : sub_carry)(VARL("a"), VARL("b"), false, bits)), SETG("vf", (is_neg ? 
add_overflow : sub_overflow)(VARL("a"), VARL("b"), VARL("r"))), update_flags_zn(VARL("r"))); - RzILOpBool *c = cond(insn->detail->arm64.cc); + RzILOpBool *c = cond(insn->detail->CS_aarch64_.cc); if (c) { ut64 imm = IMM(2); return BRANCH(c, @@ -834,7 +895,7 @@ static RzILOpEffect *cmp(cs_insn *insn) { } /** - * Capstone: ARM64_INS_CINC, ARM64_INS_CSINC, ARM64_INS_CINV, ARM64_INS_CSINV, ARM64_INS_CNEG, ARM64_INS_CSNEG, ARM64_INS_CSEL + * Capstone: CS_AARCH64(_INS_CINC), CS_AARCH64(_INS_CSINC), CS_AARCH64(_INS_CINV), CS_AARCH64(_INS_CSINV), CS_AARCH64(_INS_CNEG), CS_AARCH64(_INS_CSNEG), CS_AARCH64(_INS_CSEL) * ARM: cinc, csinc, cinv, csinv, cneg, csneg, csel */ static RzILOpEffect *csinc(cs_insn *insn) { @@ -852,7 +913,19 @@ static RzILOpEffect *csinc(cs_insn *insn) { if (!src0) { return NULL; } - RzILOpBool *c = cond(insn->detail->arm64.cc); +#if CS_NEXT_VERSION < 6 + RzILOpBool *c = cond(insn->detail->CS_aarch64_.cc); +#else + AArch64CC_CondCode cc; + if (insn->alias_id == AArch64_INS_ALIAS_CINV || + insn->alias_id == AArch64_INS_ALIAS_CNEG || + insn->alias_id == AArch64_INS_ALIAS_CINC) { + cc = AArch64CC_getInvertedCondCode(insn->detail->CS_aarch64_.cc); + } else { + cc = insn->detail->CS_aarch64_.cc; + } + RzILOpBool *c = cond(cc); +#endif if (!c) { // al/nv conditions, only possible in cs(inc|inv|neg) return write_reg(REGID(dst_idx), src0); @@ -866,26 +939,45 @@ static RzILOpEffect *csinc(cs_insn *insn) { RzILOpBitVector *res; bool invert_cond = false; switch (insn->id) { - case ARM64_INS_CSEL: + case CS_AARCH64(_INS_CSEL): invert_cond = true; res = src1; break; - case ARM64_INS_CSINV: +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_CSINV): invert_cond = true; // fallthrough - case ARM64_INS_CINV: + case CS_AARCH64(_INS_CINV): res = LOGNOT(src1); break; - case ARM64_INS_CSNEG: + case CS_AARCH64(_INS_CSNEG): invert_cond = true; // fallthrough - case ARM64_INS_CNEG: + case CS_AARCH64(_INS_CNEG): res = NEG(src1); break; - case ARM64_INS_CSINC: + case CS_AARCH64(_INS_CSINC): invert_cond = true; +#else + case CS_AARCH64(_INS_CSINV): + if (!insn->is_alias) { + invert_cond = true; + } + res = LOGNOT(src1); + break; + case CS_AARCH64(_INS_CSNEG): + if (!insn->is_alias) { + invert_cond = true; + } + res = NEG(src1); + break; + case CS_AARCH64(_INS_CSINC): + if (!insn->is_alias) { + invert_cond = true; + } +#endif // fallthrough - default: // ARM64_INS_CINC, ARM64_INS_CSINC + default: // CS_AARCH64(_INS_CINC), CS_AARCH64(_INS_CSINC) res = ADD(src1, UN(bits, 1)); break; } @@ -893,23 +985,37 @@ static RzILOpEffect *csinc(cs_insn *insn) { } /** - * Capstone: ARM64_INS_CSET, ARM64_INS_CSETM + * Capstone: CS_AARCH64(_INS_CSET), CS_AARCH64(_INS_CSETM) * ARM: cset, csetm */ static RzILOpEffect *cset(cs_insn *insn) { if (!ISREG(0) || !REGBITS(0)) { return NULL; } - RzILOpBool *c = cond(insn->detail->arm64.cc); + RzILOpBool *c = NULL; +#if CS_NEXT_VERSION < 6 + c = cond(insn->detail->CS_aarch64_.cc); +#else + if (insn->alias_id == AArch64_INS_ALIAS_CSET || + insn->alias_id == AArch64_INS_ALIAS_CSETM) { + c = cond(AArch64CC_getInvertedCondCode(insn->detail->CS_aarch64_.cc)); + } else { + c = cond(insn->detail->CS_aarch64_.cc); + } +#endif if (!c) { return NULL; } ut32 bits = REGBITS(0); - return write_reg(REGID(0), ITE(c, SN(bits, insn->id == ARM64_INS_CSETM ? -1 : 1), SN(bits, 0))); +#if CS_NEXT_VERSION < 6 + return write_reg(REGID(0), ITE(c, SN(bits, insn->id == CS_AARCH64(_INS_CSETM) ? -1 : 1), SN(bits, 0))); +#else + return write_reg(REGID(0), ITE(c, SN(bits, insn->alias_id == AArch64_INS_ALIAS_CSETM ? 
-1 : 1), SN(bits, 0))); +#endif } /** - * Capstone: ARM64_INS_CLS + * Capstone: CS_AARCH64(_INS_CLS) * ARM: cls */ static RzILOpEffect *cls(cs_insn *insn) { @@ -933,7 +1039,7 @@ static RzILOpEffect *cls(cs_insn *insn) { } /** - * Capstone: ARM64_INS_CLZ + * Capstone: CS_AARCH64(_INS_CLZ) * ARM: clz */ static RzILOpEffect *clz(cs_insn *insn) { @@ -956,7 +1062,7 @@ static RzILOpEffect *clz(cs_insn *insn) { } /** - * Capstone: ARM64_INS_EXTR + * Capstone: CS_AARCH64(_INS_EXTR) * ARM: extr */ static RzILOpEffect *extr(cs_insn *insn) { @@ -993,7 +1099,7 @@ static void label_svc(RzILVM *vm, RzILOpEffect *op) { } /** - * Capstone: ARM64_INS_HVC + * Capstone: CS_AARCH64(_INS_HVC) * ARM: hvc */ static RzILOpEffect *hvc(cs_insn *insn) { @@ -1004,7 +1110,7 @@ static void label_hvc(RzILVM *vm, RzILOpEffect *op) { // stub, nothing to do here } -static RzILOpEffect *load_effect(ut32 bits, bool is_signed, arm64_reg dst_reg, RZ_OWN RzILOpBitVector *addr) { +static RzILOpEffect *load_effect(ut32 bits, bool is_signed, CS_aarch64_reg() dst_reg, RZ_OWN RzILOpBitVector *addr) { RzILOpBitVector *val = bits == 8 ? LOAD(addr) : LOADW(bits, addr); if (bits != 64) { if (is_signed) { @@ -1022,13 +1128,17 @@ static RzILOpEffect *load_effect(ut32 bits, bool is_signed, arm64_reg dst_reg, R } static RzILOpEffect *writeback(cs_insn *insn, size_t addr_op, RZ_BORROW RzILOpBitVector *addr) { - if (!insn->detail->arm64.writeback || !is_xreg(MEMBASEID(addr_op))) { +#if CS_NEXT_VERSION < 6 + if (!insn->detail->CS_aarch64_.writeback || !is_xreg(MEMBASEID(addr_op))) { +#else + if (!insn->detail->writeback || !is_xreg(MEMBASEID(addr_op))) { +#endif return NULL; } RzILOpBitVector *wbaddr = DUP(addr); - if (ISIMM(addr_op + 1)) { + if (ISPOSTINDEX64()) { // post-index - st64 disp = IMM(addr_op + 1); + st64 disp = MEMDISP(addr_op); if (disp > 0) { wbaddr = ADD(wbaddr, U64(disp)); } else if (disp < 0) { @@ -1039,16 +1149,16 @@ static RzILOpEffect *writeback(cs_insn *insn, size_t addr_op, RZ_BORROW RzILOpBi } /** - * Capstone: ARM64_INS_LDR, ARM64_INS_LDRB, ARM64_INS_LDRH, ARM64_INS_LDRU, ARM64_INS_LDRUB, ARM64_INS_LDRUH, - * ARM64_INS_LDRSW, ARM64_INS_LDRSB, ARM64_INS_LDRSH, ARM64_INS_LDURSW, ARM64_INS_LDURSB, ARM64_INS_LDURSH, - * ARM64_INS_LDAPR, ARM64_INS_LDAPRB, ARM64_INS_LDAPRH, ARM64_INS_LDAPUR, ARM64_INS_LDAPURB, ARM64_INS_LDAPURH, - * ARM64_INS_LDAPURSB, ARM64_INS_LDAPURSH, ARM64_INS_LDAPURSW, ARM64_INS_LDAR, ARM64_INS_LDARB, ARM64_INS_LDARH, - * ARM64_INS_LDAXP, ARM64_INS_LDXP, ARM64_INS_LDAXR, ARM64_INS_LDAXRB, ARM64_INS_LDAXRH, - * ARM64_INS_LDLAR, ARM64_INS_LDLARB, ARM64_INS_LDLARH, - * ARM64_INS_LDP, ARM64_INS_LDNP, ARM64_INS_LDPSW, - * ARM64_INS_LDRAA, ARM64_INS_LDRAB, - * ARM64_INS_LDTR, ARM64_INS_LDTRB, ARM64_INS_LDTRH, ARM64_INS_LDTRSW, ARM64_INS_LDTRSB, ARM64_INS_LDTRSH, - * ARM64_INS_LDXR, ARM64_INS_LDXRB, ARM64_INS_LDXRH + * Capstone: CS_AARCH64(_INS_LDR), CS_AARCH64(_INS_LDRB), CS_AARCH64(_INS_LDRH), CS_AARCH64(_INS_LDRU), CS_AARCH64(_INS_LDRUB), CS_AARCH64(_INS_LDRUH), + * CS_AARCH64(_INS_LDRSW), CS_AARCH64(_INS_LDRSB), CS_AARCH64(_INS_LDRSH), CS_AARCH64(_INS_LDURSW), CS_AARCH64(_INS_LDURSB), CS_AARCH64(_INS_LDURSH), + * CS_AARCH64(_INS_LDAPR), CS_AARCH64(_INS_LDAPRB), CS_AARCH64(_INS_LDAPRH), CS_AARCH64(_INS_LDAPUR), CS_AARCH64(_INS_LDAPURB), CS_AARCH64(_INS_LDAPURH), + * CS_AARCH64(_INS_LDAPURSB), CS_AARCH64(_INS_LDAPURSH), CS_AARCH64(_INS_LDAPURSW), CS_AARCH64(_INS_LDAR), CS_AARCH64(_INS_LDARB), CS_AARCH64(_INS_LDARH), + * CS_AARCH64(_INS_LDAXP), CS_AARCH64(_INS_LDXP), CS_AARCH64(_INS_LDAXR), 
CS_AARCH64(_INS_LDAXRB), CS_AARCH64(_INS_LDAXRH), + * CS_AARCH64(_INS_LDLAR), CS_AARCH64(_INS_LDLARB), CS_AARCH64(_INS_LDLARH), + * CS_AARCH64(_INS_LDP), CS_AARCH64(_INS_LDNP), CS_AARCH64(_INS_LDPSW), + * CS_AARCH64(_INS_LDRAA), CS_AARCH64(_INS_LDRAB), + * CS_AARCH64(_INS_LDTR), CS_AARCH64(_INS_LDTRB), CS_AARCH64(_INS_LDTRH), CS_AARCH64(_INS_LDTRSW), CS_AARCH64(_INS_LDTRSB), CS_AARCH64(_INS_LDTRSH), + * CS_AARCH64(_INS_LDXR), CS_AARCH64(_INS_LDXRB), CS_AARCH64(_INS_LDXRH) * ARM: ldr, ldrb, ldrh, ldru, ldrub, ldruh, ldrsw, ldrsb, ldrsh, ldursw, ldurwb, ldursh, * ldapr, ldaprb, ldaprh, ldapur, ldapurb, ldapurh, ldapursb, ldapursh, ldapursw, * ldaxp, ldxp, ldaxr, ldaxrb, ldaxrh, ldar, ldarb, ldarh, @@ -1059,8 +1169,8 @@ static RzILOpEffect *ldr(cs_insn *insn) { if (!ISREG(0)) { return NULL; } - bool pair = insn->id == ARM64_INS_LDAXP || insn->id == ARM64_INS_LDXP || - insn->id == ARM64_INS_LDP || insn->id == ARM64_INS_LDNP || insn->id == ARM64_INS_LDPSW; + bool pair = insn->id == CS_AARCH64(_INS_LDAXP) || insn->id == CS_AARCH64(_INS_LDXP) || + insn->id == CS_AARCH64(_INS_LDP) || insn->id == CS_AARCH64(_INS_LDNP) || insn->id == CS_AARCH64(_INS_LDPSW); if (pair && !ISREG(1)) { return NULL; } @@ -1070,65 +1180,65 @@ static RzILOpEffect *ldr(cs_insn *insn) { if (!addr) { return NULL; } - arm64_reg dst_reg = REGID(0); + CS_aarch64_reg() dst_reg = REGID(0); ut64 loadsz; bool is_signed = false; switch (insn->id) { - case ARM64_INS_LDRSB: - case ARM64_INS_LDURSB: - case ARM64_INS_LDTRSB: + case CS_AARCH64(_INS_LDRSB): + case CS_AARCH64(_INS_LDURSB): + case CS_AARCH64(_INS_LDTRSB): #if CS_API_MAJOR > 4 - case ARM64_INS_LDAPURSB: + case CS_AARCH64(_INS_LDAPURSB): #endif is_signed = true; // fallthrough - case ARM64_INS_LDRB: - case ARM64_INS_LDURB: - case ARM64_INS_LDARB: - case ARM64_INS_LDAXRB: - case ARM64_INS_LDTRB: - case ARM64_INS_LDXRB: + case CS_AARCH64(_INS_LDRB): + case CS_AARCH64(_INS_LDURB): + case CS_AARCH64(_INS_LDARB): + case CS_AARCH64(_INS_LDAXRB): + case CS_AARCH64(_INS_LDTRB): + case CS_AARCH64(_INS_LDXRB): #if CS_API_MAJOR > 4 - case ARM64_INS_LDLARB: - case ARM64_INS_LDAPRB: - case ARM64_INS_LDAPURB: + case CS_AARCH64(_INS_LDLARB): + case CS_AARCH64(_INS_LDAPRB): + case CS_AARCH64(_INS_LDAPURB): #endif loadsz = 8; break; - case ARM64_INS_LDRSH: - case ARM64_INS_LDURSH: - case ARM64_INS_LDTRSH: + case CS_AARCH64(_INS_LDRSH): + case CS_AARCH64(_INS_LDURSH): + case CS_AARCH64(_INS_LDTRSH): #if CS_API_MAJOR > 4 - case ARM64_INS_LDAPURSH: + case CS_AARCH64(_INS_LDAPURSH): #endif is_signed = true; // fallthrough - case ARM64_INS_LDRH: - case ARM64_INS_LDURH: - case ARM64_INS_LDARH: - case ARM64_INS_LDAXRH: - case ARM64_INS_LDTRH: - case ARM64_INS_LDXRH: + case CS_AARCH64(_INS_LDRH): + case CS_AARCH64(_INS_LDURH): + case CS_AARCH64(_INS_LDARH): + case CS_AARCH64(_INS_LDAXRH): + case CS_AARCH64(_INS_LDTRH): + case CS_AARCH64(_INS_LDXRH): #if CS_API_MAJOR > 4 - case ARM64_INS_LDAPRH: - case ARM64_INS_LDAPURH: - case ARM64_INS_LDLARH: + case CS_AARCH64(_INS_LDAPRH): + case CS_AARCH64(_INS_LDAPURH): + case CS_AARCH64(_INS_LDLARH): #endif loadsz = 16; break; - case ARM64_INS_LDRSW: - case ARM64_INS_LDURSW: - case ARM64_INS_LDPSW: - case ARM64_INS_LDTRSW: + case CS_AARCH64(_INS_LDRSW): + case CS_AARCH64(_INS_LDURSW): + case CS_AARCH64(_INS_LDPSW): + case CS_AARCH64(_INS_LDTRSW): #if CS_API_MAJOR > 4 - case ARM64_INS_LDAPURSW: + case CS_AARCH64(_INS_LDAPURSW): #endif is_signed = true; loadsz = 32; break; default: - // ARM64_INS_LDR, ARM64_INS_LDRU, ARM64_INS_LDAPR, ARM64_INS_LDAPUR, 
ARM64_INS_LDAR, ARM64_INS_LDAXR, ARM64_INS_LDLAR, - // ARM64_INS_LDP, ARM64_INS_LDNP, ARM64_INS_LDRAA, ARM64_INS_LDRAB, ARM64_INS_LDTR, ARM64_INS_LDXR + // CS_AARCH64(_INS_LDR), CS_AARCH64(_INS_LDRU), CS_AARCH64(_INS_LDAPR), CS_AARCH64(_INS_LDAPUR), CS_AARCH64(_INS_LDAR), CS_AARCH64(_INS_LDAXR), CS_AARCH64(_INS_LDLAR), + // CS_AARCH64(_INS_LDP), CS_AARCH64(_INS_LDNP), CS_AARCH64(_INS_LDRAA), CS_AARCH64(_INS_LDRAB), CS_AARCH64(_INS_LDTR), CS_AARCH64(_INS_LDXR) loadsz = is_wreg(dst_reg) ? 32 : 64; break; } @@ -1158,11 +1268,11 @@ static RzILOpEffect *ldr(cs_insn *insn) { } /** - * Capstone: ARM64_INS_STR, ARM64_INS_STUR, ARM64_INS_STRB, ARM64_INS_STURB, ARM64_INS_STRH, ARM64_INS_STURH, - * ARM64_INS_STLLR, ARM64_INS_STLLRB, ARM64_INS_STLLRH, ARM64_INS_STLR, ARM64_INS_STLRB, ARM64_INS_STLRH, - * ARM64_INS_STLUR, ARM64_INS_STLURB, ARM64_INS_STLURH, ARM64_INS_STP, ARM64_INS_STXR, ARM64_INS_STXRB, - * ARM64_INS_STXRH, ARM64_INS_STXP, ARM64_INS_STLXR, ARM64_INS_STLXRB. ARM64_INS_STLXRH, ARM64_INS_STLXP, - * ARM64_INS_STNP, ARM64_INS_STTR, ARM64_INS_STTRB, ARM64_INS_STTRH + * Capstone: CS_AARCH64(_INS_STR), CS_AARCH64(_INS_STUR), CS_AARCH64(_INS_STRB), CS_AARCH64(_INS_STURB), CS_AARCH64(_INS_STRH), CS_AARCH64(_INS_STURH), + * CS_AARCH64(_INS_STLLR), CS_AARCH64(_INS_STLLRB), CS_AARCH64(_INS_STLLRH), CS_AARCH64(_INS_STLR), CS_AARCH64(_INS_STLRB), CS_AARCH64(_INS_STLRH), + * CS_AARCH64(_INS_STLUR), CS_AARCH64(_INS_STLURB), CS_AARCH64(_INS_STLURH), CS_AARCH64(_INS_STP), CS_AARCH64(_INS_STXR), CS_AARCH64(_INS_STXRB), + * CS_AARCH64(_INS_STXRH), CS_AARCH64(_INS_STXP), CS_AARCH64(_INS_STLXR), CS_AARCH64(_INS_STLXRB), CS_AARCH64(_INS_STLXRH), CS_AARCH64(_INS_STLXP), + * CS_AARCH64(_INS_STNP), CS_AARCH64(_INS_STTR), CS_AARCH64(_INS_STTRB), CS_AARCH64(_INS_STTRH) * ARM: str, stur, strb, sturb, strh, sturh, stllr, stllrb, stllrh, stlr, stlrb, stlrh, stlur, stlurb, stlurh, stp, stxr, stxrb, * stxrh, stxp, stlxr, stlxrb. stlxrh, stlxp, stnp, sttr, sttrb, sttrh */ @@ -1170,9 +1280,9 @@ static RzILOpEffect *str(cs_insn *insn) { if (!ISREG(0) || !REGBITS(0)) { return NULL; } - bool result = insn->id == ARM64_INS_STXR || insn->id == ARM64_INS_STXRB || insn->id == ARM64_INS_STXRH || insn->id == ARM64_INS_STXP || - insn->id == ARM64_INS_STLXR || insn->id == ARM64_INS_STLXRB || insn->id == ARM64_INS_STLXRH || insn->id == ARM64_INS_STLXP; - bool pair = insn->id == ARM64_INS_STP || insn->id == ARM64_INS_STNP || insn->id == ARM64_INS_STXP || insn->id == ARM64_INS_STLXP; + bool result = insn->id == CS_AARCH64(_INS_STXR) || insn->id == CS_AARCH64(_INS_STXRB) || insn->id == CS_AARCH64(_INS_STXRH) || insn->id == CS_AARCH64(_INS_STXP) || + insn->id == CS_AARCH64(_INS_STLXR) || insn->id == CS_AARCH64(_INS_STLXRB) || insn->id == CS_AARCH64(_INS_STLXRH) || insn->id == CS_AARCH64(_INS_STLXP); + bool pair = insn->id == CS_AARCH64(_INS_STP) || insn->id == CS_AARCH64(_INS_STNP) || insn->id == CS_AARCH64(_INS_STXP) || insn->id == CS_AARCH64(_INS_STLXP); size_t src_op = result ? 1 : 0; size_t addr_op = (result ? 1 : 0) + 1 + (pair ? 
1 : 0); ut32 addr_bits = 64; @@ -1182,33 +1292,33 @@ static RzILOpEffect *str(cs_insn *insn) { } ut32 bits; switch (insn->id) { - case ARM64_INS_STRB: - case ARM64_INS_STURB: - case ARM64_INS_STLRB: - case ARM64_INS_STXRB: - case ARM64_INS_STLXRB: - case ARM64_INS_STTRB: + case CS_AARCH64(_INS_STRB): + case CS_AARCH64(_INS_STURB): + case CS_AARCH64(_INS_STLRB): + case CS_AARCH64(_INS_STXRB): + case CS_AARCH64(_INS_STLXRB): + case CS_AARCH64(_INS_STTRB): #if CS_API_MAJOR > 4 - case ARM64_INS_STLLRB: - case ARM64_INS_STLURB: + case CS_AARCH64(_INS_STLLRB): + case CS_AARCH64(_INS_STLURB): #endif bits = 8; break; - case ARM64_INS_STRH: - case ARM64_INS_STURH: - case ARM64_INS_STLRH: - case ARM64_INS_STXRH: - case ARM64_INS_STLXRH: - case ARM64_INS_STTRH: + case CS_AARCH64(_INS_STRH): + case CS_AARCH64(_INS_STURH): + case CS_AARCH64(_INS_STLRH): + case CS_AARCH64(_INS_STXRH): + case CS_AARCH64(_INS_STLXRH): + case CS_AARCH64(_INS_STTRH): #if CS_API_MAJOR > 4 - case ARM64_INS_STLLRH: - case ARM64_INS_STLURH: + case CS_AARCH64(_INS_STLLRH): + case CS_AARCH64(_INS_STLURH): #endif bits = 16; break; default: - // ARM64_INS_STR, ARM64_INS_STUR, ARM64_INS_STLLR, ARM64_INS_STLR, ARM64_INS_STLUR, ARM64_INS_STP, - // ARM64_INS_STXR, ARM64_INS_STXP, ARM64_INS_STLXR, ARM64_INS_STLXP, ARM64_INS_STNP, ARM64_INS_STTR + // CS_AARCH64(_INS_STR), CS_AARCH64(_INS_STUR), CS_AARCH64(_INS_STLLR), CS_AARCH64(_INS_STLR), CS_AARCH64(_INS_STLUR), CS_AARCH64(_INS_STP), + // CS_AARCH64(_INS_STXR), CS_AARCH64(_INS_STXP), CS_AARCH64(_INS_STLXR), CS_AARCH64(_INS_STLXP), CS_AARCH64(_INS_STNP), CS_AARCH64(_INS_STTR) bits = REGBITS(src_op); if (!bits) { rz_il_op_pure_free(addr); @@ -1253,34 +1363,34 @@ static RzILOpEffect *str(cs_insn *insn) { #if CS_API_MAJOR > 4 /** - * Capstone: ARM64_INS_LDADD, ARM64_INS_LDADDA, ARM64_INS_LDADDAL, ARM64_INS_LDADDL, - * ARM64_INS_LDADDB, ARM64_INS_LDADDAB, ARM64_INS_LDADDALB, ARM64_INS_LDADDLB, - * ARM64_INS_LDADDH, ARM64_INS_LDADDAH, ARM64_INS_LDADDALH, ARM64_INS_LDADDLH, - * ARM64_INS_STADD, ARM64_INS_STADDL, ARM64_INS_STADDB, ARM64_INS_STADDLB, ARM64_INS_STADDH, ARM64_INS_STADDLH, - * ARM64_INS_LDCLRB, ARM64_INS_LDCLRAB, ARM64_INS_LDCLRALB, ARM64_INS_LDCLRLB, - * ARM64_INS_LDCLRH, ARM64_INS_LDCLRAH, ARM64_INS_LDCLRALH, ARM64_INS_LDCLRLH - * ARM64_INS_LDCLR, ARM64_INS_LDCLRA, ARM64_INS_LDCLRAL, ARM64_INS_LDCLRL, - * ARM64_INS_STSETB, ARM64_INS_STSETLB, ARM64_INS_STSETH, ARM64_INS_STSETLH, ARM64_INS_STSET, ARM64_INS_STSETL, - * ARM64_INS_LDSETB, ARM64_INS_LDSETAB, ARM64_INS_LDSETALB, ARM64_INS_LDSETLB, - * ARM64_INS_LDSETH, ARM64_INS_LDSETAH, ARM64_INS_LDSETALH, ARM64_INS_LDSETLH - * ARM64_INS_LDSET, ARM64_INS_LDSETA, ARM64_INS_LDSETAL, ARM64_INS_LDSETL, - * ARM64_INS_STSETB, ARM64_INS_STSETLB, ARM64_INS_STSETH, ARM64_INS_STSETLH, ARM64_INS_STSET, ARM64_INS_STSETL, - * ARM64_INS_LDSMAXB, ARM64_INS_LDSMAXAB, ARM64_INS_LDSMAXALB, ARM64_INS_LDSMAXLB, - * ARM64_INS_LDSMAXH, ARM64_INS_LDSMAXAH, ARM64_INS_LDSMAXALH, ARM64_INS_LDSMAXLH - * ARM64_INS_LDSMAX, ARM64_INS_LDSMAXA, ARM64_INS_LDSMAXAL, ARM64_INS_LDSMAXL, - * ARM64_INS_STSMAXB, ARM64_INS_STSMAXLB, ARM64_INS_STSMAXH, ARM64_INS_STSMAXLH, ARM64_INS_STSMAX, ARM64_INS_STSMAXL, - * ARM64_INS_LDSMINB, ARM64_INS_LDSMINAB, ARM64_INS_LDSMINALB, ARM64_INS_LDSMINLB, - * ARM64_INS_LDSMINH, ARM64_INS_LDSMINAH, ARM64_INS_LDSMINALH, ARM64_INS_LDSMINLH - * ARM64_INS_LDSMIN, ARM64_INS_LDSMINA, ARM64_INS_LDSMINAL, ARM64_INS_LDSMINL, - * ARM64_INS_STSMINB, ARM64_INS_STSMINLB, ARM64_INS_STSMINH, ARM64_INS_STSMINLH, ARM64_INS_STSMIN, ARM64_INS_STSMINL, - * 
ARM64_INS_LDUMAXB, ARM64_INS_LDUMAXAB, ARM64_INS_LDUMAXALB, ARM64_INS_LDUMAXLB, - * ARM64_INS_LDUMAXH, ARM64_INS_LDUMAXAH, ARM64_INS_LDUMAXALH, ARM64_INS_LDUMAXLH - * ARM64_INS_LDUMAX, ARM64_INS_LDUMAXA, ARM64_INS_LDUMAXAL, ARM64_INS_LDUMAXL, - * ARM64_INS_STUMAXB, ARM64_INS_STUMAXLB, ARM64_INS_STUMAXH, ARM64_INS_STUMAXLH, ARM64_INS_STUMAX, ARM64_INS_STUMAXL, - * ARM64_INS_LDUMINB, ARM64_INS_LDUMINAB, ARM64_INS_LDUMINALB, ARM64_INS_LDUMINLB, - * ARM64_INS_LDUMINH, ARM64_INS_LDUMINAH, ARM64_INS_LDUMINALH, ARM64_INS_LDUMINLH - * ARM64_INS_LDUMIN, ARM64_INS_LDUMINA, ARM64_INS_LDUMINAL, ARM64_INS_LDUMINL, - * ARM64_INS_STUMINB, ARM64_INS_STUMINLB, ARM64_INS_STUMINH, ARM64_INS_STUMINLH, ARM64_INS_STUMIN, ARM64_INS_STUMINL + * Capstone: CS_AARCH64(_INS_LDADD), CS_AARCH64(_INS_LDADDA), CS_AARCH64(_INS_LDADDAL), CS_AARCH64(_INS_LDADDL), + * CS_AARCH64(_INS_LDADDB), CS_AARCH64(_INS_LDADDAB), CS_AARCH64(_INS_LDADDALB), CS_AARCH64(_INS_LDADDLB), + * CS_AARCH64(_INS_LDADDH), CS_AARCH64(_INS_LDADDAH), CS_AARCH64(_INS_LDADDALH), CS_AARCH64(_INS_LDADDLH), + * CS_AARCH64(_INS_STADD), CS_AARCH64(_INS_STADDL), CS_AARCH64(_INS_STADDB), CS_AARCH64(_INS_STADDLB), CS_AARCH64(_INS_STADDH), CS_AARCH64(_INS_STADDLH), + * CS_AARCH64(_INS_LDCLRB), CS_AARCH64(_INS_LDCLRAB), CS_AARCH64(_INS_LDCLRALB), CS_AARCH64(_INS_LDCLRLB), + * CS_AARCH64(_INS_LDCLRH), CS_AARCH64(_INS_LDCLRAH), CS_AARCH64(_INS_LDCLRALH), CS_AARCH64(_INS_LDCLRLH) + * CS_AARCH64(_INS_LDCLR), CS_AARCH64(_INS_LDCLRA), CS_AARCH64(_INS_LDCLRAL), CS_AARCH64(_INS_LDCLRL), + * CS_AARCH64(_INS_STSETB), CS_AARCH64(_INS_STSETLB), CS_AARCH64(_INS_STSETH), CS_AARCH64(_INS_STSETLH), CS_AARCH64(_INS_STSET), CS_AARCH64(_INS_STSETL), + * CS_AARCH64(_INS_LDSETB), CS_AARCH64(_INS_LDSETAB), CS_AARCH64(_INS_LDSETALB), CS_AARCH64(_INS_LDSETLB), + * CS_AARCH64(_INS_LDSETH), CS_AARCH64(_INS_LDSETAH), CS_AARCH64(_INS_LDSETALH), CS_AARCH64(_INS_LDSETLH) + * CS_AARCH64(_INS_LDSET), CS_AARCH64(_INS_LDSETA), CS_AARCH64(_INS_LDSETAL), CS_AARCH64(_INS_LDSETL), + * CS_AARCH64(_INS_STSETB), CS_AARCH64(_INS_STSETLB), CS_AARCH64(_INS_STSETH), CS_AARCH64(_INS_STSETLH), CS_AARCH64(_INS_STSET), CS_AARCH64(_INS_STSETL), + * CS_AARCH64(_INS_LDSMAXB), CS_AARCH64(_INS_LDSMAXAB), CS_AARCH64(_INS_LDSMAXALB), CS_AARCH64(_INS_LDSMAXLB), + * CS_AARCH64(_INS_LDSMAXH), CS_AARCH64(_INS_LDSMAXAH), CS_AARCH64(_INS_LDSMAXALH), CS_AARCH64(_INS_LDSMAXLH) + * CS_AARCH64(_INS_LDSMAX), CS_AARCH64(_INS_LDSMAXA), CS_AARCH64(_INS_LDSMAXAL), CS_AARCH64(_INS_LDSMAXL), + * CS_AARCH64(_INS_STSMAXB), CS_AARCH64(_INS_STSMAXLB), CS_AARCH64(_INS_STSMAXH), CS_AARCH64(_INS_STSMAXLH), CS_AARCH64(_INS_STSMAX), CS_AARCH64(_INS_STSMAXL), + * CS_AARCH64(_INS_LDSMINB), CS_AARCH64(_INS_LDSMINAB), CS_AARCH64(_INS_LDSMINALB), CS_AARCH64(_INS_LDSMINLB), + * CS_AARCH64(_INS_LDSMINH), CS_AARCH64(_INS_LDSMINAH), CS_AARCH64(_INS_LDSMINALH), CS_AARCH64(_INS_LDSMINLH) + * CS_AARCH64(_INS_LDSMIN), CS_AARCH64(_INS_LDSMINA), CS_AARCH64(_INS_LDSMINAL), CS_AARCH64(_INS_LDSMINL), + * CS_AARCH64(_INS_STSMINB), CS_AARCH64(_INS_STSMINLB), CS_AARCH64(_INS_STSMINH), CS_AARCH64(_INS_STSMINLH), CS_AARCH64(_INS_STSMIN), CS_AARCH64(_INS_STSMINL), + * CS_AARCH64(_INS_LDUMAXB), CS_AARCH64(_INS_LDUMAXAB), CS_AARCH64(_INS_LDUMAXALB), CS_AARCH64(_INS_LDUMAXLB), + * CS_AARCH64(_INS_LDUMAXH), CS_AARCH64(_INS_LDUMAXAH), CS_AARCH64(_INS_LDUMAXALH), CS_AARCH64(_INS_LDUMAXLH) + * CS_AARCH64(_INS_LDUMAX), CS_AARCH64(_INS_LDUMAXA), CS_AARCH64(_INS_LDUMAXAL), CS_AARCH64(_INS_LDUMAXL), + * CS_AARCH64(_INS_STUMAXB), CS_AARCH64(_INS_STUMAXLB), 
CS_AARCH64(_INS_STUMAXH), CS_AARCH64(_INS_STUMAXLH), CS_AARCH64(_INS_STUMAX), CS_AARCH64(_INS_STUMAXL), + * CS_AARCH64(_INS_LDUMINB), CS_AARCH64(_INS_LDUMINAB), CS_AARCH64(_INS_LDUMINALB), CS_AARCH64(_INS_LDUMINLB), + * CS_AARCH64(_INS_LDUMINH), CS_AARCH64(_INS_LDUMINAH), CS_AARCH64(_INS_LDUMINALH), CS_AARCH64(_INS_LDUMINLH) + * CS_AARCH64(_INS_LDUMIN), CS_AARCH64(_INS_LDUMINA), CS_AARCH64(_INS_LDUMINAL), CS_AARCH64(_INS_LDUMINL), + * CS_AARCH64(_INS_STUMINB), CS_AARCH64(_INS_STUMINLB), CS_AARCH64(_INS_STUMINH), CS_AARCH64(_INS_STUMINLH), CS_AARCH64(_INS_STUMIN), CS_AARCH64(_INS_STUMINL) * ARM: ldadd, ldadda, ldaddal, ldaddl, ldaddb, ldaddab, ldaddalb, ldaddlb, ldaddh, ldaddah, ldaddalh, ldaddlh, * stadd, staddl, staddb, staddlb, stadd, * ldclr, ldclra, ldclral, ldclrl, ldclrb, ldclrab, ldclralb, ldclrlb, ldclrh, ldclrah, ldclralh, ldclrlh, @@ -1301,7 +1411,7 @@ static RzILOpEffect *ldadd(cs_insn *insn) { if (!ISMEM(addr_op)) { return NULL; } - arm64_reg addend_reg = REGID(0); + CS_aarch64_reg() addend_reg = REGID(0); ut64 loadsz; enum { OP_ADD, @@ -1314,208 +1424,254 @@ static RzILOpEffect *ldadd(cs_insn *insn) { OP_UMIN } op = OP_ADD; switch (insn->id) { - case ARM64_INS_LDCLRB: - case ARM64_INS_LDCLRAB: - case ARM64_INS_LDCLRALB: - case ARM64_INS_LDCLRLB: - case ARM64_INS_STCLRB: - case ARM64_INS_STCLRLB: + case CS_AARCH64(_INS_LDCLRB): + case CS_AARCH64(_INS_LDCLRAB): + case CS_AARCH64(_INS_LDCLRALB): + case CS_AARCH64(_INS_LDCLRLB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STCLRB): + case CS_AARCH64(_INS_STCLRLB): +#endif op = OP_CLR; loadsz = 8; break; - case ARM64_INS_LDEORB: - case ARM64_INS_LDEORAB: - case ARM64_INS_LDEORALB: - case ARM64_INS_LDEORLB: - case ARM64_INS_STEORB: - case ARM64_INS_STEORLB: + case CS_AARCH64(_INS_LDEORB): + case CS_AARCH64(_INS_LDEORAB): + case CS_AARCH64(_INS_LDEORALB): + case CS_AARCH64(_INS_LDEORLB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STEORB): + case CS_AARCH64(_INS_STEORLB): +#endif op = OP_EOR; loadsz = 8; break; - case ARM64_INS_LDSETB: - case ARM64_INS_LDSETAB: - case ARM64_INS_LDSETALB: - case ARM64_INS_LDSETLB: - case ARM64_INS_STSETB: - case ARM64_INS_STSETLB: + case CS_AARCH64(_INS_LDSETB): + case CS_AARCH64(_INS_LDSETAB): + case CS_AARCH64(_INS_LDSETALB): + case CS_AARCH64(_INS_LDSETLB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSETB): + case CS_AARCH64(_INS_STSETLB): +#endif op = OP_SET; loadsz = 8; break; - case ARM64_INS_LDSMAXB: - case ARM64_INS_LDSMAXAB: - case ARM64_INS_LDSMAXALB: - case ARM64_INS_LDSMAXLB: - case ARM64_INS_STSMAXB: - case ARM64_INS_STSMAXLB: + case CS_AARCH64(_INS_LDSMAXB): + case CS_AARCH64(_INS_LDSMAXAB): + case CS_AARCH64(_INS_LDSMAXALB): + case CS_AARCH64(_INS_LDSMAXLB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSMAXB): + case CS_AARCH64(_INS_STSMAXLB): +#endif op = OP_SMAX; loadsz = 8; break; - case ARM64_INS_LDSMINB: - case ARM64_INS_LDSMINAB: - case ARM64_INS_LDSMINALB: - case ARM64_INS_LDSMINLB: - case ARM64_INS_STSMINB: - case ARM64_INS_STSMINLB: + case CS_AARCH64(_INS_LDSMINB): + case CS_AARCH64(_INS_LDSMINAB): + case CS_AARCH64(_INS_LDSMINALB): + case CS_AARCH64(_INS_LDSMINLB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSMINB): + case CS_AARCH64(_INS_STSMINLB): +#endif op = OP_SMIN; loadsz = 8; break; - case ARM64_INS_LDUMAXB: - case ARM64_INS_LDUMAXAB: - case ARM64_INS_LDUMAXALB: - case ARM64_INS_LDUMAXLB: - case ARM64_INS_STUMAXB: - case ARM64_INS_STUMAXLB: + case CS_AARCH64(_INS_LDUMAXB): + case CS_AARCH64(_INS_LDUMAXAB): + case CS_AARCH64(_INS_LDUMAXALB): + 
case CS_AARCH64(_INS_LDUMAXLB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STUMAXB): + case CS_AARCH64(_INS_STUMAXLB): +#endif op = OP_UMAX; loadsz = 8; break; - case ARM64_INS_LDUMINB: - case ARM64_INS_LDUMINAB: - case ARM64_INS_LDUMINALB: - case ARM64_INS_LDUMINLB: - case ARM64_INS_STUMINB: - case ARM64_INS_STUMINLB: + case CS_AARCH64(_INS_LDUMINB): + case CS_AARCH64(_INS_LDUMINAB): + case CS_AARCH64(_INS_LDUMINALB): + case CS_AARCH64(_INS_LDUMINLB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STUMINB): + case CS_AARCH64(_INS_STUMINLB): +#endif op = OP_UMIN; loadsz = 8; break; - case ARM64_INS_LDADDB: - case ARM64_INS_LDADDAB: - case ARM64_INS_LDADDALB: - case ARM64_INS_LDADDLB: - case ARM64_INS_STADDB: - case ARM64_INS_STADDLB: + case CS_AARCH64(_INS_LDADDB): + case CS_AARCH64(_INS_LDADDAB): + case CS_AARCH64(_INS_LDADDALB): + case CS_AARCH64(_INS_LDADDLB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STADDB): + case CS_AARCH64(_INS_STADDLB): +#endif loadsz = 8; break; - case ARM64_INS_LDCLRH: - case ARM64_INS_LDCLRAH: - case ARM64_INS_LDCLRALH: - case ARM64_INS_LDCLRLH: - case ARM64_INS_STCLRH: - case ARM64_INS_STCLRLH: + case CS_AARCH64(_INS_LDCLRH): + case CS_AARCH64(_INS_LDCLRAH): + case CS_AARCH64(_INS_LDCLRALH): + case CS_AARCH64(_INS_LDCLRLH): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STCLRH): + case CS_AARCH64(_INS_STCLRLH): +#endif op = OP_CLR; loadsz = 16; break; - case ARM64_INS_LDEORH: - case ARM64_INS_LDEORAH: - case ARM64_INS_LDEORALH: - case ARM64_INS_LDEORLH: - case ARM64_INS_STEORH: - case ARM64_INS_STEORLH: + case CS_AARCH64(_INS_LDEORH): + case CS_AARCH64(_INS_LDEORAH): + case CS_AARCH64(_INS_LDEORALH): + case CS_AARCH64(_INS_LDEORLH): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STEORH): + case CS_AARCH64(_INS_STEORLH): +#endif op = OP_EOR; loadsz = 16; break; - case ARM64_INS_LDSETH: - case ARM64_INS_LDSETAH: - case ARM64_INS_LDSETALH: - case ARM64_INS_LDSETLH: - case ARM64_INS_STSETH: - case ARM64_INS_STSETLH: + case CS_AARCH64(_INS_LDSETH): + case CS_AARCH64(_INS_LDSETAH): + case CS_AARCH64(_INS_LDSETALH): + case CS_AARCH64(_INS_LDSETLH): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSETH): + case CS_AARCH64(_INS_STSETLH): +#endif op = OP_SET; loadsz = 16; break; - case ARM64_INS_LDSMAXH: - case ARM64_INS_LDSMAXAH: - case ARM64_INS_LDSMAXALH: - case ARM64_INS_LDSMAXLH: - case ARM64_INS_STSMAXH: - case ARM64_INS_STSMAXLH: + case CS_AARCH64(_INS_LDSMAXH): + case CS_AARCH64(_INS_LDSMAXAH): + case CS_AARCH64(_INS_LDSMAXALH): + case CS_AARCH64(_INS_LDSMAXLH): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSMAXH): + case CS_AARCH64(_INS_STSMAXLH): +#endif op = OP_SMAX; loadsz = 16; break; - case ARM64_INS_LDSMINH: - case ARM64_INS_LDSMINAH: - case ARM64_INS_LDSMINALH: - case ARM64_INS_LDSMINLH: - case ARM64_INS_STSMINH: - case ARM64_INS_STSMINLH: + case CS_AARCH64(_INS_LDSMINH): + case CS_AARCH64(_INS_LDSMINAH): + case CS_AARCH64(_INS_LDSMINALH): + case CS_AARCH64(_INS_LDSMINLH): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSMINH): + case CS_AARCH64(_INS_STSMINLH): +#endif op = OP_SMIN; loadsz = 16; break; - case ARM64_INS_LDUMAXH: - case ARM64_INS_LDUMAXAH: - case ARM64_INS_LDUMAXALH: - case ARM64_INS_LDUMAXLH: - case ARM64_INS_STUMAXH: - case ARM64_INS_STUMAXLH: + case CS_AARCH64(_INS_LDUMAXH): + case CS_AARCH64(_INS_LDUMAXAH): + case CS_AARCH64(_INS_LDUMAXALH): + case CS_AARCH64(_INS_LDUMAXLH): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STUMAXH): + case CS_AARCH64(_INS_STUMAXLH): +#endif op = OP_UMAX; loadsz = 16; break; - case 
ARM64_INS_LDUMINH: - case ARM64_INS_LDUMINAH: - case ARM64_INS_LDUMINALH: - case ARM64_INS_LDUMINLH: - case ARM64_INS_STUMINH: - case ARM64_INS_STUMINLH: + case CS_AARCH64(_INS_LDUMINH): + case CS_AARCH64(_INS_LDUMINAH): + case CS_AARCH64(_INS_LDUMINALH): + case CS_AARCH64(_INS_LDUMINLH): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STUMINH): + case CS_AARCH64(_INS_STUMINLH): +#endif op = OP_UMIN; loadsz = 16; break; - case ARM64_INS_LDADDH: - case ARM64_INS_LDADDAH: - case ARM64_INS_LDADDALH: - case ARM64_INS_LDADDLH: - case ARM64_INS_STADDH: - case ARM64_INS_STADDLH: + case CS_AARCH64(_INS_LDADDH): + case CS_AARCH64(_INS_LDADDAH): + case CS_AARCH64(_INS_LDADDALH): + case CS_AARCH64(_INS_LDADDLH): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STADDH): + case CS_AARCH64(_INS_STADDLH): +#endif loadsz = 16; break; - case ARM64_INS_LDCLR: - case ARM64_INS_LDCLRA: - case ARM64_INS_LDCLRAL: - case ARM64_INS_LDCLRL: - case ARM64_INS_STCLR: - case ARM64_INS_STCLRL: + case CS_AARCH64(_INS_LDCLR): + case CS_AARCH64(_INS_LDCLRA): + case CS_AARCH64(_INS_LDCLRAL): + case CS_AARCH64(_INS_LDCLRL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STCLR): + case CS_AARCH64(_INS_STCLRL): +#endif op = OP_CLR; goto size_from_reg; - case ARM64_INS_LDEOR: - case ARM64_INS_LDEORA: - case ARM64_INS_LDEORAL: - case ARM64_INS_LDEORL: - case ARM64_INS_STEOR: - case ARM64_INS_STEORL: + case CS_AARCH64(_INS_LDEOR): + case CS_AARCH64(_INS_LDEORA): + case CS_AARCH64(_INS_LDEORAL): + case CS_AARCH64(_INS_LDEORL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STEOR): + case CS_AARCH64(_INS_STEORL): +#endif op = OP_EOR; goto size_from_reg; - case ARM64_INS_LDSET: - case ARM64_INS_LDSETA: - case ARM64_INS_LDSETAL: - case ARM64_INS_LDSETL: - case ARM64_INS_STSET: - case ARM64_INS_STSETL: + case CS_AARCH64(_INS_LDSET): + case CS_AARCH64(_INS_LDSETA): + case CS_AARCH64(_INS_LDSETAL): + case CS_AARCH64(_INS_LDSETL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSET): + case CS_AARCH64(_INS_STSETL): +#endif op = OP_SET; goto size_from_reg; - case ARM64_INS_LDSMAX: - case ARM64_INS_LDSMAXA: - case ARM64_INS_LDSMAXAL: - case ARM64_INS_LDSMAXL: - case ARM64_INS_STSMAX: - case ARM64_INS_STSMAXL: + case CS_AARCH64(_INS_LDSMAX): + case CS_AARCH64(_INS_LDSMAXA): + case CS_AARCH64(_INS_LDSMAXAL): + case CS_AARCH64(_INS_LDSMAXL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSMAX): + case CS_AARCH64(_INS_STSMAXL): +#endif op = OP_SMAX; goto size_from_reg; - case ARM64_INS_LDSMIN: - case ARM64_INS_LDSMINA: - case ARM64_INS_LDSMINAL: - case ARM64_INS_LDSMINL: - case ARM64_INS_STSMIN: - case ARM64_INS_STSMINL: + case CS_AARCH64(_INS_LDSMIN): + case CS_AARCH64(_INS_LDSMINA): + case CS_AARCH64(_INS_LDSMINAL): + case CS_AARCH64(_INS_LDSMINL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSMIN): + case CS_AARCH64(_INS_STSMINL): +#endif op = OP_SMIN; goto size_from_reg; - case ARM64_INS_LDUMAX: - case ARM64_INS_LDUMAXA: - case ARM64_INS_LDUMAXAL: - case ARM64_INS_LDUMAXL: - case ARM64_INS_STUMAX: - case ARM64_INS_STUMAXL: + case CS_AARCH64(_INS_LDUMAX): + case CS_AARCH64(_INS_LDUMAXA): + case CS_AARCH64(_INS_LDUMAXAL): + case CS_AARCH64(_INS_LDUMAXL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STUMAX): + case CS_AARCH64(_INS_STUMAXL): +#endif op = OP_UMAX; goto size_from_reg; - case ARM64_INS_LDUMIN: - case ARM64_INS_LDUMINA: - case ARM64_INS_LDUMINAL: - case ARM64_INS_LDUMINL: - case ARM64_INS_STUMIN: - case ARM64_INS_STUMINL: + case CS_AARCH64(_INS_LDUMIN): + case CS_AARCH64(_INS_LDUMINA): + case 
CS_AARCH64(_INS_LDUMINAL): + case CS_AARCH64(_INS_LDUMINL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STUMIN): + case CS_AARCH64(_INS_STUMINL): +#endif op = OP_UMIN; // fallthrough size_from_reg: - default: // ARM64_INS_LDADD, ARM64_INS_LDADDA, ARM64_INS_LDADDAL, ARM64_INS_LDADDL, ARM64_INS_STADD, ARM64_INS_STADDL + default: // CS_AARCH64(_INS_LDADD), CS_AARCH64(_INS_LDADDA), CS_AARCH64(_INS_LDADDAL), CS_AARCH64(_INS_LDADDL), CS_AARCH64(_INS_STADD), CS_AARCH64(_INS_STADDL) loadsz = is_wreg(addend_reg) ? 32 : 64; break; } @@ -1532,7 +1688,7 @@ static RzILOpEffect *ldadd(cs_insn *insn) { rz_il_op_pure_free(addr); return NULL; } - arm64_reg dst_reg = REGID(1); + CS_aarch64_reg() dst_reg = REGID(1); dst_reg = xreg_of_reg(dst_reg); ld_eff = write_reg(dst_reg, loadsz != 64 ? UNSIGNED(64, VARL("old")) : VARL("old")); if (!ld_eff) { @@ -1585,7 +1741,7 @@ static RzILOpEffect *ldadd(cs_insn *insn) { #endif /** - * Capstone: ARM64_INS_MADD, ARM64_INS_MSUB + * Capstone: CS_AARCH64(_INS_MADD), CS_AARCH64(_INS_MSUB) * ARM: madd, msub */ static RzILOpEffect *madd(cs_insn *insn) { @@ -1603,7 +1759,7 @@ static RzILOpEffect *madd(cs_insn *insn) { return NULL; } RzILOpBitVector *res; - if (insn->id == ARM64_INS_MSUB) { + if (insn->id == CS_AARCH64(_INS_MSUB)) { res = SUB(addend, MUL(ma, mb)); } else { res = ADD(MUL(ma, mb), addend); @@ -1612,7 +1768,7 @@ static RzILOpEffect *madd(cs_insn *insn) { } /** - * Capstone: ARM64_INS_MUL, ARM64_INS_MNEG + * Capstone: CS_AARCH64(_INS_MUL), CS_AARCH64(_INS_MNEG) * ARM: mul, mneg */ static RzILOpEffect *mul(cs_insn *insn) { @@ -1631,32 +1787,52 @@ static RzILOpEffect *mul(cs_insn *insn) { return NULL; } RzILOpBitVector *res = MUL(ma, mb); - if (insn->id == ARM64_INS_MNEG) { +#if CS_NEXT_VERSION < 6 + if (insn->id == CS_AARCH64(_INS_MNEG)) { + res = NEG(res); + } +#else + if (insn->alias_id == AArch64_INS_ALIAS_MNEG) { res = NEG(res); } +#endif return write_reg(REGID(0), res); } static RzILOpEffect *movn(cs_insn *insn); /** - * Capstone: ARM64_INS_MOV, ARM64_INS_MOVZ + * Capstone: CS_AARCH64(_INS_MOV), CS_AARCH64(_INS_MOVZ) * ARM: mov, movz */ static RzILOpEffect *mov(cs_insn *insn) { if (!ISREG(0)) { return NULL; } +#if CS_NEXT_VERSION < 6 if (ISIMM(1) && IMM(1) == 0 && !strcmp(insn->mnemonic, "movn")) { // Capstone bug making 0000a012 indistinguishable from 0000a052 // https://github.com/capstone-engine/capstone/issues/1857 return movn(insn); } +#endif ut32 bits = REGBITS(0); if (!bits) { return NULL; } +#if CS_NEXT_VERSION < 6 RzILOpBitVector *src = ARG(1, &bits); +#else + RzILOpBitVector *src = NULL; + if ((insn->alias_id == AArch64_INS_ALIAS_MOV || insn->alias_id == AArch64_INS_ALIAS_MOVZ) && + (REGID(1) == AArch64_REG_XZR || REGID(1) == AArch64_REG_WZR)) { + // Sometimes regs are ORed with the zero register for the MOV alias. + // Sometimes not. + src = ARG(2, &bits); + } else { + src = ARG(1, &bits); + } +#endif if (!src) { return NULL; } @@ -1664,7 +1840,7 @@ static RzILOpEffect *mov(cs_insn *insn) { } /** - * Capstone: ARM64_INS_MOVK + * Capstone: CS_AARCH64(_INS_MOVK) * ARM: movk */ static RzILOpEffect *movk(cs_insn *insn) { @@ -1676,13 +1852,13 @@ static RzILOpEffect *movk(cs_insn *insn) { if (!src) { return NULL; } - cs_arm64_op *op = &insn->detail->arm64.operands[1]; - ut32 shift = op->shift.type == ARM64_SFT_LSL ? op->shift.value : 0; + CS_aarch64_op() *op = &insn->detail->CS_aarch64_.operands[1]; + ut32 shift = op->shift.type == CS_AARCH64(_SFT_LSL) ? 
op->shift.value : 0; return write_reg(REGID(0), LOGOR(LOGAND(src, UN(bits, ~(0xffffull << shift))), UN(bits, ((ut64)op->imm) << shift))); } /** - * Capstone: ARM64_INS_MOVN + * Capstone: CS_AARCH64(_INS_MOVN) * ARM: movn */ static RzILOpEffect *movn(cs_insn *insn) { @@ -1692,8 +1868,8 @@ static RzILOpEffect *movn(cs_insn *insn) { // The only case where the movn encoding should be disassembled as "movn" is // when (IsZero(imm16) && hw != '00'), according to the "alias conditions" in the reference manual. // Unfortunately, capstone v4 seems to always disassemble as movn, so we still have to implement this. - cs_arm64_op *op = &insn->detail->arm64.operands[1]; - ut32 shift = op->shift.type == ARM64_SFT_LSL ? op->shift.value : 0; + CS_aarch64_op() *op = &insn->detail->CS_aarch64_.operands[1]; + ut32 shift = op->shift.type == CS_AARCH64(_SFT_LSL) ? op->shift.value : 0; ut32 bits = REGBITS(0); if (!bits) { return NULL; @@ -1702,17 +1878,21 @@ static RzILOpEffect *movn(cs_insn *insn) { } /** - * Capstone: ARM64_INS_MSR + * Capstone: CS_AARCH64(_INS_MSR) * ARM: msr */ static RzILOpEffect *msr(cs_insn *insn) { - cs_arm64_op *op = &insn->detail->arm64.operands[0]; -#if CS_API_MAJOR > 4 - if (op->type != ARM64_OP_SYS || (ut64)op->sys != (ut64)ARM64_SYSREG_NZCV) { + CS_aarch64_op() *op = &insn->detail->CS_aarch64_.operands[0]; +#if CS_NEXT_VERSION >= 6 + if (op->type != CS_AARCH64(_OP_SYSREG) || (ut64)op->sysop.reg.sysreg != (ut64)CS_AARCH64(_SYSREG_NZCV)) { + return NULL; + } +#elif CS_API_MAJOR > 4 && CS_NEXT_VERSION < 6 + if (op->type != CS_AARCH64(_OP_SYS) || (ut64)op->sys != (ut64)ARM64_SYSREG_NZCV) { return NULL; } #else - if (op->type != ARM64_OP_REG_MSR || op->reg != 0xda10) { + if (op->type != CS_AARCH64(_OP_REG_MSR) || op->reg != 0xda10) { return NULL; } #endif @@ -1730,7 +1910,7 @@ static RzILOpEffect *msr(cs_insn *insn) { #if CS_API_MAJOR > 4 /** - * Capstone: ARM64_INS_RMIF + * Capstone: CS_AARCH64(_INS_RMIF) * ARM: rmif */ static RzILOpEffect *rmif(cs_insn *insn) { @@ -1764,10 +1944,10 @@ static RzILOpEffect *rmif(cs_insn *insn) { #endif /** - * Capstone: ARM64_INS_SBFX, ARM64_INS_SBFIZ, ARM64_INS_UBFX, ARM64_INS_UBFIZ + * Capstone: CS_AARCH64(_INS_SBFX), CS_AARCH64(_INS_SBFIZ), CS_AARCH64(_INS_UBFX), CS_AARCH64(_INS_UBFIZ) * ARM: sbfx, sbfiz, ubfx, ubfiz */ -static RzILOpEffect *sbfx(cs_insn *insn) { +static RzILOpEffect *usbfm(cs_insn *insn) { if (!ISREG(0) || !ISIMM(2) || !ISIMM(3)) { return NULL; } @@ -1782,32 +1962,62 @@ static RzILOpEffect *sbfx(cs_insn *insn) { ut64 lsb = IMM(2); ut64 width = IMM(3); RzILOpBitVector *res; - if (insn->id == ARM64_INS_SBFIZ || insn->id == ARM64_INS_UBFIZ) { +#if CS_NEXT_VERSION < 6 + if (insn->id == CS_AARCH64(_INS_SBFIZ) || insn->id == CS_AARCH64(_INS_UBFIZ)) { res = SHIFTL0(UNSIGNED(width + lsb, src), UN(6, lsb)); } else { - // ARM64_INS_SBFX, ARM64_INS_UBFX + // CS_AARCH64(_INS_SBFX), CS_AARCH64(_INS_UBFX) res = UNSIGNED(width, SHIFTR0(src, UN(6, lsb))); } - bool is_signed = insn->id == ARM64_INS_SBFX || insn->id == ARM64_INS_SBFIZ; + bool is_signed = insn->id == CS_AARCH64(_INS_SBFX) || insn->id == CS_AARCH64(_INS_SBFIZ); +#else + if (insn->alias_id == AArch64_INS_ALIAS_SBFIZ || insn->alias_id == AArch64_INS_ALIAS_UBFIZ) { + // TODO: modulo usage depends on N and SF bit. + // sf == 0 && N == 0 => mod 32. + // sf == 1 && N == 1 => mod 64. 
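+ // CS v6 reports the raw U/SBFM operands here (IMM(2) = immr, IMM(3) = imms), so they are + // converted back to the alias form per the reference manual's alias conditions: + // width = imms + 1 and lsb = -immr MOD datasize (datasize assumed 64 until the TODO above is resolved).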
+ width += 1; + lsb = -lsb % 64; + res = SHIFTL0(UNSIGNED(width + lsb, src), UN(6, lsb)); + } else if (insn->alias_id == AArch64_INS_ALIAS_SBFX || insn->alias_id == AArch64_INS_ALIAS_UBFX) { + width = width - lsb + 1; + res = UNSIGNED(width, SHIFTR0(src, UN(6, lsb))); + } else if (insn->alias_id == AArch64_INS_ALIAS_LSL) { + // imms != 0x1f => mod 32 + // imms != 0x3f => mod 64 + ut32 m = IMM(3) != 0x1f ? 32 : 64; + return write_reg(REGID(0), SHIFTL0(src, UN(6, -IMM(2) % m))); + } else if (insn->alias_id == AArch64_INS_ALIAS_LSR) { + return write_reg(REGID(0), SHIFTR0(src, UN(6, IMM(2)))); + } else if (insn->alias_id == AArch64_INS_ALIAS_ASR) { + return write_reg(REGID(0), SHIFTR(MSB(src), DUP(src), UN(6, IMM(2)))); + } else { + return NULL; + } + bool is_signed = insn->alias_id == AArch64_INS_ALIAS_SBFX || insn->alias_id == AArch64_INS_ALIAS_SBFIZ; +#endif res = LET("res", res, is_signed ? SIGNED(bits, VARLP("res")) : UNSIGNED(bits, VARLP("res"))); return write_reg(REGID(0), res); } /** - * Capstone: ARM64_INS_MRS + * Capstone: CS_AARCH64(_INS_MRS) * ARM: mrs */ static RzILOpEffect *mrs(cs_insn *insn) { if (!ISREG(0)) { return NULL; } - cs_arm64_op *op = &insn->detail->arm64.operands[1]; -#if CS_API_MAJOR > 4 - if (op->type != ARM64_OP_SYS || (ut64)op->sys != (ut64)ARM64_SYSREG_NZCV) { + CS_aarch64_op() *op = &insn->detail->CS_aarch64_.operands[1]; +#if CS_NEXT_VERSION >= 6 + if (op->type != CS_AARCH64(_OP_SYSREG) || (ut64)op->sysop.reg.sysreg != (ut64)CS_AARCH64(_SYSREG_NZCV)) { + return NULL; + } +#elif CS_API_MAJOR > 4 && CS_NEXT_VERSION < 6 + if (op->type != CS_AARCH64(_OP_SYS) || (ut64)op->sys != (ut64)ARM64_SYSREG_NZCV) { return NULL; } #else - if (op->type != ARM64_OP_REG_MRS || op->reg != 0xda10) { + if (op->type != CS_AARCH64(_OP_REG_MRS) || op->reg != 0xda10) { return NULL; } #endif @@ -1823,7 +2033,7 @@ static RzILOpEffect *mrs(cs_insn *insn) { } /** - * Capstone: ARM64_INS_MVN, ARM64_INS_NEG, ARM64_INS_NEGS, ARM64_INS_NGC, ARM64_INS_NGCS + * Capstone: CS_AARCH64(_INS_MVN), CS_AARCH64(_INS_NEG), CS_AARCH64(_INS_NEGS), CS_AARCH64(_INS_NGC), CS_AARCH64(_INS_NGCS) * ARM: mvn, neg, negs, ngc, ngcs */ static RzILOpEffect *mvn(cs_insn *insn) { @@ -1831,33 +2041,63 @@ static RzILOpEffect *mvn(cs_insn *insn) { return NULL; } ut32 bits = 0; +#if CS_NEXT_VERSION < 6 RzILOpBitVector *val = ARG(1, &bits); +#else + // Reg at 1 is zero register + RzILOpBitVector *val = ARG(2, &bits); +#endif if (!val) { return NULL; } RzILOpBitVector *res; +#if CS_NEXT_VERSION < 6 switch (insn->id) { - case ARM64_INS_NEG: - case ARM64_INS_NEGS: + case CS_AARCH64(_INS_NEG): + case CS_AARCH64(_INS_NEGS): + res = NEG(val); + break; + case CS_AARCH64(_INS_NGC): + case CS_AARCH64(_INS_NGCS): + res = NEG(ADD(val, ITE(VARG("cf"), UN(bits, 0), UN(bits, 1)))); + break; + default: // CS_AARCH64(_INS_MVN) + res = LOGNOT(val); + break; + } +#else + switch (insn->alias_id) { + case AArch64_INS_ALIAS_NEG: + case AArch64_INS_ALIAS_NEGS: res = NEG(val); break; - case ARM64_INS_NGC: - case ARM64_INS_NGCS: + case AArch64_INS_ALIAS_NGC: + case AArch64_INS_ALIAS_NGCS: res = NEG(ADD(val, ITE(VARG("cf"), UN(bits, 0), UN(bits, 1)))); break; - default: // ARM64_INS_MVN + case AArch64_INS_ALIAS_MVN: res = LOGNOT(val); break; + default: + return NULL; } +#endif RzILOpEffect *set = write_reg(REGID(0), res); if (!set) { return NULL; } - if (insn->detail->arm64.update_flags) { + if (insn->detail->CS_aarch64_.update_flags) { + // MSVC pre-processor can't parse "#if CS_NEXT... SETG(...) ..." if it is inlined. 
+ // So we define a variable here. Otherwise we get "error C2121". +#if CS_NEXT_VERSION < 6 + RzILOpEffect *set_cf = SETG("cf", sub_carry(UN(bits, 0), VARL("b"), insn->id == CS_AARCH64(_INS_NGC), bits)); +#else + RzILOpEffect *set_cf = SETG("cf", sub_carry(UN(bits, 0), VARL("b"), insn->alias_id == AArch64_INS_ALIAS_NGC, bits)); +#endif return SEQ5( SETL("b", DUP(val)), set, - SETG("cf", sub_carry(UN(bits, 0), VARL("b"), insn->id == ARM64_INS_NGC, bits)), + set_cf, SETG("vf", sub_overflow(UN(bits, 0), VARL("b"), REG(0))), update_flags_zn(REG(0))); } @@ -1865,7 +2105,7 @@ static RzILOpEffect *mvn(cs_insn *insn) { } /** - * Capstone: ARM64_INS_RBIT + * Capstone: CS_AARCH64(_INS_RBIT) * ARM: rbit */ static RzILOpEffect *rbit(cs_insn *insn) { @@ -1894,7 +2134,7 @@ static RzILOpEffect *rbit(cs_insn *insn) { } /** - * Capstone: ARM64_INS_REV, ARM64_INS_REV32, ARM64_INS_REV16 + * Capstone: CS_AARCH64(_INS_REV), CS_AARCH64(_INS_REV32), CS_AARCH64(_INS_REV16) * ARM: rev, rev32, rev16 */ static RzILOpEffect *rev(cs_insn *insn) { @@ -1905,11 +2145,11 @@ static RzILOpEffect *rev(cs_insn *insn) { if (!dst_bits) { return NULL; } - arm64_reg src_reg = xreg_of_reg(REGID(1)); + CS_aarch64_reg() src_reg = xreg_of_reg(REGID(1)); ut32 container_bits = dst_bits; - if (insn->id == ARM64_INS_REV32) { + if (insn->id == CS_AARCH64(_INS_REV32)) { container_bits = 32; - } else if (insn->id == ARM64_INS_REV16) { + } else if (insn->id == CS_AARCH64(_INS_REV16)) { container_bits = 16; } RzILOpBitVector *src = read_reg(src_reg); @@ -1960,7 +2200,7 @@ static RzILOpEffect *rev(cs_insn *insn) { } /** - * Capstone: ARM64_INS_SDIV + * Capstone: CS_AARCH64(_INS_SDIV) * ARM: sdiv */ static RzILOpEffect *sdiv(cs_insn *insn) { @@ -1986,7 +2226,7 @@ static RzILOpEffect *sdiv(cs_insn *insn) { } /** - * Capstone: ARM64_INS_UDIV + * Capstone: CS_AARCH64(_INS_UDIV) * ARM: udiv */ static RzILOpEffect *udiv(cs_insn *insn) { @@ -2010,7 +2250,7 @@ static RzILOpEffect *udiv(cs_insn *insn) { #if CS_API_MAJOR > 4 /** - * Capstone: ARM64_INS_SETF8, ARM64_INS_SETF16 + * Capstone: CS_AARCH64(_INS_SETF8), CS_AARCH64(_INS_SETF16) * ARM: setf8, setf16 */ static RzILOpEffect *setf(cs_insn *insn) { @@ -2021,7 +2261,7 @@ static RzILOpEffect *setf(cs_insn *insn) { if (!val) { return NULL; } - ut32 bits = insn->id == ARM64_INS_SETF16 ? 16 : 8; + ut32 bits = insn->id == CS_AARCH64(_INS_SETF16) ? 16 : 8; return SEQ2( SETG("vf", XOR(MSB(UNSIGNED(bits + 1, val)), MSB(UNSIGNED(bits, DUP(val))))), update_flags_zn(UNSIGNED(bits, DUP(val)))); @@ -2029,7 +2269,7 @@ static RzILOpEffect *setf(cs_insn *insn) { #endif /** - * Capstone: ARM64_INS_SMADDL, ARM64_INS_SMSUBL, ARM64_INS_UMADDL, ARM64_INS_UMSUBL + * Capstone: CS_AARCH64(_INS_SMADDL), CS_AARCH64(_INS_SMSUBL), CS_AARCH64(_INS_UMADDL), CS_AARCH64(_INS_UMSUBL) * ARM: smaddl, smsubl, umaddl, umsubl */ static RzILOpEffect *smaddl(cs_insn *insn) { @@ -2047,9 +2287,9 @@ static RzILOpEffect *smaddl(cs_insn *insn) { rz_il_op_pure_free(addend); return NULL; } - bool is_signed = insn->id == ARM64_INS_SMADDL || insn->id == ARM64_INS_SMSUBL; + bool is_signed = insn->id == CS_AARCH64(_INS_SMADDL) || insn->id == CS_AARCH64(_INS_SMSUBL); RzILOpBitVector *res = MUL(is_signed ? SIGNED(64, x) : UNSIGNED(64, x), is_signed ? 
SIGNED(64, y) : UNSIGNED(64, y)); - if (insn->id == ARM64_INS_SMSUBL || insn->id == ARM64_INS_UMSUBL) { + if (insn->id == CS_AARCH64(_INS_SMSUBL) || insn->id == CS_AARCH64(_INS_UMSUBL)) { res = SUB(addend, res); } else { res = ADD(addend, res); @@ -2058,7 +2298,7 @@ static RzILOpEffect *smaddl(cs_insn *insn) { } /** - * Capstone: ARM64_INS_SMULL, ARM64_INS_SMNEGL, ARM64_INS_UMULL, ARM64_INS_UMNEGL + * Capstone: CS_AARCH64(_INS_SMULL), CS_AARCH64(_INS_SMNEGL), CS_AARCH64(_INS_UMULL), CS_AARCH64(_INS_UMNEGL) * ARM: smull, smnegl, umull, umnegl */ static RzILOpEffect *smull(cs_insn *insn) { @@ -2073,16 +2313,26 @@ static RzILOpEffect *smull(cs_insn *insn) { rz_il_op_pure_free(y); return NULL; } - bool is_signed = insn->id == ARM64_INS_SMULL || insn->id == ARM64_INS_SMNEGL; +#if CS_NEXT_VERSION < 6 + bool is_signed = insn->id == CS_AARCH64(_INS_SMULL) || insn->id == CS_AARCH64(_INS_SMNEGL); +#else + bool is_signed = insn->alias_id == AArch64_INS_ALIAS_SMULL || insn->alias_id == AArch64_INS_ALIAS_SMNEGL; +#endif RzILOpBitVector *res = MUL(is_signed ? SIGNED(64, x) : UNSIGNED(64, x), is_signed ? SIGNED(64, y) : UNSIGNED(64, y)); - if (insn->id == ARM64_INS_SMNEGL || insn->id == ARM64_INS_UMNEGL) { +#if CS_NEXT_VERSION < 6 + if (insn->id == CS_AARCH64(_INS_SMNEGL) || insn->id == CS_AARCH64(_INS_UMNEGL)) { res = NEG(res); } +#else + if (insn->alias_id == AArch64_INS_ALIAS_SMNEGL || insn->alias_id == AArch64_INS_ALIAS_UMNEGL) { + res = NEG(res); + } +#endif return write_reg(REGID(0), res); } /** - * Capstone: ARM64_INS_SMULH, ARM64_INS_UMULH + * Capstone: CS_AARCH64(_INS_SMULH), CS_AARCH64(_INS_UMULH) * ARM: smulh, umulh */ static RzILOpEffect *smulh(cs_insn *insn) { @@ -2097,16 +2347,16 @@ static RzILOpEffect *smulh(cs_insn *insn) { rz_il_op_pure_free(y); return NULL; } - bool is_signed = insn->id == ARM64_INS_SMULH; + bool is_signed = insn->id == CS_AARCH64(_INS_SMULH); RzILOpBitVector *res = MUL(is_signed ? SIGNED(128, x) : UNSIGNED(128, x), is_signed ? 
SIGNED(128, y) : UNSIGNED(128, y)); return write_reg(REGID(0), UNSIGNED(64, SHIFTR0(res, UN(7, 64)))); } #if CS_API_MAJOR > 4 /** - * Capstone: ARM64_INS_SWP, ARM64_INS_SWPA, ARM64_INS_SWPAL, ARM64_INS_SWPL, - * ARM64_INS_SWPB, ARM64_INS_SWPAB, ARM64_INS_SWPALB, ARM64_INS_SWPLB - * ARM64_INS_SWPH, ARM64_INS_SWPAH, ARM64_INS_SWPALH, ARM64_INS_SWPLH + * Capstone: CS_AARCH64(_INS_SWP), CS_AARCH64(_INS_SWPA), CS_AARCH64(_INS_SWPAL), CS_AARCH64(_INS_SWPL), + * CS_AARCH64(_INS_SWPB), CS_AARCH64(_INS_SWPAB), CS_AARCH64(_INS_SWPALB), CS_AARCH64(_INS_SWPLB) + * CS_AARCH64(_INS_SWPH), CS_AARCH64(_INS_SWPAH), CS_AARCH64(_INS_SWPALH), CS_AARCH64(_INS_SWPLH) * ARM: swp, swpa, swpal, swpl, swpb, swpab, swpalb, swplb, swph, swpah, swpalh, swplh */ static RzILOpEffect *swp(cs_insn *insn) { @@ -2115,19 +2365,19 @@ static RzILOpEffect *swp(cs_insn *insn) { } ut32 bits; switch (insn->id) { - case ARM64_INS_SWPB: - case ARM64_INS_SWPAB: - case ARM64_INS_SWPALB: - case ARM64_INS_SWPLB: + case CS_AARCH64(_INS_SWPB): + case CS_AARCH64(_INS_SWPAB): + case CS_AARCH64(_INS_SWPALB): + case CS_AARCH64(_INS_SWPLB): bits = 8; break; - case ARM64_INS_SWPH: - case ARM64_INS_SWPAH: - case ARM64_INS_SWPALH: - case ARM64_INS_SWPLH: + case CS_AARCH64(_INS_SWPH): + case CS_AARCH64(_INS_SWPAH): + case CS_AARCH64(_INS_SWPALH): + case CS_AARCH64(_INS_SWPLH): bits = 16; break; - default: // ARM64_INS_SWP, ARM64_INS_SWPA, ARM64_INS_SWPAL, ARM64_INS_SWPL: + default: // CS_AARCH64(_INS_SWP), CS_AARCH64(_INS_SWPA), CS_AARCH64(_INS_SWPAL), CS_AARCH64(_INS_SWPL): bits = REGBITS(0); if (!bits) { return NULL; @@ -2146,8 +2396,8 @@ static RzILOpEffect *swp(cs_insn *insn) { return NULL; } RzILOpEffect *store_eff = bits == 8 ? STORE(addr, store_val) : STOREW(addr, store_val); - arm64_reg ret_reg = xreg_of_reg(REGID(1)); - if (ret_reg == ARM64_REG_XZR) { + CS_aarch64_reg() ret_reg = xreg_of_reg(REGID(1)); + if (ret_reg == CS_AARCH64(_REG_XZR)) { return store_eff; } RzILOpEffect *ret_eff = write_reg(ret_reg, bits != 64 ? 
UNSIGNED(64, VARL("ret")) : VARL("ret")); @@ -2163,7 +2413,7 @@ static RzILOpEffect *swp(cs_insn *insn) { #endif /** - * Capstone: ARM64_INS_SXTB, ARM64_INS_SXTH, ARM64_INS_SXTW, ARM64_INS_UXTB, ARM64_INS_UXTH + * Capstone: CS_AARCH64(_INS_SXTB), CS_AARCH64(_INS_SXTH), CS_AARCH64(_INS_SXTW), CS_AARCH64(_INS_UXTB), CS_AARCH64(_INS_UXTH) * ARM: sxtb, sxth, sxtw, uxtb, uxth */ static RzILOpEffect *sxt(cs_insn *insn) { @@ -2172,23 +2422,45 @@ static RzILOpEffect *sxt(cs_insn *insn) { } ut32 bits; bool is_signed = true; +#if CS_NEXT_VERSION < 6 switch (insn->id) { - case ARM64_INS_UXTB: + case CS_AARCH64(_INS_UXTB): is_signed = false; // fallthrough - case ARM64_INS_SXTB: + case CS_AARCH64(_INS_SXTB): bits = 8; break; - case ARM64_INS_UXTH: + case CS_AARCH64(_INS_UXTH): is_signed = false; // fallthrough - case ARM64_INS_SXTH: + case CS_AARCH64(_INS_SXTH): bits = 16; break; - default: // ARM64_INS_SXTW + default: // CS_AARCH64(_INS_SXTW) bits = 32; break; } +#else + switch (insn->alias_id) { + default: + return NULL; + case AArch64_INS_ALIAS_UXTB: + is_signed = false; + // fallthrough + case AArch64_INS_ALIAS_SXTB: + bits = 8; + break; + case AArch64_INS_ALIAS_UXTH: + is_signed = false; + // fallthrough + case AArch64_INS_ALIAS_SXTH: + bits = 16; + break; + case AArch64_INS_ALIAS_SXTW: + bits = 32; + break; + } +#endif RzILOpBitVector *src = ARG(1, &bits); if (!src) { return NULL; @@ -2197,7 +2469,7 @@ static RzILOpEffect *sxt(cs_insn *insn) { } /** - * Capstone: ARM64_INS_TBNZ, ARM64_TBZ + * Capstone: CS_AARCH64(_INS_TBNZ), ARM64_TBZ * ARM: tbnz, tbz */ static RzILOpEffect *tbz(cs_insn *insn) { @@ -2213,19 +2485,25 @@ static RzILOpEffect *tbz(cs_insn *insn) { return NULL; } RzILOpBool *c = LSB(SHIFTR0(src, UN(6, IMM(1)))); - return insn->id == ARM64_INS_TBNZ + return insn->id == CS_AARCH64(_INS_TBNZ) ? BRANCH(c, JMP(tgt), NULL) : BRANCH(c, NULL, JMP(tgt)); } /** - * Capstone: ARM64_INS_TST + * Capstone: CS_AARCH64(_INS_TST) * ARM: tst */ static RzILOpEffect *tst(cs_insn *insn) { ut32 bits = 0; +#if CS_NEXT_VERSION < 6 RzILOpBitVector *a = ARG(0, &bits); RzILOpBitVector *b = ARG(1, &bits); +#else + // Operand 0 is the zero register the result is written to. 
+ RzILOpBitVector *a = ARG(1, &bits); + RzILOpBitVector *b = ARG(2, &bits); +#endif if (!a || !b) { rz_il_op_pure_free(a); rz_il_op_pure_free(b); @@ -2311,439 +2589,533 @@ static RzILOpEffect *tst(cs_insn *insn) { */ RZ_IPI RzILOpEffect *rz_arm_cs_64_il(csh *handle, cs_insn *insn) { switch (insn->id) { - case ARM64_INS_NOP: - case ARM64_INS_HINT: - case ARM64_INS_PRFM: - case ARM64_INS_PRFUM: - case ARM64_INS_SEV: - case ARM64_INS_SEVL: - case ARM64_INS_WFE: - case ARM64_INS_WFI: - case ARM64_INS_YIELD: + case CS_AARCH64(_INS_HINT): + case CS_AARCH64(_INS_PRFM): + case CS_AARCH64(_INS_PRFUM): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_NOP): + case CS_AARCH64(_INS_SEV): + case CS_AARCH64(_INS_SEVL): + case CS_AARCH64(_INS_WFE): + case CS_AARCH64(_INS_WFI): + case CS_AARCH64(_INS_YIELD): +#endif return NOP(); - case ARM64_INS_ADD: - case ARM64_INS_ADC: - case ARM64_INS_SUB: - case ARM64_INS_SBC: + case CS_AARCH64(_INS_ADD): + case CS_AARCH64(_INS_ADC): + case CS_AARCH64(_INS_SUB): + case CS_AARCH64(_INS_SBC): #if CS_API_MAJOR > 4 - case ARM64_INS_ADDS: - case ARM64_INS_SUBS: - case ARM64_INS_ADCS: - case ARM64_INS_SBCS: + case CS_AARCH64(_INS_ADDS): + case CS_AARCH64(_INS_SUBS): + case CS_AARCH64(_INS_ADCS): + case CS_AARCH64(_INS_SBCS): +#endif +#if CS_NEXT_VERSION >= 6 + if (insn->alias_id == AArch64_INS_ALIAS_MOV || + insn->alias_id == AArch64_INS_ALIAS_MOVZ) { + return mov(insn); + } else if (insn->alias_id == AArch64_INS_ALIAS_CMP || + insn->alias_id == AArch64_INS_ALIAS_CMN) { + return cmp(insn); + } else if (insn->alias_id == AArch64_INS_ALIAS_NEG || + insn->alias_id == AArch64_INS_ALIAS_NGC || + insn->alias_id == AArch64_INS_ALIAS_NEGS || + insn->alias_id == AArch64_INS_ALIAS_NGCS) { + return mvn(insn); + } #endif return add_sub(insn); - case ARM64_INS_ADR: - case ARM64_INS_ADRP: + case CS_AARCH64(_INS_ADR): + case CS_AARCH64(_INS_ADRP): return adr(insn); - case ARM64_INS_AND: + case CS_AARCH64(_INS_AND): #if CS_API_MAJOR > 4 - case ARM64_INS_ANDS: + case CS_AARCH64(_INS_ANDS): +#endif + case CS_AARCH64(_INS_EOR): + case CS_AARCH64(_INS_EON): + case CS_AARCH64(_INS_ORN): + case CS_AARCH64(_INS_ORR): +#if CS_NEXT_VERSION >= 6 + if (insn->alias_id == AArch64_INS_ALIAS_MOV || + insn->alias_id == AArch64_INS_ALIAS_MOVZ) { + return mov(insn); + } else if (insn->alias_id == AArch64_INS_ALIAS_TST) { + return tst(insn); + } else if (insn->alias_id == AArch64_INS_ALIAS_MVN) { + return mvn(insn); + } #endif - case ARM64_INS_EOR: - case ARM64_INS_EON: - case ARM64_INS_ORN: - case ARM64_INS_ORR: return bitwise(insn); - case ARM64_INS_ASR: - case ARM64_INS_LSL: - case ARM64_INS_LSR: - case ARM64_INS_ROR: + case CS_AARCH64(_INS_ASR): + case CS_AARCH64(_INS_LSL): + case CS_AARCH64(_INS_LSR): + case CS_AARCH64(_INS_ROR): return shift(insn); - case ARM64_INS_B: - case ARM64_INS_BR: - case ARM64_INS_RET: + case CS_AARCH64(_INS_B): + case CS_AARCH64(_INS_BR): + case CS_AARCH64(_INS_RET): #if CS_API_MAJOR > 4 - case ARM64_INS_BRAA: - case ARM64_INS_BRAAZ: - case ARM64_INS_BRAB: - case ARM64_INS_BRABZ: - case ARM64_INS_RETAA: - case ARM64_INS_RETAB: + case CS_AARCH64(_INS_BRAA): + case CS_AARCH64(_INS_BRAAZ): + case CS_AARCH64(_INS_BRAB): + case CS_AARCH64(_INS_BRABZ): + case CS_AARCH64(_INS_RETAA): + case CS_AARCH64(_INS_RETAB): #endif return branch(insn); - case ARM64_INS_BL: - case ARM64_INS_BLR: + case CS_AARCH64(_INS_BL): + case CS_AARCH64(_INS_BLR): #if CS_API_MAJOR > 4 - case ARM64_INS_BLRAA: - case ARM64_INS_BLRAAZ: - case ARM64_INS_BLRAB: - case ARM64_INS_BLRABZ: + case 
CS_AARCH64(_INS_BLRAA): + case CS_AARCH64(_INS_BLRAAZ): + case CS_AARCH64(_INS_BLRAB): + case CS_AARCH64(_INS_BLRABZ): #endif return bl(insn); - case ARM64_INS_BFM: - case ARM64_INS_BFI: - case ARM64_INS_BFXIL: + case CS_AARCH64(_INS_BFM): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_BFI): + case CS_AARCH64(_INS_BFXIL): +#endif return bfm(insn); - case ARM64_INS_BIC: + case CS_AARCH64(_INS_BIC): #if CS_API_MAJOR > 4 - case ARM64_INS_BICS: + case CS_AARCH64(_INS_BICS): #endif return bic(insn); #if CS_API_MAJOR > 4 - case ARM64_INS_CAS: - case ARM64_INS_CASA: - case ARM64_INS_CASAL: - case ARM64_INS_CASL: - case ARM64_INS_CASB: - case ARM64_INS_CASAB: - case ARM64_INS_CASALB: - case ARM64_INS_CASLB: - case ARM64_INS_CASH: - case ARM64_INS_CASAH: - case ARM64_INS_CASALH: - case ARM64_INS_CASLH: + case CS_AARCH64(_INS_CAS): + case CS_AARCH64(_INS_CASA): + case CS_AARCH64(_INS_CASAL): + case CS_AARCH64(_INS_CASL): + case CS_AARCH64(_INS_CASB): + case CS_AARCH64(_INS_CASAB): + case CS_AARCH64(_INS_CASALB): + case CS_AARCH64(_INS_CASLB): + case CS_AARCH64(_INS_CASH): + case CS_AARCH64(_INS_CASAH): + case CS_AARCH64(_INS_CASALH): + case CS_AARCH64(_INS_CASLH): return cas(insn); - case ARM64_INS_CASP: - case ARM64_INS_CASPA: - case ARM64_INS_CASPAL: - case ARM64_INS_CASPL: + case CS_AARCH64(_INS_CASP): + case CS_AARCH64(_INS_CASPA): + case CS_AARCH64(_INS_CASPAL): + case CS_AARCH64(_INS_CASPL): return casp(insn); #endif - case ARM64_INS_CBZ: - case ARM64_INS_CBNZ: + case CS_AARCH64(_INS_CBZ): + case CS_AARCH64(_INS_CBNZ): return cbz(insn); - case ARM64_INS_CMP: - case ARM64_INS_CMN: - case ARM64_INS_CCMP: - case ARM64_INS_CCMN: +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_CMP): + case CS_AARCH64(_INS_CMN): +#endif + case CS_AARCH64(_INS_CCMP): + case CS_AARCH64(_INS_CCMN): return cmp(insn); #if CS_API_MAJOR > 4 - case ARM64_INS_CFINV: + case CS_AARCH64(_INS_CFINV): return SETG("cf", INV(VARG("cf"))); #endif - case ARM64_INS_CINC: - case ARM64_INS_CSINC: - case ARM64_INS_CINV: - case ARM64_INS_CSINV: - case ARM64_INS_CNEG: - case ARM64_INS_CSNEG: - case ARM64_INS_CSEL: + case CS_AARCH64(_INS_CSINC): + case CS_AARCH64(_INS_CSINV): + case CS_AARCH64(_INS_CSNEG): + case CS_AARCH64(_INS_CSEL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_CINC): + case CS_AARCH64(_INS_CINV): + case CS_AARCH64(_INS_CNEG): +#else + if (insn->alias_id == AArch64_INS_ALIAS_CSET || + insn->alias_id == AArch64_INS_ALIAS_CSETM) { + return cset(insn); + } +#endif return csinc(insn); - case ARM64_INS_CSET: - case ARM64_INS_CSETM: +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_CSET): + case CS_AARCH64(_INS_CSETM): return cset(insn); - case ARM64_INS_CLS: +#endif + case CS_AARCH64(_INS_CLS): return cls(insn); - case ARM64_INS_CLZ: + case CS_AARCH64(_INS_CLZ): return clz(insn); - case ARM64_INS_EXTR: + case CS_AARCH64(_INS_EXTR): +#if CS_NEXT_VERSION >= 6 + if (insn->alias_id == AArch64_INS_ALIAS_ROR) { + return shift(insn); + } +#endif return extr(insn); - case ARM64_INS_HVC: + case CS_AARCH64(_INS_HVC): return hvc(insn); - case ARM64_INS_SVC: + case CS_AARCH64(_INS_SVC): return svc(insn); - case ARM64_INS_LDR: - case ARM64_INS_LDRB: - case ARM64_INS_LDRH: - case ARM64_INS_LDUR: - case ARM64_INS_LDURB: - case ARM64_INS_LDURH: - case ARM64_INS_LDRSW: - case ARM64_INS_LDRSB: - case ARM64_INS_LDRSH: - case ARM64_INS_LDURSW: - case ARM64_INS_LDURSB: - case ARM64_INS_LDURSH: - case ARM64_INS_LDAR: - case ARM64_INS_LDARB: - case ARM64_INS_LDARH: - case ARM64_INS_LDAXP: - case ARM64_INS_LDXP: - case ARM64_INS_LDAXR: - case 
ARM64_INS_LDAXRB: - case ARM64_INS_LDAXRH: - case ARM64_INS_LDP: - case ARM64_INS_LDNP: - case ARM64_INS_LDPSW: - case ARM64_INS_LDTR: - case ARM64_INS_LDTRB: - case ARM64_INS_LDTRH: - case ARM64_INS_LDTRSW: - case ARM64_INS_LDTRSB: - case ARM64_INS_LDTRSH: - case ARM64_INS_LDXR: - case ARM64_INS_LDXRB: - case ARM64_INS_LDXRH: + case CS_AARCH64(_INS_LDR): + case CS_AARCH64(_INS_LDRB): + case CS_AARCH64(_INS_LDRH): + case CS_AARCH64(_INS_LDUR): + case CS_AARCH64(_INS_LDURB): + case CS_AARCH64(_INS_LDURH): + case CS_AARCH64(_INS_LDRSW): + case CS_AARCH64(_INS_LDRSB): + case CS_AARCH64(_INS_LDRSH): + case CS_AARCH64(_INS_LDURSW): + case CS_AARCH64(_INS_LDURSB): + case CS_AARCH64(_INS_LDURSH): + case CS_AARCH64(_INS_LDAR): + case CS_AARCH64(_INS_LDARB): + case CS_AARCH64(_INS_LDARH): + case CS_AARCH64(_INS_LDAXP): + case CS_AARCH64(_INS_LDXP): + case CS_AARCH64(_INS_LDAXR): + case CS_AARCH64(_INS_LDAXRB): + case CS_AARCH64(_INS_LDAXRH): + case CS_AARCH64(_INS_LDP): + case CS_AARCH64(_INS_LDNP): + case CS_AARCH64(_INS_LDPSW): + case CS_AARCH64(_INS_LDTR): + case CS_AARCH64(_INS_LDTRB): + case CS_AARCH64(_INS_LDTRH): + case CS_AARCH64(_INS_LDTRSW): + case CS_AARCH64(_INS_LDTRSB): + case CS_AARCH64(_INS_LDTRSH): + case CS_AARCH64(_INS_LDXR): + case CS_AARCH64(_INS_LDXRB): + case CS_AARCH64(_INS_LDXRH): #if CS_API_MAJOR > 4 - case ARM64_INS_LDAPR: - case ARM64_INS_LDAPRB: - case ARM64_INS_LDAPRH: - case ARM64_INS_LDAPUR: - case ARM64_INS_LDAPURB: - case ARM64_INS_LDAPURH: - case ARM64_INS_LDAPURSB: - case ARM64_INS_LDAPURSH: - case ARM64_INS_LDAPURSW: - case ARM64_INS_LDLAR: - case ARM64_INS_LDLARB: - case ARM64_INS_LDLARH: - case ARM64_INS_LDRAA: - case ARM64_INS_LDRAB: + case CS_AARCH64(_INS_LDAPR): + case CS_AARCH64(_INS_LDAPRB): + case CS_AARCH64(_INS_LDAPRH): + case CS_AARCH64(_INS_LDAPUR): + case CS_AARCH64(_INS_LDAPURB): + case CS_AARCH64(_INS_LDAPURH): + case CS_AARCH64(_INS_LDAPURSB): + case CS_AARCH64(_INS_LDAPURSH): + case CS_AARCH64(_INS_LDAPURSW): + case CS_AARCH64(_INS_LDLAR): + case CS_AARCH64(_INS_LDLARB): + case CS_AARCH64(_INS_LDLARH): + case CS_AARCH64(_INS_LDRAA): + case CS_AARCH64(_INS_LDRAB): #endif return ldr(insn); #if CS_API_MAJOR > 4 - case ARM64_INS_LDADD: - case ARM64_INS_LDADDA: - case ARM64_INS_LDADDAL: - case ARM64_INS_LDADDL: - case ARM64_INS_LDADDB: - case ARM64_INS_LDADDAB: - case ARM64_INS_LDADDALB: - case ARM64_INS_LDADDLB: - case ARM64_INS_LDADDH: - case ARM64_INS_LDADDAH: - case ARM64_INS_LDADDALH: - case ARM64_INS_LDADDLH: - case ARM64_INS_STADD: - case ARM64_INS_STADDL: - case ARM64_INS_STADDB: - case ARM64_INS_STADDLB: - case ARM64_INS_STADDH: - case ARM64_INS_STADDLH: - case ARM64_INS_LDCLRB: - case ARM64_INS_LDCLRAB: - case ARM64_INS_LDCLRALB: - case ARM64_INS_LDCLRLB: - case ARM64_INS_LDCLRH: - case ARM64_INS_LDCLRAH: - case ARM64_INS_LDCLRALH: - case ARM64_INS_LDCLRLH: - case ARM64_INS_LDCLR: - case ARM64_INS_LDCLRA: - case ARM64_INS_LDCLRAL: - case ARM64_INS_LDCLRL: - case ARM64_INS_STCLR: - case ARM64_INS_STCLRL: - case ARM64_INS_STCLRB: - case ARM64_INS_STCLRLB: - case ARM64_INS_STCLRH: - case ARM64_INS_STCLRLH: - case ARM64_INS_LDEORB: - case ARM64_INS_LDEORAB: - case ARM64_INS_LDEORALB: - case ARM64_INS_LDEORLB: - case ARM64_INS_LDEORH: - case ARM64_INS_LDEORAH: - case ARM64_INS_LDEORALH: - case ARM64_INS_LDEORLH: - case ARM64_INS_LDEOR: - case ARM64_INS_LDEORA: - case ARM64_INS_LDEORAL: - case ARM64_INS_LDEORL: - case ARM64_INS_STEOR: - case ARM64_INS_STEORL: - case ARM64_INS_STEORB: - case ARM64_INS_STEORLB: - case ARM64_INS_STEORH: - case 
ARM64_INS_STEORLH: - case ARM64_INS_LDSETB: - case ARM64_INS_LDSETAB: - case ARM64_INS_LDSETALB: - case ARM64_INS_LDSETLB: - case ARM64_INS_LDSETH: - case ARM64_INS_LDSETAH: - case ARM64_INS_LDSETALH: - case ARM64_INS_LDSETLH: - case ARM64_INS_LDSET: - case ARM64_INS_LDSETA: - case ARM64_INS_LDSETAL: - case ARM64_INS_LDSETL: - case ARM64_INS_STSET: - case ARM64_INS_STSETL: - case ARM64_INS_STSETB: - case ARM64_INS_STSETLB: - case ARM64_INS_STSETH: - case ARM64_INS_STSETLH: - case ARM64_INS_LDSMAXB: - case ARM64_INS_LDSMAXAB: - case ARM64_INS_LDSMAXALB: - case ARM64_INS_LDSMAXLB: - case ARM64_INS_LDSMAXH: - case ARM64_INS_LDSMAXAH: - case ARM64_INS_LDSMAXALH: - case ARM64_INS_LDSMAXLH: - case ARM64_INS_LDSMAX: - case ARM64_INS_LDSMAXA: - case ARM64_INS_LDSMAXAL: - case ARM64_INS_LDSMAXL: - case ARM64_INS_STSMAX: - case ARM64_INS_STSMAXL: - case ARM64_INS_STSMAXB: - case ARM64_INS_STSMAXLB: - case ARM64_INS_STSMAXH: - case ARM64_INS_STSMAXLH: - case ARM64_INS_LDSMINB: - case ARM64_INS_LDSMINAB: - case ARM64_INS_LDSMINALB: - case ARM64_INS_LDSMINLB: - case ARM64_INS_LDSMINH: - case ARM64_INS_LDSMINAH: - case ARM64_INS_LDSMINALH: - case ARM64_INS_LDSMINLH: - case ARM64_INS_LDSMIN: - case ARM64_INS_LDSMINA: - case ARM64_INS_LDSMINAL: - case ARM64_INS_LDSMINL: - case ARM64_INS_STSMIN: - case ARM64_INS_STSMINL: - case ARM64_INS_STSMINB: - case ARM64_INS_STSMINLB: - case ARM64_INS_STSMINH: - case ARM64_INS_STSMINLH: - case ARM64_INS_LDUMAXB: - case ARM64_INS_LDUMAXAB: - case ARM64_INS_LDUMAXALB: - case ARM64_INS_LDUMAXLB: - case ARM64_INS_LDUMAXH: - case ARM64_INS_LDUMAXAH: - case ARM64_INS_LDUMAXALH: - case ARM64_INS_LDUMAXLH: - case ARM64_INS_LDUMAX: - case ARM64_INS_LDUMAXA: - case ARM64_INS_LDUMAXAL: - case ARM64_INS_LDUMAXL: - case ARM64_INS_STUMAX: - case ARM64_INS_STUMAXL: - case ARM64_INS_STUMAXB: - case ARM64_INS_STUMAXLB: - case ARM64_INS_STUMAXH: - case ARM64_INS_STUMAXLH: - case ARM64_INS_LDUMINB: - case ARM64_INS_LDUMINAB: - case ARM64_INS_LDUMINALB: - case ARM64_INS_LDUMINLB: - case ARM64_INS_LDUMINH: - case ARM64_INS_LDUMINAH: - case ARM64_INS_LDUMINALH: - case ARM64_INS_LDUMINLH: - case ARM64_INS_LDUMIN: - case ARM64_INS_LDUMINA: - case ARM64_INS_LDUMINAL: - case ARM64_INS_LDUMINL: - case ARM64_INS_STUMIN: - case ARM64_INS_STUMINL: - case ARM64_INS_STUMINB: - case ARM64_INS_STUMINLB: - case ARM64_INS_STUMINH: - case ARM64_INS_STUMINLH: + case CS_AARCH64(_INS_LDADD): + case CS_AARCH64(_INS_LDADDA): + case CS_AARCH64(_INS_LDADDAL): + case CS_AARCH64(_INS_LDADDL): + case CS_AARCH64(_INS_LDADDB): + case CS_AARCH64(_INS_LDADDAB): + case CS_AARCH64(_INS_LDADDALB): + case CS_AARCH64(_INS_LDADDLB): + case CS_AARCH64(_INS_LDADDH): + case CS_AARCH64(_INS_LDADDAH): + case CS_AARCH64(_INS_LDADDALH): + case CS_AARCH64(_INS_LDADDLH): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STADD): + case CS_AARCH64(_INS_STADDL): + case CS_AARCH64(_INS_STADDB): + case CS_AARCH64(_INS_STADDLB): + case CS_AARCH64(_INS_STADDH): + case CS_AARCH64(_INS_STADDLH): +#endif + case CS_AARCH64(_INS_LDCLRB): + case CS_AARCH64(_INS_LDCLRAB): + case CS_AARCH64(_INS_LDCLRALB): + case CS_AARCH64(_INS_LDCLRLB): + case CS_AARCH64(_INS_LDCLRH): + case CS_AARCH64(_INS_LDCLRAH): + case CS_AARCH64(_INS_LDCLRALH): + case CS_AARCH64(_INS_LDCLRLH): + case CS_AARCH64(_INS_LDCLR): + case CS_AARCH64(_INS_LDCLRA): + case CS_AARCH64(_INS_LDCLRAL): + case CS_AARCH64(_INS_LDCLRL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STCLR): + case CS_AARCH64(_INS_STCLRL): + case CS_AARCH64(_INS_STCLRB): + case CS_AARCH64(_INS_STCLRLB): + 
case CS_AARCH64(_INS_STCLRH): + case CS_AARCH64(_INS_STCLRLH): +#endif + case CS_AARCH64(_INS_LDEORB): + case CS_AARCH64(_INS_LDEORAB): + case CS_AARCH64(_INS_LDEORALB): + case CS_AARCH64(_INS_LDEORLB): + case CS_AARCH64(_INS_LDEORH): + case CS_AARCH64(_INS_LDEORAH): + case CS_AARCH64(_INS_LDEORALH): + case CS_AARCH64(_INS_LDEORLH): + case CS_AARCH64(_INS_LDEOR): + case CS_AARCH64(_INS_LDEORA): + case CS_AARCH64(_INS_LDEORAL): + case CS_AARCH64(_INS_LDEORL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STEOR): + case CS_AARCH64(_INS_STEORL): + case CS_AARCH64(_INS_STEORB): + case CS_AARCH64(_INS_STEORLB): + case CS_AARCH64(_INS_STEORH): + case CS_AARCH64(_INS_STEORLH): +#endif + case CS_AARCH64(_INS_LDSETB): + case CS_AARCH64(_INS_LDSETAB): + case CS_AARCH64(_INS_LDSETALB): + case CS_AARCH64(_INS_LDSETLB): + case CS_AARCH64(_INS_LDSETH): + case CS_AARCH64(_INS_LDSETAH): + case CS_AARCH64(_INS_LDSETALH): + case CS_AARCH64(_INS_LDSETLH): + case CS_AARCH64(_INS_LDSET): + case CS_AARCH64(_INS_LDSETA): + case CS_AARCH64(_INS_LDSETAL): + case CS_AARCH64(_INS_LDSETL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSET): + case CS_AARCH64(_INS_STSETL): + case CS_AARCH64(_INS_STSETB): + case CS_AARCH64(_INS_STSETLB): + case CS_AARCH64(_INS_STSETH): + case CS_AARCH64(_INS_STSETLH): +#endif + case CS_AARCH64(_INS_LDSMAXB): + case CS_AARCH64(_INS_LDSMAXAB): + case CS_AARCH64(_INS_LDSMAXALB): + case CS_AARCH64(_INS_LDSMAXLB): + case CS_AARCH64(_INS_LDSMAXH): + case CS_AARCH64(_INS_LDSMAXAH): + case CS_AARCH64(_INS_LDSMAXALH): + case CS_AARCH64(_INS_LDSMAXLH): + case CS_AARCH64(_INS_LDSMAX): + case CS_AARCH64(_INS_LDSMAXA): + case CS_AARCH64(_INS_LDSMAXAL): + case CS_AARCH64(_INS_LDSMAXL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSMAX): + case CS_AARCH64(_INS_STSMAXL): + case CS_AARCH64(_INS_STSMAXB): + case CS_AARCH64(_INS_STSMAXLB): + case CS_AARCH64(_INS_STSMAXH): + case CS_AARCH64(_INS_STSMAXLH): +#endif + case CS_AARCH64(_INS_LDSMINB): + case CS_AARCH64(_INS_LDSMINAB): + case CS_AARCH64(_INS_LDSMINALB): + case CS_AARCH64(_INS_LDSMINLB): + case CS_AARCH64(_INS_LDSMINH): + case CS_AARCH64(_INS_LDSMINAH): + case CS_AARCH64(_INS_LDSMINALH): + case CS_AARCH64(_INS_LDSMINLH): + case CS_AARCH64(_INS_LDSMIN): + case CS_AARCH64(_INS_LDSMINA): + case CS_AARCH64(_INS_LDSMINAL): + case CS_AARCH64(_INS_LDSMINL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STSMIN): + case CS_AARCH64(_INS_STSMINL): + case CS_AARCH64(_INS_STSMINB): + case CS_AARCH64(_INS_STSMINLB): + case CS_AARCH64(_INS_STSMINH): + case CS_AARCH64(_INS_STSMINLH): +#endif + case CS_AARCH64(_INS_LDUMAXB): + case CS_AARCH64(_INS_LDUMAXAB): + case CS_AARCH64(_INS_LDUMAXALB): + case CS_AARCH64(_INS_LDUMAXLB): + case CS_AARCH64(_INS_LDUMAXH): + case CS_AARCH64(_INS_LDUMAXAH): + case CS_AARCH64(_INS_LDUMAXALH): + case CS_AARCH64(_INS_LDUMAXLH): + case CS_AARCH64(_INS_LDUMAX): + case CS_AARCH64(_INS_LDUMAXA): + case CS_AARCH64(_INS_LDUMAXAL): + case CS_AARCH64(_INS_LDUMAXL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STUMAX): + case CS_AARCH64(_INS_STUMAXL): + case CS_AARCH64(_INS_STUMAXB): + case CS_AARCH64(_INS_STUMAXLB): + case CS_AARCH64(_INS_STUMAXH): + case CS_AARCH64(_INS_STUMAXLH): +#endif + case CS_AARCH64(_INS_LDUMINB): + case CS_AARCH64(_INS_LDUMINAB): + case CS_AARCH64(_INS_LDUMINALB): + case CS_AARCH64(_INS_LDUMINLB): + case CS_AARCH64(_INS_LDUMINH): + case CS_AARCH64(_INS_LDUMINAH): + case CS_AARCH64(_INS_LDUMINALH): + case CS_AARCH64(_INS_LDUMINLH): + case CS_AARCH64(_INS_LDUMIN): + case CS_AARCH64(_INS_LDUMINA): + 
case CS_AARCH64(_INS_LDUMINAL): + case CS_AARCH64(_INS_LDUMINL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_STUMIN): + case CS_AARCH64(_INS_STUMINL): + case CS_AARCH64(_INS_STUMINB): + case CS_AARCH64(_INS_STUMINLB): + case CS_AARCH64(_INS_STUMINH): + case CS_AARCH64(_INS_STUMINLH): +#endif return ldadd(insn); #endif - case ARM64_INS_MADD: - case ARM64_INS_MSUB: + case CS_AARCH64(_INS_MADD): + case CS_AARCH64(_INS_MSUB): +#if CS_NEXT_VERSION >= 6 + if (insn->alias_id == AArch64_INS_ALIAS_MUL || + insn->alias_id == AArch64_INS_ALIAS_MNEG) { + return mul(insn); + } +#endif return madd(insn); - case ARM64_INS_MUL: - case ARM64_INS_MNEG: + case CS_AARCH64(_INS_MUL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_MNEG): +#endif return mul(insn); - case ARM64_INS_MOV: - case ARM64_INS_MOVZ: + case CS_AARCH64(_INS_MOV): + case CS_AARCH64(_INS_MOVZ): return mov(insn); - case ARM64_INS_MOVK: + case CS_AARCH64(_INS_MOVK): return movk(insn); - case ARM64_INS_MOVN: + case CS_AARCH64(_INS_MOVN): return movn(insn); - case ARM64_INS_MSR: + case CS_AARCH64(_INS_MSR): return msr(insn); - case ARM64_INS_MRS: + case CS_AARCH64(_INS_MRS): return mrs(insn); - case ARM64_INS_MVN: - case ARM64_INS_NEG: - case ARM64_INS_NGC: - case ARM64_INS_NEGS: - case ARM64_INS_NGCS: + case CS_AARCH64(_INS_NEG): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_MVN): + case CS_AARCH64(_INS_NGC): + case CS_AARCH64(_INS_NEGS): + case CS_AARCH64(_INS_NGCS): +#endif return mvn(insn); - case ARM64_INS_RBIT: + case CS_AARCH64(_INS_RBIT): return rbit(insn); - case ARM64_INS_REV: - case ARM64_INS_REV32: - case ARM64_INS_REV16: + case CS_AARCH64(_INS_REV): + case CS_AARCH64(_INS_REV32): + case CS_AARCH64(_INS_REV16): return rev(insn); #if CS_API_MAJOR > 4 - case ARM64_INS_RMIF: + case CS_AARCH64(_INS_RMIF): return rmif(insn); #endif - case ARM64_INS_SBFIZ: - case ARM64_INS_SBFX: - case ARM64_INS_UBFIZ: - case ARM64_INS_UBFX: - return sbfx(insn); - case ARM64_INS_SDIV: + case CS_AARCH64(_INS_SBFM): + case CS_AARCH64(_INS_UBFM): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_SBFIZ): + case CS_AARCH64(_INS_SBFX): + case CS_AARCH64(_INS_UBFIZ): + case CS_AARCH64(_INS_UBFX): +#else + if (insn->alias_id == AArch64_INS_ALIAS_UXTH || + insn->alias_id == AArch64_INS_ALIAS_UXTB || + insn->alias_id == AArch64_INS_ALIAS_SXTH || + insn->alias_id == AArch64_INS_ALIAS_SXTB || + insn->alias_id == AArch64_INS_ALIAS_SXTW) { + return sxt(insn); + } +#endif + return usbfm(insn); + case CS_AARCH64(_INS_SDIV): return sdiv(insn); #if CS_API_MAJOR > 4 - case ARM64_INS_SETF8: - case ARM64_INS_SETF16: + case CS_AARCH64(_INS_SETF8): + case CS_AARCH64(_INS_SETF16): return setf(insn); #endif - case ARM64_INS_SMADDL: - case ARM64_INS_SMSUBL: - case ARM64_INS_UMADDL: - case ARM64_INS_UMSUBL: + case CS_AARCH64(_INS_SMADDL): + case CS_AARCH64(_INS_SMSUBL): + case CS_AARCH64(_INS_UMADDL): + case CS_AARCH64(_INS_UMSUBL): +#if CS_NEXT_VERSION >= 6 + if (insn->alias_id == AArch64_INS_ALIAS_SMULL || + insn->alias_id == AArch64_INS_ALIAS_UMULL || + insn->alias_id == AArch64_INS_ALIAS_SMNEGL || + insn->alias_id == AArch64_INS_ALIAS_UMNEGL) { + return smull(insn); + } +#endif return smaddl(insn); - case ARM64_INS_SMULL: - case ARM64_INS_SMNEGL: - case ARM64_INS_UMULL: - case ARM64_INS_UMNEGL: + case CS_AARCH64(_INS_SMULL): + case CS_AARCH64(_INS_UMULL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_SMNEGL): + case CS_AARCH64(_INS_UMNEGL): +#endif return smull(insn); - case ARM64_INS_SMULH: - case ARM64_INS_UMULH: + case CS_AARCH64(_INS_SMULH): + case 
CS_AARCH64(_INS_UMULH): return smulh(insn); - case ARM64_INS_STR: - case ARM64_INS_STUR: - case ARM64_INS_STRB: - case ARM64_INS_STURB: - case ARM64_INS_STRH: - case ARM64_INS_STURH: - case ARM64_INS_STLR: - case ARM64_INS_STLRB: - case ARM64_INS_STLRH: - case ARM64_INS_STP: - case ARM64_INS_STNP: - case ARM64_INS_STXR: - case ARM64_INS_STXRB: - case ARM64_INS_STXRH: - case ARM64_INS_STXP: - case ARM64_INS_STLXR: - case ARM64_INS_STLXRB: - case ARM64_INS_STLXRH: - case ARM64_INS_STLXP: - case ARM64_INS_STTR: - case ARM64_INS_STTRB: - case ARM64_INS_STTRH: + case CS_AARCH64(_INS_STR): + case CS_AARCH64(_INS_STUR): + case CS_AARCH64(_INS_STRB): + case CS_AARCH64(_INS_STURB): + case CS_AARCH64(_INS_STRH): + case CS_AARCH64(_INS_STURH): + case CS_AARCH64(_INS_STLR): + case CS_AARCH64(_INS_STLRB): + case CS_AARCH64(_INS_STLRH): + case CS_AARCH64(_INS_STP): + case CS_AARCH64(_INS_STNP): + case CS_AARCH64(_INS_STXR): + case CS_AARCH64(_INS_STXRB): + case CS_AARCH64(_INS_STXRH): + case CS_AARCH64(_INS_STXP): + case CS_AARCH64(_INS_STLXR): + case CS_AARCH64(_INS_STLXRB): + case CS_AARCH64(_INS_STLXRH): + case CS_AARCH64(_INS_STLXP): + case CS_AARCH64(_INS_STTR): + case CS_AARCH64(_INS_STTRB): + case CS_AARCH64(_INS_STTRH): #if CS_API_MAJOR > 4 - case ARM64_INS_STLLR: - case ARM64_INS_STLLRB: - case ARM64_INS_STLLRH: - case ARM64_INS_STLUR: - case ARM64_INS_STLURB: - case ARM64_INS_STLURH: + case CS_AARCH64(_INS_STLLR): + case CS_AARCH64(_INS_STLLRB): + case CS_AARCH64(_INS_STLLRH): + case CS_AARCH64(_INS_STLUR): + case CS_AARCH64(_INS_STLURB): + case CS_AARCH64(_INS_STLURH): #endif return str(insn); #if CS_API_MAJOR > 4 - case ARM64_INS_SWP: - case ARM64_INS_SWPA: - case ARM64_INS_SWPAL: - case ARM64_INS_SWPL: - case ARM64_INS_SWPB: - case ARM64_INS_SWPAB: - case ARM64_INS_SWPALB: - case ARM64_INS_SWPLB: - case ARM64_INS_SWPH: - case ARM64_INS_SWPAH: - case ARM64_INS_SWPALH: - case ARM64_INS_SWPLH: + case CS_AARCH64(_INS_SWP): + case CS_AARCH64(_INS_SWPA): + case CS_AARCH64(_INS_SWPAL): + case CS_AARCH64(_INS_SWPL): + case CS_AARCH64(_INS_SWPB): + case CS_AARCH64(_INS_SWPAB): + case CS_AARCH64(_INS_SWPALB): + case CS_AARCH64(_INS_SWPLB): + case CS_AARCH64(_INS_SWPH): + case CS_AARCH64(_INS_SWPAH): + case CS_AARCH64(_INS_SWPALH): + case CS_AARCH64(_INS_SWPLH): return swp(insn); #endif - case ARM64_INS_SXTB: - case ARM64_INS_SXTH: - case ARM64_INS_SXTW: - case ARM64_INS_UXTB: - case ARM64_INS_UXTH: + case CS_AARCH64(_INS_SXTB): + case CS_AARCH64(_INS_SXTH): + case CS_AARCH64(_INS_SXTW): + case CS_AARCH64(_INS_UXTB): + case CS_AARCH64(_INS_UXTH): return sxt(insn); - case ARM64_INS_TBNZ: - case ARM64_INS_TBZ: + case CS_AARCH64(_INS_TBNZ): + case CS_AARCH64(_INS_TBZ): return tbz(insn); - case ARM64_INS_TST: +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_TST): return tst(insn); - case ARM64_INS_UDIV: +#endif + case CS_AARCH64(_INS_UDIV): return udiv(insn); default: break; diff --git a/librz/analysis/fcn.c b/librz/analysis/fcn.c index ae98ee9a851..a76f2335746 100644 --- a/librz/analysis/fcn.c +++ b/librz/analysis/fcn.c @@ -1418,7 +1418,7 @@ static RzAnalysisBBEndCause run_basic_block_analysis(RzAnalysisTaskItem *item, R rz_analysis_task_item_new(analysis, tasks, fcn, NULL, op.jump, sp); goto beach; } - if (!op.cond) { + if (op.cond == RZ_TYPE_COND_AL) { RZ_LOG_DEBUG("RET 0x%08" PFMT64x ". 
overlap=%s %" PFMT64u " %" PFMT64u "\n", addr + delay.un_idx - oplen, rz_str_bool(overlapped), bb->size, rz_analysis_function_linear_size(fcn)); diff --git a/librz/analysis/op.c b/librz/analysis/op.c index 35122dee4f4..25e963d468c 100644 --- a/librz/analysis/op.c +++ b/librz/analysis/op.c @@ -31,6 +31,7 @@ RZ_API void rz_analysis_op_init(RzAnalysisOp *op) { op->val = UT64_MAX; op->disp = UT64_MAX; op->mmio_address = UT64_MAX; + op->stackptr = RZ_ANALYSIS_OP_INVALID_STACKPTR; } } @@ -213,6 +214,8 @@ RZ_API bool rz_analysis_op_ismemref(int t) { case RZ_ANALYSIS_OP_TYPE_STORE: case RZ_ANALYSIS_OP_TYPE_LEA: case RZ_ANALYSIS_OP_TYPE_CMP: + case RZ_ANALYSIS_OP_TYPE_POP: + case RZ_ANALYSIS_OP_TYPE_PUSH: return true; default: return false; diff --git a/librz/analysis/p/analysis_arm_cs.c b/librz/analysis/p/analysis_arm_cs.c index d0725d811a7..5710c5efac8 100644 --- a/librz/analysis/p/analysis_arm_cs.c +++ b/librz/analysis/p/analysis_arm_cs.c @@ -169,6 +169,14 @@ static const char *vector_data_type_name(arm_vectordata_type type) { } } +static bool cc_holds_cond(CS_aarch64_cc() cc) { +#if CS_NEXT_VERSION >= 6 + return (cc != CS_AARCH64CC(_Invalid) && cc != CS_AARCH64CC(_AL) && cc != CS_AARCH64CC(_NV)); +#else + return (cc != CS_AARCH64CC(_INVALID) && cc != CS_AARCH64CC(_AL) && cc != CS_AARCH64CC(_NV)); +#endif +} + static void opex(RzStrBuf *buf, csh handle, cs_insn *insn) { int i; PJ *pj = pj_new(); @@ -316,94 +324,94 @@ static void opex(RzStrBuf *buf, csh handle, cs_insn *insn) { pj_free(pj); } -static const char *cc_name64(arm64_cc cc) { +static const char *cc_name64(CS_aarch64_cc() cc) { switch (cc) { - case ARM64_CC_EQ: // Equal + case CS_AARCH64CC(_EQ): // Equal return "eq"; - case ARM64_CC_NE: // Not equal: Not equal, or unordered + case CS_AARCH64CC(_NE): // Not equal: Not equal, or unordered return "ne"; - case ARM64_CC_HS: // Unsigned higher or same: >, ==, or unordered + case CS_AARCH64CC(_HS): // Unsigned higher or same: >, ==, or unordered return "hs"; - case ARM64_CC_LO: // Unsigned lower or same: Less than + case CS_AARCH64CC(_LO): // Unsigned lower: Less than return "lo"; - case ARM64_CC_MI: // Minus, negative: Less than + case CS_AARCH64CC(_MI): // Minus, negative: Less than return "mi"; - case ARM64_CC_PL: // Plus, positive or zero: >, ==, or unordered + case CS_AARCH64CC(_PL): // Plus, positive or zero: >, ==, or unordered return "pl"; - case ARM64_CC_VS: // Overflow: Unordered + case CS_AARCH64CC(_VS): // Overflow: Unordered return "vs"; - case ARM64_CC_VC: // No overflow: Ordered + case CS_AARCH64CC(_VC): // No overflow: Ordered return "vc"; - case ARM64_CC_HI: // Unsigned higher: Greater than, or unordered + case CS_AARCH64CC(_HI): // Unsigned higher: Greater than, or unordered return "hi"; - case ARM64_CC_LS: // Unsigned lower or same: Less than or equal + case CS_AARCH64CC(_LS): // Unsigned lower or same: Less than or equal return "ls"; - case ARM64_CC_GE: // Greater than or equal: Greater than or equal + case CS_AARCH64CC(_GE): // Greater than or equal: Greater than or equal return "ge"; - case ARM64_CC_LT: // Less than: Less than, or unordered + case CS_AARCH64CC(_LT): // Less than: Less than, or unordered return "lt"; - case ARM64_CC_GT: // Signed greater than: Greater than + case CS_AARCH64CC(_GT): // Signed greater than: Greater than return "gt"; - case ARM64_CC_LE: // Signed less than or equal: <, ==, or unordered + case CS_AARCH64CC(_LE): // Signed less than or equal: <, ==, or unordered return "le"; default: return ""; } } -static const char 
*extender_name(arm64_extender extender) { +static const char *extender_name(CS_aarch64_extender() extender) { switch (extender) { - case ARM64_EXT_UXTB: + case CS_AARCH64(_EXT_UXTB): return "uxtb"; - case ARM64_EXT_UXTH: + case CS_AARCH64(_EXT_UXTH): return "uxth"; - case ARM64_EXT_UXTW: + case CS_AARCH64(_EXT_UXTW): return "uxtw"; - case ARM64_EXT_UXTX: + case CS_AARCH64(_EXT_UXTX): return "uxtx"; - case ARM64_EXT_SXTB: + case CS_AARCH64(_EXT_SXTB): return "sxtb"; - case ARM64_EXT_SXTH: + case CS_AARCH64(_EXT_SXTH): return "sxth"; - case ARM64_EXT_SXTW: + case CS_AARCH64(_EXT_SXTW): return "sxtw"; - case ARM64_EXT_SXTX: + case CS_AARCH64(_EXT_SXTX): return "sxtx"; default: return ""; } } -static const char *vas_name(arm64_vas vas) { +static const char *vas_name(CS_aarch64_vas() vas) { switch (vas) { - case ARM64_VAS_8B: + case CS_AARCH64_VL_(8B): return "8b"; - case ARM64_VAS_16B: + case CS_AARCH64_VL_(16B): return "16b"; - case ARM64_VAS_4H: + case CS_AARCH64_VL_(4H): return "4h"; - case ARM64_VAS_8H: + case CS_AARCH64_VL_(8H): return "8h"; - case ARM64_VAS_2S: + case CS_AARCH64_VL_(2S): return "2s"; - case ARM64_VAS_4S: + case CS_AARCH64_VL_(4S): return "4s"; - case ARM64_VAS_2D: + case CS_AARCH64_VL_(2D): return "2d"; - case ARM64_VAS_1D: + case CS_AARCH64_VL_(1D): return "1d"; - case ARM64_VAS_1Q: + case CS_AARCH64_VL_(1Q): return "1q"; -#if CS_API_MAJOR > 4 - case ARM64_VAS_1B: +#if CS_API_MAJOR > 4 && CS_NEXT_VERSION < 6 + case CS_AARCH64_VL_(1B): return "8b"; - case ARM64_VAS_4B: + case CS_AARCH64_VL_(4B): return "8b"; - case ARM64_VAS_2H: + case CS_AARCH64_VL_(2H): return "2h"; - case ARM64_VAS_1H: + case CS_AARCH64_VL_(1H): return "1h"; - case ARM64_VAS_1S: + case CS_AARCH64_VL_(1S): return "1s"; #endif default: @@ -436,45 +444,46 @@ static void opex64(RzStrBuf *buf, csh handle, cs_insn *insn) { } pj_o(pj); pj_ka(pj, "operands"); - cs_arm64 *x = &insn->detail->arm64; + CS_cs_aarch64() *x = &insn->detail->CS_aarch64_; for (i = 0; i < x->op_count; i++) { - cs_arm64_op *op = x->operands + i; + CS_aarch64_op() *op = x->operands + i; pj_o(pj); switch (op->type) { - case ARM64_OP_REG: + case CS_AARCH64(_OP_REG): pj_ks(pj, "type", "reg"); pj_ks(pj, "value", cs_reg_name(handle, op->reg)); break; - case ARM64_OP_REG_MRS: + case CS_AARCH64(_OP_REG_MRS): pj_ks(pj, "type", "reg_mrs"); // TODO value break; - case ARM64_OP_REG_MSR: + case CS_AARCH64(_OP_REG_MSR): pj_ks(pj, "type", "reg_msr"); // TODO value break; - case ARM64_OP_IMM: + case CS_AARCH64(_OP_IMM): pj_ks(pj, "type", "imm"); pj_kN(pj, "value", op->imm); break; - case ARM64_OP_MEM: + case CS_AARCH64(_OP_MEM): pj_ks(pj, "type", "mem"); - if (op->mem.base != ARM64_REG_INVALID) { + if (op->mem.base != CS_AARCH64(_REG_INVALID)) { pj_ks(pj, "base", cs_reg_name(handle, op->mem.base)); } - if (op->mem.index != ARM64_REG_INVALID) { + if (op->mem.index != CS_AARCH64(_REG_INVALID)) { pj_ks(pj, "index", cs_reg_name(handle, op->mem.index)); } pj_ki(pj, "disp", op->mem.disp); break; - case ARM64_OP_FP: + case CS_AARCH64(_OP_FP): pj_ks(pj, "type", "fp"); pj_kd(pj, "value", op->fp); break; - case ARM64_OP_CIMM: + case CS_AARCH64(_OP_CIMM): pj_ks(pj, "type", "cimm"); pj_kN(pj, "value", op->imm); break; +#if CS_NEXT_VERSION < 6 case ARM64_OP_PSTATE: pj_ks(pj, "type", "pstate"); switch (op->pstate) { @@ -503,26 +512,64 @@ static void opex64(RzStrBuf *buf, csh handle, cs_insn *insn) { pj_ks(pj, "type", "prefetch"); pj_ki(pj, "value", op->barrier - 1); break; +#else + case AArch64_OP_SYSALIAS: + switch (op->sysop.sub_type) { + default: + pj_ks(pj, 
"type", "sys"); + pj_kn(pj, "value", op->sysop.alias.raw_val); + break; + case AArch64_OP_PSTATEIMM0_1: + pj_ks(pj, "type", "pstate"); + pj_ki(pj, "value", op->sysop.alias.pstateimm0_1); + break; + case AArch64_OP_PSTATEIMM0_15: + pj_ks(pj, "type", "pstate"); + switch (op->sysop.alias.pstateimm0_15) { + case AArch64_PSTATEIMM0_15_SPSEL: + pj_ks(pj, "value", "spsel"); + break; + case AArch64_PSTATEIMM0_15_DAIFSET: + pj_ks(pj, "value", "daifset"); + break; + case AArch64_PSTATEIMM0_15_DAIFCLR: + pj_ks(pj, "value", "daifclr"); + break; + default: + pj_ki(pj, "value", op->sysop.alias.pstateimm0_15); + } + break; + case AArch64_OP_PRFM: + pj_ks(pj, "type", "prefetch"); + pj_ki(pj, "value", op->sysop.alias.prfm); + break; + case AArch64_OP_DB: + pj_ks(pj, "type", "prefetch"); + pj_ki(pj, "value", op->sysop.alias.db); + break; + } + break; +#endif default: pj_ks(pj, "type", "invalid"); break; } - if (op->shift.type != ARM64_SFT_INVALID) { + if (op->shift.type != CS_AARCH64(_SFT_INVALID)) { pj_ko(pj, "shift"); switch (op->shift.type) { - case ARM64_SFT_LSL: + case CS_AARCH64(_SFT_LSL): pj_ks(pj, "type", "lsl"); break; - case ARM64_SFT_MSL: + case CS_AARCH64(_SFT_MSL): pj_ks(pj, "type", "msl"); break; - case ARM64_SFT_LSR: + case CS_AARCH64(_SFT_LSR): pj_ks(pj, "type", "lsr"); break; - case ARM64_SFT_ASR: + case CS_AARCH64(_SFT_ASR): pj_ks(pj, "type", "asr"); break; - case ARM64_SFT_ROR: + case CS_AARCH64(_SFT_ROR): pj_ks(pj, "type", "ror"); break; default: @@ -531,13 +578,17 @@ static void opex64(RzStrBuf *buf, csh handle, cs_insn *insn) { pj_kn(pj, "value", (ut64)op->shift.value); pj_end(pj); } - if (op->ext != ARM64_EXT_INVALID) { + if (op->ext != CS_AARCH64(_EXT_INVALID)) { pj_ks(pj, "ext", extender_name(op->ext)); } if (op->vector_index != -1) { pj_ki(pj, "vector_index", op->vector_index); } - if (op->vas != ARM64_VAS_INVALID) { +#if CS_NEXT_VERSION < 6 + if (op->vas != CS_AARCH64_VL_(INVALID)) { +#else + if (op->vas != AArch64Layout_Invalid) { +#endif pj_ks(pj, "vas", vas_name(op->vas)); } #if CS_API_MAJOR == 4 @@ -551,10 +602,14 @@ static void opex64(RzStrBuf *buf, csh handle, cs_insn *insn) { if (x->update_flags) { pj_kb(pj, "update_flags", true); } +#if CS_NEXT_VERSION < 6 if (x->writeback) { +#else + if (insn->detail->writeback) { +#endif pj_kb(pj, "writeback", true); } - if (x->cc != ARM64_CC_INVALID && x->cc != ARM64_CC_AL && x->cc != ARM64_CC_NV) { + if (cc_holds_cond(x->cc)) { pj_ks(pj, "cc", cc_name64(x->cc)); } pj_end(pj); @@ -564,8 +619,12 @@ static void opex64(RzStrBuf *buf, csh handle, cs_insn *insn) { pj_free(pj); } -static int cond_cs2r2_32(int cc) { - if (cc == CS_ARMCC(AL) || cc < 0) { +static int cond_cs2rz_32(int cc) { +#if CS_NEXT_VERSION >= 6 + if (cc == ARMCC_AL || cc < 0 || cc == ARMCC_UNDEF) { +#else + if (cc == ARM_CC_AL || cc < 0) { +#endif cc = RZ_TYPE_COND_AL; } else { switch (cc) { @@ -588,25 +647,29 @@ static int cond_cs2r2_32(int cc) { return cc; } -static int cond_cs2r2_64(int cc) { - if (cc == ARM64_CC_AL || cc < 0) { +static int cond_cs2rz_64(int cc) { + if (cc == CS_AARCH64CC(_AL) || cc < 0) { cc = RZ_TYPE_COND_AL; } else { switch (cc) { - case ARM64_CC_EQ: cc = RZ_TYPE_COND_EQ; break; - case ARM64_CC_NE: cc = RZ_TYPE_COND_NE; break; - case ARM64_CC_HS: cc = RZ_TYPE_COND_HS; break; - case ARM64_CC_LO: cc = RZ_TYPE_COND_LO; break; - case ARM64_CC_MI: cc = RZ_TYPE_COND_MI; break; - case ARM64_CC_PL: cc = RZ_TYPE_COND_PL; break; - case ARM64_CC_VS: cc = RZ_TYPE_COND_VS; break; - case ARM64_CC_VC: cc = RZ_TYPE_COND_VC; break; - case ARM64_CC_HI: cc = 
RZ_TYPE_COND_HI; break; - case ARM64_CC_LS: cc = RZ_TYPE_COND_LS; break; - case ARM64_CC_GE: cc = RZ_TYPE_COND_GE; break; - case ARM64_CC_LT: cc = RZ_TYPE_COND_LT; break; - case ARM64_CC_GT: cc = RZ_TYPE_COND_GT; break; - case ARM64_CC_LE: cc = RZ_TYPE_COND_LE; break; + case CS_AARCH64CC(_EQ): cc = RZ_TYPE_COND_EQ; break; + case CS_AARCH64CC(_NE): cc = RZ_TYPE_COND_NE; break; + case CS_AARCH64CC(_HS): cc = RZ_TYPE_COND_HS; break; + case CS_AARCH64CC(_LO): cc = RZ_TYPE_COND_LO; break; + case CS_AARCH64CC(_MI): cc = RZ_TYPE_COND_MI; break; + case CS_AARCH64CC(_PL): cc = RZ_TYPE_COND_PL; break; + case CS_AARCH64CC(_VS): cc = RZ_TYPE_COND_VS; break; + case CS_AARCH64CC(_VC): cc = RZ_TYPE_COND_VC; break; + case CS_AARCH64CC(_HI): cc = RZ_TYPE_COND_HI; break; + case CS_AARCH64CC(_LS): cc = RZ_TYPE_COND_LS; break; + case CS_AARCH64CC(_GE): cc = RZ_TYPE_COND_GE; break; + case CS_AARCH64CC(_LT): cc = RZ_TYPE_COND_LT; break; + case CS_AARCH64CC(_GT): cc = RZ_TYPE_COND_GT; break; + case CS_AARCH64CC(_LE): cc = RZ_TYPE_COND_LE; break; + case CS_AARCH64CC(_NV): cc = RZ_TYPE_COND_AL; break; +#if CS_NEXT_VERSION >= 6 + case CS_AARCH64CC(_Invalid): cc = RZ_TYPE_COND_AL; break; +#endif } } return cc; @@ -616,6 +679,7 @@ static void anop64(ArmCSContext *ctx, RzAnalysisOp *op, cs_insn *insn) { csh handle = ctx->handle; ut64 addr = op->addr; +#if CS_NEXT_VERSION < 6 /* grab family */ if (cs_insn_group(handle, insn, ARM64_GRP_CRYPTO)) { op->family = RZ_ANALYSIS_OP_FAMILY_CRYPTO; @@ -630,18 +694,34 @@ static void anop64(ArmCSContext *ctx, RzAnalysisOp *op, cs_insn *insn) { } else { op->family = RZ_ANALYSIS_OP_FAMILY_CPU; } +#else + /* grab family */ + if (cs_insn_group(handle, insn, AArch64_FEATURE_HasAES)) { + op->family = RZ_ANALYSIS_OP_FAMILY_CRYPTO; + } else if (cs_insn_group(handle, insn, AArch64_FEATURE_HasCRC)) { + op->family = RZ_ANALYSIS_OP_FAMILY_CRYPTO; + } else if (cs_insn_group(handle, insn, AArch64_GRP_PRIVILEGE)) { + op->family = RZ_ANALYSIS_OP_FAMILY_PRIV; + } else if (cs_insn_group(handle, insn, AArch64_FEATURE_HasNEON)) { + op->family = RZ_ANALYSIS_OP_FAMILY_MMX; + } else if (cs_insn_group(handle, insn, AArch64_FEATURE_HasFPARMv8)) { + op->family = RZ_ANALYSIS_OP_FAMILY_FPU; + } else { + op->family = RZ_ANALYSIS_OP_FAMILY_CPU; + } +#endif - op->cond = cond_cs2r2_64(insn->detail->arm64.cc); + op->cond = cond_cs2rz_64(insn->detail->CS_aarch64_.cc); if (op->cond == RZ_TYPE_COND_NV) { op->type = RZ_ANALYSIS_OP_TYPE_NOP; return; } - switch (insn->detail->arm64.cc) { - case ARM64_CC_GE: - case ARM64_CC_GT: - case ARM64_CC_LE: - case ARM64_CC_LT: + switch (insn->detail->CS_aarch64_.cc) { + case CS_AARCH64CC(_GE): + case CS_AARCH64CC(_GT): + case CS_AARCH64CC(_LE): + case CS_AARCH64CC(_LT): op->sign = true; break; default: @@ -650,62 +730,69 @@ static void anop64(ArmCSContext *ctx, RzAnalysisOp *op, cs_insn *insn) { switch (insn->id) { #if CS_API_MAJOR > 4 - case ARM64_INS_PACDA: - case ARM64_INS_PACDB: - case ARM64_INS_PACDZA: - case ARM64_INS_PACDZB: - case ARM64_INS_PACGA: - case ARM64_INS_PACIA: - case ARM64_INS_PACIA1716: - case ARM64_INS_PACIASP: - case ARM64_INS_PACIAZ: - case ARM64_INS_PACIB: - case ARM64_INS_PACIB1716: - case ARM64_INS_PACIBSP: - case ARM64_INS_PACIBZ: - case ARM64_INS_PACIZA: - case ARM64_INS_PACIZB: - case ARM64_INS_AUTDA: - case ARM64_INS_AUTDB: - case ARM64_INS_AUTDZA: - case ARM64_INS_AUTDZB: - case ARM64_INS_AUTIA: - case ARM64_INS_AUTIA1716: - case ARM64_INS_AUTIASP: - case ARM64_INS_AUTIAZ: - case ARM64_INS_AUTIB: - case ARM64_INS_AUTIB1716: - case ARM64_INS_AUTIBSP: 
- case ARM64_INS_AUTIBZ: - case ARM64_INS_AUTIZA: - case ARM64_INS_AUTIZB: - case ARM64_INS_XPACD: - case ARM64_INS_XPACI: - case ARM64_INS_XPACLRI: + case CS_AARCH64(_INS_PACDA): + case CS_AARCH64(_INS_PACDB): + case CS_AARCH64(_INS_PACDZA): + case CS_AARCH64(_INS_PACDZB): + case CS_AARCH64(_INS_PACGA): + case CS_AARCH64(_INS_PACIA): + case CS_AARCH64(_INS_PACIB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_PACIA1716): + case CS_AARCH64(_INS_PACIASP): + case CS_AARCH64(_INS_PACIAZ): + case CS_AARCH64(_INS_PACIB1716): + case CS_AARCH64(_INS_PACIBSP): + case CS_AARCH64(_INS_PACIBZ): +#endif + case CS_AARCH64(_INS_PACIZA): + case CS_AARCH64(_INS_PACIZB): + case CS_AARCH64(_INS_AUTDA): + case CS_AARCH64(_INS_AUTDB): + case CS_AARCH64(_INS_AUTDZA): + case CS_AARCH64(_INS_AUTDZB): + case CS_AARCH64(_INS_AUTIA): + case CS_AARCH64(_INS_AUTIB): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_AUTIA1716): + case CS_AARCH64(_INS_AUTIASP): + case CS_AARCH64(_INS_AUTIAZ): + case CS_AARCH64(_INS_AUTIB1716): + case CS_AARCH64(_INS_AUTIBSP): + case CS_AARCH64(_INS_AUTIBZ): + case CS_AARCH64(_INS_XPACLRI): +#endif + case CS_AARCH64(_INS_AUTIZA): + case CS_AARCH64(_INS_AUTIZB): + case CS_AARCH64(_INS_XPACD): + case CS_AARCH64(_INS_XPACI): op->type = RZ_ANALYSIS_OP_TYPE_CMP; op->family = RZ_ANALYSIS_OP_FAMILY_SECURITY; break; #endif - case ARM64_INS_SVC: + case CS_AARCH64(_INS_SVC): op->type = RZ_ANALYSIS_OP_TYPE_SWI; op->val = IMM64(0); break; - case ARM64_INS_ADRP: - case ARM64_INS_ADR: + case CS_AARCH64(_INS_ADRP): + case CS_AARCH64(_INS_ADR): op->type = RZ_ANALYSIS_OP_TYPE_LEA; op->ptr = IMM64(1); break; - case ARM64_INS_NOP: + case CS_AARCH64(_INS_HINT): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_NOP): +#endif op->type = RZ_ANALYSIS_OP_TYPE_NOP; op->cycles = 1; break; - case ARM64_INS_SUB: - if (ISREG64(0) && REGID64(0) == ARM64_REG_SP) { + case CS_AARCH64(_INS_SUB): + if (ISREG64(0) && REGID64(0) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; if (ISIMM64(1)) { // sub sp, 0x54 op->stackptr = IMM(1); - } else if (ISIMM64(2) && ISREG64(1) && REGID64(1) == ARM64_REG_SP) { + } else if (ISIMM64(2) && ISREG64(1) && REGID64(1) == CS_AARCH64(_REG_SP)) { // sub sp, sp, 0x10 op->stackptr = IMM64(2); } @@ -713,31 +800,31 @@ static void anop64(ArmCSContext *ctx, RzAnalysisOp *op, cs_insn *insn) { } op->cycles = 1; /* fallthru */ - case ARM64_INS_MSUB: + case CS_AARCH64(_INS_MSUB): op->type = RZ_ANALYSIS_OP_TYPE_SUB; break; - case ARM64_INS_FDIV: - case ARM64_INS_SDIV: - case ARM64_INS_UDIV: + case CS_AARCH64(_INS_FDIV): + case CS_AARCH64(_INS_SDIV): + case CS_AARCH64(_INS_UDIV): op->cycles = 4; op->type = RZ_ANALYSIS_OP_TYPE_DIV; break; - case ARM64_INS_MUL: - case ARM64_INS_SMULL: - case ARM64_INS_FMUL: - case ARM64_INS_UMULL: + case CS_AARCH64(_INS_MUL): + case CS_AARCH64(_INS_SMULL): + case CS_AARCH64(_INS_FMUL): + case CS_AARCH64(_INS_UMULL): /* TODO: if next instruction is also a MUL, cycles are /=2 */ /* also known as Register Indexing Addressing */ op->cycles = 4; op->type = RZ_ANALYSIS_OP_TYPE_MUL; break; - case ARM64_INS_ADD: - if (ISREG64(0) && REGID64(0) == ARM64_REG_SP) { + case CS_AARCH64(_INS_ADD): + if (ISREG64(0) && REGID64(0) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; if (ISIMM64(1)) { // add sp, 0x54 op->stackptr = -(st64)IMM(1); - } else if (ISIMM64(2) && ISREG64(1) && REGID64(1) == ARM64_REG_SP) { + } else if (ISIMM64(2) && ISREG64(1) && REGID64(1) == CS_AARCH64(_REG_SP)) { // add sp, sp, 0x10 op->stackptr = -(st64)IMM64(2); } @@ -747,22 +834,24 @@ 
static void anop64(ArmCSContext *ctx, RzAnalysisOp *op, cs_insn *insn) { } op->cycles = 1; /* fallthru */ - case ARM64_INS_ADC: - // case ARM64_INS_ADCS: - case ARM64_INS_UMADDL: - case ARM64_INS_SMADDL: - case ARM64_INS_FMADD: - case ARM64_INS_MADD: + case CS_AARCH64(_INS_ADC): + // case CS_AARCH64(_INS_ADCS): + case CS_AARCH64(_INS_UMADDL): + case CS_AARCH64(_INS_SMADDL): + case CS_AARCH64(_INS_FMADD): + case CS_AARCH64(_INS_MADD): op->type = RZ_ANALYSIS_OP_TYPE_ADD; break; - case ARM64_INS_CSEL: - case ARM64_INS_FCSEL: - case ARM64_INS_CSET: - case ARM64_INS_CINC: + case CS_AARCH64(_INS_CSEL): + case CS_AARCH64(_INS_FCSEL): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_CSET): + case CS_AARCH64(_INS_CINC): +#endif op->type = RZ_ANALYSIS_OP_TYPE_CMOV; break; - case ARM64_INS_MOV: - if (REGID64(0) == ARM64_REG_SP) { + case CS_AARCH64(_INS_MOV): + if (REGID64(0) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_RESET; op->stackptr = 0; } @@ -771,169 +860,181 @@ static void anop64(ArmCSContext *ctx, RzAnalysisOp *op, cs_insn *insn) { } op->cycles = 1; /* fallthru */ - case ARM64_INS_MOVI: - case ARM64_INS_MOVK: - case ARM64_INS_MOVN: - case ARM64_INS_SMOV: - case ARM64_INS_UMOV: - case ARM64_INS_FMOV: - case ARM64_INS_SBFX: - case ARM64_INS_UBFX: - case ARM64_INS_UBFM: - case ARM64_INS_SBFIZ: - case ARM64_INS_UBFIZ: - case ARM64_INS_BIC: - case ARM64_INS_BFI: - case ARM64_INS_BFXIL: + case CS_AARCH64(_INS_MOVI): + case CS_AARCH64(_INS_MOVK): + case CS_AARCH64(_INS_MOVN): + case CS_AARCH64(_INS_SMOV): + case CS_AARCH64(_INS_UMOV): + case CS_AARCH64(_INS_FMOV): + case CS_AARCH64(_INS_UBFM): + case CS_AARCH64(_INS_BIC): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_SBFX): + case CS_AARCH64(_INS_UBFX): + case CS_AARCH64(_INS_SBFIZ): + case CS_AARCH64(_INS_UBFIZ): + case CS_AARCH64(_INS_BFI): + case CS_AARCH64(_INS_BFXIL): +#endif op->type = RZ_ANALYSIS_OP_TYPE_MOV; break; - case ARM64_INS_MRS: - case ARM64_INS_MSR: + case CS_AARCH64(_INS_MRS): + case CS_AARCH64(_INS_MSR): op->type = RZ_ANALYSIS_OP_TYPE_MOV; op->family = RZ_ANALYSIS_OP_FAMILY_PRIV; break; - case ARM64_INS_MOVZ: + case CS_AARCH64(_INS_MOVZ): op->type = RZ_ANALYSIS_OP_TYPE_MOV; op->ptr = 0LL; op->ptrsize = 8; op->val = IMM64(1); break; - case ARM64_INS_UXTB: - case ARM64_INS_SXTB: + case CS_AARCH64(_INS_UXTB): + case CS_AARCH64(_INS_SXTB): op->type = RZ_ANALYSIS_OP_TYPE_CAST; op->ptr = 0LL; op->ptrsize = 1; break; - case ARM64_INS_UXTH: - case ARM64_INS_SXTH: + case CS_AARCH64(_INS_UXTH): + case CS_AARCH64(_INS_SXTH): op->type = RZ_ANALYSIS_OP_TYPE_MOV; op->ptr = 0LL; op->ptrsize = 2; break; - case ARM64_INS_UXTW: - case ARM64_INS_SXTW: + case CS_AARCH64(_INS_UXTW): + case CS_AARCH64(_INS_SXTW): op->type = RZ_ANALYSIS_OP_TYPE_MOV; op->ptr = 0LL; op->ptrsize = 4; break; - case ARM64_INS_BRK: - case ARM64_INS_HLT: + case CS_AARCH64(_INS_BRK): + case CS_AARCH64(_INS_HLT): op->type = RZ_ANALYSIS_OP_TYPE_TRAP; // hlt stops the process, not skips some cycles like in x86 break; - case ARM64_INS_DMB: - case ARM64_INS_DSB: - case ARM64_INS_ISB: + case CS_AARCH64(_INS_DMB): + case CS_AARCH64(_INS_DSB): + case CS_AARCH64(_INS_ISB): op->family = RZ_ANALYSIS_OP_FAMILY_THREAD; +#if CS_NEXT_VERSION < 6 // intentional fallthrough - case ARM64_INS_IC: // instruction cache invalidate - case ARM64_INS_DC: // data cache invalidate + case CS_AARCH64(_INS_IC): // instruction cache invalidate + case CS_AARCH64(_INS_DC): // data cache invalidate +#endif op->type = RZ_ANALYSIS_OP_TYPE_SYNC; // or cache break; // XXX unimplemented instructions - 
case ARM64_INS_DUP: - case ARM64_INS_XTN: - case ARM64_INS_XTN2: - case ARM64_INS_REV64: - case ARM64_INS_EXT: - case ARM64_INS_INS: + case CS_AARCH64(_INS_DUP): + case CS_AARCH64(_INS_XTN): + case CS_AARCH64(_INS_XTN2): + case CS_AARCH64(_INS_REV64): + case CS_AARCH64(_INS_EXT): + case CS_AARCH64(_INS_INS): op->type = RZ_ANALYSIS_OP_TYPE_MOV; break; - case ARM64_INS_LSL: + case CS_AARCH64(_INS_LSL): op->cycles = 1; /* fallthru */ - case ARM64_INS_SHL: - case ARM64_INS_USHLL: + case CS_AARCH64(_INS_SHL): + case CS_AARCH64(_INS_USHLL): op->type = RZ_ANALYSIS_OP_TYPE_SHL; break; - case ARM64_INS_LSR: + case CS_AARCH64(_INS_LSR): op->cycles = 1; op->type = RZ_ANALYSIS_OP_TYPE_SHR; break; - case ARM64_INS_ASR: + case CS_AARCH64(_INS_ASR): op->cycles = 1; op->type = RZ_ANALYSIS_OP_TYPE_SAR; break; - case ARM64_INS_NEG: - case ARM64_INS_NEGS: + case CS_AARCH64(_INS_NEG): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_NEGS): +#endif op->type = RZ_ANALYSIS_OP_TYPE_NOT; break; - case ARM64_INS_FCMP: - case ARM64_INS_CCMP: - case ARM64_INS_CCMN: - case ARM64_INS_CMP: - case ARM64_INS_CMN: - case ARM64_INS_TST: + case CS_AARCH64(_INS_FCMP): + case CS_AARCH64(_INS_CCMP): + case CS_AARCH64(_INS_CCMN): +#if CS_NEXT_VERSION < 6 + case CS_AARCH64(_INS_CMP): + case CS_AARCH64(_INS_CMN): + case CS_AARCH64(_INS_TST): +#endif op->type = RZ_ANALYSIS_OP_TYPE_CMP; break; - case ARM64_INS_ROR: + case CS_AARCH64(_INS_ROR): op->cycles = 1; op->type = RZ_ANALYSIS_OP_TYPE_ROR; break; - case ARM64_INS_AND: + case CS_AARCH64(_INS_AND): op->type = RZ_ANALYSIS_OP_TYPE_AND; break; - case ARM64_INS_ORR: - case ARM64_INS_ORN: + case CS_AARCH64(_INS_ORR): + case CS_AARCH64(_INS_ORN): op->type = RZ_ANALYSIS_OP_TYPE_OR; if (ISIMM64(2)) { op->val = IMM64(2); } break; - case ARM64_INS_EOR: - case ARM64_INS_EON: + case CS_AARCH64(_INS_EOR): + case CS_AARCH64(_INS_EON): op->type = RZ_ANALYSIS_OP_TYPE_XOR; break; - case ARM64_INS_STRB: - case ARM64_INS_STURB: - case ARM64_INS_STUR: - case ARM64_INS_STR: - case ARM64_INS_STP: - case ARM64_INS_STNP: - case ARM64_INS_STXR: - case ARM64_INS_STXRH: - case ARM64_INS_STLXR: - case ARM64_INS_STLXRH: - case ARM64_INS_STXRB: + case CS_AARCH64(_INS_STRB): + case CS_AARCH64(_INS_STURB): + case CS_AARCH64(_INS_STUR): + case CS_AARCH64(_INS_STR): + case CS_AARCH64(_INS_STP): + case CS_AARCH64(_INS_STNP): + case CS_AARCH64(_INS_STXR): + case CS_AARCH64(_INS_STXRH): + case CS_AARCH64(_INS_STLXR): + case CS_AARCH64(_INS_STLXRH): + case CS_AARCH64(_INS_STXRB): op->type = RZ_ANALYSIS_OP_TYPE_STORE; - if (ISPREINDEX64() && REGBASE64(2) == ARM64_REG_SP) { + if (ISPREINDEX64() && REGBASE64(2) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; op->stackptr = -MEMDISP64(2); - } else if (ISPOSTINDEX64() && REGID64(2) == ARM64_REG_SP) { + } else if (ISPOSTINDEX64() && REGID64(2) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; op->stackptr = -IMM64(3); - } else if (ISPREINDEX64() && REGBASE64(1) == ARM64_REG_SP) { + } else if (ISPREINDEX64() && REGBASE64(1) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; op->stackptr = -MEMDISP64(1); - } else if (ISPOSTINDEX64() && REGID64(1) == ARM64_REG_SP) { + } else if (ISPOSTINDEX64() && REGID64(1) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; op->stackptr = -IMM64(2); } break; - case ARM64_INS_LDUR: - case ARM64_INS_LDURB: - case ARM64_INS_LDRSW: - case ARM64_INS_LDRSB: - case ARM64_INS_LDRSH: - case ARM64_INS_LDR: - case ARM64_INS_LDURSW: - case ARM64_INS_LDP: - case ARM64_INS_LDNP: - case ARM64_INS_LDPSW: - 
case ARM64_INS_LDRH: - case ARM64_INS_LDRB: - if (ISPREINDEX64() && REGBASE64(2) == ARM64_REG_SP) { + case CS_AARCH64(_INS_LDUR): + case CS_AARCH64(_INS_LDURB): + case CS_AARCH64(_INS_LDRSW): + case CS_AARCH64(_INS_LDRSB): + case CS_AARCH64(_INS_LDRSH): + case CS_AARCH64(_INS_LDR): + case CS_AARCH64(_INS_LDURSW): + case CS_AARCH64(_INS_LDP): + case CS_AARCH64(_INS_LDNP): + case CS_AARCH64(_INS_LDPSW): + case CS_AARCH64(_INS_LDRH): + case CS_AARCH64(_INS_LDRB): + if (ISPREINDEX64() && REGBASE64(2) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; op->stackptr = -MEMDISP64(2); - } else if (ISPOSTINDEX64() && REGID64(2) == ARM64_REG_SP) { + } else if (ISPOSTINDEX64() && REGID64(2) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; op->stackptr = -IMM64(3); - } else if (ISPREINDEX64() && REGBASE64(1) == ARM64_REG_SP) { + } else if (ISPREINDEX64() && REGBASE64(1) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; op->stackptr = -MEMDISP64(1); - } else if (ISPOSTINDEX64() && REGID64(1) == ARM64_REG_SP) { + } else if (ISPOSTINDEX64() && REGID64(1) == CS_AARCH64(_REG_SP)) { op->stackop = RZ_ANALYSIS_STACK_INC; +#if CS_NEXT_VERSION >= 6 + op->stackptr = -MEMDISP64(1); +#else op->stackptr = -IMM64(2); +#endif } if (REGID(0) == ARM_REG_PC) { op->type = RZ_ANALYSIS_OP_TYPE_UJMP; @@ -945,14 +1046,14 @@ static void anop64(ArmCSContext *ctx, RzAnalysisOp *op, cs_insn *insn) { op->type = RZ_ANALYSIS_OP_TYPE_LOAD; } switch (insn->id) { - case ARM64_INS_LDPSW: - case ARM64_INS_LDRSW: - case ARM64_INS_LDRSH: - case ARM64_INS_LDRSB: + case CS_AARCH64(_INS_LDPSW): + case CS_AARCH64(_INS_LDRSW): + case CS_AARCH64(_INS_LDRSH): + case CS_AARCH64(_INS_LDRSB): op->sign = true; break; } - if (REGBASE64(1) == ARM64_REG_X29) { + if (REGBASE64(1) == CS_AARCH64(_REG_X29)) { op->stackop = RZ_ANALYSIS_STACK_GET; op->stackptr = 0; op->ptr = MEMDISP64(1); @@ -969,73 +1070,73 @@ static void anop64(ArmCSContext *ctx, RzAnalysisOp *op, cs_insn *insn) { } break; #if CS_API_MAJOR > 4 - case ARM64_INS_BLRAA: - case ARM64_INS_BLRAAZ: - case ARM64_INS_BLRAB: - case ARM64_INS_BLRABZ: + case CS_AARCH64(_INS_BLRAA): + case CS_AARCH64(_INS_BLRAAZ): + case CS_AARCH64(_INS_BLRAB): + case CS_AARCH64(_INS_BLRABZ): op->family = RZ_ANALYSIS_OP_FAMILY_SECURITY; op->type = RZ_ANALYSIS_OP_TYPE_RCALL; break; - case ARM64_INS_BRAA: - case ARM64_INS_BRAAZ: - case ARM64_INS_BRAB: - case ARM64_INS_BRABZ: + case CS_AARCH64(_INS_BRAA): + case CS_AARCH64(_INS_BRAAZ): + case CS_AARCH64(_INS_BRAB): + case CS_AARCH64(_INS_BRABZ): op->family = RZ_ANALYSIS_OP_FAMILY_SECURITY; op->type = RZ_ANALYSIS_OP_TYPE_RJMP; break; - case ARM64_INS_LDRAA: - case ARM64_INS_LDRAB: + case CS_AARCH64(_INS_LDRAA): + case CS_AARCH64(_INS_LDRAB): op->family = RZ_ANALYSIS_OP_FAMILY_SECURITY; op->type = RZ_ANALYSIS_OP_TYPE_LOAD; break; - case ARM64_INS_RETAA: - case ARM64_INS_RETAB: - case ARM64_INS_ERETAA: - case ARM64_INS_ERETAB: + case CS_AARCH64(_INS_RETAA): + case CS_AARCH64(_INS_RETAB): + case CS_AARCH64(_INS_ERETAA): + case CS_AARCH64(_INS_ERETAB): op->family = RZ_ANALYSIS_OP_FAMILY_SECURITY; op->type = RZ_ANALYSIS_OP_TYPE_RET; break; #endif - case ARM64_INS_ERET: + case CS_AARCH64(_INS_ERET): op->family = RZ_ANALYSIS_OP_FAMILY_PRIV; op->type = RZ_ANALYSIS_OP_TYPE_RET; break; - case ARM64_INS_RET: + case CS_AARCH64(_INS_RET): op->type = RZ_ANALYSIS_OP_TYPE_RET; break; - case ARM64_INS_BL: // bl 0x89480 + case CS_AARCH64(_INS_BL): // bl 0x89480 op->type = RZ_ANALYSIS_OP_TYPE_CALL; op->jump = IMM64(0); op->fail = addr + 4; break; - case 
ARM64_INS_BLR: // blr x0 + case CS_AARCH64(_INS_BLR): // blr x0 op->type = RZ_ANALYSIS_OP_TYPE_RCALL; op->reg = cs_reg_name(handle, REGID64(0)); op->fail = addr + 4; // op->jump = IMM64(0); break; - case ARM64_INS_CBZ: - case ARM64_INS_CBNZ: + case CS_AARCH64(_INS_CBZ): + case CS_AARCH64(_INS_CBNZ): op->type = RZ_ANALYSIS_OP_TYPE_CJMP; op->jump = IMM64(1); op->fail = addr + op->size; break; - case ARM64_INS_TBZ: - case ARM64_INS_TBNZ: + case CS_AARCH64(_INS_TBZ): + case CS_AARCH64(_INS_TBNZ): op->type = RZ_ANALYSIS_OP_TYPE_CJMP; op->jump = IMM64(2); op->fail = addr + op->size; break; - case ARM64_INS_BR: + case CS_AARCH64(_INS_BR): op->type = RZ_ANALYSIS_OP_TYPE_RJMP; op->reg = cs_reg_name(handle, REGID64(0)); op->eob = true; break; - case ARM64_INS_B: + case CS_AARCH64(_INS_B): // BX LR == RET - if (insn->detail->arm64.operands[0].reg == ARM64_REG_LR) { + if (insn->detail->CS_aarch64_.operands[0].reg == CS_AARCH64(_REG_LR)) { op->type = RZ_ANALYSIS_OP_TYPE_RET; - } else if (insn->detail->arm64.cc) { + } else if (cc_holds_cond(insn->detail->CS_aarch64_.cc)) { op->type = RZ_ANALYSIS_OP_TYPE_CJMP; op->jump = IMM64(0); op->fail = addr + op->size; @@ -1050,13 +1151,26 @@ static void anop64(ArmCSContext *ctx, RzAnalysisOp *op, cs_insn *insn) { } } +/** + * \brief Checks if a given instruction returns from a subroutine + * and sets the analysis op data accordingly. + */ +inline static void set_ret(const cs_insn *insn, RZ_BORROW RzAnalysisOp *op) { +#if CS_NEXT_VERSION >= 6 + if (rz_arm_cs_is_group_member(insn, ARM_GRP_RET)) { + op->eob = true; + op->type = RZ_ANALYSIS_OP_TYPE_RET; + } +#endif +} + static void anop32(RzAnalysis *a, csh handle, RzAnalysisOp *op, cs_insn *insn, bool thumb, const ut8 *buf, int len) { ArmCSContext *ctx = (ArmCSContext *)a->plugin_data; const ut64 addr = op->addr; const int pcdelta = thumb ?
4 : 8; int i; - op->cond = cond_cs2r2_32(insn->detail->arm.cc); + op->cond = cond_cs2rz_32(insn->detail->arm.cc); if (op->cond == RZ_TYPE_COND_NV) { op->type = RZ_ANALYSIS_OP_TYPE_NOP; return; @@ -1145,13 +1259,22 @@ jmp $$ + 4 + ( [delta] * 2 ) op->type = RZ_ANALYSIS_OP_TYPE_TRAP; op->cycles = 4; break; +#if CS_NEXT_VERSION < 6 case ARM_INS_NOP: +#else + case ARM_INS_HINT: + if (insn->alias_id != ARM_INS_ALIAS_NOP) { + break; + } +#endif op->type = RZ_ANALYSIS_OP_TYPE_NOP; op->cycles = 1; break; case ARM_INS_POP: - op->stackop = RZ_ANALYSIS_STACK_INC; + op->type = RZ_ANALYSIS_OP_TYPE_POP; + op->stackop = RZ_ANALYSIS_STACK_DEC; op->stackptr = -4LL * insn->detail->arm.op_count; + set_ret(insn, op); // fallthrough case ARM_INS_FLDMDBX: case ARM_INS_FLDMIAX: @@ -1159,7 +1282,18 @@ jmp $$ + 4 + ( [delta] * 2 ) case ARM_INS_LDMDB: case ARM_INS_LDMIB: case ARM_INS_LDM: - op->type = RZ_ANALYSIS_OP_TYPE_POP; +#if CS_NEXT_VERSION >= 6 + if (insn->alias_id == ARM_INS_ALIAS_POP || insn->alias_id == ARM_INS_ALIAS_POPW || insn->alias_id == ARM_INS_ALIAS_VPOP) { + op->type = RZ_ANALYSIS_OP_TYPE_POP; + op->stackop = RZ_ANALYSIS_STACK_DEC; + op->stackptr = -4LL * (insn->detail->arm.op_count - 1); + set_ret(insn, op); + break; + } +#endif + if (insn->id != ARM_INS_POP) { + op->type = RZ_ANALYSIS_OP_TYPE_LOAD; + } op->cycles = 2; for (i = 0; i < insn->detail->arm.op_count; i++) { if (insn->detail->arm.operands[i].type == ARM_OP_REG && @@ -1328,16 +1462,27 @@ jmp $$ + 4 + ( [delta] * 2 ) op->type = RZ_ANALYSIS_OP_TYPE_SAR; break; case ARM_INS_PUSH: + op->type = RZ_ANALYSIS_OP_TYPE_PUSH; op->stackop = RZ_ANALYSIS_STACK_INC; op->stackptr = 4LL * insn->detail->arm.op_count; // fallthrough case ARM_INS_STM: case ARM_INS_STMDA: case ARM_INS_STMDB: - op->type = RZ_ANALYSIS_OP_TYPE_PUSH; +#if CS_NEXT_VERSION >= 6 + if (insn->alias_id == ARM_INS_ALIAS_PUSH || insn->alias_id == ARM_INS_ALIAS_PUSHW || insn->alias_id == ARM_INS_ALIAS_VPUSH) { + op->type = RZ_ANALYSIS_OP_TYPE_PUSH; + op->stackop = RZ_ANALYSIS_STACK_INC; + op->stackptr = 4LL * (insn->detail->arm.op_count - 1); + break; + } +#endif + if (insn->id != ARM_INS_PUSH) { + op->type = RZ_ANALYSIS_OP_TYPE_STORE; + } // 0x00008160 04202de5 str r2, [sp, -4]! // 0x000082a0 28000be5 str r0, [fp, -0x28] - if (REGBASE(1) == ARM_REG_FP) { + if (ISMEM(1) && REGBASE(1) == ARM_REG_FP) { op->stackop = RZ_ANALYSIS_STACK_SET; op->stackptr = 0; op->ptr = MEMDISP(1); @@ -1357,6 +1502,14 @@ jmp $$ + 4 + ( [delta] * 2 ) case ARM_INS_STRHT: case ARM_INS_STRT: op->cycles = 4; +#if CS_NEXT_VERSION >= 6 + if (insn->alias_id == ARM_INS_ALIAS_PUSH || insn->alias_id == ARM_INS_ALIAS_PUSHW) { + op->type = RZ_ANALYSIS_OP_TYPE_PUSH; + op->stackop = RZ_ANALYSIS_STACK_INC; + op->stackptr = 4LL * (insn->detail->arm.op_count - 1); + break; + } +#endif op->type = RZ_ANALYSIS_OP_TYPE_STORE; if (REGBASE(1) == ARM_REG_FP) { op->stackop = RZ_ANALYSIS_STACK_SET; @@ -1387,6 +1540,15 @@ jmp $$ + 4 + ( [delta] * 2 ) case ARM_INS_LDRSHT: case ARM_INS_LDRT: op->cycles = 4; +#if CS_NEXT_VERSION >= 6 + if (insn->alias_id == ARM_INS_ALIAS_POP || insn->alias_id == ARM_INS_ALIAS_POPW) { + op->type = RZ_ANALYSIS_OP_TYPE_POP; + op->stackop = RZ_ANALYSIS_STACK_DEC; + op->stackptr = -4LL * (insn->detail->arm.op_count - 1); + set_ret(insn, op); + break; + } +#endif // 0x000082a8 28301be5 ldr r3, [fp, -0x28] if (INSOP(1).mem.scale != -1) { op->scale = INSOP(1).mem.scale << LSHIFT(1); @@ -1552,7 +1714,7 @@ jmp $$ + 4 + ( [delta] * 2 ) ARMCondCodeToString(insn->detail->arm.cc), insn->op_str[0] ? 
" " : "", insn->op_str); - op->cond = (RzTypeCond)insn->detail->arm.cc; + op->cond = cond_cs2rz_32(insn->detail->arm.cc); } } @@ -1582,8 +1744,8 @@ static int parse_reg_name(RzReg *reg, RzRegItem **reg_base, RzRegItem **reg_delt return 0; } -static bool is_valid64(arm64_reg reg) { - return reg != ARM64_REG_INVALID; +static bool is_valid64(CS_aarch64_reg() reg) { + return reg != CS_AARCH64(_REG_INVALID); } static char *reg_list[] = { @@ -1597,12 +1759,12 @@ static char *reg_list[] = { }; static int parse_reg64_name(RzReg *reg, RzRegItem **reg_base, RzRegItem **reg_delta, csh handle, cs_insn *insn, int reg_num) { - cs_arm64_op armop = INSOP64(reg_num); + CS_aarch64_op() armop = INSOP64(reg_num); switch (armop.type) { - case ARM64_OP_REG: + case CS_AARCH64(_OP_REG): *reg_base = rz_reg_get(reg, cs_reg_name(handle, armop.reg), RZ_REG_TYPE_ANY); break; - case ARM64_OP_MEM: + case CS_AARCH64(_OP_MEM): if (is_valid64(armop.mem.base) && is_valid64(armop.mem.index)) { *reg_base = rz_reg_get(reg, cs_reg_name(handle, armop.mem.base), RZ_REG_TYPE_ANY); *reg_delta = rz_reg_get(reg, cs_reg_name(handle, armop.mem.index), RZ_REG_TYPE_ANY); @@ -1624,9 +1786,11 @@ static int parse_reg64_name(RzReg *reg, RzRegItem **reg_base, RzRegItem **reg_de static void set_opdir(RzAnalysisOp *op) { switch (op->type & RZ_ANALYSIS_OP_TYPE_MASK) { case RZ_ANALYSIS_OP_TYPE_LOAD: + case RZ_ANALYSIS_OP_TYPE_POP: op->direction = RZ_ANALYSIS_OP_DIR_READ; break; case RZ_ANALYSIS_OP_TYPE_STORE: + case RZ_ANALYSIS_OP_TYPE_PUSH: op->direction = RZ_ANALYSIS_OP_DIR_WRITE; break; case RZ_ANALYSIS_OP_TYPE_LEA: @@ -1645,7 +1809,7 @@ static void set_opdir(RzAnalysisOp *op) { static void set_src_dst(RzAnalysisValue *val, RzReg *reg, csh *handle, cs_insn *insn, int x, int bits) { cs_arm_op armop = INSOP(x); - cs_arm64_op arm64op = INSOP64(x); + CS_aarch64_op() arm64op = INSOP64(x); if (bits == 64) { parse_reg64_name(reg, &val->reg, &val->regdelta, *handle, insn, x); } else { @@ -1653,14 +1817,14 @@ static void set_src_dst(RzAnalysisValue *val, RzReg *reg, csh *handle, cs_insn * } if (bits == 64) { switch (arm64op.type) { - case ARM64_OP_REG: + case CS_AARCH64(_OP_REG): val->type = RZ_ANALYSIS_VAL_REG; break; - case ARM64_OP_MEM: + case CS_AARCH64(_OP_MEM): val->type = RZ_ANALYSIS_VAL_MEM; val->delta = arm64op.mem.disp; break; - case ARM64_OP_IMM: + case CS_AARCH64(_OP_IMM): val->type = RZ_ANALYSIS_VAL_IMM; val->imm = arm64op.imm; break; @@ -1675,7 +1839,7 @@ static void set_src_dst(RzAnalysisValue *val, RzReg *reg, csh *handle, cs_insn * case ARM_OP_MEM: val->type = RZ_ANALYSIS_VAL_MEM; val->mul = armop.mem.scale << armop.mem.lshift; -#if CS_NEXT_VERSION == 6 +#if CS_NEXT_VERSION >= 6 val->delta = MEMDISP(x); #else val->delta = armop.mem.disp; @@ -1701,7 +1865,7 @@ static void create_src_dst(RzAnalysisOp *op) { static void op_fillval(RzAnalysis *analysis, RzAnalysisOp *op, csh handle, cs_insn *insn, int bits) { create_src_dst(op); int i, j; - int count = bits == 64 ? insn->detail->arm64.op_count : insn->detail->arm.op_count; + int count = bits == 64 ? 
insn->detail->CS_aarch64_.op_count : insn->detail->arm.op_count; switch (op->type & RZ_ANALYSIS_OP_TYPE_MASK) { case RZ_ANALYSIS_OP_TYPE_MOV: case RZ_ANALYSIS_OP_TYPE_CMP: @@ -1725,7 +1889,7 @@ static void op_fillval(RzAnalysis *analysis, RzAnalysisOp *op, csh handle, cs_in case RZ_ANALYSIS_OP_TYPE_CAST: for (i = 1; i < count; i++) { if (bits == 64) { - cs_arm64_op arm64op = INSOP64(i); + CS_aarch64_op() arm64op = INSOP64(i); if (arm64op.access == CS_AC_WRITE) { continue; } @@ -1746,8 +1910,8 @@ static void op_fillval(RzAnalysis *analysis, RzAnalysisOp *op, csh handle, cs_in case RZ_ANALYSIS_OP_TYPE_STORE: if (count > 2) { if (bits == 64) { - cs_arm64_op arm64op = INSOP64(count - 1); - if (arm64op.type == ARM64_OP_IMM) { + CS_aarch64_op() arm64op = INSOP64(count - 1); + if (arm64op.type == CS_AARCH64(_OP_IMM)) { count--; } } else { @@ -1812,10 +1976,11 @@ static int analysis_op(RzAnalysis *a, RzAnalysisOp *op, ut64 addr, const ut8 *bu op->size = (a->bits == 16) ? 2 : 4; op->addr = addr; if (ctx->handle == 0) { - ret = (a->bits == 64) ? cs_open(CS_ARCH_ARM64, mode, &ctx->handle) : cs_open(CS_ARCH_ARM, mode, &ctx->handle); + ret = (a->bits == 64) ? cs_open(CS_AARCH64pre(CS_ARCH_), mode, &ctx->handle) : cs_open(CS_ARCH_ARM, mode, &ctx->handle); cs_option(ctx->handle, CS_OPT_DETAIL, CS_OPT_ON); #if CS_NEXT_VERSION >= 6 cs_option(ctx->handle, CS_OPT_SYNTAX, CS_OPT_SYNTAX_CS_REG_ALIAS); + cs_option(ctx->handle, CS_OPT_DETAIL, CS_OPT_DETAIL_REAL); #endif if (ret != CS_ERR_OK) { ctx->handle = 0; @@ -2325,47 +2490,47 @@ static ut8 *analysis_mask(RzAnalysis *analysis, int size, const ut8 *data, ut64 case 4: if (analysis->bits == 64) { switch (op->id) { - case ARM64_INS_LDP: - case ARM64_INS_LDXP: - case ARM64_INS_LDXR: - case ARM64_INS_LDXRB: - case ARM64_INS_LDXRH: - case ARM64_INS_LDPSW: - case ARM64_INS_LDNP: - case ARM64_INS_LDTR: - case ARM64_INS_LDTRB: - case ARM64_INS_LDTRH: - case ARM64_INS_LDTRSB: - case ARM64_INS_LDTRSH: - case ARM64_INS_LDTRSW: - case ARM64_INS_LDUR: - case ARM64_INS_LDURB: - case ARM64_INS_LDURH: - case ARM64_INS_LDURSB: - case ARM64_INS_LDURSH: - case ARM64_INS_LDURSW: - case ARM64_INS_STP: - case ARM64_INS_STNP: - case ARM64_INS_STXR: - case ARM64_INS_STXRB: - case ARM64_INS_STXRH: + case CS_AARCH64(_INS_LDP): + case CS_AARCH64(_INS_LDXP): + case CS_AARCH64(_INS_LDXR): + case CS_AARCH64(_INS_LDXRB): + case CS_AARCH64(_INS_LDXRH): + case CS_AARCH64(_INS_LDPSW): + case CS_AARCH64(_INS_LDNP): + case CS_AARCH64(_INS_LDTR): + case CS_AARCH64(_INS_LDTRB): + case CS_AARCH64(_INS_LDTRH): + case CS_AARCH64(_INS_LDTRSB): + case CS_AARCH64(_INS_LDTRSH): + case CS_AARCH64(_INS_LDTRSW): + case CS_AARCH64(_INS_LDUR): + case CS_AARCH64(_INS_LDURB): + case CS_AARCH64(_INS_LDURH): + case CS_AARCH64(_INS_LDURSB): + case CS_AARCH64(_INS_LDURSH): + case CS_AARCH64(_INS_LDURSW): + case CS_AARCH64(_INS_STP): + case CS_AARCH64(_INS_STNP): + case CS_AARCH64(_INS_STXR): + case CS_AARCH64(_INS_STXRB): + case CS_AARCH64(_INS_STXRH): rz_write_ble(ret + idx, 0xffffffff, analysis->big_endian, 32); break; - case ARM64_INS_STRB: - case ARM64_INS_STURB: - case ARM64_INS_STURH: - case ARM64_INS_STUR: - case ARM64_INS_STR: - case ARM64_INS_STTR: - case ARM64_INS_STTRB: - case ARM64_INS_STRH: - case ARM64_INS_STTRH: - case ARM64_INS_LDR: - case ARM64_INS_LDRB: - case ARM64_INS_LDRH: - case ARM64_INS_LDRSB: - case ARM64_INS_LDRSW: - case ARM64_INS_LDRSH: { + case CS_AARCH64(_INS_STRB): + case CS_AARCH64(_INS_STURB): + case CS_AARCH64(_INS_STURH): + case CS_AARCH64(_INS_STUR): + case CS_AARCH64(_INS_STR): 
+ case CS_AARCH64(_INS_STTR): + case CS_AARCH64(_INS_STTRB): + case CS_AARCH64(_INS_STRH): + case CS_AARCH64(_INS_STTRH): + case CS_AARCH64(_INS_LDR): + case CS_AARCH64(_INS_LDRB): + case CS_AARCH64(_INS_LDRH): + case CS_AARCH64(_INS_LDRSB): + case CS_AARCH64(_INS_LDRSW): + case CS_AARCH64(_INS_LDRSH): { bool is_literal = (opcode & 0x38000000) == 0x18000000; if (is_literal) { rz_write_ble(ret + idx, 0xff000000, analysis->big_endian, 32); @@ -2374,22 +2539,22 @@ static ut8 *analysis_mask(RzAnalysis *analysis, int size, const ut8 *data, ut64 } break; } - case ARM64_INS_B: - case ARM64_INS_BL: - case ARM64_INS_CBZ: - case ARM64_INS_CBNZ: + case CS_AARCH64(_INS_B): + case CS_AARCH64(_INS_BL): + case CS_AARCH64(_INS_CBZ): + case CS_AARCH64(_INS_CBNZ): if (op->type == RZ_ANALYSIS_OP_TYPE_CJMP) { rz_write_ble(ret + idx, 0xff00001f, analysis->big_endian, 32); } else { rz_write_ble(ret + idx, 0xfc000000, analysis->big_endian, 32); } break; - case ARM64_INS_TBZ: - case ARM64_INS_TBNZ: + case CS_AARCH64(_INS_TBZ): + case CS_AARCH64(_INS_TBNZ): rz_write_ble(ret + idx, 0xfff8001f, analysis->big_endian, 32); break; - case ARM64_INS_ADR: - case ARM64_INS_ADRP: + case CS_AARCH64(_INS_ADR): + case CS_AARCH64(_INS_ADRP): rz_write_ble(ret + idx, 0xff00001f, analysis->big_endian, 32); break; default: diff --git a/librz/analysis/var.c b/librz/analysis/var.c index cbb28f9455f..fafa17d36e2 100644 --- a/librz/analysis/var.c +++ b/librz/analysis/var.c @@ -1079,7 +1079,7 @@ static inline bool is_not_read_nor_write(const RzAnalysisOpDirection direction) } /** - * Try to extract any args from a single op + * \brief Try to extract any args from a single op * * \param reg name of the register to look at for accesses * \param from_sp whether \p reg is the sp or bp @@ -1241,7 +1241,7 @@ static void extract_stack_var(RzAnalysis *analysis, RzAnalysisFunction *fcn, RzA if (varname) { RzAnalysisVarStorage stor; rz_analysis_var_storage_init_stack(&stor, stack_off); - RzAnalysisVar *var = rz_analysis_function_set_var(fcn, &stor, vartype, analysis->bits / 8, varname); + RzAnalysisVar *var = rz_analysis_function_set_var(fcn, &stor, vartype, rz_analysis_guessed_mem_access_width(analysis), varname); if (var) { rz_analysis_var_set_access(var, reg, op->addr, rw, addend); } @@ -1257,7 +1257,7 @@ static void extract_stack_var(RzAnalysis *analysis, RzAnalysisFunction *fcn, RzA } char *varname = rz_str_newf("%s_%" PFMT64x "h", VARPREFIX, RZ_ABS(stor.stack_off)); if (varname) { - RzAnalysisVar *var = rz_analysis_function_set_var(fcn, &stor, NULL, analysis->bits / 8, varname); + RzAnalysisVar *var = rz_analysis_function_set_var(fcn, &stor, NULL, rz_analysis_guessed_mem_access_width(analysis), varname); if (var) { rz_analysis_var_set_access(var, reg, op->addr, rw, addend); } diff --git a/librz/asm/arch/arm/aarch64_meta_macros.h b/librz/asm/arch/arm/aarch64_meta_macros.h new file mode 100644 index 00000000000..540f07a42b7 --- /dev/null +++ b/librz/asm/arch/arm/aarch64_meta_macros.h @@ -0,0 +1,69 @@ +// SPDX-FileCopyrightText: 2023 Rot127 +// SPDX-License-Identifier: LGPL-3.0-only + +#ifndef AARCH64_META_MACROS_H +#define AARCH64_META_MACROS_H + +#ifdef USE_SYS_CAPSTONE + +/// Macros for meta-programming. +/// Meant for projects that use Capstone and need to support multiple +/// versions of it. +/// These macros replace several instances of the old "ARM64" with +/// the new "AArch64" name, depending on the CS version.
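+/// +/// Illustrative usage sketch (hypothetical caller code, not part of this patch; +/// it assumes `insn` is a `cs_insn *` with details enabled, as elsewhere in this patch): +/// +/// CS_aarch64_op() op = insn->detail->CS_aarch64_.operands[0]; +/// if (op.type == CS_AARCH64(_OP_REG)) { +/// /* operand 0 is a register on CS v4/v5 and on v6 alike */ +/// } +/// +/// With the definitions below, this expands to cs_arm64_op / ARM64_OP_REG +/// for CS < 6 and to cs_aarch64_op / AArch64_OP_REG for CS >= 6.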
+#if CS_NEXT_VERSION < 6 +#define CS_AARCH64(x) ARM64##x +#else +#define CS_AARCH64(x) AArch64##x +#endif + +#if CS_NEXT_VERSION < 6 +#define CS_AARCH64pre(x) x##ARM64 +#else +#define CS_AARCH64pre(x) x##AARCH64 +#endif + +#if CS_NEXT_VERSION < 6 +#define CS_AARCH64CC(x) ARM64_CC##x +#else +#define CS_AARCH64CC(x) AArch64CC##x +#endif + +#if CS_NEXT_VERSION < 6 +#define CS_AARCH64_VL_(x) ARM64_VAS_##x +#else +#define CS_AARCH64_VL_(x) AArch64Layout_VL_##x +#endif + +#if CS_NEXT_VERSION < 6 +#define CS_aarch64_ arm64 +#else +#define CS_aarch64_ aarch64 +#endif + +#if CS_NEXT_VERSION < 6 +#define CS_aarch64(x) arm64##x +#else +#define CS_aarch64(x) aarch64##x +#endif + +#if CS_NEXT_VERSION < 6 +#define CS_aarch64_op() cs_arm64_op +#define CS_aarch64_reg() arm64_reg +#define CS_aarch64_cc() arm64_cc +#define CS_cs_aarch64() cs_arm64 +#define CS_aarch64_extender() arm64_extender +#define CS_aarch64_shifter() arm64_shifter +#define CS_aarch64_vas() arm64_vas +#else +#define CS_aarch64_op() cs_aarch64_op +#define CS_aarch64_reg() aarch64_reg +#define CS_aarch64_cc() AArch64CC_CondCode +#define CS_cs_aarch64() cs_aarch64 +#define CS_aarch64_extender() aarch64_extender +#define CS_aarch64_shifter() aarch64_shifter +#define CS_aarch64_vas() AArch64Layout_VectorLayout +#endif + +#endif // USE_SYS_CAPSTONE +#endif // AARCH64_META_MACROS_H diff --git a/librz/asm/arch/arm/armass64.c b/librz/asm/arch/arm/armass64.c index 7d3386f230c..7f4f5dee8a8 100644 --- a/librz/asm/arch/arm/armass64.c +++ b/librz/asm/arch/arm/armass64.c @@ -22,8 +22,9 @@ typedef enum regtype_t { ARM_REG64 = 1, ARM_REG32 = 2, ARM_SP = 4, - ARM_PC = 8, - ARM_SIMD = 16 + ARM_LR = 8, + ARM_PC = 16, + ARM_SIMD = 32 } RegType; typedef enum shifttype_t { @@ -1226,6 +1227,16 @@ static bool parseOperands(char *str, ArmOp *op) { op->operands[operand].mem_option = mem_opt; } break; + case 'f': + if (token[1] == 'p') { + // fp register alias + op->operands_count++; + op->operands[operand].type = ARM_GPR; + op->operands[operand].reg_type = ARM_FP | ARM_REG64; + op->operands[operand].reg = 29; + break; + } + break; case 'L': case 'l': case 'I': @@ -1236,6 +1247,14 @@ static bool parseOperands(char *str, ArmOp *op) { case 'o': case 'p': case 'P': + if (token[1] == 'r') { + // lr register alias + op->operands_count++; + op->operands[operand].type = ARM_GPR; + op->operands[operand].reg_type = ARM_LR | ARM_REG64; + op->operands[operand].reg = 30; + break; + } mem_opt = get_mem_option(token); if (mem_opt != -1) { op->operands_count++; diff --git a/librz/asm/arch/arm/asm-arm.h b/librz/asm/arch/arm/asm-arm.h index e669817688c..7ee8d8267e1 100644 --- a/librz/asm/arch/arm/asm-arm.h +++ b/librz/asm/arch/arm/asm-arm.h @@ -5,6 +5,7 @@ #define _INCLUDE_ARMASS_H_ #include +#include "../arch/arm/aarch64_meta_macros.h" int armass_assemble(const char *str, ut64 off, int thumb); diff --git a/librz/asm/p/asm_arm_cs.c b/librz/asm/p/asm_arm_cs.c index fa1ac56864b..9e7c84dc1be 100644 --- a/librz/asm/p/asm_arm_cs.c +++ b/librz/asm/p/asm_arm_cs.c @@ -131,7 +131,7 @@ static int disassemble(RzAsm *a, RzAsmOp *op, const ut8 *buf, int len) { rz_strbuf_set(&op->buf_asm, ""); } if (!ctx->cd || mode != ctx->omode) { - ret = (a->bits == 64) ? cs_open(CS_ARCH_ARM64, mode, &ctx->cd) : cs_open(CS_ARCH_ARM, mode, &ctx->cd); + ret = (a->bits == 64) ?
cs_open(CS_AARCH64pre(CS_ARCH_), mode, &ctx->cd) : cs_open(CS_ARCH_ARM, mode, &ctx->cd); if (ret) { ret = -1; goto beach; diff --git a/librz/core/canalysis.c b/librz/core/canalysis.c index 1f37e98312c..a28d24d687b 100644 --- a/librz/core/canalysis.c +++ b/librz/core/canalysis.c @@ -3068,7 +3068,7 @@ static int esilbreak_reg_write(RzAnalysisEsil *esil, const char *name, ut64 *val EsilBreakCtx *ctx = esil->user; RzAnalysisOp *op = ctx->op; RzCore *core = analysis->coreb.core; - handle_var_stack_access(esil, *val, RZ_ANALYSIS_VAR_ACCESS_TYPE_PTR, esil->analysis->bits / 8); + handle_var_stack_access(esil, *val, RZ_ANALYSIS_VAR_ACCESS_TYPE_PTR, rz_analysis_guessed_mem_access_width(esil->analysis)); // specific case to handle blx/bx cases in arm through emulation // XXX this thing creates a lot of false positives ut64 at = *val; diff --git a/librz/core/cmd/cmd_search.c b/librz/core/cmd/cmd_search.c index 0f4eef3953f..ebcf854bddd 100644 --- a/librz/core/cmd/cmd_search.c +++ b/librz/core/cmd/cmd_search.c @@ -2071,7 +2071,7 @@ static bool do_analysis_search(RzCore *core, struct search_parameters *param, co if (*input == 'c') { match = true; // aop.cond; } else { - match = !aop.cond; + match = aop.cond == RZ_TYPE_COND_AL; } } else { match = true; diff --git a/librz/include/rz_analysis.h b/librz/include/rz_analysis.h index f6bbb920a14..f962223eccd 100644 --- a/librz/include/rz_analysis.h +++ b/librz/include/rz_analysis.h @@ -10,6 +10,8 @@ // still required by core in lot of places #define USE_VARSUBS 0 +#define RZ_ANALYSIS_OP_INVALID_STACKPTR 0 + #include #include #include @@ -352,6 +354,7 @@ typedef enum { RZ_ANALYSIS_STACK_NULL = 0, RZ_ANALYSIS_STACK_NOP, RZ_ANALYSIS_STACK_INC, + RZ_ANALYSIS_STACK_DEC, RZ_ANALYSIS_STACK_GET, RZ_ANALYSIS_STACK_SET, RZ_ANALYSIS_STACK_RESET, @@ -718,6 +721,8 @@ typedef enum rz_analysis_var_kind_t { RZ_ANALYSIS_VAR_KIND_END ///< Number of RzAnalysisVarKind enums } RzAnalysisVarKind; +RZ_API ut32 rz_analysis_guessed_mem_access_width(RZ_NONNULL const RzAnalysis *analysis); + typedef struct dwarf_variable_t { ut64 offset; ///< DIE offset of the variable RzBinDwarfLocation *location; ///< location description diff --git a/meson.build b/meson.build index 640e938ae89..5b68819aab2 100644 --- a/meson.build +++ b/meson.build @@ -175,7 +175,7 @@ cmake_package_relative_path = run_command(py3_exe, cmake_package_prefix_dir_py, subproject_clean_error_msg = 'Subprojects are not updated. Please run `git clean -dxff subprojects/` to delete all local subprojects directories. If you want to compile against current subprojects then set option `subprojects_check=false`.' # handle capstone dependency -capstone_dep = dependency('capstone', version: '>=3.0.4', required: get_option('use_sys_capstone'), static: is_static_build) +capstone_dep = dependency('capstone', version: '>=4.0.2', required: get_option('use_sys_capstone'), static: is_static_build) if not capstone_dep.found() capstone_version = get_option('use_capstone_version') if fs.is_file('subprojects/capstone-' + capstone_version + '.wrap') @@ -188,6 +188,11 @@ if not capstone_dep.found() error('Wrong capstone version selected. Please use one of the supported versions.') endif capstone_dep = capstone_proj.get_variable('capstone_dep') +else + # The package manager's CS version has no meta-programming macros for the AArch64 -> ARM64 renaming + # (because it is outdated). + # With this define we include our copy of those macros.
+ add_project_arguments(['-DUSE_SYS_CAPSTONE'], language: 'c') endif # handle magic library diff --git a/subprojects/capstone-next.wrap b/subprojects/capstone-next.wrap index aa999f3109b..77a6ee74e29 100644 --- a/subprojects/capstone-next.wrap +++ b/subprojects/capstone-next.wrap @@ -1,5 +1,5 @@ [wrap-git] url = https://github.com/capstone-engine/capstone.git -revision = b87cf06209bbcc092692cb85438e7ece20994104 +revision = 484c7e550bce01369a8b963441e244fd589f4cdf directory = capstone-next patch_directory = capstone-next diff --git a/subprojects/capstone-v4.wrap b/subprojects/capstone-v4.wrap index a1e260fa8ba..0994ae596d6 100644 --- a/subprojects/capstone-v4.wrap +++ b/subprojects/capstone-v4.wrap @@ -1,6 +1,5 @@ -[wrap-file] -source_url = https://github.com/capstone-engine/capstone/archive/4.0.2.tar.gz -source_filename = 4.0.2.tar.gz -source_hash = 7c81d798022f81e7507f1a60d6817f63aa76e489aa4e7055255f21a22f5e526a +[wrap-git] +url = https://github.com/capstone-engine/capstone.git +revision = v4 patch_directory = capstone-4.0.2 directory = capstone-4.0.2 diff --git a/subprojects/capstone-v5.wrap b/subprojects/capstone-v5.wrap index 433a5e14422..a353a2a5269 100644 --- a/subprojects/capstone-v5.wrap +++ b/subprojects/capstone-v5.wrap @@ -1,6 +1,5 @@ -[wrap-file] -source_url = https://github.com/capstone-engine/capstone/archive/5.0.1.tar.gz -source_filename = capstone-5.0.1.tar.gz -source_hash = 2b9c66915923fdc42e0e32e2a9d7d83d3534a45bb235e163a70047951890c01a +[wrap-git] +url = https://github.com/capstone-engine/capstone.git +revision = v5 patch_directory = capstone-5.0.1 directory = capstone-5.0.1 diff --git a/subprojects/packagefiles/capstone-auto-sync-aarch64/meson.build b/subprojects/packagefiles/capstone-auto-sync-aarch64/meson.build new file mode 100644 index 00000000000..64318608e4e --- /dev/null +++ b/subprojects/packagefiles/capstone-auto-sync-aarch64/meson.build @@ -0,0 +1,98 @@ +project('capstone', 'c', version: '5.0.1', meson_version: '>=0.55.0') + +cs_files = [ + 'arch/AArch64/AArch64BaseInfo.c', + 'arch/AArch64/AArch64Disassembler.c', + 'arch/AArch64/AArch64DisassemblerExtension.c', + 'arch/AArch64/AArch64InstPrinter.c', + 'arch/AArch64/AArch64Mapping.c', + 'arch/AArch64/AArch64Module.c', + 'arch/ARM/ARMBaseInfo.c', + 'arch/ARM/ARMDisassembler.c', + 'arch/ARM/ARMDisassemblerExtension.c', + 'arch/ARM/ARMInstPrinter.c', + 'arch/ARM/ARMMapping.c', + 'arch/ARM/ARMModule.c', + 'arch/M680X/M680XDisassembler.c', + 'arch/M680X/M680XInstPrinter.c', + 'arch/M680X/M680XModule.c', + 'arch/M68K/M68KDisassembler.c', + 'arch/M68K/M68KInstPrinter.c', + 'arch/M68K/M68KModule.c', + 'arch/Mips/MipsDisassembler.c', + 'arch/Mips/MipsInstPrinter.c', + 'arch/Mips/MipsMapping.c', + 'arch/Mips/MipsModule.c', + 'arch/PowerPC/PPCDisassembler.c', + 'arch/PowerPC/PPCInstPrinter.c', + 'arch/PowerPC/PPCMapping.c', + 'arch/PowerPC/PPCModule.c', + 'arch/Sparc/SparcDisassembler.c', + 'arch/Sparc/SparcInstPrinter.c', + 'arch/Sparc/SparcMapping.c', + 'arch/Sparc/SparcModule.c', + 'arch/SystemZ/SystemZDisassembler.c', + 'arch/SystemZ/SystemZInstPrinter.c', + 'arch/SystemZ/SystemZMapping.c', + 'arch/SystemZ/SystemZMCTargetDesc.c', + 'arch/SystemZ/SystemZModule.c', + 'arch/TMS320C64x/TMS320C64xDisassembler.c', + 'arch/TMS320C64x/TMS320C64xInstPrinter.c', + 'arch/TMS320C64x/TMS320C64xMapping.c', + 'arch/TMS320C64x/TMS320C64xModule.c', + 'arch/X86/X86ATTInstPrinter.c', + 'arch/X86/X86Disassembler.c', + 'arch/X86/X86DisassemblerDecoder.c', + 'arch/X86/X86IntelInstPrinter.c', + 'arch/X86/X86Mapping.c', + 
'arch/X86/X86Module.c', + 'arch/X86/X86InstPrinterCommon.c', + 'arch/XCore/XCoreDisassembler.c', + 'arch/XCore/XCoreInstPrinter.c', + 'arch/XCore/XCoreMapping.c', + 'arch/XCore/XCoreModule.c', + 'arch/TriCore/TriCoreDisassembler.c', + 'arch/TriCore/TriCoreInstPrinter.c', + 'arch/TriCore/TriCoreMapping.c', + 'arch/TriCore/TriCoreModule.c', + 'cs.c', + 'Mapping.c', + 'MCInst.c', + 'MCInstrDesc.c', + 'MCInstPrinter.c', + 'MCRegisterInfo.c', + 'SStream.c', + 'utils.c', +] + +capstone_includes = [include_directories('include'), include_directories('include/capstone')] + +libcapstone_c_args = [ + '-DCAPSTONE_X86_ATT_DISABLE_NO', + '-DCAPSTONE_X86_REDUCE_NO', + '-DCAPSTONE_USE_SYS_DYN_MEM', + '-DCAPSTONE_DIET_NO', + '-DCAPSTONE_HAS_ARM', + '-DCAPSTONE_HAS_AARCH64', + '-DCAPSTONE_HAS_M68K', + '-DCAPSTONE_HAS_M680X', + '-DCAPSTONE_HAS_MIPS', + '-DCAPSTONE_HAS_POWERPC', + '-DCAPSTONE_HAS_SPARC', + '-DCAPSTONE_HAS_SYSZ', + '-DCAPSTONE_HAS_X86', + '-DCAPSTONE_HAS_XCORE', + '-DCAPSTONE_HAS_TMS320C64X', + '-DCAPSTONE_HAS_TRICORE', +] + +libcapstone = library('capstone', cs_files, + c_args: libcapstone_c_args, + include_directories: capstone_includes, + implicit_include_directories: false +) + +capstone_dep = declare_dependency( + link_with: libcapstone, + include_directories: capstone_includes +) diff --git a/subprojects/packagefiles/capstone-next/meson.build b/subprojects/packagefiles/capstone-next/meson.build index 93220396e68..7fc9bfb6f59 100644 --- a/subprojects/packagefiles/capstone-next/meson.build +++ b/subprojects/packagefiles/capstone-next/meson.build @@ -3,6 +3,7 @@ project('capstone', 'c', version: 'next', meson_version: '>=0.55.0') cs_files = [ 'arch/AArch64/AArch64BaseInfo.c', 'arch/AArch64/AArch64Disassembler.c', + 'arch/AArch64/AArch64DisassemblerExtension.c', 'arch/AArch64/AArch64InstPrinter.c', 'arch/AArch64/AArch64Mapping.c', 'arch/AArch64/AArch64Module.c', @@ -72,7 +73,7 @@ libcapstone_c_args = [ '-DCAPSTONE_USE_SYS_DYN_MEM', '-DCAPSTONE_DIET_NO', '-DCAPSTONE_HAS_ARM', - '-DCAPSTONE_HAS_ARM64', + '-DCAPSTONE_HAS_AARCH64', '-DCAPSTONE_HAS_M68K', '-DCAPSTONE_HAS_M680X', '-DCAPSTONE_HAS_MIPS', @@ -85,6 +86,10 @@ libcapstone_c_args = [ '-DCAPSTONE_HAS_TRICORE', ] +if meson.get_compiler('c').has_argument('-Wmaybe-uninitialized') + libcapstone_c_args += '-Wno-maybe-uninitialized' +endif + libcapstone = library('capstone', cs_files, c_args: libcapstone_c_args, include_directories: capstone_includes, diff --git a/test/db/analysis/arm b/test/db/analysis/arm index f943aadb337..3b31b5003e3 100644 --- a/test/db/analysis/arm +++ b/test/db/analysis/arm @@ -748,28 +748,28 @@ aa afvl~var EOF EXPECT=<> (var x2) (cast 6 false (var x3)) (msb (var x2)))) d "asr w1, w2, w3" 4128c31a 0x0 (set x1 (cast 64 false (>> (cast 32 false (var x2)) (cast 5 false (var x3)) (msb (cast 32 false (var x2)))))) -d "asr w1, w2, 3" 417c0313 0x0 (set x1 (cast 64 false (>> (cast 32 false (var x2)) (bv 5 0x3) (msb (cast 32 false (var x2)))))) +d "asr w1, w2, 3" 417c0313 0x0 (set x1 (cast 64 false (>> (cast 32 false (var x2)) (bv 6 0x3) (msb (cast 32 false (var x2)))))) d "lsl x1, x2, x3" 4120c39a 0x0 (set x1 (<< (var x2) (cast 6 false (var x3)) false)) d "lsl w1, w2, w3" 4120c31a 0x0 (set x1 (cast 64 false (<< (cast 32 false (var x2)) (cast 5 false (var x3)) false))) -d "lsl w1, w2, 0x1f" 41000153 0x0 (set x1 (cast 64 false (<< (cast 32 false (var x2)) (bv 5 0x1f) false))) +d "lsl w1, w2, 31" 41000153 0x0 (set x1 (cast 64 false (<< (cast 32 false (var x2)) (bv 6 0x1f) false))) d "lsr x1, x2, x3" 4124c39a 0x0 (set x1 (>> (var 
x2) (cast 6 false (var x3)) false)) d "lsr w1, w2, w3" 4124c31a 0x0 (set x1 (cast 64 false (>> (cast 32 false (var x2)) (cast 5 false (var x3)) false))) -d "lsr x1, x2, 0xd" 41fc4dd3 0x0 (set x1 (>> (var x2) (bv 6 0xd) false)) +d "lsr x1, x2, 13" 41fc4dd3 0x0 (set x1 (>> (var x2) (bv 6 0xd) false)) d "ror x1, x2, 0x2a" 41a8c293 0x0 (set x1 (| (>> (var x2) (bv 6 0x2a) false) (<< (var x2) (~- (bv 6 0x2a)) false))) d "ror x1, x2, x3" 412cc39a 0x0 (set x1 (| (>> (var x2) (cast 6 false (var x3)) false) (<< (var x2) (~- (cast 6 false (var x3))) false))) d "b 0x1c0ffe0" f83f3014 0x1000000 (jmp (bv 64 0x1c0ffe0)) @@ -469,10 +469,10 @@ a "b.cc 0x11234" a3910054 0x10000# same as b.lo # Capstone v5 required for these: # d "bfc x1, 3, 5" e1137db3 0x0 (set x1 (& (var x1) (bv 64 0xf8))) # d "bfc w1, 3, 5" e1131d33 0x0 (set x1 (cast 64 false (& (cast 32 false (var x1)) (bv 32 0xf8)))) -d "bfi x1, x2, 0x16, 0xe" 41346ab3 0x0 (set x1 (| (& (var x1) (bv 64 0xfffffff0003fffff)) (<< (& (var x2) (bv 64 0x3fff)) (bv 6 0x16) false))) -d "bfi w1, w2, 0xa, 0xe" 41341633 0x0 (set x1 (cast 64 false (| (& (cast 32 false (var x1)) (bv 32 0xff0003ff)) (<< (& (cast 32 false (var x2)) (bv 32 0x3fff)) (bv 6 0xa) false)))) -d "bfxil x1, x2, 0xd, 0x1e" 41a84db3 0x0 (set x1 (| (& (var x1) (bv 64 0xffffffffc0000000)) (>> (& (var x2) (bv 64 0x7ffffffe000)) (bv 6 0xd) false))) -d "bfxil w1, w2, 0xd, 0xa" 41580d33 0x0 (set x1 (cast 64 false (| (& (cast 32 false (var x1)) (bv 32 0xfffffc00)) (>> (& (cast 32 false (var x2)) (bv 32 0x7fe000)) (bv 6 0xd) false)))) +d "bfi x1, x2, 22, 14" 41346ab3 0x0 (set x1 (| (& (var x1) (bv 64 0xfffffff0003fffff)) (<< (& (var x2) (bv 64 0x3fff)) (bv 6 0x16) false))) +d "bfi w1, w2, 10, 14" 41341633 0x0 (set x1 (cast 64 false (| (& (cast 32 false (var x1)) (bv 32 0xff0003ff)) (<< (& (cast 32 false (var x2)) (bv 32 0x3fff)) (bv 6 0xa) false)))) +d "bfxil x1, x2, 13, 30" 41a84db3 0x0 (set x1 (| (& (var x1) (bv 64 0xffffffffc0000000)) (>> (& (var x2) (bv 64 0x7ffffffe000)) (bv 6 0xd) false))) +d "bfxil w1, w2, 13, 10" 41580d33 0x0 (set x1 (cast 64 false (| (& (cast 32 false (var x1)) (bv 32 0xfffffc00)) (>> (& (cast 32 false (var x2)) (bv 32 0x7fe000)) (bv 6 0xd) false)))) d "bic x0, x1, x2" 2000228a 0x0 (set x0 (& (var x1) (~ (var x2)))) d "bic x0, x0, x2" 0000228a 0x0 (set x0 (& (var x0) (~ (var x2)))) d "bic w0, w1, w2" 2000220a 0x0 (set x0 (cast 64 false (& (cast 32 false (var x1)) (~ (cast 32 false (var x2)))))) @@ -537,7 +537,7 @@ d "ldr x1, [x2, 0x18]" 410c40f9 0x0 (set x1 (loadw 0 64 (+ (var x2) (bv 64 0x18) d "ldr x1, [x2, 0x18]!" 418c41f8 0x0 (seq (set x1 (loadw 0 64 (+ (var x2) (bv 64 0x18)))) (set x2 (+ (var x2) (bv 64 0x18)))) d "ldr x1, [x2, -0x18]!" 
418c5ef8 0x0 (seq (set x1 (loadw 0 64 (- (var x2) (bv 64 0x18)))) (set x2 (- (var x2) (bv 64 0x18)))) d "ldr x1, [x2], 0x18" 418441f8 0x0 (seq (set x1 (loadw 0 64 (var x2))) (set x2 (+ (var x2) (bv 64 0x18)))) -d "ldr x1, [x2], 0xffffffffffffffe8" 41845ef8 0x0 (seq (set x1 (loadw 0 64 (var x2))) (set x2 (- (var x2) (bv 64 0x18)))) +d "ldr x1, [x2], -0x18" 41845ef8 0x0 (seq (set x1 (loadw 0 64 (var x2))) (set x2 (- (var x2) (bv 64 0x18)))) d "ldr x1, 0x1028" 41010058 0x1000 (set x1 (loadw 0 64 (bv 64 0x1028))) d "ldr w1, 0x1044" 21020018 0x1000 (set x1 (cast 64 false (loadw 0 32 (bv 64 0x1044)))) d "ldrb w1, [x2]" 41004039 0x0 (set x1 (cast 64 false (load 0 (var x2)))) @@ -552,7 +552,7 @@ d "ldr x8, [x25, x8, lsl 3]" 287b68f8 0x0 (set x8 (loadw 0 64 (+ (var x25) (<< ( d "ldr x0, [x1, w2, uxtw]" 204862f8 0x0 (set x0 (loadw 0 64 (+ (var x1) (cast 64 false (cast 32 false (var x2)))))) d "ldr x0, [x1, w2, sxtw]" 20c862f8 0x0 (set x0 (loadw 0 64 (+ (var x1) (cast 64 (msb (cast 32 false (var x2))) (cast 32 false (var x2)))))) d "ldr x0, [x1, w2, sxtw 3]" 20d862f8 0x0 (set x0 (loadw 0 64 (+ (var x1) (<< (cast 64 (msb (cast 32 false (var x2))) (cast 32 false (var x2))) (bv 6 0x3) false)))) -d "ldr x0, [x1, x2, sxtx]" 20e862f8 0x0 (set x0 (loadw 0 64 (+ (var x1) (cast 64 (msb (cast 64 false (var x2))) (cast 64 false (var x2)))))) +d "ldr x0, [x1, x2, sxtx]" 20e862f8 0x0 (set x0 (loadw 0 64 (+ (var x1) (cast 64 (msb (var x2)) (var x2))))) d "ldrsb x0, [x1]" 20008039 0x0 (set x0 (cast 64 (msb (load 0 (var x1))) (load 0 (var x1)))) d "ldrsh x0, [x1]" 20008079 0x0 (set x0 (cast 64 (msb (loadw 0 16 (var x1))) (loadw 0 16 (var x1)))) d "ldrsw x0, [x1]" 200080b9 0x0 (set x0 (cast 64 (msb (loadw 0 32 (var x1))) (loadw 0 32 (var x1)))) @@ -635,8 +635,8 @@ d "movn x0, 0, lsl 16" 0000a092 0x0 (set x0 (bv 64 0xffffffffffffffff)) d "movn w0, 0, lsl 16" 0000a012 0x0 (set x0 (cast 64 false (bv 32 0xffffffff))) d "movz x0, 0, lsl 16" 0000a0d2 0x0 (set x0 (bv 64 0x0)) d "movz w0, 0, lsl 16" 0000a052 0x0 (set x0 (cast 64 false (bv 32 0x0))) -d "msr nzcv, x1" 01421bd5 0x0 (seq (set nf (! (is_zero (& (var x1) (bv 64 0x80000000))))) (set zf (! (is_zero (& (var x1) (bv 64 0x40000000))))) (set cf (! (is_zero (& (var x1) (bv 64 0x20000000))))) (set vf (! (is_zero (& (var x1) (bv 64 0x10000000)))))) -d "mrs x1, nzcv" 01423bd5 0x0 (set x1 (| (ite (var nf) (bv 64 0x80000000) (bv 64 0x0)) (| (ite (var zf) (bv 64 0x40000000) (bv 64 0x0)) (| (ite (var cf) (bv 64 0x20000000) (bv 64 0x0)) (ite (var vf) (bv 64 0x10000000) (bv 64 0x0)))))) +d "msr NZCV, x1" 01421bd5 0x0 (seq (set nf (! (is_zero (& (var x1) (bv 64 0x80000000))))) (set zf (! (is_zero (& (var x1) (bv 64 0x40000000))))) (set cf (! (is_zero (& (var x1) (bv 64 0x20000000))))) (set vf (! 
(is_zero (& (var x1) (bv 64 0x10000000)))))) +d "mrs x1, NZCV" 01423bd5 0x0 (set x1 (| (ite (var nf) (bv 64 0x80000000) (bv 64 0x0)) (| (ite (var zf) (bv 64 0x40000000) (bv 64 0x0)) (| (ite (var cf) (bv 64 0x20000000) (bv 64 0x0)) (ite (var vf) (bv 64 0x10000000) (bv 64 0x0)))))) d "mvn x1, x2" e10322aa 0x0 (set x1 (~ (var x2))) d "mvn w1, w2" e103222a 0x0 (set x1 (cast 64 false (~ (cast 32 false (var x2))))) d "mvn x1, x2, lsl 3" e10f22aa 0x0 (set x1 (~ (<< (var x2) (bv 6 0x3) false))) @@ -733,9 +733,9 @@ d "uxtb w1, w2" 411c0053 0x0 (set x1 (cast 64 false (cast 32 false (cast 8 false d "uxtb w1, w2" 411c0053 0x0 (set x1 (cast 64 false (cast 32 false (cast 8 false (var x2))))) d "uxth w1, w2" 413c0053 0x0 (set x1 (cast 64 false (cast 32 false (cast 16 false (var x2))))) d "uxth w1, w2" 413c0053 0x0 (set x1 (cast 64 false (cast 32 false (cast 16 false (var x2))))) -d "tbz w1, 0xd, 0x100100" 01086836 0x100000 (branch (lsb (>> (cast 64 false (var x1)) (bv 6 0xd) false)) nop (jmp (bv 64 0x100100))) +d "tbz w1, 0xd, 0x100100" 01086836 0x100000 (branch (lsb (>> (cast 64 false (cast 32 false (var x1))) (bv 6 0xd) false)) nop (jmp (bv 64 0x100100))) d "tbz x1, 0x2a, 0x100100" 010850b6 0x100000 (branch (lsb (>> (var x1) (bv 6 0x2a) false)) nop (jmp (bv 64 0x100100))) -d "tbnz w1, 0xd, 0x100100" 01086837 0x100000 (branch (lsb (>> (cast 64 false (var x1)) (bv 6 0xd) false)) (jmp (bv 64 0x100100)) nop) +d "tbnz w1, 0xd, 0x100100" 01086837 0x100000 (branch (lsb (>> (cast 64 false (cast 32 false (var x1))) (bv 6 0xd) false)) (jmp (bv 64 0x100100)) nop) d "tbnz x1, 0x2a, 0x100100" 010850b7 0x100000 (branch (lsb (>> (var x1) (bv 6 0x2a) false)) (jmp (bv 64 0x100100)) nop) d "tst x0, 6" 1f047ff2 0x0 (seq (set zf (is_zero (& (var x0) (bv 64 0x6)))) (set nf (msb (& (var x0) (bv 64 0x6)))) (set cf false) (set vf false)) d "tst w0, 6" 1f041f72 0x0 (seq (set zf (is_zero (& (cast 32 false (var x0)) (bv 32 0x6)))) (set nf (msb (& (cast 32 false (var x0)) (bv 32 0x6)))) (set cf false) (set vf false)) diff --git a/test/db/cmd/cmd_plf b/test/db/cmd/cmd_plf index 7af62ed721f..97106fead1b 100644 --- a/test/db/cmd/cmd_plf +++ b/test/db/cmd/cmd_plf @@ -45,11 +45,11 @@ plf EOF EXPECT=<> (cast 64 false (var x8)) (bv 6 0x1) false)) (jmp (bv 64 0x10)) nop) +0x4 (branch (lsb (>> (cast 64 false (cast 32 false (var x8))) (bv 6 0x1) false)) (jmp (bv 64 0x10)) nop) 0x8 (set x0 (cast 64 false (bv 32 0x0))) 0xc (jmp (var x30)) 0x10 (set x9 (cast 64 false (loadw 0 32 (+ (var x2) (bv 64 0x100))))) -0x14 (set x8 (cast 64 false (<< (cast 32 false (var x9)) (bv 5 0x2) false))) +0x14 (set x8 (cast 64 false (<< (cast 32 false (var x9)) (bv 6 0x2) false))) 0x18 (set x8 (+ (var x2) (<< (cast 64 false (cast 32 false (var x8))) (bv 6 0x2) false))) 0x1c (seq (set addr (var x0)) (set x10 (cast 64 false (loadw 0 32 (var addr)))) (set x11 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4)))))) 0x20 (seq (set addr (var x8)) (set x12 (cast 64 false (loadw 0 32 (var addr)))) (set x13 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4)))))) @@ -59,7 +59,7 @@ EXPECT=<> (cast 32 false (var x14)) (bv 6 0x10) false)) (cast 32 false (var res))))) 0x74 (set x9 (+ (var x8) (bv 64 0x800))) 0x78 (set x0 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x10))) (bv 6 0x2) false))))) -0x7c (set x3 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x18) false))) +0x7c (set x3 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x18) false))) 0x80 (set x10 (+ (var x8) (bv 64 0xc00))) 0x84 (set x3 (cast 64 false (loadw 0 32 (+ (var 
x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false))))) 0x88 (seq (set addr (+ (var x2) (bv 64 0xd0))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4)))))) @@ -89,7 +89,7 @@ EXPECT=<> (cast 32 false (var x15)) (bv 6 0x10) false)) (cast 32 false (var res))))) 0xb0 (set x3 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false))))) -0xb4 (set x4 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x18) false))) +0xb4 (set x4 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x18) false))) 0xb8 (set x4 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false))))) 0xbc (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x5))))) 0xc0 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x0))))) @@ -103,7 +103,7 @@ EXPECT=<> (cast 32 false (var x12)) (bv 6 0x10) false)) (cast 32 false (var res))))) 0xe4 (set x5 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false))))) 0xe8 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x3))))) -0xec (set x3 (cast 64 false (>> (cast 32 false (var x15)) (bv 5 0x18) false))) +0xec (set x3 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false))) 0xf0 (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false))))) 0xf4 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x5))))) 0xf8 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x3))))) @@ -113,7 +113,7 @@ EXPECT=<> (cast 32 false (var x13)) (bv 6 0x10) false)) (cast 32 false (var res))))) 0x110 (set x13 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x13))) (bv 6 0x2) false))))) -0x114 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x18) false))) +0x114 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x18) false))) 0x118 (set x12 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false))))) 0x11c (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x6))))) 0x120 (set x14 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x14))))) @@ -126,7 +126,7 @@ EXPECT=<> (cast 32 false (var x0)) (bv 6 0x10) false)) (cast 32 false (var res))))) 0x140 (set x14 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x14))) (bv 6 0x2) false))))) 0x144 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x4))))) -0x148 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 5 0x18) false))) +0x148 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 6 0x18) false))) 0x14c (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false))))) 0x150 (seq (set addr (+ (var x2) (bv 64 0xc0))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4)))))) 0x154 (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x4))))) @@ -139,7 +139,7 @@ EXPECT=<> (cast 32 false (var x15)) (bv 6 0x10) false)) (cast 32 false (var res))))) 0x174 (set x4 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false))))) 0x178 (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x3))))) -0x17c (set x3 (cast 64 false (>> (cast 32 
false (var x0)) (bv 5 0x18) false))) +0x17c (set x3 (cast 64 false (>> (cast 32 false (var x0)) (bv 6 0x18) false))) 0x180 (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false))))) 0x184 (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x5))))) 0x188 (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x14))))) @@ -153,7 +153,7 @@ EXPECT=<> (cast 32 false (var x15)) (bv 5 0x18) false))) +0x1b4 (set x5 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false))) 0x1b8 (set x5 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false))))) 0x1bc (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x3))))) 0x1c0 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x4))))) @@ -167,7 +167,7 @@ EXPECT=<> (cast 32 false (var x16)) (bv 5 0x18) false))) +0x1ec (set x11 (cast 64 false (>> (cast 32 false (var x16)) (bv 6 0x18) false))) 0x1f0 (set x10 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x11))) (bv 6 0x2) false))))) 0x1f4 (set x15 (cast 64 false (^ (cast 32 false (var x9)) (cast 32 false (var x10))))) 0x1f8 (set x9 (cast 64 false (& (cast 32 false (var x12)) (bv 32 0xff)))) @@ -178,7 +178,7 @@ EXPECT=<> (cast 32 false (var x14)) (bv 6 0x10) false)) (cast 32 false (var res))))) 0x210 (set x9 (+ (var x8) (bv 64 0x800))) 0x214 (set x0 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x10))) (bv 6 0x2) false))))) -0x218 (set x3 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x18) false))) +0x218 (set x3 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x18) false))) 0x21c (set x10 (+ (var x8) (bv 64 0xc00))) 0x220 (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false))))) 0x224 (seq (set addr (+ (var x2) (bv 64 0xb0))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4)))))) @@ -192,7 +192,7 @@ EXPECT=<> (cast 32 false (var x15)) (bv 6 0x10) false)) (cast 32 false (var res))))) 0x24c (set x3 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false))))) -0x250 (set x4 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x18) false))) +0x250 (set x4 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x18) false))) 0x254 (set x4 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false))))) 0x258 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x5))))) 0x25c (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x0))))) @@ -206,7 +206,7 @@ EXPECT=<> (cast 32 false (var x12)) (bv 6 0x10) false)) (cast 32 false (var res))))) 0x280 (set x5 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false))))) 0x284 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x3))))) -0x288 (set x3 (cast 64 false (>> (cast 32 false (var x15)) (bv 5 0x18) false))) +0x288 (set x3 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false))) 0x28c (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false))))) 0x290 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x5))))) 0x294 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x3))))) @@ -216,7 +216,7 @@ EXPECT=<> (cast 32 
false (var x13)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x2ac (set x13 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x13))) (bv 6 0x2) false)))))
-0x2b0 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x18) false)))
+0x2b0 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x18) false)))
0x2b4 (set x12 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false)))))
0x2b8 (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x6)))))
0x2bc (set x14 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x14)))))
@@ -229,7 +229,7 @@ EXPECT=<
> (cast 32 false (var x0)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x2dc (set x14 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x14))) (bv 6 0x2) false)))))
0x2e0 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x4)))))
-0x2e4 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 5 0x18) false)))
+0x2e4 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 6 0x18) false)))
0x2e8 (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x2ec (seq (set addr (+ (var x2) (bv 64 0xa0))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x2f0 (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x4)))))
@@ -242,7 +242,7 @@ EXPECT=<
> (cast 32 false (var x15)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x310 (set x4 (cast 64 false (loadw 0 32 (+ (var x9) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x314 (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x3)))))
-0x318 (set x3 (cast 64 false (>> (cast 32 false (var x0)) (bv 5 0x18) false)))
+0x318 (set x3 (cast 64 false (>> (cast 32 false (var x0)) (bv 6 0x18) false)))
0x31c (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x320 (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x5)))))
0x324 (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x14)))))
@@ -256,7 +256,7 @@ EXPECT=<
> (cast 32 false (var x15)) (bv 5 0x18) false)))
+0x350 (set x5 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false)))
0x354 (set x5 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false)))))
0x358 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x3)))))
0x35c (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x4)))))
@@ -270,7 +270,7 @@ EXPECT=<
> (cast 32 false (var x16)) (bv 5 0x18) false)))
+0x388 (set x11 (cast 64 false (>> (cast 32 false (var x16)) (bv 6 0x18) false)))
0x38c (set x10 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x11))) (bv 6 0x2) false)))))
0x390 (set x15 (cast 64 false (^ (cast 32 false (var x9)) (cast 32 false (var x10)))))
0x394 (set x9 (cast 64 false (& (cast 32 false (var x12)) (bv 32 0xff))))
@@ -281,7 +281,7 @@ EXPECT=<
> (cast 32 false (var x14)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x3ac (set x10 (+ (var x8) (bv 64 0x800)))
0x3b0 (set x0 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x11))) (bv 6 0x2) false)))))
-0x3b4 (set x3 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x18) false)))
+0x3b4 (set x3 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x18) false)))
0x3b8 (set x11 (+ (var x8) (bv 64 0xc00)))
0x3bc (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x3c0 (seq (set addr (+ (var x2) (bv 64 0x90))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
@@ -295,7 +295,7 @@ EXPECT=<
> (cast 32 false (var x15)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x3e4 (set x4 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x3e8 (set x16 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x3)))))
-0x3ec (set x3 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x18) false)))
+0x3ec (set x3 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x18) false)))
0x3f0 (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x3f4 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x5)))))
0x3f8 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x0)))))
@@ -307,7 +307,7 @@ EXPECT=<
> (cast 32 false (var x12)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x418 (set x4 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
-0x41c (set x5 (cast 64 false (>> (cast 32 false (var x15)) (bv 5 0x18) false)))
+0x41c (set x5 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false)))
0x420 (set x5 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false)))))
0x424 (seq (set addr (+ (var x2) (bv 64 0x98))) (set x6 (cast 64 false (loadw 0 32 (var addr)))) (set x7 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x428 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x6)))))
@@ -320,7 +320,7 @@ EXPECT=<
> (cast 32 false (var x13)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x44c (set x13 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x13))) (bv 6 0x2) false)))))
-0x450 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x18) false)))
+0x450 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x18) false)))
0x454 (set x12 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false)))))
0x458 (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x7)))))
0x45c (set x14 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x14)))))
@@ -332,7 +332,7 @@ EXPECT=<
> (cast 32 false (var x0)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x47c (set x15 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x15))) (bv 6 0x2) false)))))
-0x480 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 5 0x18) false)))
+0x480 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 6 0x18) false)))
0x484 (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x488 (seq (set addr (+ (var x2) (bv 64 0x80))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x48c (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x4)))))
@@ -345,7 +345,7 @@ EXPECT=<
> (cast 32 false (var x13)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x4b0 (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
-0x4b4 (set x4 (cast 64 false (>> (cast 32 false (var x0)) (bv 5 0x18) false)))
+0x4b4 (set x4 (cast 64 false (>> (cast 32 false (var x0)) (bv 6 0x18) false)))
0x4b8 (set x4 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x4bc (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x5)))))
0x4c0 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x15)))))
@@ -357,7 +357,7 @@ EXPECT=<
> (cast 32 false (var x16)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x4dc (set x5 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false)))))
0x4e0 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x4)))))
-0x4e4 (set x4 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x18) false)))
+0x4e4 (set x4 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x18) false)))
0x4e8 (set x4 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x4ec (seq (set addr (+ (var x2) (bv 64 0x88))) (set x6 (cast 64 false (loadw 0 32 (var addr)))) (set x7 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x4f0 (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x6)))))
@@ -370,7 +370,7 @@ EXPECT=<
> (cast 32 false (var x17)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x510 (set x17 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x17))) (bv 6 0x2) false)))))
0x514 (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x4)))))
-0x518 (set x16 (cast 64 false (>> (cast 32 false (var x16)) (bv 5 0x18) false)))
+0x518 (set x16 (cast 64 false (>> (cast 32 false (var x16)) (bv 6 0x18) false)))
0x51c (set x16 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x16))) (bv 6 0x2) false)))))
0x520 (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x7)))))
0x524 (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x0)))))
@@ -382,7 +382,7 @@ EXPECT=<
> (cast 32 false (var x15)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x544 (set x0 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x0))) (bv 6 0x2) false)))))
-0x548 (set x3 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x18) false)))
+0x548 (set x3 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x18) false)))
0x54c (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x550 (seq (set addr (+ (var x2) (bv 64 0x70))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x554 (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x4)))))
@@ -395,7 +395,7 @@ EXPECT=<
> (cast 32 false (var x16)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x578 (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
-0x57c (set x4 (cast 64 false (>> (cast 32 false (var x15)) (bv 5 0x18) false)))
+0x57c (set x4 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false)))
0x580 (set x4 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x584 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x5)))))
0x588 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x0)))))
@@ -408,7 +408,7 @@ EXPECT=<
> (cast 32 false (var x16)) (bv 5 0x18) false)))
+0x5b0 (set x6 (cast 64 false (>> (cast 32 false (var x16)) (bv 6 0x18) false)))
0x5b4 (set x6 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x6))) (bv 6 0x2) false)))))
0x5b8 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x3)))))
0x5bc (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x5)))))
@@ -419,7 +419,7 @@ EXPECT=<
> (cast 32 false (var x14)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x5d8 (set x14 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x14))) (bv 6 0x2) false)))))
-0x5dc (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x18) false)))
+0x5dc (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x18) false)))
0x5e0 (set x12 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false)))))
0x5e4 (set x16 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x7)))))
0x5e8 (set x15 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x15)))))
@@ -432,7 +432,7 @@ EXPECT=<
> (cast 32 false (var x0)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x608 (set x16 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x16))) (bv 6 0x2) false)))))
0x60c (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x4)))))
-0x610 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 5 0x18) false)))
+0x610 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 6 0x18) false)))
0x614 (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x618 (seq (set addr (+ (var x2) (bv 64 0x60))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x61c (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x4)))))
@@ -445,7 +445,7 @@ EXPECT=<
> (cast 32 false (var x12)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x63c (set x4 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x640 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x3)))))
-0x644 (set x3 (cast 64 false (>> (cast 32 false (var x0)) (bv 5 0x18) false)))
+0x644 (set x3 (cast 64 false (>> (cast 32 false (var x0)) (bv 6 0x18) false)))
0x648 (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x64c (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x5)))))
0x650 (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x16)))))
@@ -457,7 +457,7 @@ EXPECT=<
> (cast 32 false (var x13)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x670 (set x4 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
-0x674 (set x5 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x18) false)))
+0x674 (set x5 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x18) false)))
0x678 (set x5 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false)))))
0x67c (seq (set addr (+ (var x2) (bv 64 0x68))) (set x6 (cast 64 false (loadw 0 32 (var addr)))) (set x7 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x680 (set x16 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x6)))))
@@ -470,7 +470,7 @@ EXPECT=<
> (cast 32 false (var x17)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x6a4 (set x17 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x17))) (bv 6 0x2) false)))))
-0x6a8 (set x13 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x18) false)))
+0x6a8 (set x13 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x18) false)))
0x6ac (set x13 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x13))) (bv 6 0x2) false)))))
0x6b0 (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x7)))))
0x6b4 (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x0)))))
@@ -482,7 +482,7 @@ EXPECT=<
> (cast 32 false (var x16)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x6d4 (set x0 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x0))) (bv 6 0x2) false)))))
-0x6d8 (set x3 (cast 64 false (>> (cast 32 false (var x15)) (bv 5 0x18) false)))
+0x6d8 (set x3 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false)))
0x6dc (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x6e0 (seq (set addr (+ (var x2) (bv 64 0x50))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x6e4 (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x4)))))
@@ -495,7 +495,7 @@ EXPECT=<
> (cast 32 false (var x13)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x708 (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
-0x70c (set x4 (cast 64 false (>> (cast 32 false (var x16)) (bv 5 0x18) false)))
+0x70c (set x4 (cast 64 false (>> (cast 32 false (var x16)) (bv 6 0x18) false)))
0x710 (set x4 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x714 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x5)))))
0x718 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x0)))))
@@ -507,7 +507,7 @@ EXPECT=<
> (cast 32 false (var x14)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x734 (set x5 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false)))))
0x738 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x4)))))
-0x73c (set x4 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x18) false)))
+0x73c (set x4 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x18) false)))
0x740 (set x4 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x744 (seq (set addr (+ (var x2) (bv 64 0x58))) (set x6 (cast 64 false (loadw 0 32 (var addr)))) (set x7 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x748 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x6)))))
@@ -520,7 +520,7 @@ EXPECT=<
> (cast 32 false (var x15)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x768 (set x15 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x15))) (bv 6 0x2) false)))))
0x76c (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x4)))))
-0x770 (set x14 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x18) false)))
+0x770 (set x14 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x18) false)))
0x774 (set x14 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x14))) (bv 6 0x2) false)))))
0x778 (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x7)))))
0x77c (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x16)))))
@@ -532,7 +532,7 @@ EXPECT=<
> (cast 32 false (var x0)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x79c (set x16 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x16))) (bv 6 0x2) false)))))
-0x7a0 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 5 0x18) false)))
+0x7a0 (set x3 (cast 64 false (>> (cast 32 false (var x17)) (bv 6 0x18) false)))
0x7a4 (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x7a8 (seq (set addr (+ (var x2) (bv 64 0x40))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x7ac (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x4)))))
@@ -545,7 +545,7 @@ EXPECT=<
> (cast 32 false (var x14)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x7d0 (set x3 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
-0x7d4 (set x4 (cast 64 false (>> (cast 32 false (var x0)) (bv 5 0x18) false)))
+0x7d4 (set x4 (cast 64 false (>> (cast 32 false (var x0)) (bv 6 0x18) false)))
0x7d8 (set x4 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x7dc (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x5)))))
0x7e0 (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x16)))))
@@ -559,7 +559,7 @@ EXPECT=<
> (cast 32 false (var x12)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x804 (set x5 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false)))))
0x808 (set x16 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x3)))))
-0x80c (set x3 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x18) false)))
+0x80c (set x3 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x18) false)))
0x810 (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x814 (set x16 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x5)))))
0x818 (set x16 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x3)))))
@@ -569,7 +569,7 @@ EXPECT=<
> (cast 32 false (var x17)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x830 (set x17 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x17))) (bv 6 0x2) false)))))
-0x834 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x18) false)))
+0x834 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x18) false)))
0x838 (set x12 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false)))))
0x83c (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x6)))))
0x840 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x0)))))
@@ -582,7 +582,7 @@ EXPECT=<
> (cast 32 false (var x16)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x860 (set x0 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x0))) (bv 6 0x2) false)))))
0x864 (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x4)))))
-0x868 (set x3 (cast 64 false (>> (cast 32 false (var x15)) (bv 5 0x18) false)))
+0x868 (set x3 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false)))
0x86c (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x870 (seq (set addr (+ (var x2) (bv 64 0x30))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x874 (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x4)))))
@@ -595,7 +595,7 @@ EXPECT=<
> (cast 32 false (var x17)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x894 (set x4 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0x898 (set x12 (cast 64 false (^ (cast 32 false (var x12)) (cast 32 false (var x3)))))
-0x89c (set x3 (cast 64 false (>> (cast 32 false (var x16)) (bv 5 0x18) false)))
+0x89c (set x3 (cast 64 false (>> (cast 32 false (var x16)) (bv 6 0x18) false)))
0x8a0 (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x8a4 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x5)))))
0x8a8 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x0)))))
@@ -609,7 +609,7 @@ EXPECT=<
> (cast 32 false (var x17)) (bv 5 0x18) false)))
+0x8d4 (set x5 (cast 64 false (>> (cast 32 false (var x17)) (bv 6 0x18) false)))
0x8d8 (set x5 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false)))))
0x8dc (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x3)))))
0x8e0 (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x4)))))
@@ -620,7 +620,7 @@ EXPECT=<
> (cast 32 false (var x15)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x8fc (set x15 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x15))) (bv 6 0x2) false)))))
-0x900 (set x13 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x18) false)))
+0x900 (set x13 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x18) false)))
0x904 (set x13 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x13))) (bv 6 0x2) false)))))
0x908 (set x17 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x6)))))
0x90c (set x16 (cast 64 false (^ (cast 32 false (var x17)) (cast 32 false (var x16)))))
@@ -632,7 +632,7 @@ EXPECT=<
> (cast 32 false (var x0)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x92c (set x17 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x17))) (bv 6 0x2) false)))))
-0x930 (set x3 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x18) false)))
+0x930 (set x3 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x18) false)))
0x934 (set x3 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x3))) (bv 6 0x2) false)))))
0x938 (seq (set addr (+ (var x2) (bv 64 0x20))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0x93c (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x4)))))
@@ -644,7 +644,7 @@ EXPECT=<
> (cast 32 false (var x15)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x95c (set x4 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
-0x960 (set x6 (cast 64 false (>> (cast 32 false (var x0)) (bv 5 0x18) false)))
+0x960 (set x6 (cast 64 false (>> (cast 32 false (var x0)) (bv 6 0x18) false)))
0x964 (set x6 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x6))) (bv 6 0x2) false)))))
0x968 (set x13 (cast 64 false (^ (cast 32 false (var x13)) (cast 32 false (var x3)))))
0x96c (set x16 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x5)))))
@@ -659,7 +659,7 @@ EXPECT=<
> (cast 32 false (var x15)) (bv 5 0x18) false)))
+0x99c (set x5 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false)))
0x9a0 (set x5 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false)))))
0x9a4 (set x16 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x3)))))
0x9a8 (set x16 (cast 64 false (^ (cast 32 false (var x16)) (cast 32 false (var x4)))))
@@ -670,7 +670,7 @@ EXPECT=<
> (cast 32 false (var x14)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x9c4 (set x14 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x14))) (bv 6 0x2) false)))))
-0x9c8 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x18) false)))
+0x9c8 (set x12 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x18) false)))
0x9cc (set x12 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false)))))
0x9d0 (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x6)))))
0x9d4 (set x15 (cast 64 false (^ (cast 32 false (var x15)) (cast 32 false (var x16)))))
@@ -682,7 +682,7 @@ EXPECT=<
> (cast 32 false (var x3)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0x9f4 (set x16 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x16))) (bv 6 0x2) false)))))
-0x9f8 (set x0 (cast 64 false (>> (cast 32 false (var x17)) (bv 5 0x18) false)))
+0x9f8 (set x0 (cast 64 false (>> (cast 32 false (var x17)) (bv 6 0x18) false)))
0x9fc (set x0 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x0))) (bv 6 0x2) false)))))
0xa00 (seq (set addr (+ (var x2) (bv 64 0x10))) (set x4 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
0xa04 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x4)))))
@@ -695,7 +695,7 @@ EXPECT=<
> (cast 32 false (var x12)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0xa28 (set x0 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x0))) (bv 6 0x2) false)))))
-0xa2c (set x4 (cast 64 false (>> (cast 32 false (var x3)) (bv 5 0x18) false)))
+0xa2c (set x4 (cast 64 false (>> (cast 32 false (var x3)) (bv 6 0x18) false)))
0xa30 (set x4 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0xa34 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x5)))))
0xa38 (set x14 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x16)))))
@@ -709,7 +709,7 @@ EXPECT=<
> (cast 32 false (var x12)) (bv 5 0x18) false)))
+0xa64 (set x4 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x18) false)))
0xa68 (set x4 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x4))) (bv 6 0x2) false)))))
0xa6c (set x0 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x5)))))
0xa70 (set x14 (cast 64 false (^ (cast 32 false (var x0)) (cast 32 false (var x14)))))
@@ -719,7 +719,7 @@ EXPECT=<
> (cast 32 false (var x17)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0xa88 (set x10 (cast 64 false (loadw 0 32 (+ (var x10) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false)))))
-0xa8c (set x12 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x18) false)))
+0xa8c (set x12 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x18) false)))
0xa90 (set x11 (cast 64 false (loadw 0 32 (+ (var x11) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false)))))
0xa94 (set x17 (cast 64 false (^ (cast 32 false (var x14)) (cast 32 false (var x4)))))
0xa98 (set x8 (cast 64 false (^ (cast 32 false (var x8)) (cast 32 false (var x6)))))
@@ -727,7 +727,7 @@ EXPECT=<
> (cast 32 false (var x8)) (bv 6 0x8) false)) (cast 32 false (var res)))))
@@ -736,7 +736,7 @@ EXPECT=<
> (cast 32 false (var x17)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0xac8 (set x3 (+ (var x10) (bv 64 0x800)))
0xacc (set x12 (cast 64 false (loadw 0 32 (+ (var x3) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false)))))
-0xad0 (set x13 (cast 64 false (>> (cast 32 false (var x16)) (bv 5 0x18) false)))
+0xad0 (set x13 (cast 64 false (>> (cast 32 false (var x16)) (bv 6 0x18) false)))
0xad4 (set x4 (+ (var x10) (bv 64 0xc00)))
0xad8 (set x13 (cast 64 false (loadw 0 32 (+ (var x4) (<< (cast 64 false (cast 32 false (var x13))) (bv 6 0x2) false)))))
0xadc (seq (set addr (var x2)) (set x14 (cast 64 false (loadw 0 32 (var addr)))) (set x5 (cast 64 false (loadw 0 32 (+ (var addr) (bv 64 0x4))))))
@@ -750,7 +750,7 @@ EXPECT=<
> (cast 32 false (var x8)) (bv 6 0x10) false)) (cast 32 false (var res)))))
0xb00 (set x6 (cast 64 false (loadw 0 32 (+ (var x3) (<< (cast 64 false (cast 32 false (var x12))) (bv 6 0x2) false)))))
0xb04 (set x12 (cast 64 false (^ (cast 32 false (var x9)) (cast 32 false (var x13)))))
-0xb08 (set x9 (cast 64 false (>> (cast 32 false (var x17)) (bv 5 0x18) false)))
+0xb08 (set x9 (cast 64 false (>> (cast 32 false (var x17)) (bv 6 0x18) false)))
0xb0c (set x9 (cast 64 false (loadw 0 32 (+ (var x4) (<< (cast 64 false (cast 32 false (var x9))) (bv 6 0x2) false)))))
0xb10 (set x11 (cast 64 false (^ (cast 32 false (var x11)) (cast 32 false (var x5)))))
0xb14 (set x11 (cast 64 false (^ (cast 32 false (var x11)) (cast 32 false (var x14)))))
@@ -764,7 +764,7 @@ EXPECT=<
> (cast 32 false (var x8)) (bv 5 0x18) false)))
+0xb40 (set x5 (cast 64 false (>> (cast 32 false (var x8)) (bv 6 0x18) false)))
0xb44 (set x5 (cast 64 false (loadw 0 32 (+ (var x4) (<< (cast 64 false (cast 32 false (var x5))) (bv 6 0x2) false)))))
0xb48 (set x9 (cast 64 false (^ (cast 32 false (var x9)) (cast 32 false (var x11)))))
0xb4c (set x9 (cast 64 false (^ (cast 32 false (var x9)) (cast 32 false (var x14)))))
@@ -778,36 +778,36 @@ EXPECT=<
> (cast 32 false (var x15)) (bv 5 0x18) false)))
+0xb78 (set x9 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false)))
0xb7c (set x9 (cast 64 false (loadw 0 32 (+ (var x4) (<< (cast 64 false (cast 32 false (var x9))) (bv 6 0x2) false)))))
0xb80 (set x15 (cast 64 false (^ (cast 32 false (var x8)) (cast 32 false (var x9)))))
0xb84 (store 0 (var x1) (cast 8 false (var x12)))
-0xb88 (set x8 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x8) false)))
+0xb88 (set x8 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x8) false)))
0xb8c (store 0 (+ (var x1) (bv 64 0x1)) (cast 8 false (var x8)))
-0xb90 (set x8 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x10) false)))
+0xb90 (set x8 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x10) false)))
0xb94 (store 0 (+ (var x1) (bv 64 0x2)) (cast 8 false (var x8)))
-0xb98 (set x8 (cast 64 false (>> (cast 32 false (var x12)) (bv 5 0x18) false)))
+0xb98 (set x8 (cast 64 false (>> (cast 32 false (var x12)) (bv 6 0x18) false)))
0xb9c (store 0 (+ (var x1) (bv 64 0x3)) (cast 8 false (var x8)))
0xba0 (store 0 (+ (var x1) (bv 64 0x4)) (cast 8 false (var x13)))
-0xba4 (set x8 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x8) false)))
+0xba4 (set x8 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x8) false)))
0xba8 (store 0 (+ (var x1) (bv 64 0x5)) (cast 8 false (var x8)))
-0xbac (set x8 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x10) false)))
+0xbac (set x8 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x10) false)))
0xbb0 (store 0 (+ (var x1) (bv 64 0x6)) (cast 8 false (var x8)))
-0xbb4 (set x8 (cast 64 false (>> (cast 32 false (var x13)) (bv 5 0x18) false)))
+0xbb4 (set x8 (cast 64 false (>> (cast 32 false (var x13)) (bv 6 0x18) false)))
0xbb8 (store 0 (+ (var x1) (bv 64 0x7)) (cast 8 false (var x8)))
0xbbc (store 0 (+ (var x1) (bv 64 0x8)) (cast 8 false (var x14)))
-0xbc0 (set x8 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x8) false)))
+0xbc0 (set x8 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x8) false)))
0xbc4 (store 0 (+ (var x1) (bv 64 0x9)) (cast 8 false (var x8)))
-0xbc8 (set x8 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x10) false)))
+0xbc8 (set x8 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x10) false)))
0xbcc (store 0 (+ (var x1) (bv 64 0xa)) (cast 8 false (var x8)))
-0xbd0 (set x8 (cast 64 false (>> (cast 32 false (var x14)) (bv 5 0x18) false)))
+0xbd0 (set x8 (cast 64 false (>> (cast 32 false (var x14)) (bv 6 0x18) false)))
0xbd4 (store 0 (+ (var x1) (bv 64 0xb)) (cast 8 false (var x8)))
0xbd8 (store 0 (+ (var x1) (bv 64 0xc)) (cast 8 false (var x15)))
-0xbdc (set x8 (cast 64 false (>> (cast 32 false (var x15)) (bv 5 0x8) false)))
+0xbdc (set x8 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x8) false)))
0xbe0 (store 0 (+ (var x1) (bv 64 0xd)) (cast 8 false (var x8)))
-0xbe4 (set x8 (cast 64 false (>> (cast 32 false (var x15)) (bv 5 0x10) false)))
+0xbe4 (set x8 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x10) false)))
0xbe8 (store 0 (+ (var x1) (bv 64 0xe)) (cast 8 false (var x8)))
-0xbec (set x8 (cast 64 false (>> (cast 32 false (var x15)) (bv 5 0x18) false)))
+0xbec (set x8 (cast 64 false (>> (cast 32 false (var x15)) (bv 6 0x18) false)))
0xbf0 (set x0 (cast 64 false (bv 32 0x1)))
0xbf4 (store 0 (+ (var x1) (bv 64 0xf)) (cast 8 false (var x8)))
0xbf8 (jmp (var x30))
diff --git a/test/db/cmd/dwarf b/test/db/cmd/dwarf
index aeda2460dd6..fa2f76673d6 100644
--- a/test/db/cmd/dwarf
+++ b/test/db/cmd/dwarf
@@ -7128,11 +7128,11 @@ pdf @ sym.__do_global_dtors_aux
aaa
EOF
EXPECT=<