Single data item loads and stores:

  VLDR.64 Dd, [Rn{, #<imm>}]        Dd := [address, size]
  VLDR.64 Dd, [Rn, #-8]!            Rn := Rn-8; Dd := [address, size]
  VLDR.64 Dd, [Rn], #8              Dd := [address, size]; Rn := Rn+8
  VLDR.64 Dd, label                 Dd := [label, size]
  VLDR.<size> Dd, =constant         Dd := [label, size]
  VSTR.64 Dd, [Rn{, #<imm>}]        [address, size] := Dd
  VSTR.64 Dd, label                 [label, size] := Dd
  VSTR.64 Dd, [Rn, #-8]!            Rn := Rn-8; [address, size] := Dd
  VSTR.64 Dd, [Rn], #8              [address, size] := Dd; Rn := Rn+8
  <imm>: Optional multiple of 4 between -1020 and +1020.
         The pre-decrement and post-increment versions are assembled into
         equivalent VLDM or VSTM instructions.
  label: Word aligned, +/- 1024 bytes from the current instruction.
  <size>: <8,16,32,64>, or F32

Other memory operations:

  VLDM{<mode>} Rn{!}, <reglist>
  VSTM{<mode>} Rn{!}, <reglist>
  VPOP <reglist>                    eq. VLDMFD r13!, <reglist>
  VPUSH <reglist>                   eq. VSTMFD r13!, <reglist>
  <mode>: IA: increment after (default), DB: decrement before,
          EA: empty ascending (DB for loads, IA for stores), or
          FD: full descending (IA for loads, DB for stores).
  <reglist>: Can be S, D, or Q registers, but not mixed. At most 16 D
          registers, or 8 Q registers. Q registers are translated to their
          equivalent D registers.

  VLD<n>.<size> <lanelist>, [Rn{@<align>}]{!}   D(d+i)[x] := [address+i*(<size>/8), <size>]; Rn := Rn+<n>*(<size>/8)
  VLD<n>.<size> <lanelist>, [Rn{@<align>}], Rm  D(d+i)[x] := [address+i*(<size>/8), <size>]; Rn := Rn+Rm
  VLD<n>.<size> <alllist>, [Rn{@<align>}]{!}    D(d+i)[x] := [address+i*(<size>/8), <size>]; Rn := Rn+<n>*(<size>/8)
  VLD<n>.<size> <alllist>, [Rn{@<align>}], Rm   D(d+i)[x] := [address+i*(<size>/8), <size>]; Rn := Rn+Rm
  VLD<n>.<size> <dlist>, [Rn{@<align>}]{!}      D(d+i)[x] := [address+(x*<n>+i)*(<size>/8), <size>]; Rn := Rn+8*regs(<dlist>)
  VLD<n>.<size> <dlist>, [Rn{@<align>}], Rm     D(d+i)[x] := [address+(x*<n>+i)*(<size>/8), <size>]; Rn := Rn+Rm
  VST<n>.<size> <lanelist>, [Rn{@<align>}]{!}   [address+i*(<size>/8), <size>] := D(d+i)[x]; Rn := Rn+<n>*(<size>/8)
  VST<n>.<size> <lanelist>, [Rn{@<align>}], Rm  [address+i*(<size>/8), <size>] := D(d+i)[x]; Rn := Rn+Rm
  VST<n>.<size> <dlist>, [Rn{@<align>}]{!}      [address+(x*<n>+i)*(<size>/8), <size>] := D(d+i)[x]; Rn := Rn+8*regs(<dlist>)
  VST<n>.<size> <dlist>, [Rn{@<align>}], Rm     [address+(x*<n>+i)*(<size>/8), <size>] := D(d+i)[x]; Rn := Rn+Rm
  Rn: ARM register other than PC.
  Rm: ARM register other than r13 or PC.
  <n>: 1, 2, 3, or 4.
  <lanelist>: {Dd[x],D(d+1)[x],...}, or {Dd[x],D(d+2)[x],...} for <n> > 1, <size> > 8.
  <alllist>:  {Dd[],D(d+1)[],...}, or {Dd[],D(d+2)[],...} for <n> > 1.
  <dlist>:    {Dd,D(d+1),...}, or {Dd,D(d+2),...} for <n> > 1.
  <align>: 8, 16, 32, 64, 128, or 256.
  regs(<dlist>): the number of registers in <dlist>.
  Note: The Dd[] form can fill 1 or 2 registers when <n> == 1 (i.e., D or Q).
  The Dd form can fill 1, 2, 3, or 4 registers when <n> == 1, and 2 or 4 when
  <n> == 2 (but only 2 when the registers are evenly spaced). Otherwise the
  list must have exactly <n> registers. See
  http://infocenter.arm.com/help/topic/com.arm.doc.dui0489a/CIHEIIGI.html
  for full details.

Move data:

  VMOV Rd, Sn                       Rd := Sn
  VMOV Sn, Rd                       Sn := Rd
  VMOV Dm, Rd, Rn                   Dm[63:32] := Rn; Dm[31:0] := Rd
  VMOV Rd, Rn, Dm                   Rd := Dm[31:0]; Rn := Dm[63:32]
  VMOV Sm, S(m+1), Rd, Rn           Sm := Rd; S(m+1) := Rn
  VMOV Rd, Rn, Sm, S(m+1)           Rd := Sm; Rn := S(m+1)
  VMOV{.<size>} Dn[x], Rd           Dn[(x+1)*size-1:x*size] := Rd[size-1:0]
  VMOV{.<type>} Rd, Dn[x]           Rd := Extend(Dn[(x+1)*size-1:x*size])
  Rd, Rn: ARM registers other than PC.
  <size>: 8, 16, or 32 (default).
  <type>: S8, S16, U8, U16, or 32 (default).

  VMOV{.<type>} Qd, Qm              Qd := Qm
  VMOV{.<type>} Dd, Dm              Dd := Dm
  VMOV.<type> Qd, #<imm>            Qd[x] := #<imm>
  VMOV.<type> Dd, #<imm>            Dd[x] := #<imm>
  VMVN{.<type>} Qd, Qm              Qd := ~Qm
  VMVN{.<type>} Dd, Dm              Dd := ~Dm
  VMVN.<type> Qd, #<imm>            Qd[x] := ~#<imm>
  VMVN.<type> Dd, #<imm>            Dd[x] := ~#<imm>
  VSWP{.<type>} Qd, Qm              Swap(Qd, Qm)
  VSWP{.<type>} Dd, Dm              Swap(Dd, Dm)
  <type>: Ignored for the non-#<imm> forms.
          I<8,16,32,64> or F32 for VMOV with #<imm>,
          I<16,32> for VMVN with #<imm>.
  <imm>: For the VMOV form (VMVN requires ~#<imm> to have one of these forms):
         I8:  0xXY
         I16: 0x00XY, 0xXY00
         I32: 0x000000XY, 0x0000XY00, 0x00XY0000, 0xXY000000
         I64: 0xGGHHJJKKLLMMNNPP, where each of GG...PP must be 0x00 or 0xFF
         F32: +/- n*2**-r, with 16 <= n <= 31 and 0 <= r <= 7

  VMOVL.<type> Qd, Dm               Qd[x] := Dm[x]
  VMOVN.<type> Dd, Qm               Dd[x] := Qm[x] (truncated)
  VQMOVN.<type> Dd, Qm              Dd[x] := Saturate(Qm[x])
  VQMOVUN.<type> Dd, Qm             Dd[x] := UnsignedSat(Qm[x])
  <type>: S<8,16,32> or U<8,16,32> for VMOVL,
          I<16,32,64> for VMOVN,
          S<16,32,64> or U<16,32,64> for VQMOVN,
          S<16,32,64> for VQMOVUN.
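A minimal widen/process/narrow sketch using the instructions above (register
choices are arbitrary, and Q2 is assumed to already hold eight 16-bit bias
values):

  VLD1.8      {D0}, [r0]!      ; load 8 bytes, post-increment r0
  VMOVL.U8    Q1, D0           ; widen to eight unsigned 16-bit lanes
  VADD.I16    Q1, Q1, Q2       ; add the bias (assumed preloaded in Q2)
  VQMOVUN.S16 D4, Q1           ; narrow back to bytes with unsigned saturation
  VST1.8      {D4}, [r1]!      ; store 8 result bytes, post-increment r1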
  VDUP.<size> Qd, Dm[x]             Qd[y] := Dm[x]
  VDUP.<size> Dd, Dm[x]             Dd[y] := Dm[x]
  VDUP.<size> Qd, Rm                Qd[y] := Rm
  VDUP.<size> Dd, Rm                Dd[y] := Rm
  <size>: 8, 16, or 32.
  Rm: ARM register other than PC.

  VEXT.<size> {Qd,} Qn, Qm, #<imm>  Qd[x] := (Qm:Qn)[x+#<imm>]
  VEXT.<size> {Dd,} Dn, Dm, #<imm>  Dd[x] := (Dm:Dn)[x+#<imm>]
  <size>: 8, 16, 32, or 64. Assembled to the equivalent .8 form.
  <imm>: 0 to 64/<size>-1 for doublewords, 128/<size>-1 for quadwords.

  VREV<n>.<size> Qd, Qm             Qd[x*(<n>/<size>)+y] := Qm[(x+1)*(<n>/<size>)-y-1]
  VREV<n>.<size> Dd, Dm             Dd[x*(<n>/<size>)+y] := Dm[(x+1)*(<n>/<size>)-y-1]
  <n>: 16, 32, or 64.
  <size>: 8, 16, or 32, with <size> < <n>.

  VTBL.8 Dd, <list>, Dm             Dd[x] := <list>[Dm[x]]
  VTBX.8 Dd, <list>, Dm             Dd[x] := Dm[x] < size(<list>) ? <list>[Dm[x]] : Dd[x]
  <list>: {Dn}, {Dn,D(n+1)}, {Dn,D(n+1),D(n+2)}, {Dn,D(n+1),D(n+2),D(n+3)},
          or {Qn,Q(n+1)}. May not wrap around the register bank.
          Q registers are assembled to their equivalent D registers.

  VTRN.<size> Qd, Qm                Swap(Qd[2*x+1], Qm[2*x])
  VTRN.<size> Dd, Dm                Swap(Dd[2*x+1], Dm[2*x])
  <size>: 8, 16, or 32.

  VZIP.<size> Qd, Qm                (Qm:Qd)[2*x] := Qd[x]; (Qm:Qd)[2*x+1] := Qm[x]
  VZIP.<size> Dd, Dm                (Dm:Dd)[2*x] := Dd[x]; (Dm:Dd)[2*x+1] := Dm[x]
      Qd.32 = D C B A  =>  F B E A        Qd.32 = G E C A  =>  D C B A
      Qm.32 = H G F E  =>  H D G C   or   Qm.32 = H F D B  =>  H G F E

  VUZP.<size> Qd, Qm                Qd[x] := (Qm:Qd)[2*x]; Qm[x] := (Qm:Qd)[2*x+1]
  VUZP.<size> Dd, Dm                Dd[x] := (Dm:Dd)[2*x]; Dm[x] := (Dm:Dd)[2*x+1]
      Qd.32 = D C B A  =>  G E C A        Qd.32 = F B E A  =>  D C B A
      Qm.32 = H G F E  =>  H F D B   or   Qm.32 = H D G C  =>  H G F E

Shift:

  VSHL.<type> {Qd,} Qm, #<imm>      Qd[x] := Qm[x] << #
  VSHL.<type> {Dd,} Dm, #<imm>      Dd[x] := Dm[x] << #
  VQSHL.<type> {Qd,} Qm, #<imm>     Qd[x] := Saturate(Qm[x] << #)
  VQSHL.<type> {Dd,} Dm, #<imm>     Dd[x] := Saturate(Dm[x] << #)
  VQSHLU.<type> {Qd,} Qm, #<imm>    Qd[x] := UnsignedSat(Qm[x] << #)
  VQSHLU.<type> {Dd,} Dm, #<imm>    Dd[x] := UnsignedSat(Dm[x] << #)
  VSHLL.<type> Qd, Dm, #<imm>       Qd[x] := Extend(Dm[x]) << #
  V{R}SHL.<type> {Qd,} Qm, Qn       Qd[x] := s<0 ? Qm[x]+({R}?1<<-s>>1:0)>>-s : Qm[x]<<s
  V{R}SHL.<type> {Dd,} Dm, Dn       Dd[x] := s<0 ? Dm[x]+({R}?1<<-s>>1:0)>>-s : Dm[x]<<s
  VQ{R}SHL.<type> {Qd,} Qm, Qn      Qd[x] := s<0 ? Qm[x]+({R}?1<<-s>>1:0)>>-s : Saturate(Qm[x]<<s)
  VQ{R}SHL.<type> {Dd,} Dm, Dn      Dd[x] := s<0 ? Dm[x]+({R}?1<<-s>>1:0)>>-s : Saturate(Dm[x]<<s)
      s: the signed value in the bottom byte of Qn[x]/Dn[x].
  V{R}SHR.<type> {Qd,} Qm, #<imm>   Qd[x] := Qm[x]+({R}?1<<#>>1:0) >> #
  V{R}SHR.<type> {Dd,} Dm, #<imm>   Dd[x] := Dm[x]+({R}?1<<#>>1:0) >> #
  V{R}SRA.<type> {Qd,} Qm, #<imm>   Qd[x] := Qd[x]+(Qm[x]+({R}?1<<#>>1:0) >> #)
  V{R}SRA.<type> {Dd,} Dm, #<imm>   Dd[x] := Dd[x]+(Dm[x]+({R}?1<<#>>1:0) >> #)
  V{R}SHRN.<type> Dd, Qm, #<imm>    Dd[x] := Qm[x]+({R}?1<<#>>1:0) >> #
  VQ{R}SHRN.<type> Dd, Qm, #<imm>   Dd[x] := Saturate(Qm[x]+({R}?1<<#>>1:0) >> #)
  VQ{R}SHRUN.<type> Dd, Qm, #<imm>  Dd[x] := UnsignedSat(Qm[x]+({R}?1<<#>>1:0) >> #)
  VSLI.<size> {Qd,} Qm, #<imm>      Qd[x][<size>-1:#] := Qm[x][<size>-#-1:0]
  VSLI.<size> {Dd,} Dm, #<imm>      Dd[x][<size>-1:#] := Dm[x][<size>-#-1:0]
  VSRI.<size> {Qd,} Qm, #<imm>      Qd[x][<size>-#-1:0] := Qm[x][<size>-1:#]
  VSRI.<size> {Dd,} Dm, #<imm>      Dd[x][<size>-#-1:0] := Dm[x][<size>-1:#]
  <type>: I<8,16,32,64> for VSHL,
          S<8,16,32,64> or U<8,16,32,64> for VQSHL, V{Q}{R}SHL, V{R}SHR, or V{R}SRA,
          S<8,16,32,64> for VQSHLU,
          S<8,16,32> or U<8,16,32> for VSHLL,
          I<16,32,64> for V{R}SHRN,
          S<16,32,64> or U<16,32,64> for VQ{R}SHRN,
          S<16,32,64> for VQ{R}SHRUN.
  <size>: 8, 16, 32, or 64.
  <imm>: 1 to size(<type>)-1 for the left shifts. 0 is permitted but is
         assembled to VMOV for V{Q}{R}SHL.
  <imm>: 1 to size(<type>) for the right shifts and VSHLL. 0 is permitted for
         VSHLL but is assembled to VMOVL.

Logical:

  V<op>{.<type>} {Qd,} Qn, Qm
  V<op>{.<type>} {Dd,} Dn, Dm
  <op>: AND, EOR, ORR, ORN, or BIC.
  <type>: Ignored. VORR with {Q,D}n == {Q,D}m is assembled as VMOV.

  V<op>{.<type>} Qd, #<imm>
  V<op>{.<type>} Dd, #<imm>
  <op>: ORR or BIC, or AND or ORN if they can be converted to an equivalent
        BIC or ORR.
  <type>: I<8,16,32,64>.
  <imm>: I16: 0x00XY, 0xXY00
         I32: 0x000000XY, 0x0000XY00, 0x00XY0000, 0xXY000000
         I8 or I64 get converted to I16 or I32 equivalents, or generate an error.

Bitwise Select:

  VBIT{.<type>} {Qd,} Qn, Qm        Qd[i] := Qm[i] ? Qn[i] : Qd[i]
  VBIT{.<type>} {Dd,} Dn, Dm        Dd[i] := Dm[i] ? Dn[i] : Dd[i]
  VBIF{.<type>} {Qd,} Qn, Qm        Qd[i] := Qm[i] ? Qd[i] : Qn[i]
  VBIF{.<type>} {Dd,} Dn, Dm        Dd[i] := Dm[i] ? Dd[i] : Dn[i]
  VBSL{.<type>} {Qd,} Qn, Qm        Qd[i] := Qd[i] ? Qn[i] : Qm[i]
  VBSL{.<type>} {Dd,} Dn, Dm        Dd[i] := Dd[i] ? Dn[i] : Dm[i]
  <type>: Ignored.
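A small branch-free select sketch (registers are arbitrary; VCGT is described
under Compare below): keep whichever of Q1/Q2 is larger in each lane. VMAX.S16
would do this in one instruction; the point is the VBSL pattern, which works
with any mask.

  VCGT.S16 Q3, Q1, Q2          ; Q3[x] := all ones where Q1[x] > Q2[x], else 0
  VBSL     Q3, Q1, Q2          ; Q3[i] := Q3[i] ? Q1[i] : Q2[i]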
Compare:

  VC<cc>.<type> {Qd,} Qn, Qm        Qd[x] := -(Qn[x] <cc> Qm[x])
  VC<cc>.<type> {Dd,} Dn, Dm        Dd[x] := -(Dn[x] <cc> Dm[x])
  VC<cc>.<type> {Qd,} Qn, #0        Qd[x] := -(Qn[x] <cc> #0)
  VC<cc>.<type> {Dd,} Dn, #0        Dd[x] := -(Dn[x] <cc> #0)
  <cc>: EQ, GE, GT, LE, or LT. VCLE and VCLT with vector operands are
        assembled into VCGT and VCGE with the operands reversed.
  <type>: I<8,16,32> or F32 for EQ,
          S<8,16,32>, U<8,16,32>, or F32 for GE, GT, LE, or LT,
          S<8,16,32> or F32 for GE, GT, LE, or LT (#0 form).

  VTST.<size> {Qd,} Qn, Qm          Qd[x] := -((Qn[x] & Qm[x]) != 0)
  VTST.<size> {Dd,} Dn, Dm          Dd[x] := -((Dn[x] & Dm[x]) != 0)
  <size>: 8, 16, or 32.

Arithmetic:

  VABD.<type> {Qd,} Qn, Qm          Qd[x] := Abs(Qn[x]-Qm[x])
  VABD.<type> {Dd,} Dn, Dm          Dd[x] := Abs(Dn[x]-Dm[x])
  VABDL.<type> Qd, Dn, Dm           Qd[x] := Abs(Dn[x]-Dm[x])
  VABA.<type> {Qd,} Qn, Qm          Qd[x] := Qd[x]+Abs(Qn[x]-Qm[x])
  VABA.<type> {Dd,} Dn, Dm          Dd[x] := Dd[x]+Abs(Dn[x]-Dm[x])
  VABAL.<type> Qd, Dn, Dm           Qd[x] := Qd[x]+Abs(Dn[x]-Dm[x])
  <type>: S<8,16,32> or U<8,16,32> for VABA{L} or VABDL,
          S<8,16,32>, U<8,16,32>, or F32 for VABD.

  VABS.<type> Qd, Qm                Qd[x] := Abs(Qm[x])
  VABS.<type> Dd, Dm                Dd[x] := Abs(Dm[x])
  VQABS.<type> Qd, Qm               Qd[x] := SignedSat(Abs(Qm[x]))
  VQABS.<type> Dd, Dm               Dd[x] := SignedSat(Abs(Dm[x]))
  VNEG.<type> Qd, Qm                Qd[x] := -Qm[x]
  VNEG.<type> Dd, Dm                Dd[x] := -Dm[x]
  VQNEG.<type> Qd, Qm               Qd[x] := SignedSat(-Qm[x])
  VQNEG.<type> Dd, Dm               Dd[x] := SignedSat(-Dm[x])
  <type>: S<8,16,32> for VQABS or VQNEG,
          S<8,16,32> or F32 for VABS or VNEG.

  VADD.<type> {Qd,} Qn, Qm          Qd[x] := Qn[x]+Qm[x]
  VADD.<type> {Dd,} Dn, Dm          Dd[x] := Dn[x]+Dm[x]
  VSUB.<type> {Qd,} Qn, Qm          Qd[x] := Qn[x]-Qm[x]
  VSUB.<type> {Dd,} Dn, Dm          Dd[x] := Dn[x]-Dm[x]
  VQADD.<type> {Qd,} Qn, Qm         Qd[x] := Saturate(Qn[x]+Qm[x])
  VQADD.<type> {Dd,} Dn, Dm         Dd[x] := Saturate(Dn[x]+Dm[x])
  VQSUB.<type> {Qd,} Qn, Qm         Qd[x] := Saturate(Qn[x]-Qm[x])
  VQSUB.<type> {Dd,} Dn, Dm         Dd[x] := Saturate(Dn[x]-Dm[x])
  VADDL.<type> Qd, Dn, Dm           Qd[x] := Dn[x]+Dm[x]
  VSUBL.<type> Qd, Dn, Dm           Qd[x] := Dn[x]-Dm[x]
  VADDW.<type> {Qd,} Qn, Dm         Qd[x] := Qn[x]+Dm[x]
  VSUBW.<type> {Qd,} Qn, Dm         Qd[x] := Qn[x]-Dm[x]
  <type>: I<8,16,32,64> or F32 for VADD or VSUB,
          S<8,16,32,64> or U<8,16,32,64> for VQADD or VQSUB,
          S<8,16,32> or U<8,16,32> for VADDL, VSUBL, VADDW, or VSUBW.

  V{R}ADDHN.<type> Dd, Qn, Qm       Dd[x] := (Qn[x]+Qm[x]+({R}?1<<size(<type>)/2-1:0)) >> size(<type>)/2
  V{R}SUBHN.<type> Dd, Qn, Qm       Dd[x] := (Qn[x]-Qm[x]+({R}?1<<size(<type>)/2-1:0)) >> size(<type>)/2
  <type>: I<16,32,64>.

  V{R}HADD.<type> {Qd,} Qn, Qm      Qd[x] := Qn[x]+Qm[x]+({R}?1:0) >> 1
  V{R}HADD.<type> {Dd,} Dn, Dm      Dd[x] := Dn[x]+Dm[x]+({R}?1:0) >> 1
  VHSUB.<type> {Qd,} Qn, Qm         Qd[x] := Qn[x]-Qm[x] >> 1
  VHSUB.<type> {Dd,} Dn, Dm         Dd[x] := Dn[x]-Dm[x] >> 1
  <type>: S<8,16,32> or U<8,16,32>. There is no rounding form of VHSUB.

  VPADD.<type> {Dd,} Dn, Dm         Dd[x] := (Dm:Dn)[2*x]+(Dm:Dn)[2*x+1]
  VPADDL.<type> Qd, Qm              Qd[x] := Qm[2*x]+Qm[2*x+1]
  VPADDL.<type> Dd, Dm              Dd[x] := Dm[2*x]+Dm[2*x+1]
  VPADAL.<type> Qd, Qm              Qd[x] := Qd[x]+Qm[2*x]+Qm[2*x+1]
  VPADAL.<type> Dd, Dm              Dd[x] := Dd[x]+Dm[2*x]+Dm[2*x+1]
  <type>: I<8,16,32> or F32 for VPADD,
          S<8,16,32> or U<8,16,32> for VPADDL or VPADAL.

  VMAX.<type> Qd, Qn, Qm            Qd[x] := Max(Qn[x],Qm[x])
  VMAX.<type> Dd, Dn, Dm            Dd[x] := Max(Dn[x],Dm[x])
  VMIN.<type> Qd, Qn, Qm            Qd[x] := Min(Qn[x],Qm[x])
  VMIN.<type> Dd, Dn, Dm            Dd[x] := Min(Dn[x],Dm[x])
  VPMAX.<type> Dd, Dn, Dm           Dd[x] := Max((Dm:Dn)[2*x],(Dm:Dn)[2*x+1])
  VPMIN.<type> Dd, Dn, Dm           Dd[x] := Min((Dm:Dn)[2*x],(Dm:Dn)[2*x+1])
  <type>: S<8,16,32>, U<8,16,32>, or F32.
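A short horizontal-reduction sketch (register use is arbitrary; assumes 16
unsigned bytes already sit in Q0, i.e. D0:D1): repeated VPMAX folds the vector
down to its largest element.

  VPMAX.U8 D0, D0, D1          ; 16 lanes -> 8 pairwise maxima
  VPMAX.U8 D0, D0, D0          ; 8 -> 4 (upper lanes are duplicates)
  VPMAX.U8 D0, D0, D0          ; 4 -> 2
  VPMAX.U8 D0, D0, D0          ; 2 -> 1; the maximum is now in D0[0]
  VMOV.U8  r0, D0[0]           ; move it to an ARM register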
  VCLS.<type> Qd, Qm                Qd[x] := Clz(Qm[x]^-Qm[x][size(<type>)-1])
  VCLZ.<type> Qd, Qm                Qd[x] := Clz(Qm[x])
  VCNT.<type> Qd, Qm                Qd[x] := PopCount(Qm[x])
  <type>: S<8,16,32> for VCLS, I<8,16,32> for VCLZ, I8 for VCNT.

  VMUL.<type> {Qd,} Qn, Qm          Qd[x] := Qn[x]*Qm[x]
  VMUL.<type> {Dd,} Dn, Dm          Dd[x] := Dn[x]*Dm[x]
  VMUL.<type> {Qd,} Qn, Dm[x]       Qd[y] := Qn[y]*Dm[x]
  VMUL.<type> {Dd,} Dn, Dm[x]       Dd[y] := Dn[y]*Dm[x]
  VMLA.<type> {Qd,} Qn, Qm          Qd[x] := Qd[x]+Qn[x]*Qm[x]
  VMLA.<type> {Dd,} Dn, Dm          Dd[x] := Dd[x]+Dn[x]*Dm[x]
  VMLA.<type> {Qd,} Qn, Dm[x]       Qd[y] := Qd[y]+Qn[y]*Dm[x]
  VMLA.<type> {Dd,} Dn, Dm[x]       Dd[y] := Dd[y]+Dn[y]*Dm[x]
  VMLS.<type> {Qd,} Qn, Qm          Qd[x] := Qd[x]-Qn[x]*Qm[x]
  VMLS.<type> {Dd,} Dn, Dm          Dd[x] := Dd[x]-Dn[x]*Dm[x]
  VMLS.<type> {Qd,} Qn, Dm[x]       Qd[y] := Qd[y]-Qn[y]*Dm[x]
  VMLS.<type> {Dd,} Dn, Dm[x]       Dd[y] := Dd[y]-Dn[y]*Dm[x]
  VMULL.<type> Qd, Dn, Dm           Qd[x] := Dn[x]*Dm[x]
  VMULL.<type> Qd, Dn, Dm[x]        Qd[y] := Dn[y]*Dm[x]
  VMLAL.<type> Qd, Dn, Dm           Qd[x] := Qd[x]+Dn[x]*Dm[x]
  VMLAL.<type> Qd, Dn, Dm[x]        Qd[y] := Qd[y]+Dn[y]*Dm[x]
  VMLSL.<type> Qd, Dn, Dm           Qd[x] := Qd[x]-Dn[x]*Dm[x]
  VMLSL.<type> Qd, Dn, Dm[x]        Qd[y] := Qd[y]-Dn[y]*Dm[x]
  VQDMULL.<type> Qd, Dn, Dm         Qd[x] := Saturate(2*Dn[x]*Dm[x])
  VQDMULL.<type> Qd, Dn, Dm[x]      Qd[y] := Saturate(2*Dn[y]*Dm[x])
  VQDMLAL.<type> Qd, Dn, Dm         Qd[x] := Saturate(Qd[x]+2*Dn[x]*Dm[x])
  VQDMLAL.<type> Qd, Dn, Dm[x]      Qd[y] := Saturate(Qd[y]+2*Dn[y]*Dm[x])
  VQDMLSL.<type> Qd, Dn, Dm         Qd[x] := Saturate(Qd[x]-2*Dn[x]*Dm[x])
  VQDMLSL.<type> Qd, Dn, Dm[x]      Qd[y] := Saturate(Qd[y]-2*Dn[y]*Dm[x])
  VQ{R}DMULH.<type> {Qd,} Qn, Qm    Qd[x] := Saturate((2*Qn[x]*Qm[x]+({R}?1<<size(<type>)-1:0)) >> size(<type>))
  VQ{R}DMULH.<type> {Dd,} Dn, Dm    Dd[x] := Saturate((2*Dn[x]*Dm[x]+({R}?1<<size(<type>)-1:0)) >> size(<type>))
  VQ{R}DMULH.<type> {Qd,} Qn, Dm[x] Qd[y] := Saturate((2*Qn[y]*Dm[x]+({R}?1<<size(<type>)-1:0)) >> size(<type>))
  VQ{R}DMULH.<type> {Dd,} Dn, Dm[x] Dd[y] := Saturate((2*Dn[y]*Dm[x]+({R}?1<<size(<type>)-1:0)) >> size(<type>))
  <type>: I<8,16,32>, F32, or P8 for VMUL, VMLA, or VMLS,
          S<8,16,32>, U<8,16,32>, or P8 for VMULL,
          S<8,16,32> or U<8,16,32> for VMLAL or VMLSL,
          S<16,32> for VQDMULL, VQDMLAL, VQDMLSL, or VQ{R}DMULH.
          8-bit datatypes are not available for the scalar (Dm[x]) versions.
          The doubled multiplies do not support 8-bit or unsigned datatypes.

Move to or from PSR:

  VMRS Rd, <psr>                    Rd := <psr>
  VMSR <psr>, Rd                    <psr> := Rd
  <psr>: Usually FPSCR, FPSID, or FPEXC.
  Rd: ARM register other than PC, or APSR_nzcv (for VMRS) if <psr> is FPSCR.

Floating-point only:

  VAC<cc>.F32 {Qd,} Qn, Qm          Qd[x] := -(fabs(Qn[x]) <cc> fabs(Qm[x]))
  VAC<cc>.F32 {Dd,} Dn, Dm          Dd[x] := -(fabs(Dn[x]) <cc> fabs(Dm[x]))
  <cc>: GE, GT, LE, or LT. VACLE and VACLT are assembled into VACGT and VACGE
        with the operands reversed.

PMOVMSKB:

  ; 8-bit
  VNEG.S8 D0, D0
  VMOV r0, r1, D0
  ORR r0, r0, r1, LSL #4
  ORR r0, r0, r0, LSR #7
  ORR r0, r0, r0, LSR #14
  UXTB r0, r0

  ; 16-bit
  VNEG.S8 Q0, Q0
  VZIP.8 D0, D1
  VSLI.8 D0, D1, #4
  VMOV r0, r1, D0
  ORR r0, r0, r1, LSL #2
  ORR r0, r0, r0, LSR #15
  UXTH r0, r0

Legacy opcode names:

  VFMX      -> VPMAX
  VFMN      -> VPMIN
  VQ{R}DMLH -> VQ{R}DMULH
  VCAGE     -> VACGE
  VCAGT     -> VACGT
  VSUM[L]   -> VPADD[L]
  V{R}ADH   -> V{R}ADDHN
  V{R}SBH   -> V{R}SUBHN
  VMVHW     -> VSHLL
  VSMAL     -> VPADAL

gcc register names:

  r10 -> sl
  r11 -> fp
  r12 -> ip
  r13 -> sp
  r14 -> lr
  r15 -> pc
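Dot product (a sketch in the same spirit as the PMOVMSKB sequences above;
register choices are arbitrary, r0/r1 are assumed to point at two arrays of
signed 16-bit elements, and r2 holds the element count, a multiple of 8; for
very long inputs the 32-bit lanes can wrap):

  VMOV.I32  Q8, #0             ; clear four 32-bit accumulators
loop:
  VLD1.16   {Q0}, [r0]!        ; eight elements from each input
  VLD1.16   {Q1}, [r1]!
  VMLAL.S16 Q8, D0, D2         ; Q8[x] += D0[x]*D2[x]  (low halves)
  VMLAL.S16 Q8, D1, D3         ; Q8[x] += D1[x]*D3[x]  (high halves)
  SUBS      r2, r2, #8
  BNE       loop
  VPADD.I32 D16, D16, D17      ; fold the four partial sums to two...
  VPADD.I32 D16, D16, D16      ; ...and then to one
  VMOV.32   r0, D16[0]         ; 32-bit result in r0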