vmulq_f16

Arm NEON
 Location: Arithmetic  >  Vector Multiply
 Supported Architectures: A32, A64
Purpose:

VMUL multiplies corresponding elements (lanes) of two vectors. Elements in the result vector have the same width as those in the input vectors.

Result:

float16x8_t

Example:
#include <arm_neon.h>
#include <stdio.h>

/* Requires half-precision support, e.g. compile with
   -march=armv8.2-a+fp16 on GCC/Clang for AArch64. */
int main(void) {
  float16x8_t a = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};
  float16x8_t b = {8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0};

  /* Multiply corresponding lanes: result[i] = a[i] * b[i]. */
  float16x8_t result = vmulq_f16(a, b);

  /* Print each lane, widening to float for printf. */
  float16_t *res = (float16_t *)&result;
  for (int i = 0; i < 8; i++) {
    printf("%f ", (float)res[i]);
  }
  printf("\n");

  return 0;
}

Prototypes

Assembly Instruction:
FMUL
Usage:
float16x8_t vmulq_f16(float16x8_t a, float16x8_t b);
