phridrich
Adept I

DirectX 12 per instance data fetch

 

Hi,

I'm creating a DirectX 12 application and using Radeon GPU Profiler to profile it on a 5700 XT card. I'm using indirect drawing to rasterize the scene, and per-instance vertex buffers to provide mesh-related data to the shaders. Here is one of the vertex shaders that uses this approach:

void main(
    in  float3   in_position  : POSITION,   // per-vertex
    in  float4x4 in_transform : TRANSFORM,  // per-instance
    out float4   out_position : SV_Position
)
{
    float4 hdc_position = mul( float4( in_position, 1.0f ), in_transform );
    out_position = float4( hdc_position.xyz, 1.0f );
}
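
For context, the CPU-side input layout for this shader is declared roughly like the following (a simplified sketch, not my exact code; slot numbers and offsets are illustrative). The float4x4 TRANSFORM attribute has to be split into four R32G32B32A32_FLOAT elements, fed from a second vertex buffer slot with InstanceDataStepRate = 1:

#include <d3d12.h>

// Slot 0: per-vertex position. Slot 1: per-instance 4x4 transform,
// split into four rows, each advancing once per instance.
static const D3D12_INPUT_ELEMENT_DESC input_layout[] =
{
    { "POSITION",  0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA,   0 },
    { "TRANSFORM", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1,  0, D3D12_INPUT_CLASSIFICATION_PER_INSTANCE_DATA, 1 },
    { "TRANSFORM", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D12_INPUT_CLASSIFICATION_PER_INSTANCE_DATA, 1 },
    { "TRANSFORM", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D12_INPUT_CLASSIFICATION_PER_INSTANCE_DATA, 1 },
    { "TRANSFORM", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D12_INPUT_CLASSIFICATION_PER_INSTANCE_DATA, 1 },
};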

According to RGP, this results in the following Radeon ISA code:

s_inst_prefetch 0x3                                                                                 // 000000000000: BFA00003
s_getpc_b64 s[0:1]                                                                                  // 000000000004: BE801F80
s_mov_b32 s0, s5                                                                                    // 000000000008: BE800305
s_load_dwordx8 s[4:11], s[0:1], 0x0                                                                 // 00000000000C: F40C0100 FA000000
v_add_nc_u32_e32 v0, s2, v0                                                                         // 000000000014: 4A000002
v_add_nc_u32_e32 v1, s3, v3                                                                         // 000000000018: 4A020603
s_waitcnt lgkmcnt(0)                                                                                // 00000000001C: BF8CC07F
tbuffer_load_format_xyz v[2:4], v0, s[4:7],  format:74, 0 idxen                                     // 000000000020: EA522000 80010200
s_clause 0x3                                                                                        // 000000000028: BFA10003
tbuffer_load_format_xyz v[5:7], v1, s[8:11],  format:74, 0 idxen offset:16                          // 00000000002C: EA522010 80020501
tbuffer_load_format_xyz v[8:10], v1, s[8:11],  format:74, 0 idxen                                   // 000000000034: EA522000 80020801
tbuffer_load_format_xyz v[11:13], v1, s[8:11],  format:74, 0 idxen offset:32                        // 00000000003C: EA522020 80020B01
tbuffer_load_format_xyz v[14:16], v1, s[8:11],  format:74, 0 idxen offset:48                        // 000000000044: EA522030 80020E01
s_waitcnt vmcnt(3)                                                                                  // 00000000004C: BF8C3F73
v_mul_f32_e32 v1, v3, v5                                                                            // 000000000050: 10020B03
v_mul_f32_e32 v5, v3, v6                                                                            // 000000000054: 100A0D03
v_mul_f32_e32 v3, v3, v7                                                                            // 000000000058: 10060F03
s_waitcnt vmcnt(2)                                                                                  // 00000000005C: BF8C3F72
v_mac_f32_e32 v1, v2, v8                                                                            // 000000000060: 3E021102
v_mac_f32_e32 v5, v2, v9                                                                            // 000000000064: 3E0A1302
v_mac_f32_e32 v3, v2, v10                                                                           // 000000000068: 3E061502
s_waitcnt vmcnt(1)                                                                                  // 00000000006C: BF8C3F71
v_mac_f32_e32 v1, v4, v11                                                                           // 000000000070: 3E021704
v_mac_f32_e32 v5, v4, v12                                                                           // 000000000074: 3E0A1904
v_mac_f32_e32 v3, v4, v13                                                                           // 000000000078: 3E061B04
s_waitcnt vmcnt(0)                                                                                  // 00000000007C: BF8C3F70
v_add_f32_e32 v0, v14, v1                                                                           // 000000000080: 0600030E
v_add_f32_e32 v1, v15, v5                                                                           // 000000000084: 06020B0F
v_add_f32_e32 v2, v16, v3                                                                           // 000000000088: 06040710
v_mov_b32_e32 v3, 1.0                                                                               // 00000000008C: 7E0602F2
exp pos0 v0, v1, v2, v3 done                                                                        // 000000000090: F80008CF 03020100
s_endpgm                                                                                            // 000000000098: BF810000

The thing that gets my attention here is that vector memory load instructions are used to load the per-instance data. To my understanding, a vertex shader group always processes vertices from a single instance, so it should be possible to use scalar memory loads here. So here are my questions:

1. Is my assumption that a vertex shader group only processes a single instance valid? If not, then scalar memory loads indeed cannot be used here, and everything is fine.

2. If my assumption is valid, are these vector memory loads actually a big problem? I assume scalar loads would be better, but the difference may be hardly visible due to memory caching.

3. Maybe there are some other limitations that prevent the compiler from using scalar loads here? Maybe I don't provide some critical information on the CPU side (a simplified sketch of my setup is below)? Or is it just a matter of the driver implementation?
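
For reference, here is roughly how I bind the buffers and issue the indirect draw on the CPU side (a simplified sketch; the function, variable names, sizes, and the command signature setup are illustrative, not my exact code):

#include <d3d12.h>

// Simplified sketch: binds the per-vertex and per-instance buffers,
// then issues the indirect draw. All names here are illustrative.
void draw_scene_indirect(
    ID3D12GraphicsCommandList* command_list,
    ID3D12Resource*            vertex_buffer,    UINT vertex_count,
    ID3D12Resource*            instance_buffer,  UINT instance_count,
    ID3D12CommandSignature*    command_signature,
    ID3D12Resource*            argument_buffer,  UINT max_draw_count )
{
    // Slot 0: per-vertex positions. Slot 1: one float4x4 transform per instance.
    D3D12_VERTEX_BUFFER_VIEW views[2] = {};
    views[0].BufferLocation = vertex_buffer->GetGPUVirtualAddress();
    views[0].SizeInBytes    = vertex_count * 3 * sizeof( float );
    views[0].StrideInBytes  = 3 * sizeof( float );
    views[1].BufferLocation = instance_buffer->GetGPUVirtualAddress();
    views[1].SizeInBytes    = instance_count * 16 * sizeof( float );
    views[1].StrideInBytes  = 16 * sizeof( float );

    command_list->IASetVertexBuffers( 0, 2, views );

    // Draw arguments live in a GPU buffer and are consumed via ExecuteIndirect
    // (the command signature contains a single draw argument).
    command_list->ExecuteIndirect( command_signature, max_draw_count, argument_buffer, 0, nullptr, 0 );
}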

3 Replies
dipak
Big Boss

Hi @phridrich ,

I've forwarded the above queries to the DirectX team. As soon as I get any feedback from them, I'll share it with you.

 

Thanks.

phridrich
Adept I

UPD: I tested my application on a 6800 XT, and the issue still occurs. BTW, the vertex shader ISA on the 6800 XT looks like hell:

    s_movk_i32 exec_lo, 0xffff
    s_getpc_b64 s[20:21]
    s_inst_prefetch 0x3
    v_mbcnt_lo_u32_b32_e64 v2, -1, 0
    s_and_b32 s0, s3, 0xff
    s_mov_b32 s1, exec_lo
    v_cmpx_lt_u32_e64 v2, s0
    s_cbranch_execz _L0
BBF0_0:
    s_mov_b32 s19, s21
    v_add_nc_u32_e32 v3, s10, v5
    v_add_nc_u32_e32 v4, s11, v8
    s_load_dwordx8 s[8:15], s[18:19], 0x0
    s_waitcnt lgkmcnt(0)
    s_clause 0x3
    tbuffer_load_format_xyz v[18:20], v4, s[12:15], 0 format:[BUF_FMT_32_32_32_FLOAT] idxen
    tbuffer_load_format_xyz v[21:23], v4, s[12:15], 0 format:[BUF_FMT_32_32_32_FLOAT] idxen offset:16
    tbuffer_load_format_xyz v[27:29], v4, s[12:15], 0 format:[BUF_FMT_32_32_32_FLOAT] idxen offset:32
    tbuffer_load_format_xyz v[24:26], v4, s[12:15], 0 format:[BUF_FMT_32_32_32_FLOAT] idxen offset:48
    tbuffer_load_format_xyz v[9:11], v3, s[8:11], 0 format:[BUF_FMT_32_32_32_FLOAT] idxen
_L0:
    s_mov_b32 exec_lo, s1
    s_bfe_u32 s1, s2, 0x9000c
    s_bfe_u32 s4, s3, 0x40018
    s_load_dword s5, s[6:7], 0x1c
    s_load_dword s8, s[6:7], 0x240
    s_cmp_ge_u32 s1, 16
    s_cselect_b32 s9, -1, 0
    s_lshl_b32 s10, s1, 5
    v_mad_u32_u24 v3, s4, 32, v2
    v_cmp_gt_u32_e32 vcc_lo, 9, v3
    s_and_b32 vcc_lo, vcc_lo, s9
    s_and_saveexec_b32 s11, vcc_lo
    v_mad_u32_u24 v4, v3, 4, s10
    s_cbranch_execz _L1
BBF0_1:
    v_mov_b32_e32 v6, 0
    ds_write_b32 v4, v6
_L1:
    s_mov_b32 exec_lo, s11
    v_mul_u32_u24_e32 v4, 32, v3
    v_cmp_lt_u32_e64 s11, v3, s1
    s_and_b32 s12, s11, s9
    s_and_saveexec_b32 s13, s12
    v_mov_b32_e32 v6, 0
    s_cbranch_execz _L2
BBF0_2:
    ds_write_b32 v4, v6 offset:16
_L2:
    s_mov_b32 exec_lo, s13
    v_cmp_gt_u32_e32 vcc_lo, s0, v2
    s_and_b32 vcc_lo, vcc_lo, s11
    s_and_b32 exec_lo, s13, vcc_lo
    s_waitcnt vmcnt(0)
    v_mul_f32_e32 v7, v10, v21
    s_cbranch_execz _L3
BBF0_3:
    v_mul_f32_e32 v21, v10, v22
    v_mul_f32_e32 v13, v10, v23
    v_mov_b32_e32 v12, 1.0
    v_fmac_f32_e32 v7, v9, v18
    v_fmac_f32_e32 v21, v9, v19
    v_fmac_f32_e32 v13, v9, v20
    v_fmac_f32_e32 v7, v11, v27
    v_fmac_f32_e32 v21, v11, v28
    v_fmac_f32_e32 v13, v11, v29
    v_add_f32_e32 v9, v24, v7
    v_add_f32_e32 v10, v25, v21
    v_add_f32_e32 v11, v26, v13
    ds_write_b128 v4, v[9:12]
_L3:
    s_mov_b32 exec_lo, s13
    s_bfe_u32 s2, s2, 0x90016
    s_and_b32 s11, s9, exec_lo
    s_cmp_eq_u32 s11, 0
    s_cbranch_scc1 _L4
BBF0_4:
    v_cmp_lt_u32_e64 s11, v3, s2
    s_waitcnt lgkmcnt(0)
    s_barrier
    s_bfe_u32 s3, s3, 0x80008
    v_cmp_gt_u32_e32 vcc_lo, s3, v2
    s_and_b32 vcc_lo, vcc_lo, s11
    s_and_b32 vcc_lo, vcc_lo, s9
    s_and_saveexec_b32 s3, vcc_lo
    v_lshlrev_b32_sdwa v6, 2, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_0
    s_cbranch_execz _L5
BBF0_5:
    ds_read_b128 v[13:16], v6
    v_lshlrev_b32_sdwa v6, 2, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_1
    v_lshlrev_b32_sdwa v7, 2, v1 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_0
    ds_read_b128 v[17:20], v6
    ds_read_b128 v[21:24], v7
    s_load_dwordx4 s[16:19], s[6:7], 0x34
    s_waitcnt lgkmcnt(0)
    v_mul_f32_e32 v6, v18, v24
    v_mul_f32_e32 v7, v22, v20
    v_mul_f32_e32 v25, v14, v24
    v_mul_f32_e32 v26, v22, v16
    s_xor_b32 s6, s18, s16
    s_bfe_u32 s7, s5, 0x10002
    s_bfe_u32 s11, s6, 0x1001f
    v_sub_f32_e32 v6, v6, v7
    s_xor_b32 s6, s11, s7
    v_sub_f32_e32 v7, v25, v26
    v_mul_f32_e32 v25, v14, v20
    v_mul_f32_e32 v26, v18, v16
    v_mul_f32_e32 v6, v13, v6
    s_and_b32 s7, s5, 1
    v_mul_f32_e32 v7, v17, v7
    v_rcp_f32_e32 v27, v16
    v_sub_f32_e32 v26, v25, v26
    s_bfe_u32 s11, s5, 0x10001
    s_bfe_u32 s5, s5, 0x20003
    v_sub_f32_e32 v7, v6, v7
    s_xor_b32 s5, s5, 1
    v_rcp_f32_e32 v6, v20
    s_cmp_eq_u32 s8, 1
    s_cselect_b32 s8, 0, -1
    s_and_b32 s8, s8, exec_lo
    v_fmac_f32_e32 v7, v21, v26
    v_or3_b32 v25, v16, v20, v24
    v_and_b32_e32 v26, v16, v20
    v_rcp_f32_e32 v28, v24
    v_mul_f32_e32 v29, v21, v28
    v_and_b32_e32 v26, v24, v26
    v_cmp_gt_i32_e64 s13, v25, 0
    v_mul_f32_e32 v25, v22, v28
    v_fma_f32 v28, v29, s16, s17
    v_cmp_lt_i32_e64 s14, v26, 0
    v_fma_f32 v25, v25, s18, s19
    v_cvt_f32_i32_e32 v26, s6
    v_mul_f32_e32 v29, v13, v27
    v_mul_f32_e32 v27, v14, v27
    v_mul_f32_e32 v30, v17, v6
    v_mul_f32_e32 v6, v18, v6
    v_mov_b32_e32 v31, 0xbf400000
    v_fma_f32 v29, v29, s16, s17
    v_fma_f32 v27, v27, s18, s19
    v_fma_f32 v30, v30, s16, s17
    v_fma_f32 v6, v6, s18, s19
    v_cmp_lt_f32_e64 vcc_lo, s16, 0
    v_cndmask_b32_e32 v32, 0x3f400000, v31, vcc_lo
    v_cmp_lt_f32_e64 vcc_lo, s18, 0
    v_cndmask_b32_e32 v31, 0x3f400000, v31, vcc_lo
    v_sub_f32_e32 v33, 1.0, v26
    v_mul_f32_e64 v35, v7, 0x7f800000 clamp
    v_mul_f32_e64 v36, v7, 0xff800000 clamp
    v_subrev_f32_e64 v34, s16, s17
    v_add_f32_e64 v37, s16, s17
    v_subrev_f32_e64 v38, s18, s19
    v_add_f32_e64 v39, s18, s19
    v_subrev_f32_e32 v34, v32, v34
    v_add_f32_e32 v32, v32, v37
    v_subrev_f32_e32 v37, v31, v38
    v_add_f32_e32 v31, v31, v39
    v_mul_f32_e32 v38, v26, v35
    v_mul_f32_e32 v39, v33, v36
    v_mul_f32_e32 v26, v26, v36
    v_mul_f32_e32 v33, v33, v35
    v_min3_f32 v35, v29, v30, v28
    v_min3_f32 v36, v27, v6, v25
    v_max_f32_e32 v38, v39, v38
    v_cvt_f32_i32_e32 v39, s7
    v_max_f32_e32 v26, v33, v26
    v_med3_f32 v33, v34, v35, v32
    v_med3_f32 v35, v37, v36, v31
    v_cvt_f32_i32_e32 v36, s11
    v_mul_f32_e32 v38, v38, v39
    v_mul_f32_e64 v7, |v7|, 0x7f800000 clamp
    v_add_f32_e32 v33, 0xbb800000, v33
    v_add_f32_e32 v35, 0xbb800000, v35
    v_mul_f32_e32 v36, v26, v36
    v_max3_f32 v29, v29, v30, v28
    v_sub_f32_e32 v7, 1.0, v7
    v_rndne_f32_e32 v26, v33
    v_rndne_f32_e32 v28, v35
    v_max3_f32 v6, v27, v6, v25
    v_med3_f32 v25, v34, v29, v32
    v_max3_f32 v7, v38, v36, v7
    v_cvt_f32_i32_e32 v29, s5
    v_med3_f32 v6, v37, v6, v31
    v_add_f32_e32 v25, 0x3b800000, v25
    s_or_b32 s5, s14, s13
    v_add_f32_e32 v6, 0x3b800000, v6
    v_rndne_f32_e32 v25, v25
    v_and_b32_e32 v7, v7, v29
    v_rndne_f32_e32 v6, v6
    v_cmp_eq_f32_sdwa s6, v26, v25 src0_sel:DWORD src1_sel:DWORD
    v_cmp_eq_f32_e64 s7, v7, 1.0
    v_cmp_eq_f32_e32 vcc_lo, v28, v6
    s_or_b32 vcc_lo, s6, vcc_lo
    s_and_b32 vcc_lo, vcc_lo, s5
    s_and_b32 vcc_lo, vcc_lo, s8
    s_or_b32 vcc_lo, s7, vcc_lo
    v_cndmask_b32_e64 v15, 0, 1, vcc_lo
_L5:
    s_andn2_b32 exec_lo, s3, exec_lo
    v_mov_b32_e32 v15, 1
    s_mov_b32 exec_lo, s3
    s_mov_b32 s3, exec_lo
    v_cmpx_eq_u32_e64 v15, 0
    v_lshlrev_b32_sdwa v7, 2, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_0
    v_lshlrev_b32_sdwa v13, 2, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_1
    v_lshlrev_b32_sdwa v14, 2, v1 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_0
    v_xor_b32_e32 v16, 1, v15
    s_cbranch_execz _L6
BBF0_6:
    ds_write_b32 v7, v16 offset:16
    ds_write_b32 v13, v16 offset:16
    ds_write_b32 v14, v16 offset:16
_L6:
    s_mov_b32 exec_lo, s3
    s_waitcnt lgkmcnt(0)
    s_barrier
    s_and_saveexec_b32 s3, s12
    s_cbranch_execz _L7
BBF0_7:
    ds_read_b32 v7, v4 offset:16
_L7:
    s_andn2_b32 exec_lo, s3, exec_lo
    s_waitcnt lgkmcnt(0)
    v_mov_b32_e32 v7, 0
    s_mov_b32 exec_lo, s3
    v_cmp_eq_u32_e64 s3, v7, 1
    s_sub_u32 s5, 8, s4
    s_mov_b32 s6, exec_lo
    v_cmpx_lt_u32_e64 v2, s5
    s_cbranch_execz _L8
BBF0_8:
    s_bcnt1_i32_b32 s5, s3
    v_add3_u32 v7, s4, v2, 1
    v_mov_b32_e32 v13, s5
    v_mad_u32_u24 v7, v7, 4, s10
    ds_add_u32 v7, v13
_L8:
    s_mov_b32 exec_lo, s6
    s_waitcnt lgkmcnt(0)
    s_barrier
    v_mov_b32_e32 v7, s10
    ds_read_b32 v7, v7 offset:32
    v_mad_u32_u24 v13, s4, 4, s10
    ds_read_b32 v13, v13
    s_waitcnt lgkmcnt(1)
    v_readfirstlane_b32 s6, v7
    v_cmp_gt_u32_e32 vcc_lo, s0, v2
    s_and_b32 vcc_lo, vcc_lo, s3
    s_and_b32 vcc_lo, vcc_lo, s9
    s_and_saveexec_b32 s0, vcc_lo
    s_cbranch_execz _L9
BBF0_9:
    s_and_b32 s3, s3, exec_lo
    v_mov_b32_e32 v14, v8
    v_mbcnt_lo_u32_b32_e64 v2, s3, 0
    s_waitcnt lgkmcnt(0)
    v_add_nc_u32_e32 v2, v13, v2
    v_mov_b32_e32 v13, v5
    v_mul_u32_u24_e32 v7, 32, v2
    ds_write_b32 v4, v2 offset:16
    ds_write_b128 v7, v[9:12]
    ds_write_b64 v7, v[13:14] offset:24
_L9:
    s_mov_b32 exec_lo, s0
    s_branch _L10
_L4:
    v_mov_b32_e32 v15, 0
    s_mov_b32 s6, s1
_L10:
    s_cmp_eq_u32 s6, 0
    s_waitcnt lgkmcnt(0)
    s_cselect_b32 s5, 0, s2
    s_cselect_b32 s3, 0, s6
    s_cmp_lt_u32 s4, 1
    s_cbranch_scc0 _L11
BBF0_10:
    s_lshl_b32 s4, s5, 12
    s_or_b32 m0, s3, s4
    s_sendmsg sendmsg(MSG_GS_ALLOC_REQ)
_L11:
    s_waitcnt lgkmcnt(0)
    s_barrier
    s_cmp_eq_u32 s6, 0
    s_cbranch_scc1 _L12
BBF0_11:
    s_cmp_lt_u32 s3, s1
    s_cbranch_scc0 _L13
BBF0_12:
    v_lshlrev_b32_sdwa v5, 2, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_0
    v_lshlrev_b32_sdwa v0, 2, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_1
    v_lshlrev_b32_sdwa v1, 2, v1 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_0
    ds_read_b32 v2, v5 offset:16
    ds_read_b32 v0, v0 offset:16
    ds_read_b32 v13, v1 offset:16
    s_and_saveexec_b32 s0, s9
    s_cbranch_execz _L14
BBF0_13:
    ds_read_b128 v[9:12], v4
_L14:
    s_mov_b32 exec_lo, s0
    s_branch _L15
_L13:
    v_lshrrev_b32_sdwa v2, 3, v0 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_0
    v_bfe_u32 v0, v0, 19, 13
    v_lshrrev_b32_sdwa v13, 3, v1 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:DWORD src1_sel:WORD_0
_L15:
    s_min_u32 s0, s2, s5
    s_mov_b32 s1, exec_lo
    v_cmpx_lt_u32_e64 v3, s0
    s_waitcnt lgkmcnt(1)
    v_lshl_or_b32 v1, v0, 10, v2
    s_waitcnt lgkmcnt(0)
    v_lshl_or_b32 v0, v13, 20, v1
    s_cbranch_execz _L16
BBF0_14:
    v_cmp_eq_u32_e32 vcc_lo, 1, v15
    v_cndmask_b32_e64 v0, v0, 0x80000000, vcc_lo
    exp prim v0, off, off, off done
_L16:
    s_waitcnt expcnt(0)
    s_mov_b32 exec_lo, s1
    v_cmp_gt_u32_e32 vcc_lo, s3, v3
    s_and_b32 exec_lo, s1, vcc_lo
    s_cbranch_execz _L12
BBF0_15:
    exp pos0 v9, v10, v11, v12 done
_L12:
    s_endpgm

@dipak , are there any updates from the DirectX team about this?


dipak
Big Boss

Hi @phridrich ,

Sorry, I haven't received any update on this yet. I'll check with the DX team and get back to you shortly.

 

Thanks.