Recently I ran some benchmarks comparing texture arrays to standard textures in OpenGL, profiling with the latest version of GPU PerfStudio. The results show that texture array sampling has a 2x higher (worse) TexCostOfFiltering (200 vs. 100) and TexelFetchCount, but a 2x lower (better) TexMissRate. I'm really curious why sampling a texture array is 2x more expensive than sampling a standard texture; it looks as if every texture array sample is being performed twice. When I turned texture filtering off, TexCostOfFiltering, TexelFetchCount, and TexMissRate were identical for array and standard textures. Is this normal behaviour for Radeon cards? I suspect a bug in the video drivers when filtering is turned on, because I don't see this slowdown on NVIDIA GeForce cards when benchmarking texture arrays, and the OpenGL Texture Array spec says nothing about any extra filtering/interpolation cost for texture arrays.
Problem with performance on Radeon cards.
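For reference, the two sampling paths being compared can be sketched roughly like this (a minimal GLSL fragment; the uniform names, the `layer` input, and the texture setup are my own illustration, not the actual benchmark code):

```glsl
#version 330 core

uniform sampler2D      texStandard;  // standard 2D texture
uniform sampler2DArray texArray;     // 2D texture array

in vec2  uv;
in float layer;  // array layer index (constant per draw in the benchmark)
out vec4 color;

void main()
{
    // Standard texture: bilinear filtering interpolates within one 2D image.
    vec4 a = texture(texStandard, uv);

    // Texture array: the third coordinate only selects a layer; per the spec,
    // filtering is still performed only within that single layer (there is no
    // interpolation between layers), so the filtering cost should in principle
    // match the standard-texture case.
    vec4 b = texture(texArray, vec3(uv, layer));

    color = mix(a, b, 0.5);
}
```

This is exactly why the 2x TexCostOfFiltering result is surprising: the layer-selection step is not supposed to add any filtering work.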