Suddenly "Big Navi" seems very small... Of course, bigger isn't always better.
Up to 36 TeraFLOPs! How much heat? I think it would have to be immersion-cooled.
And more jobs for software testers porting code to a new platform.
Of course, with the stockpile of cash Intel has, they could sell the GPU at a loss as long as the architecture is good, at least until they shrink the die. That is one big chip! I am very interested to see what a third competitor does to pricing. Maybe it will help bring teams red and green back to pre-mining-boom prices, like they should be.
36 TFLOPS is only about twice what an RTX Titan and a Radeon Pro Vega II are rated for. The RPVII is using, relatively speaking, antique architecture, and the RTX Titan is first-generation silicon from 2018 rated for 250W. It's not inconceivable that the next generation of cards from AMD, which will be based on a totally new architecture, CDNA, and a much more refined architecture from nVidia, Ampere, could hit 25 TFLOPS in a 200-250W envelope, assuming the chips are of a monolithic design. Moving to an MCM design like Intel is doing may result in a much larger package, but that will also result in a much larger surface area for thermal transfer, enabling higher TDPs with relatively standard double-thick 120mm liquid coolers.
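For context on those ratings: peak FP32 throughput is conventionally quoted as 2 FLOPs per shader core per clock (one fused multiply-add). A quick sketch of that formula, using the published shader counts and approximate boost clocks for the two cards mentioned (treat the exact clocks as ballpark figures):

```python
# Peak FP32 throughput: 2 FLOPs (one fused multiply-add) per core per clock.
def peak_tflops(cores, clock_ghz):
    return 2 * cores * clock_ghz / 1000.0

# Published shader counts with approximate boost clocks:
titan_rtx = peak_tflops(4608, 1.77)  # ~16.3 TFLOPS
vega_ii = peak_tflops(4096, 1.70)    # ~13.9 TFLOPS

print(f"Titan RTX: {titan_rtx:.1f} TFLOPS, Radeon Pro Vega II: {vega_ii:.1f} TFLOPS")
print(f"36 TFLOPS is {36 / titan_rtx:.1f}x the Titan RTX")
```

So the rumored 36 TFLOPS lands at roughly 2.2x the Titan RTX's peak rating, consistent with the "only twice" framing above.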
The thing which interests me is... Why is it LGA? Is it because it's just an engineering sample meant for testing boards, or is Intel considering a GPU board with an interchangeable GPU and fixed GDDR for a cheaper upgrade path? Or are we actually looking at 4 GPU units next to 4 HBM memory stacks?
It looks about the size of an LGA 3647 package. I am sure it would need eight memory channels.
Don't worry, Intel is 15 years behind in GPU architecture acceleration in comparison to AMD, and 25 years compared to Nvidia.
I'm sure Intel has hired (pilfered) people from AMD and Nvidia to get things done. Raja Koduri for one. Google it. There are a lot more.
Intel's commercial DG1 GPU, which is part of their Xe LP (Low Power) range, has the power of the PS4 GPU. While it pales in comparison to modern offerings from AMD and especially nVidia, it does show how far they've come in a short time.
Not bad figures for a card that isn't for gaming; it's part of the low-power development platform.
IMO the most interesting part will be the features they offer via software, beyond raw performance. Personally, as a Linux user ...
Intel Lands More Graphics Code For Linux 5.5 - Jasper, More Intel Xe Multi-GPU Prepping - Phoronix
... and if they decide to offer OpenCL support without needing proprietary drivers, the advantage goes to us, the users... market competition = market health
black_zion wrote: is Intel considering a GPU board with an interchangeable GPU and fixed GDDR for a cheaper upgrade path?
It could have massive ramifications in the enterprise market as well. Imagine if a supercomputer with a thousand GPUs, or more, could be built with the ability, in a few years' time, to have those GPUs upgraded to models 75-100% faster at 50% of the cost of replacing the entire board. Not only would it no doubt win awards for sustainability, it'd also give them a major advantage over their competitors when you're talking cost savings measured in six or more figures.