One thing I find a bit odd about AMD's CPU strategy is that there is one socket aimed at somebody who only games on their PC (or at least for whom gaming is the only demanding task), apparently never doing anything that would require more than a single GPU and one or two NVMe drives, and another socket really aimed only at huge datacenter servers: it is too big to fit into almost any consumer case, and its individual threads are too slow for something like gaming.
I bring this up because I fall into the range of doing many things with my home office computer (under Linux): I need seven expansion slots with good performance (six, if the onboard NICs didn't burn out under sustained heavy load), a fairly high core count (though nothing extreme), and fast cores, since I also game on the box and generally run performance-sensitive end-user workloads rather than heavily threaded server loads. The server stuff is there too, because Linux can happily host all of the above on one machine.

What it seems like AMD could offer in their next-gen lineup, but has neither announced nor leaked, is the medium-sized socket option: up to 4 core tiles (up to 32 cores), 64 (or more) PCIe lanes, and 4 RAM channels.

My options are limited: I don't have the physical space for many smaller boxes, just room for one fairly big one (a Caselabs SMA8), and I don't have the time or money to build and maintain several small systems, so that route just wouldn't work out. Even with the Caselabs SMA8, while there was an SSI board option before they went out of business, I needed the extra space in the top chamber for both liquid cooling components and disk arrays. With both of those in there, there is just not enough space left for a giant motherboard. There is just enough space for up to a medium-sized Intel socket 20xx workstation board, which fits the balance I am after between core counts and clock speeds while also having the reliability to handle many serious tasks concurrently.
As it stands, I have to leave hyperthreading turned on, even though it has become more and more of a liability on the Intel platform, because without it the limited core count cannot keep up with all of the concurrent real-time tasks I sometimes run on this box. Even then I have to tiptoe around a little and juggle nice factors to get everything I am trying to do to play well on this one machine. AMD could be a good place for someone like me to go: with your superior technology you could make a higher-performance part that slots into the same space.
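For anyone curious what juggling nice factors looks like in practice, here is a minimal sketch using Python's standard `os` module (Unix-only; the same effect is usually achieved with the `nice`/`renice` shell commands). The nice value of 10 is just an illustrative choice for a background batch job:

```python
import os

# Make the current process "nicer" (lower scheduling priority) so that
# latency-sensitive tasks like a game win contention for the limited
# core count. PRIO_PROCESS with pid 0 targets the calling process.
# Raising your own nice value needs no special privileges.
os.setpriority(os.PRIO_PROCESS, 0, 10)  # nice 10: polite background work

print(os.getpriority(os.PRIO_PROCESS, 0))
```

Lowering a nice value back down (re-prioritizing upward) generally requires root, which is part of why this kind of tuning gets fiddly on a shared box.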
Something I want to add, especially now that Intel has taken a 28-core datacenter CPU and made a high-clocked, power-guzzling workstation version of it: here are my suggestions for how AMD could answer and beat it.
1. Go for a direct-contact integrated water block that is held in place by the CPU retainer, since it would be part of the chip package. Use liquid metal for the interface material, such as Thermal Grizzly Conductonaut. I have delidded one of my cheap TIM-paste Intel CPUs and applied Conductonaut, and it does indeed run much cooler while still allowing the die itself to expand and contract. In other words, it is better all around than either solid solder or cheap TIM paste; you just need to be careful with application, as liquid metal is electrically conductive. Direct-die-contact water blocks are already made in small quantities for certain easy-to-delid CPUs, and they do lead to cooler-running chips, because there are fewer layers in the cooling stack and the coolant gets closer to the die it is cooling.
2. Have two high-performance parts aimed specifically at liquid-cooled rigs. One would be the aforementioned mid-sized competitor; Intel has nothing in that category, leaving it wide open for AMD to exploit. The second would be the direct competitor, except instead of Intel's 28-core monstrosity it would be AMD's 64-core monstrosity running at high clocks and insane power consumption. When one of these CPUs is detected in an appropriate motherboard, it should default to a normal TDP so that people don't accidentally fry their CPU, with the option to switch to a high power/performance mode accompanied by a warning to test thermals in regular mode first. If thermal problems are detected while in performance mode, switch back automatically and then apply whatever additional throttling is needed to protect the CPU.
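The fail-safe policy in point 2 can be sketched as a tiny state machine. This is purely illustrative: the TDP figures and trip temperature below are my own assumptions, not real AMD specifications, and the actual logic would live in firmware rather than Python:

```python
# Hypothetical sketch of the proposed policy: boot in a conservative TDP
# mode, allow an explicit opt-in to performance mode, and fall back
# automatically if the die runs too hot. All numbers are assumptions.

NORMAL_TDP_W = 180        # assumed conservative default limit
PERFORMANCE_TDP_W = 400   # assumed opt-in high-power limit
FALLBACK_TEMP_C = 95      # assumed thermal trip point

class PowerPolicy:
    def __init__(self):
        self.mode = "normal"  # always start safe

    def enable_performance(self):
        # User opt-in; the "test your thermals first" warning would
        # live in the firmware setup UI, not here.
        self.mode = "performance"

    def tdp_limit(self, die_temp_c):
        # Demote automatically if performance mode overheats; ordinary
        # thermal throttling handles anything beyond that.
        if self.mode == "performance" and die_temp_c >= FALLBACK_TEMP_C:
            self.mode = "normal"
        return PERFORMANCE_TDP_W if self.mode == "performance" else NORMAL_TDP_W

policy = PowerPolicy()
policy.enable_performance()
print(policy.tdp_limit(70))   # cool enough: stays in performance mode
print(policy.tdp_limit(96))   # too hot: drops back to the normal limit
print(policy.mode)
```

The key design point is that the demotion is sticky: once the trip temperature is hit, the chip stays in normal mode until the user deliberately re-enables performance mode.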
Some motivation for these ideas: right now I have an MSI 2080 Ti Seahawk EK X in a custom loop. While running FurMark at stock settings, the GPU (per nvidia-smi) reports 300 W of continuous load; the card hops up to the mid-30s C and then levels off after several minutes at 41 C, while a sensor at the radiator intake measures a room temperature of ~24 C. Even then the cooling system is whisper quiet, granted this is with a 560 mm radiator. If you pay attention to how hot your CPU, or even your GPU, gets on air under load and either directly measure or estimate the watts burned, this temperature rise is nothing. It is phenomenally low, and from a card you just buy and put into your computer. Why not try to get this kind of cooling improvement into a many-core CPU and clock it to run like a bat out of hell?
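To put a number on why that rise is so low, here is the back-of-envelope thermal resistance implied by the figures above (300 W sustained, 41 C GPU versus ~24 C room air). This treats the whole loop as one lump, which is a rough simplification:

```python
# Rough effective thermal resistance of the described loop, from GPU
# reported temperature to room air at the radiator intake.
power_w = 300.0      # continuous load reported by nvidia-smi
gpu_temp_c = 41.0    # settled GPU temperature under FurMark
ambient_c = 24.0     # room temp at radiator intake

delta_t = gpu_temp_c - ambient_c          # temperature rise over ambient
r_thermal = delta_t / power_w             # degrees C per watt, lump sum

print(f"{delta_t:.0f} C rise, {r_thermal:.3f} C/W")  # prints "17 C rise, 0.057 C/W"
```

Around 0.06 C/W for a whole card-to-air path is far better than typical air coolers manage for a CPU alone, which is the point: that headroom could absorb a high-clocked many-core part.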