
Gaming Discussions

Adept III

RTX 3000 and what I think

Hi, my fellow Red Teamers!

September 1st marked the presentation of the new RTX cards.

As a long-time AMD user, it was really cool to see what Nvidia has done with RTX and AI.

Yes, they are on that Samsung 8nm process; yes, the cards consume a lot of power; and, as always with Nvidia, they are not cheap.

I don't know, but it seems like AMD has a lot of work to do. I am not happy with the removal of FRTC, and I recently had a black screen "again". They have done some fine work and implemented some features into the driver. Picture quality is really nice compared to Pascal.

I hope AMD/Radeon has something fine in the works with Navi 2X. I will wait to compare the new GPUs, but I want to say that I was really impressed with RTX 3000.

What do you think?

54 Replies
Adept III

The lineup has some big gaps that should be filled. Disappointed not to see DisplayPort 2.0. 8 GB for the 3070 and 10 GB for the 3080 seem somewhat low for a next generation that then balloons up to 24 GB. Concerned about the dense, high-power assembly on the reference models; putting a lot of power in a small space must be tricky, or it can get ugly. Hot air from the reference models will blow right into an air CPU cooler and could degrade the performance of the PC. Maybe not a significant issue, but I look forward to tests analyzing that aspect.

The suggested performance increase looks good but is not really confirmed; no gameplay was shown with actual FPS. The 3090's three-slot solution is a non-starter; the EVGA AIO and water-block solutions are pretty cool, except there is no known release date for those. Nvidia claims a 1.9x efficiency increase, yet Jensen says the 3070 (a 220 W card) is the performance equivalent of a 2080 Ti (a 250 W card). How does one do the math for that kind of efficiency increase? Marketing BS: unclear, wordy, and misleading at best. Maybe at idle the 3070's fan stops while the 2080 Ti's keeps spinning.
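To illustrate the skepticism about that math, here is a rough perf-per-watt sanity check using the keynote's own numbers (the equal-performance claim and the quoted board powers are taken at face value; this simple peak-to-peak comparison is not necessarily how Nvidia derived its 1.9x figure):

```python
# Rough sanity check of the claimed 1.9x efficiency gain, using the
# keynote's own numbers. Assumption: an RTX 3070 (~220 W board power)
# performs about the same as an RTX 2080 Ti (~250 W board power).
perf_3070 = 1.0        # relative performance (equal, per the keynote claim)
perf_2080ti = 1.0
power_3070 = 220.0     # watts
power_2080ti = 250.0   # watts

# Perf/watt improvement = (perf ratio) / (power ratio)
eff_gain = (perf_3070 / power_3070) / (perf_2080ti / power_2080ti)
print(f"Implied perf/watt improvement: {eff_gain:.2f}x")  # about 1.14x, nowhere near 1.9x
```

A likely explanation is that the 1.9x figure compares power draw at matched performance on an efficiency curve rather than comparing two cards running flat out, which is why the simple math above lands so far from the marketing number.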

We really need the cards out there for real evaluation to understand the pluses and minuses. I would not mind replacing my 1080 Tis with a 3080, but I am hoping AMD also has some very viable next-generation cards coming. My other machine with a 5700 XT AE is sweet, but it will not be powerful enough when I upgrade the display to a higher resolution.

I like the somewhat more reasonable pricing on the cards, at least on the 3070 and 3080. The 3090 seems bloated not only in size but in cost. It does have 24 GB of GDDR6X, but it is not clear whether it has a cache controller to extend that amount, like the Vega's HBCC, which made rendering options much more versatile. Still, that is a lot of RAM.

I am hoping AMD has a card that competes with and basically beats a 3080, two slots at most, and I would prefer a higher-end version with liquid cooling. The Vega 64 LC design was awesome. More RAM, as in 12-16 GB, and less power usage. DisplayPort 2.0 in addition to HDMI 2.1: that in itself would almost seal the deal, since I keep my cards a rather long time and future monitor options, VR headsets, etc. come into play.

AMD and software: they did some cool stuff in the drivers. Performance Tuning and Profiles are awesome, as is the monitoring. I would like to see AMD Link Server be able to use the phone's camera and mic, plus options in the streaming editor for multiple cameras. Add links in each section of Radeon Settings to professionally done videos explaining the choices; get the YouTube pros to help out and pay them for their service (JayzTwoCents running down Performance Tuning, Hardware Unboxed on game profiles, Gamers Nexus on the web and Link software, and so on). A program in the drivers like Nvidia Broadcast, using AI and beyond, would be very useful.

AMD needs something like DLSS; call it MLSS (Machine Learning Super Sampling). DLSS is a very effective reconstruction technique, increasing not only IQ but performance as well.

I think I will see what AMD's options are and hopefully will not have to wait that long. Also, if AMD has better supply and Nvidia does not, they could be the ones selling more this year.

You make some extremely compelling arguments. Thank you for such a thorough breakdown, and I agree with you. I am now running my Vega 56, as my 1080 Ti degraded and does not perform well. It is now in my streaming machine as an encoding processor for recordings, while I use a 1950X for the stream encoding. I hope AMD video cards get better on-board encoding; that would help me kick team green completely.


RE: no game play shown with actual FPS
Hands-on with RTX 3080 - is this really Nvidia's biggest leap in gen-on-gen performance? • Eurogamer... 
Nvidia GeForce RTX 3080 Early Look: Ampere Architecture Performance - Hands-On! - YouTube 

RE: the 3090 three-slot solution is a non-starter
Is it really that much worse than the RX Vega 64 AIB or RX 5700 XT AIB cards?
They are mostly 2.5-2.7-slot behemoths anyhow.

RE: but not clear if it has a cache controller to extend that amount like on the Vega's HBCC making rendering options much more versatile.
I guess you might mean Radeon ProRender?
Does it actually work?
HBCC was not supported in Blender Cycles rendering the last time I checked.
HBCC is not working at all in Borderlands 3.
It looks like it has been dropped.

RE: AMD and software, they did some cool stuff in the drivers
Disagree. They should have stayed with the Adrenalin 19.12.1 GUI/UI and improved it.
Navi drivers have been a disaster.


Those Eurogamer gen-on-gen performance results are a bit skewed. They compare the RTX 2080 to the RTX 3080, but is that really the comparison to make? The RTX 3080 is the flagship card, so it should be compared to the RTX 2080 Ti. The GTX 1080 and GTX 980 occupied the same place in the product stack, behind their respective Tis and Titans. The RTX 2080 and RTX 3080 do not.


The RTX 3080 costs the same as the price I paid for a Palit RTX 2080 Gaming Pro OC (same specification as an RTX 2080 FE).
So it is a fair comparison.
The RTX 2080 Ti is far more expensive than both.

I do not care what Nvidia calls the cards.
It is performance/price that I am interested in.


Right, but the Eurogamer review is looking at generation-to-generation performance. The RTX 2080 cost the same as the GTX 1080 Ti. Would you compare those two in a generation-to-generation comparison?


I am perfectly fine with new GPUs being compared on price.
Most reviewers do that.
Your "generation to generation comparison" argument, requiring a look at top-of-the-range GPUs at vastly different prices, is just silly.

I think you are unhappy that the Nvidia price/performance improvement looks so good in the Digital Foundry tests, even on that Samsung 8nm process.
Nvidia likely called the RTX 3080 their "flagship card" because it has a real TDP of 320 W in a big, expensive cooler, and there is not much room for more performance unless they go for the large RTX 3090 with a real TDP of 350 W.

Before you start slamming Nvidia over the RTX 3080's 320 W power input, note that an RX Vega 64 Liquid and an RX 5700 XT pull more than that with the power slider set to +50% in the driver.


I think AMD are lucky that Nvidia do not have a better process to implement those GPUs on yet.


That is largely Nvidia's own fault. Their move back toward larger monolithic dies makes it financially impractical to move to smaller EUV-based lithography, as the defect rate would make it cost-ineffective to even release the GPUs. Those big dies are a direct result of all the proprietary hardware on the RTX series, like tensor cores and ray-tracing cores. But if software doesn't utilize the hardware, it is just wasted space. It's the pixel/vertex shader problem all over again, a problem Nvidia had a hand in killing with the GeForce 8000 series and unified shaders.

It does look like Nvidia is just trying to get the hardware out there so developers start using it, before moving to a far more efficient chiplet design built on an EUV process. NVIDIA Next Generation Hopper GPU Leaked - Based On MCM Design, Launching After Ampere 

So that will actually be really interesting, as RDNA3 is also rumored to be an Infinity Fabric-based chiplet approach, similar to what AMD did with Ryzen. I think Intel is also looking at a chiplet design for its Xe series in 2021? In any event, GPUs will likely become far more scalable in the 2021/2022 timeframe.

Not applicable

Nice topic.

The real question, in my opinion, is to what extent AMD is betting on consoles versus PC gaming.
Because Nvidia is pulling out nice stuff on both the hardware and the software side.
And I'm not sure AMD has the capacity to go much further than what Nvidia has proposed.

Especially because for some time now, AMD has been powering both console and PC gaming.
In my opinion, they are slowly drifting more and more toward console development.

Also note that Nvidia has been a pretty clever girl, releasing its GPUs before the console launch!

Fortunately for me, I skipped AMD GPUs for the last two generations, and I'm actually running a 1080 under water.

But before that I had only AMD/ATI GPUs, the first being an ATI Rage 128 Pro, then a 9600 Pro!

And as things stand now, I no longer recommend any AMD GPUs in any shape or form.

The Nvidia RTX 3090/3080/3070 launch just killed the Radeon GPU lineup.
Even without this launch, Radeon has been failing because of bad AIB card quality and bad drivers.
They have lost 9% market share in discrete GPUs since the RX 5700 XT launch.

I think Big Navi is dead in the water before it releases, based on what I have been seeing reported about it.
AMD were projecting a 1.5x performance-per-watt improvement.
Nvidia claims 1.9x.

I think the Navi 10 RX 5700 XT needs a price drop to 250 to stand a chance of selling, as it has now dropped way down the GPU performance hierarchy.

I now expect a delay in the launch of Big Navi until 2021, and I think AMD will have to jump to HBM2e to reduce the power consumption of the memory and memory controller and increase performance. They can always blame "COVID-19".

I hope AMD does something about the Adrenalin 2020 GUI/UI, the AMD bug reporting tool, and the installer.
Big changes are needed at Radeon.

I have been using and supporting AMD GPUs for a long time, but things are beyond a joke now.

Maybe I should not have responded, as I am not a "Red Teamer".

Thank you all for your comments! I really enjoy this sort of discussion.

I don't have Twitter or any other social sickness. ;)

I have to say that I really enjoy Ryzen! And I hope that AMD can do the same with Radeon.

The first step to a better future was to separate the gaming and professional GPU lines.

RDNA2 will be on a more mature 7nm node.

AMD hired more personnel, especially for ray tracing and GPU development.

They will bring a whole new lineup from the bottom to the high end.

I think AMD will also bring something like DLSS.

It's a good thing to have consoles running your hardware, and I hope they will benefit from it.

With that ecosystem, they could really do things that neither Nvidia nor Intel can.


The Zen 2 Ryzen 3000 series is excellent.
The Zen+ Ryzen 2000 series is very good, provided you become good at tweaking the BIOS, especially RAM timings.
I am primarily looking forward to the Zen 3 launch; there is lots of interest in new AMD PC builds.

I will watch what RDNA2 brings with interest.

Not applicable

The real first step was AMD's comeback into the OEM market.

This means finally having partners that help you develop the products they need, and thus selling more professional cards and CPUs, which enhances product development.

RDNA2 will come on the N7+ node; until now, all the CPUs and GPUs were built on the N7 or N7P nodes, without the use of EUV.
N7+ is the first node to really use EUV, and it could have the same yield issues we saw with the 3000-series launch.

N7+ is not N7P-compatible and requires a re-implementation of the IP, bringing 10% more clock speed at the same power, or 15% less power at the same clock speed.
The Ryzen XT CPUs are a good example of the N7-to-N7P process evolution, with yields and binning changing over time!

I'm not happy that AMD is now powering both worlds, because I fear AMD doing a Star Citizen:

taking money from one side to enhance the development of the other side's project because it is more lucrative.

Side note: I started to write an article about photolithography nodes, but I stopped because it is too long and delicate a subject to explain.

Adept III

I think it's too expensive for the majority of buyers, and thus the market share of these products will be small, around 11%, judging by the Steam hardware survey share for GPUs exceeding the 3070's launch price.

AMD could use their chiplet expertise to focus on volume production of small, cost-effective "shader" dies and on-chip integration of multiple shader chiplets to scale through mid- and high-end GPUs, as with Ryzen. With their console order book, they could significantly undercut Nvidia's production costs. Then they just need to execute on software, drivers, and brand appeal to increase market share on the PC.

RTX IO was interesting in that it introduced a problem I wasn't aware existed. Given that this technology is on board the Xbox Series X and PlayStation 5, I'm sure we will be hearing from AMD about their implementation for PC gaming.


You do have to take what Nvidia publishes with a grain of salt. The performance improvement of the RTX 3000 series rests heavily on the improved FP32 throughput of the CUDA cores. The 8704-CUDA-core figure only applies to the RTX 3080 in FP32 128 mode. If there are any INT32 instructions to execute, then the GPU has to run in FP32 64 + INT32 64 mode. That gives it 4352 functioning CUDA cores for both FP32 and INT32, exactly the same count as the RTX 2080 Ti.

So in a lot of workloads, the performance increase will just come down to clock-speed increases and efficiency gains from the new process. Memory bandwidth is also only about 15% higher than the RTX 2080 Ti's.
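The dual-mode behavior described above can be sketched with a toy model (simplified assumptions: each of the RTX 3080's 68 SMs has one 64-lane FP32-only datapath plus one 64-lane datapath that runs either FP32 or INT32 in a given cycle, never both):

```python
# Sketch: effective FP32 "CUDA cores" on an Ampere-style RTX 3080 as a
# function of how busy the shared FP32/INT32 datapath is with INT32 work.
SMS_3080 = 68          # RTX 3080 SM count
LANES_PER_PATH = 64    # lanes per datapath per SM

def effective_fp32_cores(int32_fraction: float) -> float:
    """Effective FP32 lanes when int32_fraction of the shared datapath's
    cycles go to INT32 instructions (toy model, not real scheduling)."""
    shared_fp32 = LANES_PER_PATH * (1.0 - int32_fraction)
    return SMS_3080 * (LANES_PER_PATH + shared_fp32)

print(effective_fp32_cores(0.0))  # pure FP32: 8704.0, the marketed figure
print(effective_fp32_cores(1.0))  # shared path all INT32: 4352.0, a 2080 Ti's count
```

Real shader workloads sit somewhere between the two extremes, which is why the realized gain over Turing varies so much from game to game.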

Having said that, Nvidia is, worst case, releasing a 2080 Ti with higher clocks, more memory bandwidth, and more efficient cores for $699 versus the $1,199 of the original. And in some FP32-bound workloads, that cheaper GPU can see an even greater performance increase. So if nothing else, it is a win on price alone, although this effectively moves the flagship price point back to the $700 mark that the GTX 1080 Ti occupied. So I guess it is less a win and more an undoing of Turing's fail.

Beyond that, these are still big, power-hungry GPUs on a new process, so volume will likely be extremely limited at launch. AMD could have a chance to take market share if it can deliver more cards to the channel on the more mature TSMC 7nm process, as long as RDNA2 is priced competitively. However, the recent Ethereum boom really favors RDNA GPUs, and if RDNA2 is also an excellent mining GPU, AMD may lose their cards to miners as opposed to gamers.

Not applicable

Well, looking at the marketing materials released by both companies following the Nvidia GPU announcement, one can clearly see the difference in marketing skill and approach between the two.

Where Nvidia addresses its products to the overall gaming audience, AMD directs its marketing toward its fanbase, insisting on the message of renewal.
AMD as usual, instead of acknowledging its own products and customers, wipes everything away by proclaiming the beginning of a new era.

I cannot stress this enough: in my opinion, this kind of marketing is not effective at putting your own products forward, especially when Nvidia has proposed a €1,500 card.

One can argue at this point that AMD has no clear answer to the 3090; otherwise, we would not have an almost €800 difference between the 3080 and the 3090.

But AMD may have something under the hood that will fill the artificial void left in the Nvidia lineup.

There are no GPUs in the €800-1,500 price range, which means that Nvidia has enough room to tape out Ti, Super, and Xtreme versions of its GPUs later on.

So yeah, I can only advise customers to wait a few months before buying a new CPU or GPU.

The market is too hot right now; the companies will not bother to put the customer first.

And I can only guess that the AMD RDNA2 drivers are still a mess as we speak.

As usual, AMD has to deal with engines running on different platforms, while Nvidia directly feeds 3D engine developers with its own GPUs.

Side note: apparently AMD is selling mountain bikes now...

Nvidia is certainly very skilled when it comes to marketing. The RTX 3090 is effectively a replacement for the RTX Titan. The RTX Titan, however, was never marketed toward gamers; the RTX 3090 is. So even if Big Navi does manage to upset the RTX 3080, AMD still won't have topped Nvidia's best "gaming" GPU, since Nvidia has redefined its Titan-class product as a gaming GPU and brought the price down.

Not applicable

You don't know whether Nvidia will release an Ampere Titan.

But there are things one can deduce by looking at the transition from the A100 die to the GA102 die.
The Nvidia architecture is quite flexible and modular; Nvidia could still push beyond the boundaries of the current GA102 die size.

Nvidia could also propose an updated CUDA/Tensor/RT core configuration, more powerful than the 3090.


It seems unlikely, as the marketing for the RTX 3090 claims "Titan-class" performance. It would be unseemly to state that and then release a Titan that outperforms your "Titan-class" card.

However, it is possible. The GA102 could support a maximum of 96 SMs based on the GPU specs. With only 82 SMs enabled in the RTX 3090, it should be possible to make a "full-fat" GA102 capable of 6,144 mixed-mode calculations per clock, or 12,288 CUDA cores if you base the count on max FP32 throughput only.

Journeyman III

I might test out a 3070, but at this point I'm happy with my AMD stuff; just a few software tweaks are needed, please, such as better camera-shake compensation and maybe a slightly better aim assistant like Nvidia's. I compared them side by side, and the Nvidia has 75% better recoil control.


Explain what you mean by recoil control. Also, that aim assist sounds like a feature of gaming computers, not of an expansion board (which is what a modern desktop GPU is, most of the time).

Cyberstorm64 of UNYU
Tessellation Enjoyer.
Adept III

With some hindsight now, given the RTX 3080's pitiful launch: yes, I was trying to obtain a 3080 and never really had a shot at getting one. If Nvidia wants me to buy one of their cards, then they had better have one available for me to buy in a reasonable fashion. Expecting customers to bend over backwards for them is a pure shame. Sony and the PS5 pre-orders are the better way to take care of your customers. The point is, AMD had best be clear on availability, hopefully with substantial numbers of cards; if not, let us know. I recommend pre-orders on the AMD site if possible, as well as at other retailers, for a given quantity (not all the cards), then the rest when the full launch occurs. People generally like to be able to plan not only a graphics card but also the system and other items, like a monitor/HDTV, around it. TAKE CARE OF YOUR CUSTOMERS, PLEASE.

I would like to see AMD do a showcase, as in a demonstration at the Zen 3 launch: a mystery system with a Zen 3 CPU and an RDNA2 card up against Intel's best, a 10900K, and an Nvidia 3080, a few games side by side maybe. It would be fun and would definitely hit hard on the internet if it shows rather favorably. Nothing like a mystery to pull people in.

As a side thought, I think a wise option would be this: have three performance modes in the drivers.

  • Quiet mode
    • In Quiet mode, the card does exactly what AMD indicated, reaching the 50% perf/W improvement over Navi.
  • Standard mode
    • In Standard mode, the card meets all the stated specs in clock frequencies and power, such as 250 W, 2 GHz, etc.
  • Ruby Fast
    • Ruby Fast comes with a warning about going that fast: the organization will deny any knowledge of your involvement when you go on that mission, and you are pushing the envelope at your own skill level and take all responsibility . . .
      1. Anyway, this could push the card up to 300 W or more, with clock speeds as fast as reasonable, beating, let's say, a 3080.
      2. Like a secret nitro button that wins the race.

The above would be having your cake and eating it too, or as close as you can get to that.


Clearly you forget the Vega FE, Vega 64/56, Radeon VII, and RX 5700 XT launches.


I was able to pick up a Vega 64 and a Radeon VII within a week of launch with little difficulty. So far, my local shop had 12 cards on the 17th and an additional 5 on the 18th; nothing since. This is as bad as I have seen for launch volume.

The RTX 3090 is unlikely to change much either. Early results look like they are showing pretty minor performance gains over the RTX 3080 for double the price.


RE: I was able to pick up a Vega 64 and a Radeon VII within a week of launch with little difficulty.

Not my experience at all.
The RX Vega 64 Liquid, for example, was launched with "Aqua Pack" bundles that were impossible to get.
The initial price for the RX Vega 64 was fake, just for the reviewers, with a small amount of cards at launch sold with an AMD rebate.
The Radeon VII was sold out within 10-30 minutes of launch; I tried to buy one before even having time to read a review. Lucky miss.


Not sure what you mean. I bought a Vega 64 LC within a week, and bought a 5700 XT Lisa Su edition the day after it went up for sale on the AMD website. My Vega FEs were bought later at a lower cost. I had no problem buying an Nvidia 1070 two weeks after launch. Nvidia messed up this launch, did not deliver what was promised, and made a lot of folks waste time on something that was never really going to be readily available.

If Nvidia wants to sell me a graphics card, they had better have them available. I am not going to bend over backwards, hit F5 over and over again across multiple pages, and wait in line for the privilege of owning an Nvidia card. That is not how you take care of your customer. Nvidia had their chance; now it is AMD's.


While we are talking about and comparing RTX 3xxx to RDNA 2, please remember that RDNA is designed to be a consumer GPU. The 5700 XT Founders Edition is currently the fastest and most powerful card that uses the Navi architecture, but the recently released Pro VII is also very powerful. (I might get two of those instead of a Pro SSG when I build my big workstation, when I have the large amount of money needed to buy a machine like that.) Most professional cards from AMD use some variation of the Vega architecture at their core. But the Pro cards are way more expensive than the consumer cards.

So, when is a Pro GPU better than a consumer GPU?

  • CAD and 3D modeling
  • Being expensive
  • Supporting special rendering APIs

When is it worse?

  • Some games behave strangely with Pro GPUs
  • The Pro SSG gets confused about what to cache and not cache
  • They are expensive, and most games don't require that amount of horsepower (the only thing you will notice is more stability compared to a consumer GPU)

But if you can afford an RTX GPU, you can afford the new Pro GPUs.

So why not?

Cyberstorm64 of UNYU
Tessellation Enjoyer.

The Radeon Pro VII is a Radeon VII with less-limited FP64 (the limit is set by AMD) and an Infinity Fabric link.
AMD Reveals Radeon Pro VII: A Workstation Card For When You Need It All 
You may as well buy a Radeon VII instead and use the Radeon Pro Software for Enterprise drivers on it, unless you really need certified drivers.


Yes, but the normal VII is not easy to find.

Cyberstorm64 of UNYU
Tessellation Enjoyer.

If you have a lot of money, then you could just buy GPUs until you find one that you like. Go by trial and error.


Cyberstorm64 of UNYU
Tessellation Enjoyer.

Very funny.
The best all-round GPU I have is a Palit RTX 2080 OC.
I do not need an RTX 3080.
I will not be buying any more AMD GPUs for personal use, although I might do PC builds with them.


It looks like some Nvidia RTX 3080 AIB GPUs have hardware problems due to capacitor choices: The RTX 3080 Launch can't get any worse... Right? Wrong... - YouTube 


Cards from Colorful and, surprise surprise, Gigabyte have problems.

This article explains it in more detail:

RTX 3080 Crash to Desktop Problems Likely Connected to AIB-Designed Capacitor Choice | TechPowerUp 

The RTX 3080 Founders Edition GPUs are good.


Take a look at Igor's Lab (YouTube and homepage) regarding the Nvidia problem. That Samsung chip and... they have big problems with those GPUs.


I read it.

Some manufacturers like EVGA and Colorful already claim to have fixed the problem by changing the capacitor configuration and have sent cards to reviewers.

An Nvidia driver fix for some games that crash has already been released.

Here is the link to the Igor's Lab article: NVIDIA GeForce RTX 3080 and RTX 3090 and the crashes - Why capacitors are so important and what’s be... 


It looks like Nvidia modified the voltage/frequency curve in the new drivers to improve stability.