
General Discussions

Anonymous
Not applicable

Optimized AMD supercomputers and software?

https://www.amd.com/en/products/exascale-era 

In the past, some of the world's fastest supercomputers were built with stackable Opteron CPUs.

But back then, and even now, most code and software is optimized for Intel CPUs. AMD CPUs have always supported the same SSE and MMX extensions as Intel, and then added a number of extra extensions of their own which were more complex and longer; MMX or SSE were maybe around 20 instructions each, while AMD's 3DNow! and other extensions were closer to 30 or more apiece. So as long as somebody properly optimized their games and software for AMD, you could do more: better and faster 3D rendering, animation, physics and many other things that need maths, so even video games should benefit.

AMD has released plugins and software like ProRender, developed with some staff who worked on Pixar's RenderMan, as a plugin you have to get and install into your 3D software, and it's an industry standard. Years later NVIDIA sort of stole it and marketed it for games, as far as I can see, and called it RTX or something dumb like that; that's how things look to me.

But basically, to get the most out of an AMD CPU you need a compiler that targets it properly and the correct flags enabled. Yet for the life of me I can't find the full lists of supported CPU extensions clearly shown and advertised on AMD's website for any of their CPUs.

Without knowing what extensions the CPU has and supports, we can't compile for them; instead we're left searching the AMD developer section for hours to find all the necessary tools and plugins to use 3D software the way it's meant to be used, while NVIDIA apparently just paid all the software developers or something so their stuff is automatically loaded, ready and set by default.
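For anyone in the same boat: you can at least see what the chip itself reports. On Linux, /proc/cpuinfo lists the feature flags, and GCC and Clang can query features from code and target a specific Zen generation with -march flags (for example -march=znver2 for Zen 2, if I have that right). A minimal sketch, assuming a GCC or Clang build on an x86 machine; the feature list below is just a sample:

    // check_features.cpp - print which SIMD extensions the running CPU reports.
    // Build examples (GCC/Clang):
    //   g++ -O2 check_features.cpp -o check_features   (generic build)
    //   g++ -O2 -march=znver2 ...                       (target Zen 2 specifically)
    //   g++ -O2 -march=native ...                       (whatever this machine is)
    #include <cstdio>

    int main() {
        __builtin_cpu_init();  // harmless here; strictly only needed if checks run before constructors
        std::printf("sse2:   %d\n", __builtin_cpu_supports("sse2"));
        std::printf("sse4.2: %d\n", __builtin_cpu_supports("sse4.2"));
        std::printf("avx:    %d\n", __builtin_cpu_supports("avx"));
        std::printf("avx2:   %d\n", __builtin_cpu_supports("avx2"));
        std::printf("fma:    %d\n", __builtin_cpu_supports("fma"));
        return 0;
    }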

This stuff should be on the front page with HUGE BUTTONS: "HOW TO USE 3D SOFTWARE WITH AMD" and "HOW TO DEVELOP GAMES AND SOFTWARE FOR AMD".

You see, when NVIDIA started out, graphics cards were very expensive and used quadrilateral polygons. NVIDIA wanted to halve their memory buffer, wireframe and hardware requirements, so they cut the quadrangles (rectangles) in half and said "our graphics cards are better, they cost less and support triangles! and we draw double the number of polygons and have double the fill rate" (every rectangle was two triangles, one flipped and reversed, so they counted them twice or whatever). But they still weren't as fast as true graphics cards that cost a lot, so NVIDIA faked their benchmarks like you wouldn't believe, for decades. NVIDIA's literal, exact, precise and only business function was to "cut corners"; there's no other way to describe turning rectangles into triangles to save on hardware costs and then reporting double the polygon count.

They crushed and bought out all the competition (bye bye 3dfx Voodoo), drastically slashed prices, and sold cheap garbage that looked similar to real graphics hardware. They used all this "memory clamping" and specific, limited texture compression to make their games use less VRAM, so they could sell cheaper cards with less VRAM for more money, while probably slowing things down for AMD at the same time. They looked at all the world standards and industry standards and 3D software and games and said, "if we make our own games and software and sell enough graphics cards, everyone will have to use our broken way of doing things." Their shadows were broken non-standards, their everything was half-baked, half-arsed and backwards, and whatever worked but looked similar was fine. Their texture quality was awful. But it was cheaper, and with the faked benchmarks and paid reviewers, the world, not knowing any better, bought them up and physically altered their software, games and apps to default to NVIDIA's lousy garbage.

Almost all the video game and 3D software companies on earth are outright bought and owned or developed by Chinese companies like Tencent. And the minute a publisher or developer is owned by a company like Tencent, their games no longer run fast on AMD or other American hardware. They will never be optimized and barely run at all on AMD, when they should maybe be a little faster than on NVIDIA in many cases.

One particular example is DOOM Eternal. DOOM 2016 was fantastic and runs on Vulkan. AMD's Mantle API was handed over to the "OpenGL Next" effort, which was then taken over by a stampede of different industry giants, probably Asian-owned, who changed its direction and created Vulkan; it's ideal for AMD systems and graphics cards, and DOOM 2016 performs VERY WELL on AMD in Vulkan. But then, once DOOM and every other game under the sun changes owner or publisher and sells out, they no longer run well on AMD at all, like DOOM Eternal; even Red Dead Redemption 2 appears better optimized, bleargh. Titles like Fortnite had sky-high FPS on AMD cards in some early alphas and betas. Then the Unreal Engine somehow turned to garbage overnight when they "optimized for NVIDIA".

I understand that all the poor people in third-world countries don't know what a graphics card is, don't have books or internet, and happily pay more for Intel CPUs or NVIDIA graphics cards, throwing their money away on inferior goods. So I can see that they are trying hard to reach their intended market: cheap goods at high prices for poor, uneducated people. But it's sooo very obvious when the same game gets released over and over and over and the only thing that changes is the graphics look a bit prettier. Look at titles like Mass Effect: Andromeda and tell me that wasn't a step backwards!? All the Call of Duties and Battlefields and whatnot reuse the same 3D graphics libraries and animations, which is fine, but at least make the core gameplay and game mechanics better. A game should feel fun just using the control keys and mouse to move around, and shooting stuff should feel satisfying like in DOOM 2016; jumping around, grabbing ledges, doing kill animations and exploding heaps of demons was mighty satisfying. But suddenly DOOM Eternal tried to put in anticheat that makes your whole computer slow down by 20%. Call me an idiot, but I think those guys are dumb; just do or use whatever was in DOOM 2016. If it ain't broke, don't fix it.

I'd love to see some things optimized for AMD CPUs and AMD GPUs, but I understand it's maybe not obvious to people how to do this. So I hope somebody can take the time to learn and figure it out, and post guides on Windows, games and Linux developer forums, and on Intel and AMD CPU forums.

I think the only computers out there actually optimized for AMD are the one or two supercomputers running AMD hardware with custom, made-to-order software, probably written by AMD themselves. I'm pretty sure even Microsoft hasn't been trying as hard as they maybe could with AMD; I mean, there should maybe be entirely separate OS installers and code and stuff, you know? I reckon most of the OS just isn't AMD-friendly would be my guess; you can say it runs, but it probably isn't optimized at all. So am I the bad guy for wanting people to optimize for AMD? If they won't optimize for AMD, they shouldn't say they support AMD; they don't, they only support Intel and NVIDIA.

0 Likes
36 Replies
fyrel
Miniboss

Not sure if it's what you are looking for, but if you head over to https://developer.amd.com/ you will find a load of tools, SDKs, libraries and other resources aimed at developers.

0 Likes

C++ is very close to the metal for software development. There's not much need to go further with ASM coding unless it's very demanding, CPU-intensive work.

I have lots of experience with parallel programming

0 Likes
Anonymous
Not applicable

C++ was very good back in the '80s and '90s, when computers were single-core, single-GPU systems and didn't need a wider variety of ways to do a wider variety of things. But it's no longer the '80s and '90s. This is the problem with people.

Having a basic understanding of C++ is seen as good because it helps you learn the basics of coding. Knowing it may often mean you can more easily or quickly grasp modern, easier and better languages. But using it today isn't the best choice in most cases for almost anything and everything. Have a read of a random article I googled by typing the important words "best modern programming languages" into search.

https://towardsdatascience.com/top-7-modern-programming-language-to-learn-now-156863bd1eec 

There may be many other articles claiming one language is better than another, or newer features or whatever; it depends on what you intend to do with it, really. But if you intend to do anything post-2007 you probably don't want C++, for the reasons stated in the article: languages that aren't from the '80s and '90s are much better than C++ on computers and devices that weren't around in the '80s and '90s. You stay with C++ from the '80s while, in the present, beyond the year 2000, Microsoft's own language C# (read: C sharp, like the musical note), Google's Dart, and Mozilla's Rust are all like C++ to learn and use in terms of syntax and structure, so you type code in a similar way, but they do much more and better and have important features C++ doesn't, like memory protection/security, or, you know, more functionality. So in conclusion, basically anything modern and multiplatform that was created with the computers, mobile phones and other devices of the last 15 years in mind would clearly, obviously and importantly be better.

This is part of the problem: you can't optimize for a modern, secure, multicore-CPU or multi-GPU AMD computer like a Ryzen or a Threadripper with a single-GPU, single-CPU, memory-insecure language from the '90s. FORGET ABOUT C++ already; its day is done, it's in the graveyard. So you can understand why I believe nobody is optimizing for AMD; the idiots are blowing the dust off grandpa's learn-to-code books and saying

"OH SO THATS HOW YOU PROGRAM!? now I must go develop modern games and applications and software using techniques that are decades old and create a real piece of garbage and tell everybody how great I am and make them think nvidia and intels hardware are worth real world money." 

Hmm, ultimately people keep spending years of their lives trying to reinvent the wheel over and over. "Hey, check out my round thing that rolls. Look, I'm super smart, way smarter than everybody else at Google or Microsoft or any other decades-old, multibillion-dollar global tech company that basically invented the programming languages we use today. Look at me pat myself on the back. It's a shame my ancient wheels just sorta roll really poorly and don't seem to take off and take flight on AMD's futuristic VTOL-jet-fighter computer systems; I'd better waste a truckload of money on more expensive Intel and NVIDIA products so my wheels can roll a bit faster." That's what I hear every man and his dog saying; they're just using different words or offering different software and games.

0 Likes

OpenMP is the tool of choice for C++; it is supported by Visual Studio, Cray compilers, etc.
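For anyone following along, this is roughly what that looks like in practice; a minimal sketch assuming OpenMP is switched on at build time (MSVC: /openmp, GCC/Clang: -fopenmp):

    // omp_sum.cpp - sum an array across all cores with one OpenMP directive.
    // Build: g++ -O2 -fopenmp omp_sum.cpp   (or: cl /O2 /openmp omp_sum.cpp)
    #include <cstdio>
    #include <vector>
    #include <omp.h>

    int main() {
        std::vector<double> data(50000000, 1.0);
        double total = 0.0;

        // The pragma is the "user-directed" part: the programmer marks this loop
        // as parallel, and the runtime splits its iterations across threads.
        #pragma omp parallel for reduction(+:total)
        for (long long i = 0; i < (long long)data.size(); ++i)
            total += data[i];

        std::printf("total = %.1f, threads available = %d\n", total, omp_get_max_threads());
        return 0;
    }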

0 Likes
Anonymous
Not applicable

If you read the "Scope" section of the OpenMP 5.0 specification in the HTML documentation here:

Scope 

you will see it has many, many limitations: it is only as good as the programmer using the tool, and everything is MANUALLY made parallel, individually, by the developer coding it as such.

"The OpenMP API covers only user-directed parallelization, wherein the programmer explicitly specifies the actions to be taken by the compiler and runtime system in order to execute the program in parallel. OpenMP-compliant implementations are not required to check for data dependencies, data conflicts, race conditions, or deadlocks, any of which may occur in conforming programs. In addition, compliant implementations are not required to check for code sequences that cause a program to be classified as non-conforming. Application developers are responsible for correctly using the OpenMP API to produce a conforming program. The OpenMP API does not cover compiler-generated automatic parallelization." -This is what openMP is in the projects own words. If you cant understand those words let me help educate you on what openMP is and what it does.

It is probably/possibly widely used in industry with the ancient, obsolete C++ programming language, for specific existing apps and software which ARE NOT PARALLEL (basically all C++ code) where you WANT TO MAKE small individual pieces of THEM PARALLEL. You manually code those pieces up to run with OpenMP so that specific parts of specific tasks can be executed in parallel on large servers with, say, 64 cores, when your software has particular CPU-intensive tasks that need to be done many times and often. Say, a voice chat app where you want to compress the voice streams as multiple phones in the company are all on VOIP calls, and it's an older, dated C++ system, so it's not a parallel, multithreaded task for high volumes of compression requests, or something; I'm not too sure, I'm just making up an example I think would be common enough and a good way to demonstrate it, since every man and his dog is running some sort of old cheap/free Linux C++ VOIP server. You get the idea.

But there's a far, far better way than trying to make small patches of code parallel, and that's called "using a modern programming language", as far as I can tell. You can see how they put it at the end of the scope statement: OpenMP doesn't cover compiler-generated automatic parallelization, because C++ isn't multi-GPU/multicore. And what you're making parallel could be code for an AMD graphics card from many years back that supports modern out-of-order code execution, or it could be older C++ software running on modern CPUs, but each task has to be individually picked out and manually coded to try and force it to be parallel, instead of just compiling the entire app in a modern programming language and having the ENTIRE THING be multi-CPU and multi-GPU. There is a huge difference between picking out small pieces of ancient C++ software to modernize and using a modern programming language to start with.

For example, there is a lot of confusion between automatic out-of-order code execution in modern programming languages at the compiler level; software that lets you pretend a small piece of something was written in a modern programming language (OpenMP); and multithreaded / multi-CPU/GPU support from a modern API like Vulkan. Then there are programs with the word "parallel" in the title, like "Parallels", often used by businesses and web hosts, which is Remote Application Server (RAS) software: it offers virtualization and makes desktop applications available as HTML web pages, remotely accessible. However, it's a little hard to say whether, or how much of, "Parallels" is written in a modern language and supports DX12 or Vulkan properly.

Though lots of server software is kept as modern as possible, usually to make use of more threads/cores and such, how much of it truly is modern programming language and code can vary, as they will reuse parts and pieces of older software/code for some things. Often, when developing software, they don't bother modernizing the parts they think don't need it; the Steam game store client will probably be 32-bit only and not use any modern languages from now till the end of time. Lots of software, probably including Mac OS X or whatever they call it nowadays, maybe just grabs whatever source code is out there, rips parts of much older C++ Linux apps and code, or picks and chooses pieces from businesses using or developing on the BSD platform, and overnight it becomes a new feature of macOS; probably they just use a better compiler and compiler flags? I'm not a programmer, though, so I wouldn't know, but I do understand that you maybe don't understand?

0 Likes

Beware of Amdahl's Law

0 Likes
Anonymous
Not applicable

Amdahl's law only applies specifically to parallelization, not to other improvements. The optimization I was referring to is actually a MASSIVE, MAJOR improvement in performance. You see, back around the days of Pentium IIIs, when DivX and MPEG-4 video formats first came out, encoding a video took overnight. If your video encoding software was able to use CPU extensions like SSE or MMX, it could shave hours off your encoding time. But games don't seem to support any of AMD's CPU extensions and features. Having CPUs with more cores and higher frequency means we can encode higher-resolution, higher-quality, more complex video formats like HEVC in 4K in very short amounts of time. If software were more optimized and written in modern programming languages, so EVERYTHING and ANYTHING is as parallel as possible, and there were support for AMD's hardware features and extensions, and the futuristic, advanced Ryzen processors and architecture like PCI Express 4.0 could be used properly, AMD computers would be screaming fast.
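To make the "encoder only flies if it uses the extensions" point concrete, here is roughly how software usually picks a SIMD path at run time: check what the CPU reports once, then call the fast version through a function pointer. A hedged sketch using GCC/Clang built-ins; scale_avx2 and scale_scalar are made-up names purely for illustration:

    // dispatch.cpp - pick a SIMD code path at startup based on CPU features.
    // Build: g++ -O2 dispatch.cpp
    #include <cstdio>
    #include <vector>

    __attribute__((target("avx2")))
    static void scale_avx2(float* p, int n, float s) {   // compiled with AVX2 enabled; compiler may vectorize
        for (int i = 0; i < n; ++i) p[i] *= s;
    }

    static void scale_scalar(float* p, int n, float s) {  // portable fallback path
        for (int i = 0; i < n; ++i) p[i] *= s;
    }

    static void (*scale)(float*, int, float) = nullptr;   // chosen once at startup

    int main() {
        __builtin_cpu_init();
        scale = __builtin_cpu_supports("avx2") ? scale_avx2 : scale_scalar;

        std::vector<float> samples(1024, 2.0f);
        scale(samples.data(), (int)samples.size(), 0.5f);
        std::printf("samples[0] = %f (avx2 path: %d)\n", samples[0], __builtin_cpu_supports("avx2"));
        return 0;
    }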

But because older storage media like spinning-disc HDDs and CD-ROMs are still in use, the old hardware, controllers and methods of accessing data are still very present in our computers' entire being and structure, from hardware to drivers and software.

There are lots of different barriers and flaws in the way files are accessed from an HDD, loaded into RAM, and executed by the CPU. Now that we've got PCI Express 4.0 and ultrafast NVMe drives, the chips and controllers are still doing things in really slow, obsolete, traditional ways. Sony's bold statement that the PS5's storage technology is faster than a PC's comes from removing a heap of bottlenecks and developing new ways of accessing storage data so it's roughly comparable in speed to RAM; the idea is that you wouldn't need to spend ages loading games or files, and hopefully there'd be no more loading screens and waiting. Similarly, when Windows 10 first hit shelves, DirectX 12 and Windows 10 had modern programming languages at their core, and the way games and software make requests of the CPU, the pipeline, ordering, queueing and so on were all redesigned to remove bottlenecks; so instead of the few thousand requests per frame the CPU could handle while talking to the GPU, with DX12 or Vulkan you could issue millions.

AMD's graphics cards were the first and earliest to redesign the GPU hardware and the way data is fed to GPUs, ages ago, long since supporting modern programming languages and parallel code while also supporting ASYNCHRONOUS COMPUTE, and, with Vulkan, improved texture streaming methods and the elimination of many bottlenecks and slower steps in the graphics pipeline, speeding things up and allowing for higher FPS and lower latency. Meanwhile NVIDIA spent many years pushing game devs not to support AMD's hardware and not to let DirectX 12 features be used or implemented in software. They sort of faked many of the hardware functions and DirectX 12 support with software; instead of being compliant, it's better to say they've been compatible. As such, hardly any games release that even pretend to use the features. It's most likely because of the extra security and different ways of doing things in DX12 or Vulkan that NVIDIA and/or Intel don't like them, because it maybe limits their screen-capture methods and viruses and stuff, but it also probably means they'd need modern hardware to run "modern programming languages and software". Intel's CPUs are on their 10th generation of what is basically a 15-year-old core design. NVIDIA regularly comes out with new graphics card designs, but they are sadly rarely, if ever, new in terms of industry compliance. They do their own thing and make the industry pretend it's meeting or making standards.

 

A good example of this: Windows 10 out of the box with DirectX 12 lets game devs type something like "add.mgpu", meaning you could throw in any number and any combination of graphics cards, AMD and/or NVIDIA; three AMD GPUs, one Intel GPU and one NVIDIA GPU, and it'll all just work. But NVIDIA wanted to make these overpriced garbage things called "Titans" and sell you one massively overpriced card that would get wrecked in performance by two midrange cards with mGPU. So they somehow made sure, with some sort of magical pot of gold or something, that nobody learned how to type those words into game software. Because of this you can use AMD cards in multi-GPU for all sorts of things with no need for SLI bridges or similar nonsense.
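Roughly, what this looks like under the hood is the application enumerating every adapter in the box and creating a DX12 device on each one, then deciding for itself how to split the work. A minimal sketch of just that enumeration step (Windows only, link dxgi.lib and d3d12.lib; error handling trimmed):

    // enum_gpus.cpp - list every GPU DXGI can see and create a D3D12 device on each.
    // Build: cl /EHsc enum_gpus.cpp dxgi.lib d3d12.lib
    #include <cwchar>
    #include <dxgi.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<IDXGIFactory1> factory;
        CreateDXGIFactory1(IID_PPV_ARGS(&factory));

        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
            DXGI_ADAPTER_DESC1 desc{};
            adapter->GetDesc1(&desc);
            if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip the WARP software adapter

            ComPtr<ID3D12Device> device;
            if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                            IID_PPV_ARGS(&device)))) {
                std::wprintf(L"GPU %u: %ls, %zu MB VRAM\n",
                             i, desc.Description, desc.DedicatedVideoMemory / (1024 * 1024));
                // From here the application owns the scheduling: it can record and
                // submit work to this device independently of the others.
            }
        }
        return 0;
    }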

 

Linus of Linus Tech Tips made a fool of himself by not understanding claims made by Sony about their next-gen console's storage being better than a PC's, when there were a huge number of ways for this to easily be true. Sony never said it would be the world's fastest hardware, but they did say it was definitely better than existing consumer PC offerings; just like Vulkan, modern programming languages and parallel code are better than '80s software and C++.

See the YouTube videos below to help you understand. If AMD makes better modern hardware like PCIe 4.0, but nobody has drivers or drives or devices that use it, and people refuse to use modern programming languages, Vulkan and drivers that properly allow it to function and be used, it's the dumbest thing in the universe.

Basically, it's like Intel is an old '70s Cadillac with a classic style. NVIDIA would be some unknown, generic, cheap Chinese car with a body refitted to make it look like an average luxury sedan. Then AMD releases a modern supercar and people say, "Well, since it's such a fast car, we can save money and run it on used cooking oil instead of high-octane fuel. It's a V18 or V16 or V12 engine, but we'll never use all those extra cylinders; let's just run it on a single cylinder for almost everything and use some patch-job methods to pretend the other cylinders are firing, nobody will notice or care. Let's try our hardest to make it drive like a Cadillac, use Cadillac parts in it, and limit its speed and performance to match the '70s Cadillacs or the Chinese knock-off cars."

I feel the videos below may be relevant.

I’ve Disappointed and Embarrassed Myself. - YouTube 

Adding Vulkan to Ghost Recon Breakpoint - NGON - YouTube 

0 Likes

At present I use an RTX 2080, which has some advanced features not widely leveraged by games. I do not own a console, but I do have an Xbox 360 controller, as it seems to be handy with select games.

I use C++ for game development; a while ago I demonstrated ray casting on modern hardware. Ray casting is so undemanding it will run on a potato.

Win32 has threads and spawned child processes etc. So using a discrete audio thread is not a problem with modern multicore processors.
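A discrete audio thread really is only a few lines these days; a trivial sketch using standard C++ threads instead of raw Win32 calls, where mixOneBlock is just a stand-in for the real mixing work:

    // audio_thread.cpp - run audio mixing on its own thread while the main loop continues.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> running{true};

    void mixOneBlock() { /* stand-in for real mixing/submission work */ }

    void audioLoop() {
        while (running.load()) {
            mixOneBlock();
            std::this_thread::sleep_for(std::chrono::milliseconds(10)); // pace to the buffer size
        }
    }

    int main() {
        std::thread audio(audioLoop);            // the discrete audio thread
        for (int frame = 0; frame < 100; ++frame) {
            // ... game/render work for one frame ...
            std::this_thread::sleep_for(std::chrono::milliseconds(16));
        }
        running = false;
        audio.join();
        std::puts("done");
        return 0;
    }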

My rig has a 2TB SSD, so I can install a lot of games. Games vary widely in how much they benefit from moving from a hard disk to an SSD; it does not make a lot of sense for some, but games like Halo: CE are designed to minimize loading times.

0 Likes
Anonymous
Not applicable

What is ROCm? - YouTube: convert lousy CUDA into industry standards that will run on any GPU, particularly AMD's COMPUTE CORES, and on AMD or IBM CPUs or whatever.

AMD ProRender - YouTube: you should use this with 3D software like Blender/Maya/3ds Max/Houdini even if you own an NVIDIA card and use OpenCL. You download a ProRender plugin for your 3D software, and maybe need some plugin for OpenCL too, not too sure, haven't checked. But then you can actually do TRUE 3D, super quick; none of this RTX rubbish, actual real ray-traced rendering at high performance. ProRender was developed by teams that worked on Pixar's RenderMan engine.

NVIDIA has been fighting off adding DX12 hardware support, and CPU-and-GPU pipeline queueing and ordering and things, for many, many years, faking everything with software and making it so games are better on DX11 and only run on DX11. My guess is that DX12 and Vulkan are more secure, and it's harder for them to fake their FPS or steal screen recordings and things, what with every app under the sun sneaking in an overlay of its own. But that's just an assumption on my part; otherwise, why insist that game devs not support multi-GPU or asynchronous compute or any other modern programming language features or Vulkan and other things for the longest time, while they just faked DX12 with software, making them compatible rather than compliant with industry standards for, like, forever?

 

If you look at NVIDIA's CUDA, it's a programmable shader; basically just a limited language, with limited things, for NVIDIA's limited GPU hardware. AMD GPUs, on the other hand, have actual compute units like those found in supercomputers and can run any code a CPU or computer can, so they are vastly superior. You can use HIP/ROCm to convert NVIDIA's rubbish into real industry standards that will work across all computers, using AMD's software, and it will work on IBM or AMD CPUs and GPUs and Intel and whatever, because AMD is superior hardware. But even then it's maybe not fully optimizing for AMD hardware; it's just making NVIDIA's rubbish run in a modern programming language. The games or whatever haven't been built for the superior AMD hardware, so they aren't optimal even if you "port" them to real languages and real computer industry standards for any and all computers. What you would need to do is add things like "add mgpu" and so on in the code; if it didn't exist in what was being ported, then it won't be there once it's been wrapper-converted to a modern language. They MUST ENABLE THE FEATURES.
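To give a concrete idea of what that conversion ends up looking like: a CUDA kernel ports almost line for line to HIP (the hipify tools in ROCm do the renaming), and the result builds for AMD with hipcc or back for NVIDIA. A minimal sketch of a HIP vector add, purely as an illustration:

    // vec_add.cpp - the HIP version of the classic CUDA vector-add.
    // Build on ROCm: hipcc vec_add.cpp -o vec_add
    #include <cstdio>
    #include <vector>
    #include <hip/hip_runtime.h>

    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

        float *da, *db, *dc;
        hipMalloc((void**)&da, n * sizeof(float));     // same pattern as cudaMalloc
        hipMalloc((void**)&db, n * sizeof(float));
        hipMalloc((void**)&dc, n * sizeof(float));
        hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

        int threads = 256, blocks = (n + threads - 1) / threads;
        hipLaunchKernelGGL(vecAdd, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);

        hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
        std::printf("c[0] = %f (expect 3.0)\n", c[0]);
        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }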

Do you understand?

0 Likes

With OpenMP and a modern CPU I can leverage a lot of resources with SSE4 etc.

A GPU is not strictly needed for a lot of coding tasks. The GPU is simply more parallel processing for a specialized task.

OpenMP is also available for Fortran as well as C, etc.
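For example, OpenMP can drive the vector units as well as the cores from a single directive; a small sketch (GCC/Clang, built with -fopenmp and something like -O2 -march=native so SSE4/AVX actually gets used):

    // saxpy_simd.cpp - threads plus vector units from one OpenMP directive.
    // Build: g++ -O2 -march=native -fopenmp saxpy_simd.cpp
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 22;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);
        const float a = 3.0f;

        // "parallel for" spreads the loop over the cores; "simd" asks the compiler to
        // vectorize each thread's chunk with whatever the CPU offers (SSE4/AVX...).
        #pragma omp parallel for simd
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];

        std::printf("y[0] = %f\n", y[0]);
        return 0;
    }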

0 Likes
Anonymous
Not applicable

If you have ancient code that you must run, like banks do, where the systems are still largely something like DOS-based, custom-made software chosen for security and stability, because all it needs to do for many tasks is access a database: they can't risk moving to modern programming languages and Windows 10 and other things, for security reasons, with the nation's wealth and money on the line, so they have the golden rule of "if it still works, don't change it" and just keep using some '80s code they feel is secure. But if they ever want to look at their '80s and '90s software and ask how to make the older stuff slightly more modern, or else they can't keep it going, and they want it to be fractionally, ever so slightly more parallel, then certain small bits of code can be manually altered to become slightly more parallel with things like OpenMP.

In Japan, each bank assigns you an account manager to handle your transactions and oversee you. They use Office 97, fax machines, and paper records/documents, paper trails, for maximum security in the present day. So, you see, people who are literally living in the '80s for security reasons will need things like C++ and OpenMP.

But for the rest of us who exist after the year 2000 and want to play video games, stream media, do CAD and 3D design, scientific research or number crunching, and who aren't as concerned about security as, say, a national bank: sure, we can run modern operating systems and use MODERN PROGRAMMING LANGUAGES, because you'd be really uninspired not to without a bloody good reason, since they're much faster, much easier, and just all-around better for everything, or they wouldn't exist.

Thousands of people at megacorporations like Microsoft, Google and AMD spend decades of their lives improving programming languages. They wouldn't make and release new ones, spending millions of dollars and decades of effort to create newer, faster, better things, if they weren't any better at all. Can you see people saying, "You know what, decades of programming and development and a few billion dollars for a shitty, inferior programming language nobody needs or would ever want to use? Yeah, I like the sound of that, let's do this! Make it happen, man!"? Do you know how many employees AMD alone has, and how long and how expensive it is to design, test and manufacture new hardware that supports the new programming languages, only to have people say, "But the junk from the '80s still works, and when I load Windows 98 on my brand-new high-end gaming PC I think it runs a bit faster than Windows 10... so I think we should only ever use Windows 98 or DOS and other old software on our new gaming PCs. Let's go play Commander Keen and Jazz Jackrabbit or Space Quest and other '90s titles, because they run quicker and better, right?! I knew spending thousands on a new high-end gaming computer was worth it, now to load in some real software; pass me that Zip drive and those floppy disks! If anybody asks, I'll say I'm a retro gamer!"

0 Likes
Anonymous
Not applicable

NVIDIA SLI only used one GPU's memory: if you had a 6GB card and an 8GB NVIDIA card and you put them in SLI, only one card's memory could be put to use, because of the software addressing, the way the hardware communicates, and the lack of parallelism. So you'd get either 6GB or 8GB, but never 14GB.

DirectX 12 came along and natively allowed any card to work with any other card, allowing for 14GB of total VRAM in such a scenario, with no need for dumb SLI bridge cables and things like that. Any GPU with any other GPU: an Intel GPU with an AMD GPU with an NVIDIA GPU, with all their memory adding together into a huge amount of usable VRAM.

You could then select different multi-GPU modes like 1x1 optimized, AFR, AFR-compatible and so on.
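As a toy illustration of the AFR idea (the Gpu struct and numbers here are made up; it's just the scheduling logic): each whole frame is handed to a different GPU in turn, and the application sees and manages each adapter's memory itself, which is where the 6GB + 8GB = 14GB usable figure comes from.

    // afr_sketch.cpp - the scheduling idea behind alternate-frame rendering (AFR).
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Gpu { std::string name; size_t vramMB; };

    int main() {
        // Hypothetical mixed setup; under explicit multi-adapter the app manages
        // each adapter's heap separately, so all of it is usable.
        std::vector<Gpu> gpus = { {"GPU 0", 6144}, {"GPU 1", 8192} };

        size_t total = 0;
        for (const Gpu& g : gpus) total += g.vramMB;
        std::printf("combined VRAM the app can manage: %zu MB\n", total);

        for (int frame = 0; frame < 8; ++frame) {
            const Gpu& g = gpus[frame % gpus.size()];   // AFR: alternate whole frames
            std::printf("frame %d -> %s\n", frame, g.name.c_str());
        }
        return 0;
    }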

But people still refuse to acknowledge that these things exist, or that DirectX 12 exists, or that Vulkan exists, when they are VASTLY superior to your Windows 98 and/or Windows XP, your NVIDIA SLI, or your garbage, overpriced NVIDIA Titan cards.

THIS REASON ALONE MEANS YOU CAN TAKE YOUR '90s programming languages and put them all in the bin! Right there is the future that emerged the minute Windows 10 hit stores. And that's just one example; there are dozens and dozens of other reasons to use modern programming languages and APIs like Vulkan. But you're free to sell your house to buy an NVIDIA Titan card when you could have bought two cheap AMD cards and outperformed it for half the price, if people ever actually used some modern programming languages and added support for AMD hardware.

I know you probably hate being able to use hardware features that you've paid for in your computer, but I honestly don't hate that at all. I'd love it if AMD's CPU and GPU SIMD and other features were actually supported and used more by developers. But I just honestly don't know what to say; you're welcome to buy new hardware and never be able to use it. Go to town and really have some fun with it. Waste all your money. If you can afford to waste it all on stuff you can't and won't ever get to use, then why not just give me all your money instead?

0 Likes

I have Office 2019, but I have my old Office 97 CD around on my media rack along with all the other versions of Office.

The reason I like Win32 is that it is so fast my programs load almost instantly and are ready for work. I built a shell in Win32 that can act as a printout for development. Compiled for 64-bit, it can potentially handle billions of lines of output.

These days one video card is powerful enough to handle my 4K panel fine. My RTX 2080 has it covered.

Before coming here to check my messages I was playing a game that first surfaced in 2010: Alien Swarm: Reactive Drop, using a custom server that is beyond brutal in difficulty.

0 Likes
Anonymous
Not applicable

BTW, in case you didn't know, the banks don't use Office 97 unless they can help it, but their customers demand compatibility with their Excel spreadsheets and whatnot, and reports and things, so they export from their DOS whatevers and import and save in an Office format their clients can read. Because running Office software on critical systems is probably a security risk.

AMD is less expensive and uses cutting-edge technology like the 7nm fabrication process, which Intel and NVIDIA are a long way behind on; Advanced Micro Devices has "advanced micro devices" right there in the company name, after all. However, pioneering such massive leaps and advancements in CPU design and production fabrication is very experimental and research-and-development heavy, meaning it's expensive. So the current generation of Navi 10 and so on in their GPU lineup is aimed at high-FPS 1440p gaming, not 4K gaming, as the expense of developing, making and marketing a 4K flagship isn't economically viable yet. Super-expensive luxury graphics cards only sell in very small numbers, like the highest-quality 8K and 4K 10-bit HDR TVs with Dolby Vision and HDR10 and high Rec. 2020 coverage; over 80% of the world doesn't own one and isn't planning to any time soon.

Since NVIDIA's GPUs, and also Intel's 15-year-old CPU core designs, are like older, ancient, budget parts and designs, with corner-cutting and cost-cutting baked into their entire business model, at 8+nm (RTX 3000), 10nm, 12nm or 14nm, nowhere near AMD's bleeding-edge 7nm fabrication, and far, far simpler and less complex in design with worse hardware, they can save enormously on development and design costs and not even really need to test things: just make the silicon bigger, cram more of the same garbage on there, basically stamp a few out and say "yep, it works just like all the other cheap garbage we make, we can sell this." Their costs are hardly anything to speak of, so they can afford to make a number of absurdly overpriced rubbish cards like Titans and RTX 2080s or whatever, because even if they don't sell well, if just a few of them sell they get all their initial investment back.

So AMD has been perfecting their state-of-the-art design processes and focusing on better designs and better hardware, which is expensive, so they wanted to get into the swing of production, bring costs down, and develop further as they go before they can justify going for 4K and upwards on their high-tier graphics cards.

I mean, sure, I "barely" game at 1440p 120-200 FPS on my 5700 XT with antialiasing turned off and maybe low or medium on some settings like volumetrics or the number of dynamic light sources, but with the right graphics settings it can be very hard to spot the difference between my settings and all-ultra. At 1440p on my 4K 10-bit HDR Samsung QLED TV it looks good enough for now, so I can wait until the Big Navi cards, like what they'll have in the PS5 and the next-gen Xbox Series X or whatever, hit shelves with HDMI 2.1 for my 4K gaming; or else I'd have to purchase a second 5700 XT, or something like two 5600s or 5700s, and game in 4K on that. Sure, my 5700 XT supports 4K 120Hz output with DP 1.4 and DSC, but lots of AAA 3D titles won't get that high at ultra settings anyway, even on an RTX 2080. (My 5700 XT performs as well as an RTX 2080 in graphically intensive titles like Red Dead Redemption 2 anyway, and cost about half as much at the time of purchase, so I don't think you made a wise purchasing decision, but that's just my own opinion.) So yeah, to me AMD is handling the market correctly: since most people don't own or buy high-end 4K and 8K displays, making super-expensive cards to sell to that handful of people isn't a sound business strategy, unless your cards are cheap garbage that cost nothing to develop, design, test and manufacture (NVIDIA GPUs / Intel CPUs).

AMD RX 5700 XT VS NVIDIA RTX 2080 Ti | Ryzen 3950X 4.3GHz - YouTube 

See the video below for the difference in price versus the difference in performance between a 2080 Ti 11GB and an RX 5700 XT, and that's just at ultra 1440p with TAA. Since you're clearly buying the 2080 to game in 4K like you said, you'll find the FPS gap at 4K ultra is actually smaller than at 1440p. Also note that many titles, like Shadow of the Tomb Raider and Red Dead Redemption 2, are now available in Vulkan, which runs much better and gives AMD cards a bit of a boost in FPS. NOTE that AMD graphics cards have tessellation enabled in the Adrenalin drivers by default, and overriding it and turning it down gives a big boost to FPS, making the card behave more like an NVIDIA card. Also, although setting everything to ultra is a good way to compare with NVIDIA cards and see the hardware under load, they're actually different cards with different features; many graphics settings really only apply to NVIDIA cards, as they were developed by NVIDIA, so you may find some titles run better on AMD by adjusting the graphics options to suit AMD's hardware. A classic example is The Witcher 3's HairWorks: disabling HairWorks, which I think looked pretty terrible anyway, gives a massive boost to AMD graphics cards in benchmarks and FPS when running The Witcher 3, but with everything on at ultra the game crawls on AMD and barely affects the NVIDIA cards it was designed for, as it was specifically made to conflict with AMD's tessellation. Adjusting the tessellation override that's in AMD's drivers may help in this regard. So turning every NVIDIA setting and option in the game graphics menu up to ultra can actually make AMD hardware perform worse than it should; can you imagine it? Do you understand what I am saying and what this means?

I just don't think you know how to use a graphics card, or what one is, or else you'd spend your money more wisely. I hope you can learn from my advice and spend more wisely in future.

0 Likes

LibreOffice can read the old binary Office format files, but more recent versions of Office now use XML.

NVIDIA uses compression to reduce the amount of VRAM needed in games; not sure it helps much with workstation applications.

Most games can handle 4K, though some older titles are limited to 1920x1080, but my panel has all resolutions covered.

Hairworks is brutal for NVIDIA users too. The GTX 1060 is still the most popular card to this day.

DX11 refocused tessellation which was present in DX9. DX12 now uses mesh shaders so the focus is different again.

0 Likes
Anonymous
Not applicable

You will notice that modern DX12 and Vulkan titles, such as Shadow of the Tomb Raider or Metro Exodus, have tessellation settings in the graphics menu.

DX12 has all the features of DX11 plus new ones, so you can use tessellation and all the features of previous DirectX versions in DX12.

Vulkan has all the OpenGL features and more; its feature set rivals DX12, but it works on all platforms and is faster, as it's newer and built around modern programming languages, and it's built on AMD's work: AMD developed the Mantle API, which was sold or handed over to the Khronos Group to create the Vulkan API. They claim that Vulkan is "closer to the metal", as it probably uses some sort of specially optimized code and newer hardware and newer ways to interface and interact with the hardware; importantly, it's using modern programming languages.
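"Closer to the metal" mostly means the application talks to the driver very explicitly; even just listing the GPUs in the system is done by hand. A minimal sketch using the plain Vulkan C API (needs the Vulkan SDK/loader installed; link against vulkan-1):

    // list_gpus.cpp - enumerate Vulkan physical devices and print their names.
    // Build (Linux): g++ list_gpus.cpp -lvulkan
    #include <cstdio>
    #include <vector>
    #include <vulkan/vulkan.h>

    int main() {
        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.apiVersion = VK_API_VERSION_1_0;

        VkInstanceCreateInfo ci{};
        ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        ci.pApplicationInfo = &app;

        VkInstance instance;
        if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);
        std::vector<VkPhysicalDevice> gpus(count);
        vkEnumeratePhysicalDevices(instance, &count, gpus.data());

        for (VkPhysicalDevice gpu : gpus) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(gpu, &props);
            std::printf("%s (Vulkan %u.%u)\n", props.deviceName,
                        VK_VERSION_MAJOR(props.apiVersion), VK_VERSION_MINOR(props.apiVersion));
        }
        vkDestroyInstance(instance, nullptr);
        return 0;
    }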

NVIDIA's memory clamping and similar things maybe reduced texture quality or did some other negative stuff. I believe they make the game files be packed in special ways so they stay smaller on NVIDIA but slow down and take forever to load into memory on AMD; the computer struggles to unpack NVIDIA's texture compression, maybe giving AMD cards strange texture pop-in and other problems, because of NVIDIA's non-standard hardware and software and super-anticompetitive, all-around, general B-grade-movie-villain attitude and conduct.

0 Likes

Mesh shaders are the way forward as these render fast and do not have real limits as to how big they are. With Mesh Shading, the GPU intelligently controls level of detail selection and tessellation for objects, enabling rich, open worlds with hundreds of thousands of objects.

Hardware-accelerated GPU scheduling allows for improved gaming performance by giving your video card full control over managing video memory. 

The reason I use NVIDIA's SDK is that I use GeForce cards.

0 Likes
Anonymous
Not applicable

Basically, NVIDIA's texture quality and image quality have historically ALWAYS been TERRIBLE compared to AMD's.

So the same pictures in the same games aren't the same at all. But nowadays NVIDIA is using what looks to me like waifu2x-style deep-learning upscaling, rebranded and called "DLSS", to upscale and add details to their textures that were never there to begin with, while AMD just has high-quality textures to start with. It's a very different approach.

Now, AMD's Boost in select titles lets the game lower the resolution when you move your view quickly; as things whizz past the screen, the horizontal output is handled differently from the vertical, the same way displays playing back a movie with a left or right panning shot show a noticeable judder. So AMD lets your game dynamically downscale resolution and details during fast movement, usually when panning left to right, since you won't be focusing on those details. NVIDIA saw this technique and, combined with mipmapping levels that lower LOD on distant objects, created something that lets developers mark objects of low importance during game development, so texture quality and detail can be prioritized on objects that matter. So basically, NVIDIA's GPUs don't have enough power to output quality textures for the entire screen, so they let developers lower the quality on, say, a coffee cup or the metal bars on an air duct, and only aim for normal texture quality on, say, the character standing in front of you talking to you about a quest or whatever.

Dynamic resolution scaling is very popular nowadays, and most people don't notice it when it's done properly; we saw this on the Xbox 360, Xbox One and Xbox One X.

If the reason you love NVIDIA's development tools and software is that their hardware isn't good enough to do quality textures, then, uh, I honestly don't know what to say to you. They'd normally have to adjust the LOD or use heavier mipmapping.

0 Likes
Anonymous
Not applicable

I forgot to mention that CPU scheduling and GPU scheduling are usually done by the OS, or probably the API.

So NVIDIA trying to take control of how their hardware functions, by putting up a wall or barrier in the hardware or making their software try to seize control from Windows or the OS, is probably just a way for them to justify taking more access and control so they can give or take priority. My money is on them building their own scheduler in hardware so they can say it only works best with NVIDIA-brand hardware: buy XYZ CPU and mainboard to make full use of our hardware scheduler, otherwise your GPU may be bottlenecked or limited by the system... Probably some lame marketing angle to try and scam people into not trusting AMD computers.

After all, we all saw how, for years, Microsoft's OS CPU scheduler didn't work right with AMD CPUs; they only recently tried to patch it, a number of months to a year ago maybe, and it maybe still doesn't scale well with more cores. It's like they're intentionally keeping AMD's hardware performance below Intel's. I foresee another d!ck move by NVIDIA in the near future.

0 Likes

Resolution scaling is a bit of a misnomer. The classic game AquaNox came out very long ago, and it can support 1920x1080 and more.

Halo: Combat Evolved can do custom resolutions via the command line, and a while ago Bungie posted a fix for the game to support 4K panels from in-game. Playing Halo at 4K was eye-opening as to how good the artwork in the game really was. Halo is a procedurally generated game world.

Tron 2 was fixed by a third party to support widescreen and the game again was unbelievably good at higher resolutions.

Hardware-accelerated GPU scheduling allows for improved gaming performance by giving your video card full control over managing video memory. This is now a component in DX12 which allows for faster rendering with multiple compute units.

I have DLSS with my RTX 2080 and it can boost frame rates for the titles that support it. I expect more titles will use that down the road.

0 Likes
Anonymous
Not applicable

What you're talking about, more direct memory management, is in DX12 and in Vulkan, and it's for ALL GPUs.

Refer to the video I linked earlier from Ubisoft about Vulkan in the latest Ghost Recon.

Anyway HALO IS NOT PROCEDURALLY GENERATED AT ALL!

Diablo 1, Diablo 2 and Diablo 3 are procedurally generated games. Halo has fixed maps that are always the same. Procedural generation means there are libraries of objects and walls/floors/rocks/trees or whatever, and the computer generates a new game map for each playthrough. So in Diablo, if you start a new game and play it, the game map looks, and is, different from anybody else's game of Diablo. Where you found the boss on a previous playthrough, on a different character, will be in a different location, because the map is entirely new, unique and different.

You can use procedural generation on things other than the map, too, like loot drops, with random rolls on the specs and naming of items or objects. But yeah, I think you don't understand what the words you are saying mean.
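Roughly, procedural generation boils down to this: a seed drives a random generator, the generator picks pieces from libraries, a new seed gives a new map, and the same seed reproduces the same map. A toy sketch:

    // procgen.cpp - toy procedural map: same seed = same map, new seed = new map.
    #include <cstdio>
    #include <random>

    int main() {
        const unsigned seed = 1337;                    // change this and the whole layout changes
        std::mt19937 rng(seed);
        std::uniform_int_distribution<int> tile(0, 3); // 0 floor, 1 wall, 2 tree, 3 rock
        const char glyph[4] = {'.', '#', 'T', 'o'};

        for (int y = 0; y < 8; ++y) {
            for (int x = 0; x < 24; ++x)
                std::putchar(glyph[tile(rng)]);
            std::putchar('\n');
        }
        return 0;
    }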

And no, resolution scaling isn't a misnomer. Titles on the Xbox 360 like Oblivion used HDMI output and claimed 1080p in some cases, but the actual render resolution for many titles was something like 900 lines instead of 1080. The reason it's called dynamic resolution scaling is that if you walk into a small, empty room the size of a bathroom with nothing in it, the GPU has headroom to spare and the resolution may go up to true, full 1080p; but when there are 10 enemies on screen throwing grenades and shooting lasers in the action parts, the game will lower the resolution to as low as 900.

It depends on the game title and on the console, but many titles opt for changing resolution as necessary, as it's built into the Xbox consoles and the option is there for developers. Famous titles like the Halo franchise used this on the Xbox 360 and Xbox One. If you are saying that "dynamic resolution scaling" sounds like it refers to upscaling, then yes, it can be a little confusing in that sense, as it can refer to both up- and downscaling.
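The logic behind dynamic resolution scaling is simple enough to sketch: watch how long frames are taking, drop the render scale when you're over budget, and creep back up when there's headroom. A toy illustration with made-up numbers:

    // dynres.cpp - toy dynamic resolution controller: hold ~16.6 ms by trading resolution.
    #include <algorithm>
    #include <cstdio>

    int main() {
        const double targetMs = 1000.0 / 60.0;   // 60 fps budget
        double renderScale = 1.0;                // 1.0 = native resolution

        // Pretend GPU load for a few frames: cost at native resolution, in ms.
        const double nativeCost[] = {14.0, 15.0, 22.0, 26.0, 24.0, 18.0, 15.0, 13.0};

        for (double cost : nativeCost) {
            double frameMs = cost * renderScale * renderScale;   // cost scales roughly with pixel count
            if (frameMs > targetMs)
                renderScale = std::max(0.6, renderScale - 0.05); // over budget: drop resolution
            else if (frameMs < targetMs * 0.85)
                renderScale = std::min(1.0, renderScale + 0.05); // headroom: climb back toward native
            std::printf("frame %.1f ms at scale %.2f\n", frameMs, renderScale);
        }
        return 0;
    }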

If you plug your 1080p game console into a 4K TV, your TV upscales the 1080p image to fill the 4K display. More expensive, bigger TVs have higher-quality upscaling. Many 4K Blu-ray players have good 4K upscaling, so you often have to test which looks best, your Blu-ray player or your TV, and set your Blu-ray player to enable or disable its upscaling depending on your preference. The exact same applies to video game consoles.

AMD graphics cards have always had the better upscaling of the two, and NVIDIA's waifu2x-style DLSS upscaling is long overdue, but a welcome improvement on their poor upscaling. On my Samsung QLED TV, if I don't use the TV to upscale but use the GPU instead, I get a noticeably cleaner, sharper image, because AMD's upscaling is very good; but it's really meant for video playback rather than gameplay, as there's a limit to the frame rate it can process, currently 4K 60Hz it seems, from what I've seen. The average person's eyesight isn't good enough to notice much difference, if any, unless you have a very big, very high-quality display. But my old NVIDIA GTX 1060 6GB card was rubbish at upscaling with the GPU; the cheap TVs and cheap monitors always looked better or identical.

I have the new Steam Master Chief Collection of Halo and have been playing Halo: Reach, 1 and 2, and have enjoyed setting the graphics to enhanced mode; I can run the game at any resolution as it's a modern remake designed for PC. Halo Infinite will be coming to Steam soon too, maybe in 3 or 4 months' time.

0 Likes

I have the new remastered Halo: MCC but I still have my original Halo box on the wall unit.

I noticed Halo: Infinite on Steam, so I wishlisted it; it will alert me when it's available.

I use LG IPS displays and the color quality is better than Samsung's. Playing a DVD vs. a BD is eye-opening as to the quality of the image. Playing a 4K BD is even better imagery, but the H.265 compression is more demanding.

Down the road 8K is coming sooner than many realize. While HDMI and DisplayPort are still not there, the move to 8K is close at hand.

0 Likes
Anonymous
Not applicable

If my graphics card's DisplayPort 1.4 does 4K 120Hz, then 8K 60Hz or 8K 30Hz are most likely possible.

My Samsung QLED is a 55-inch TV. The Samsung QLED TVs have about the widest colour gamut on the consumer market in general, and their specs leave monitors in the dust by a large margin. Most $2000, 30-something-inch 4K monitors have 1000 nits and something like 60% of Rec. 2020, while my QLED TV has like 1500 or 2000 nits, or maybe even more on brand-new top-end Samsung QLED and 8K models, and 89% or more Rec. 2020 colour volume. People see older, year-2000 PC colour standards like Blu-ray BT.709, sRGB and Adobe RGB, see full colour volume on their PC monitor for those, and think "damn, my monitor's good", but then the 4K 10-bit HDR Blu-ray colour space, DCI-P3, is far, far away from being fully covered on their display, so it's garbage. If you buy a budget TV, most are close to, if not at, full DCI-P3 colour reproduction and have Dolby Vision and HDR10 support, FreeSync, and 1500 nits or more of HDR, for the same price or less than a 30-inch monitor, but are 55 or 60 inches and MANY times brighter.

OLED TVs have less accurate colours and are far less bright, barely reaching around 1000 nits of HDR in the very latest, most expensive models. OLED TV colour gamut is less wide than quantum dot. Samsung TVs have generally always been the best at reproducing the most colour, the most accurately, for the longest lifespan of a consumer TV, affordably, without getting into pro displays which cost tens of thousands. My Samsung QLED TV, a Q7FN I bought a year or two ago, supports 4K 120Hz, so yeah; in countries like Japan they've had 8K TVs for a long time, and 8K displays for a long time too, I'd imagine.

Just like Japan had 1080i-class television in like 1984 on their analog broadcasts and Sony Trinitron TVs (the broadcasts were actually something like 1,100-odd lines), Japan has technically had full-HD TV since the '80s, compared to us losers in Australia who probably still don't broadcast in 4K while Japan's on 8K, I'm guessing.

0 Likes

My LG panel supports Windows HDR fine and it far exceeds BT 709 and SRGB etc. Rec 2020 is the color space for UHD television. My panel does that too.

My panel is RGB10 which is over 1 billion colors and the brightness is high due to the array of LED backlights. IPS technology competes easily with anything Sony is flogging, same for Samsung and everyone else.

4K120 is double the bandwidth and more than DisplayPort 1.4 can achieve. 8K60 is double again the bandwidth.

0 Likes
Anonymous
Not applicable

DisplayPort - Wikipedia: you will notice 8K 60Hz with DSC, or 4K 120Hz, is listed under DisplayPort 1.4 with DSC over HBR3.

My RX 5700 XT lists DP 1.4 with 4K 120Hz in the product specs; 4K 120Hz is shown on AMD's website and on any 5700 XT card's spec sheet.

Some of the latest Samsung gaming monitors are pretty much the best on the market.

The Samsung Odyssey G7 or G9, a 5K, 240Hz, 1000-nit curved monitor, but even that has not quite full coverage of DCI-P3, because it's a PC monitor and they suck. For that price you could buy a way better TV with Dolby Vision, and instead of merely high DCI-P3 coverage you'd get total, full DCI-P3 and high Rec. 2020 coverage, approaching or above 90% I dare say, since my display is like 88 or 89% Rec. 2020 and it's a couple of years old and definitely not an 8K display.

Any true 10-bit display panel supports over 1 billion colours, compared to 8-bit's 16 million.

Many TVs have IPS panels, but some have VA panels. They're still an LCD technology with LED backlighting, but the switching mechanisms differ depending on the panel; they use the same display types and switching mechanisms as desktop computer monitors. Modern 10-bit HDR TVs are like a computer monitor, exactly, but bigger and higher quality, with more nits of HDR, higher brightness, better upscaling, wider colour gamut coverage of Rec. 2020 and so on. IPS stands for in-plane switching; VA is vertical alignment. VA panels may cost a little less and have ever-so-slightly worse viewing angles, but tend to have a bit more pop to the colours and vibrance, and maybe slightly darker blacks, while producing a slightly less wide range of colours in Rec. 2020 and other colour spaces than IPS. Since VA may be cheaper and more dark/vibrant/contrasty, it's all you need for gaming, really, and can often save you a few dollars. Unless you are doing pro video, photo or CAD design and content creation with custom-calibrated colours, and need as much colour reproduction as possible, as accurately as possible, with better viewing angles and more consistent colour uniformity, in which case you would go with IPS. The average gamer or let's-play streamer doesn't really need IPS; it's nice to have if there's little difference in price compared with a VA, but it's not much of an upgrade if you have to pay a lot more, unless you need it for work.

I can understand you wasting lots of money on IPS because it appears slightly better on paper, while being of little benefit to the average gamer over a VA panel, because you buy NVIDIA cards. I can also understand you spending thousands on a 28-or-38-inch, 10-bit, 1000-nit HDR monitor with terrible Rec. 2020 colour gamut and low nits of HDR, compared to a same-priced or cheaper large-screen TV with 1500 or 2000 nits of HDR, Dolby Vision plus HDR10, and way over 80% of Rec. 2020 when your monitor is probably lucky to hit 50%... again, because you buy NVIDIA cards.

I can also understand you not understanding DisplayPort standards, and probably HDMI standards, and, importantly, probably the cables required for them too... again, because you buy NVIDIA cards...

I think I'll just let you get to work making some awesome special games for *special* people who buy NVIDIA cards, with their NVIDIA SDK. I can't wait to see what sort of not-even-slightly-optimized, NVIDIA-only mess you create without even knowing what a modern programming language is, or is capable of, or what it's used for. You do you, man; you just do what you gotta do. I'm sure your game will really mesh with people... and be all RTX ON! and stuff, and you'll double your waifus with DLSS 2.0... double the double the waifus... oh man. I think I feel a tear forming; I'm speaking with an honest-to-goodness NVIDIA fan. It's in every cell of your body. You couldn't possibly be more pro-NVIDIA. Your wallet must be made of indestructible adamantine-orichalcum stuff, impervious to damage. You throw your money at people like, "Hey, I'm going to spend possibly a week's pay or more, and I'm not even going to research a single thing, but that Jensen CEO guy said 'it just works', so now I have to buy it." It's like I can see every thought you've ever had or never had... it's kinda like I've known you all my life. Are you that guy I went to high school with, the year above me? He works in I.T. and is a diehard NVIDIA fan and invests in them even when I show him them being found guilty of investor fraud. Are you that guy's cousin or something?

0 Likes

DSC is a lossy compression scheme not far removed from JPEG. I suggest doing more research, as I have, to better understand what is substantive and what is marketing hype.

Halo:MCC

Alien Swarm: Reactive Drop


0 Likes
Anonymous
Not applicable

"DSC is a "visually lossless" encoding technique with up to a 3:1 compression ratio.[22] Using DSC with HBR3 transmission rates, DisplayPort 1.4 can support 8K UHD (7680 × 4320) at 60 Hz or 4K UHD (3840 × 2160) at 120 Hz with 30 bit/px RGB color and HDR. 4K at 60 Hz 30 bit/px RGB/HDR can be achieved without the need for DSC"

That is what the Wikipedia page I linked you to says about DP 1.4. Our eyes can't see any difference with DSC. The downside of using DSC is probably that it affects latency, and it may not allow FreeSync / VRR to be enabled?

DSC isn't lossy compression like regular chroma subsampling; its implementation was made to be visually lossless, as it turns out chroma subsampling can have a slight negative effect on what you see in certain conditions and situations.

0 Likes

eccentric wrote:

"DSC is a "visually lossless" encoding technique with up to a 3:1 compression ratio.[22] Using DSC with HBR3 transmission rates, DisplayPort 1.4 can support 8K UHD (7680 × 4320) at 60 Hz or 4K UHD (3840 × 2160) at 120 Hz with 30 bit/px RGB color and HDR. 4K at 60 Hz 30 bit/px RGB/HDR can be achieved without the need for DSC"

That's what the Wikipedia page I linked you to says about DP 1.4. Our eyes can't see any difference with DSC. The downside of using DSC is probably that it adds some latency, and it may not allow FreeSync / VRR to be enabled?

DSC isn't lossy in the way regular chroma subsampling is. Its implementation was made to be visually lossless, since it turns out chroma subsampling can be noticeably hard on the eyes in certain conditions and situations.

 

Maybe it's lossless to some myopic Republicans, but I can tell the difference immediately.

My panel is 3840x2160 rated at 60 Hz. I use 10-bit RGB HDR and 4:4:4 chrominance.

That is about as good as DP 1.4a can do before corners are cut.

0 Likes
Anonymous
Not applicable

DP1.4 "4K at 60 Hz 30 bit/px RGB/HDR can be achieved without the need for DSC. On displays which do not support DSC, the maximum limits are unchanged from DisplayPort 1.3 (4K 120 Hz, 5K 60 Hz, 8K 30 Hz)"

it means you are viewing uncompressed if you arent running 4k120hz display rate or 8k 60hz.

So your DP 1.4 which you claim you can see a difference you cant because it isnt compressed at all because your monitors display isnt high enough or good enough. My samsung Q7FN QLED quantum dot TV supports 4k 120hz and it went on sale a few years ago. But it only works via HDMI. So I'm stuck at 4k 60hz for now too unless I didnt want to use freesync and used some sort of DP1.4 to HDMI 2.1 adaptor like the one from Club3d listed in their may catalog.
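Same back-of-the-envelope arithmetic for the 4K 120 Hz case, which is why that mode needs either DSC over DP 1.4 or an HDMI 2.1 link (approximate payload figures, pixel data only, so treat the numbers as rough):

# Why 4K 120 Hz needs either DSC on DP 1.4 or an HDMI 2.1 link.
# Approximate usable payloads; blanking and protocol overhead ignored.
LINKS_GBPS = {
    "DP 1.4 (HBR3)":      25.92,  # after 8b/10b encoding
    "HDMI 2.0":           14.4,   # after 8b/10b encoding
    "HDMI 2.1 (FRL 48G)": 42.6,   # after 16b/18b encoding
}

needed = 3840 * 2160 * 120 * 30 / 1e9   # 4K 120 Hz, 10-bit RGB, ~29.9 Gbit/s
print(f"4K 120 Hz 10-bit RGB needs about {needed:.1f} Gbit/s uncompressed")
for name, payload in LINKS_GBPS.items():
    verdict = "fits" if needed <= payload else "needs DSC or chroma subsampling"
    print(f"  {name:<18} {payload:5.1f} Gbit/s -> {verdict}")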

So what you thought was lossy compression isn't actually compressed at all. It's just that you bought a poor-quality display, and you're probably pairing it with a rubbish graphics card like an Nvidia one, which has historically had famously lower image quality than AMD's offerings.

0 Likes

DisplayPort 2.0 improves a bit on the bandwidth problem, but it still comes up short for IPS panels at 8K. I have 10-bit, but IPS can do 12-bit depth, which needs even more graphics bandwidth.

My RTX 2080 I believe has DisplayPort 1.4a and HDMI 2.0a, so the best bet is 4K60 with 10-bit RGB HDR.

Realistically not many games would be playable at 8K, at least with this week's available video cards.

8K is being driven more by feature film, while television is mired in MPEG-2 and HD services at 720p or 1080i.

One friend has a pair of RTX Titans feeding one of those Dell 8K panels. I told him it was overkill, as the video cards are no better than mine and have limited real bandwidth. He does not use it for gaming, however.

0 Likes
Anonymous
Not applicable

BTW, that mesh/texture stuff where they lower the quality on certain objects so the game runs faster in real time as you play, via the Nvidia developer SDK.. well, if it's just lowering texture quality, that's not much of a performance gain. Try going into a game and setting textures to ultra or high, then setting them to lowest: it looks terrible but the FPS barely changes, right? Unless the meshing stuff is disabling tessellation, forcing a polygon count limit, or somehow simplifying wireframes, it's mostly going to be a waste of time and effort, because having a GPU powerful enough to draw everything on screen properly would be the better choice every time. And if you do lower the poly count and LOD complexity, disable tessellation on certain objects, or just lower everything on a couple of objects without it being too noticeable, what's the point exactly? Isn't it better to optimize your game's graphics properly during development instead of finding ways to say "the character's fingers, beard and hair are too complex to draw, so we'll make them a single solid low-res shape, but only when people won't notice, like when they're standing two meters to the left"? Wow, super cool game technology, bro. I could do that too by adjusting the LOD draw distance, texture quality or tessellation options in my game settings menu. Looks to me like Nvidia has once again reinvented the wheel.
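For what it's worth, the distance-based LOD swap I'm describing is only a few lines of engine-side code anyway. Here's a rough sketch in Python (the mesh names, triangle counts and distance thresholds are all made up for illustration, not taken from any particular engine or SDK):

# Minimal sketch of distance-based LOD selection (hypothetical names and
# numbers): pick a lower-polygon mesh for objects far from the camera,
# exactly like a draw-distance / LOD slider in a game's settings menu.
from dataclasses import dataclass

@dataclass
class Mesh:
    name: str
    triangle_count: int

@dataclass
class LODGroup:
    # Meshes ordered from highest to lowest detail, each paired with the
    # camera distance up to which it is used.
    levels: list   # [(max_distance, Mesh), ...] sorted ascending

    def select(self, distance: float) -> Mesh:
        for max_distance, mesh in self.levels:
            if distance <= max_distance:
                return mesh
        return self.levels[-1][1]   # beyond the last threshold: cheapest mesh

character = LODGroup(levels=[
    (5.0,  Mesh("hero_high (fingers, beard, hair cards)", 80_000)),
    (15.0, Mesh("hero_medium", 20_000)),
    (60.0, Mesh("hero_low (solid shapes only)", 4_000)),
])

for d in (2.0, 10.0, 45.0, 200.0):
    m = character.select(d)
    print(f"{d:>6.1f} m -> {m.name} ({m.triangle_count} tris)")

A real engine would also cull or fade objects past the last threshold, but the point stands: this is the kind of choice a graphics settings menu already exposes.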

Nvidia users probably don't know what graphics cards are, how to set and configure their game graphics settings, or how to adjust the driver config to game optimally, because they've never tried; they're clueless and just throw money away whenever given the chance. So Nvidia was like: our game devs will have to take away the choice of low, medium or high quality, because you n00bs don't understand how to adjust your graphics settings properly, so we'll make a weird mishmash of low and high quality that we've preselected for you and apply it as you game. If you upgrade your PC it will select high quality for you, and if you have a budget PC it will select low quality for you, because we know Nvidia owners can't figure it out.

0 Likes

Go look at Rage; that game has one of the best game worlds I have seen. The world is easily recognizable, so driving around is actually possible.

0 Likes
Anonymous
Not applicable

Rage 2 seems like an alright game. I will consider purchasing it. Seems fun. It looks a lot like the Mad Max game, which was lousy, but hopefully in this one you don't go scavenging for water and fuel and parts.. just, err, scavenging for money to upgrade your gear?

Rage 2's skill trees and upgrades for weapons and gear seem fun; at least you can say it's properly RPG-like in that respect.

But ehh, it's still a bit over $20 AUD here.. looks like maybe $24? So I will wait for the during-and-after-Christmas sales to see if I can pick it up. I'll probably be waiting a while for Doom Eternal to drop to that low a price, but eh, what can you do. I'm poor and already have a massive Steam library from Humble Monthly subscriptions and game bundle deals, not to mention actual Steam sales and all the freebies from the Epic Games Store every week lately. So no matter how awesome the new game is, I probably won't spend money on it until it's super affordable, as I might not get around to playing it anyway.

0 Likes

Rage is a first-person shooter. The game is often featured at 75% off. Mad Max is different in many ways.

Rage 2 is still expensive and I have not seen it discounted as of yet. Halo: MCC has been lightly discounted recently.

0 Likes
Anonymous
Not applicable

You meant 2D ray casting.. which is probably what gets branded as Nvidia RTX once it's been tweaked and fiddled with? Then yeah, 2D ray casting doesn't need much in terms of system resources.

Coding Challenge #145: 2D Raycasting - YouTube 
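To show why it costs so little, here's a minimal 2D ray-vs-wall-segment check in Python in the spirit of that video (standard line-line intersection maths; the wall layout below is just made-up sample data):

import math

def ray_segment_hit(px, py, dx, dy, x1, y1, x2, y2):
    """Return the distance along the ray (px,py)+(dx,dy)*u to the wall
    segment (x1,y1)-(x2,y2), or None if the ray misses it."""
    x3, y3 = px, py
    x4, y4 = px + dx, py + dy
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:                       # ray is parallel to the wall
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
    if 0.0 <= t <= 1.0 and u >= 0.0:   # hit within the segment, in front of the ray
        return u * math.hypot(dx, dy)
    return None

# Cast a few rays from one point against a handful of walls and keep the
# nearest hit per ray -- the whole "engine" is a few multiplications per
# ray per wall, which is why it needs so little in system resources.
walls = [(0, 0, 0, 10), (0, 10, 10, 10), (10, 10, 10, 0), (3, 2, 7, 6)]
origin = (5.0, 5.0)
for step in range(0, 360, 45):
    a = math.radians(step)
    dx, dy = math.cos(a), math.sin(a)
    hits = [ray_segment_hit(*origin, dx, dy, *w) for w in walls]
    nearest = min((h for h in hits if h is not None), default=None)
    label = f"{nearest:.2f}" if nearest is not None else "no hit"
    print(f"{step:3d} deg -> nearest wall at {label}")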

0 Likes

Long ago I made a ray-cast version of a game engine using some 128x128 textures and a simple array to determine the walls. The array was simply an index into the texture to be shown.

This is more or less how early DOS games worked, and the idea was to show how, with 64-bit, the extent of the game could be so complicated as to be impossible to solve. Recall Doom had the player run around to find keys to open doors, etc. Doom 3 had lockers and the player had to find the codes for them, so a printed list made that game slightly easier to play. Game puzzles have been used to add to the play time. Unreal was not as puzzle-driven as it was maze-driven.
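That wall-array idea translates almost directly into code. A minimal sketch of it in Python (a crude fixed-step ray march rather than a proper DDA, and the map values are just sample texture indices):

# Sketch of the "array of texture indices" idea: 0 means empty space,
# any other value is the index of the 128x128 texture to draw for that
# wall cell. A ray is stepped through the grid until it lands in a
# non-zero cell.
import math

MAP = [
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 2, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 3, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1],
]

def cast(px, py, angle, step=0.01, max_dist=20.0):
    """Return (distance, texture_index) for the first wall the ray hits."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        dist += step
        cell = MAP[int(py + dy * dist)][int(px + dx * dist)]
        if cell != 0:
            return dist, cell
    return max_dist, 0

# One ray per screen column would then pick its texture from `cell` and
# scale a 128-pixel texture column by 1/distance -- that is essentially
# the whole renderer.
print(cast(2.5, 2.5, 0.0))   # heading +x from mid-map: hits the east wall (texture 1)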

Rage screenshot in 4K UHD

Batman: Arkham Knight

0 Likes