I might as well be one of the 0.1% who think that this game should have better optimization for AMD hardware.
Is it a game thing? Sure.
Is AMD to blame as well? Most likely.
This game runs a lot better on my notebook's archaic nvidia 8600M GT gpu than on my desktop's HD5970s, with or without CrossFireX. Ok, maybe I am exaggerating a bit.
It is just a shame because my nephew has more fun playing this than me, and I am jealous. We both adore trains.
It's nice to see I'm not the only AMD-user enjoying trains
It's possible to slightly reduce the micro-stuttering by locking the fps at 30 and using double VSync; increasing the Flip Queue Size to 5 also helps a tiny bit. Remarkably, NOT forcing Anisotropic Filtering to x16 (since the game only supports up to x8) also gives quite a performance increase (more in the fps department and not so much in overall smoothness).
I didn't find a way to reduce the stutters, framedrops and hitches though, if that's even possible.
All workarounds are welcome.
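For what it's worth, micro-stutter is easier to reason about from frametimes than from the fps counter. A minimal sketch in Python, with made-up sample numbers, assuming you can export per-frame times in milliseconds from whatever frametime logger you use:

```python
# Quantify stutter from a list of per-frame times in milliseconds.
# The sample data at the bottom is invented for illustration.

def stutter_stats(frametimes_ms):
    """Return (average fps, 99th percentile frametime in ms)."""
    n = len(frametimes_ms)
    avg_ms = sum(frametimes_ms) / n
    ordered = sorted(frametimes_ms)
    # Spikes in the tail are what you feel as stutter, even when
    # the average fps looks perfectly fine.
    p99_ms = ordered[min(n - 1, int(n * 0.99))]
    return 1000.0 / avg_ms, p99_ms

# Mostly a smooth 60 fps (16.7 ms) with a few 50 ms hitches:
sample = [16.7] * 95 + [50.0] * 5
fps, p99 = stutter_stats(sample)
print(f"avg fps: {fps:.1f}, p99 frametime: {p99:.1f} ms")
```

The point: an average near 54 fps can still feel awful when 1 frame in 20 takes 50 ms, which is exactly what the fps counter hides.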
I am afraid I have the "console" disease lately, because of limited recreation time.
I just fire up whatever game plays more smoothly by default.
I don't even search for patches anymore.
What's wrong with me?
Is it serious?
Will it pass?
Have you noticed the Afterburner graph of this game? I had custom settings for this, set around medium-ish but still at 1080p. Notice the memory and core clocks keep changing. When I changed the settings to the highest in-game (the right graph) the clocks smoothed out, as did the GPU usage. It actually ran better with the highest settings in the program's graphics options.
This is on an FX-6350 and R9-290x with no OCs.
Not much help I know. Interesting though.
And who doesn't like driving trains!!!???
Yes I forgot to mention to always use full GPU clocks from within RadeonPro for the game
It indeed reduces fps fluctuations.
Seeing how awful the framerate graphs are, I can predict it is the same with frametime -> smoothness -> microstutter. The Unreal 4 engine should fix most issues in the future, as expected.
Good feedback though.
I'm not sure about Unreal 4 though, because of Gameworks...
Sent from my Windows Phone
Speaking of Gameworks, Far Cry 4 must be the only game where I could use nvidia god rays without much performance impact.
As long as I didn't enable soft shadows at the same time, that is...
Every other game seems to get an unnecessary performance hit from Gameworks, even on nvidia cards.
I have a feeling you're referring to Fallout 4 in this case? Haven't played it yet myself but read about that performance impact.
It makes me wonder why Gameworks is required to make decent godrays. Weren't those in the Crysis series nice enough? Or even the Metro games?
I can understand the need for middleware to make certain effects easier to achieve, however it's a damn shame it's PhysX all over again: it only runs great on the latest nVidia hardware.
And since Fallout 4 (again) has an overhead issue, it makes me wonder why they didn't go for DirectX12 instead...
Your instincts serve you well... the force is str.... oh, I got a little carried away there. I am preparing for the game & the movie.
About Gameworks, I agree with you, but keep in mind that nvidia owns the "technology" and will always use it to get an advantage and cripple performance for AMD cards a bit. Same with PhysX. Features that may or may not "enhance" your gaming experience, but are marketed and overhyped accordingly.
Far Cry 4 gave you alternative options to use and get a visual quality that is, more or less, on a par with the Gameworks effects. I also found the performance impact acceptable (after some serious patching...) for the Gameworks effects in that game, and I actually liked the yellowish filter/lighting of the god rays. Some people didn't like it, but the choice was there and we all got the same amount of fun at the end of the day.
I also remember preferring HBAO in Far Cry 3 over AMD's HDAO, but I was fine with either of them, performance-wise and aesthetically.
Witcher 3 gave you the (widely unnecessary) Hairworks option to enable or not, but it didn't affect the rest of the game technically.
Bottom line is, if the "advanced techniques" are not too attached to the game's core functionality, it's perfectly fine by me.
If they affect performance and the user can't do anything about it other than buying a specific AMD or nVidia card, then it bothers me.
Concerning Fallout 4, I guess that it began production with DX11 in mind, but I also believe that DX12 needs more optimization.
Come to think of it, it needs a newer generation of gpu cards to be introduced to the market, or at least that is what the companies have planned, to maximize their profits...
I agree that Gameworks should be entirely optional, but the software alternative should be decent to start with.
I remember Arkham Asylum for having hardware PhysX: if you didn't activate it there wouldn't be ANY volumetric fog or cloth at all, not even cheap software versions.
The Witcher 3 looks fine without Hairworks, and HBAO+ doesn't seem to have a bigger performance hit than on nVidia cards. I don't have Far Cry 4 yet so I can't judge.
I wonder how it would have been if Intel hadn't shelved HavokFX...
I'm curious in what way DirectX 12 needs more optimization. There aren't many games supporting it at the moment.
So the question is, why isn't there more support in recent or upcoming games?
Dx12 utilization needs to mature.
For that to happen, the developers must invest time, money and resources.
The motivation is there. The know-how too.
The market waits... for the new cycle... for the "true" dx12 hardware. That is my own opinion though, and as such I might have the wrong impression, or approach the matter from a different or irrelevant perspective.
I am only a gamer.
And I suck at most games too!!
To get back to TS2016: it appears Crossfire DOES work (though partially). I tested it on the Köln - Koblenz route, and where I get 30-40 fps using a single card (at a VSR res of 3200x1800, VSync on and fps locked at 60, while crossing ze Zjerman Countryside), I get 50-60 fps using Crossfire. However, while testing an unoptimized mess like the NEC New York - New Haven route I get just 25 fps right after starting the Acela Express to Washington Career Scenario, with CFX on or off... and a slight flashing on darker textures when it's enabled.
There is a huge difference between "working" and "scaling properly".
The latter means approximately 70-90% more performance on the same in-game gfx preset, within the vram limit of the card (to avoid a potential bottleneck). Which is quite reasonable to expect from a technology (CrossFireX/SLI) that is supposedly mature by now for both developers and driver engineers.
I don't want to go off-topic again but I'll use Witcher 3 as a comparison, it being such a good modern example.
Whether I use ultra or low presets in-game, the rendering load is equally distributed among 2 or 4 gpus (in my case - 2 x HD5970).
That means that if I use only 1 gpu on high settings, I get 99% usage, thus a bottleneck, and consequently low fps.
If I use 2 gpus, I get ~60-80% usage on both, using the same high settings, but the same low fps.
And if I use 4 gpus I get ~35-45% usage on ALL 4 gpus, using the same high settings, but exactly the same low fps as the two previous setups.
Considering that I have a 1gb video memory limitation, I then test on low gfx settings and monitor vram usage.
I am now surely well within the margin, so I have eliminated the possibility of a vram bottleneck. The CPU is not sweating either, by any means.
A "good scaling" crossfirex profile would immediately allow my gpus to output more frames and reach the 60 fps limit I have, because vsync is enabled (I use 1080p and a 60Hz refresh rate).
That is not the case though.
I get higher frames on 'LOW' settings, sure, but not even close to 60 fps. More like ~40.
What I am saying is that using 4 gpus outputs the same frames as if I was using only ONE gpu.
That means zero scaling.
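To put a number on that: here is a rough way to express scaling efficiency. The fps figures below are purely illustrative; a "healthy" profile is the 70-90%-per-extra-card range mentioned earlier.

```python
# Rough scaling-efficiency check for a multi-gpu setup.
# The fps numbers below are illustrative, not measurements.

def scaling_efficiency(fps_single, fps_multi, gpu_count):
    """Fraction of ideal linear scaling achieved: 1.0 = perfect, 0.0 = none."""
    speedup = fps_multi / fps_single
    return (speedup - 1) / (gpu_count - 1)

# What I describe above: 4 gpus, same fps as one -> zero scaling.
print(scaling_efficiency(40, 40, 4))  # 0.0
# What a good CrossFireX profile on 2 cards should look like (~80%):
print(scaling_efficiency(40, 72, 2))  # ~0.8
```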
Getting back on topic, a gpu configuration like yours, Cornugon, should be more than enough to play a game like this, even on ONE gpu.
Enabling crossfirex would theoretically allow you to get a lot more consistent frames, not necessarily smoother, if it actually scaled well.
The game heavily favors nvidia hardware and, more importantly, software at the moment, so playing with AMD hardware is a bit frustrating.
Seeing how the market works, the absolute fun is only for the few lucky ones who have 2 dedicated systems, one with an nvidia gpu and one with an AMD gpu. There is no other 'permanent' solution, unless gpus of different brands start working together on dx12 and above APIs, at a driver level.
Wanna bet whether that is ever gonna happen in the near future?
I won't bother analyzing how TS: 2016 performs for me on different routes and settings... it's bad.
You're absolutely right, Backfire. The game should work smoothly on my setup with only one GPU in use. That's obviously not the case. However, if I replaced my R9 290X with a similarly powerful GTX780Ti I would get way better performance.
I'm guessing the CPU overhead is way too much for this game on AMD. Similar to the more recent Fallout 4.
I'm curious if the Witcher 3 CFX stutter still hasn't been fixed...
If you plan on upgrading, 2 x gtx 980 in SLI or a single gtx 980 ti (my current favorite candidate to upgrade mine) would be the ideal solution for these games, especially if you game at 1080p and 2k native resolutions.
A FuryX is also a very powerful single-gpu solution for all dx11 games, and it's expected to be way more competitive when dx12 becomes the new standard. Not for TS 2016 though.
I'd definitely trade your current setup for one of the above and be done with the multi-gpu era.
Or just wait for the new 14nm architectures that are expected in Q4 2016 - early 2017.
I'm not going to upgrade any time soon. The earliest window for me would be when a single GPU is significantly faster than my two R9's combined. I don't know if my next card will be an nVidia, but since the current year obviously favours them, I very likely would have gotten a 980Ti if I were upgrading now. I play games sponsored both by AMD and nVidia, but the last 'AMD' title I thought was worthwhile was Dragon Age Inquisition. I'm not condoning the crapworks guerrilla tactics, but looking at other stuff like driver overhead, which is probably going to change when DirectX 12 gains more ground, it just seems the most sensible thing to go for the competition...
Well, an overclocked (~1400 MHz) 980 ti often comes very close to, or exceeds, the performance of 2 980's (at default clocks) in many recent demanding games.
Same with the titanx and the furyx. And that is the reason why I like this particular gpu. Affordable, and it packs a serious punch with everything that is thrown at it. With the added advantage in nvidia-favored games. That's where the furyx falls behind in dx11 games.
You need a very well scaling multi-gpu profile to benefit more from a multi-gpu setup than from a single powerful gpu card, like the 980 ti and the furyx.
That is why 2 R9 290X gpus will often not be enough. Or smooth enough.
So, I understand "significantly faster".
As long as it doesn't stay in theory.
Crossfirex and SLI are not as useful as they once were, because of limited interest and support from developers.
And because high/top end gpu cards are getting strong enough to drive 2k and 4k games at higher gfx settings and a decent framerate.
TS 2016 is one of the many (unfortunately) striking examples to remind you that the multi-gpu technology won't help unless there is actual support.
The highest end GPUs still aren't powerful enough to provide a steady 60fps framerate in 4K on their own. Paired with another one, most of them actually are for most recent games, but, like you said, only in theory...
I don't blame AMD for that particular development, nor do I blame the competitor, but developers should understand the need for proper multi-GPU scaling in a time when 4K (and VR in the near future) are 'hot'. But the problems with the lack of (day-1) multi-GPU support seem to be getting worse and worse.
I'm curious how Assassin's Creed: Syndicate will turn out in this regard since Ubisoft promised proper multiGPU scaling for both nVidia and AMD hardware... It would be a welcome change especially looking at Ubi's (recent) track record...
For the record: while TS2016 doesn't run too well in CrossFire, its 'source', Rail Simulator, actually did on my 4870X2 back in the day...
"The highest end GPU's still aren't powerful enough to provide a steady 60fps framerate in 4K, when single"
It's not that far ahead actually.
Most AAA games that have decent optimization are playable at ~40-50 fps on the highest settings (excluding MSAA or SSAA of course) right as we speak, on a 980 ti and a FuryX.
That is not negligible at all, is it?
Just look at how well GTA V performs on these cards:
Sure, you can only see the titanx here, but an overclocked 980ti would definitely surpass the 51 fps mark to something more like ~55-60 fps. The FuryX would be close to 50-55 as well, right?
Now take a look at Fallout 4:
A stock-clocked 980ti renders nearly double the frames of a stock R9 290X!!! The FuryX is not doing badly either, for a card that unfortunately cannot be overvolted/overclocked much.
Even if CrossFireX scaled perfectly in this game, I would still prefer the single-gpu solution over the 2 290x adapters, for the smoothness alone.
Now imagine if the rumors about the new generation of high-end dx12 gpu cards are true, which would mean 4k performance closer to 60 fps all the time.
Maybe even higher, who knows ?
I strongly believe that it will be possible pretty soon.
About AC: Syndicate, I think it will be miles better than its predecessor in the technical field and performance optimization.
Here is an interesting article about this upcoming game that is easy to stumble upon searching the internet:
As for TS 2016, if I could afford a laptop with a powerful nvidia mobile gpu, I would get it just for this game.
I can only dream though... and replacing those HD5970s (or my whole system for that matter) is a priority for me now.
Have you tried Sweetfx or ENB Series? Both use custom dll injections which you can edit yourself and may help with performance. Hope this helps.
Thanks for the suggestion. I didn't try either of those. I fail to see though how image enhancing/altering mods could actually improve performance compared to the vanilla game.
What does seem to improve it just a tiny bit is lowering the CPU core affinity to four cores (in my case). Disabling Hyperthreading doesn't seem to improve things.
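In case anyone wants to script that instead of setting it via Task Manager every launch, here's a hedged sketch. The stdlib call below is Linux-only; on Windows the equivalent would be psutil's `cpu_affinity` or `start /affinity F RailWorks.exe`, and the four-core choice just mirrors what worked for me:

```python
# Pin the current process to the first four logical cores.
# os.sched_setaffinity is Linux-only stdlib; on Windows use
# psutil.Process().cpu_affinity([0, 1, 2, 3]) or `start /affinity F`.
import os

def pin_to_first_cores(n=4):
    cores = set(range(min(n, os.cpu_count() or 1)))
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, cores)  # 0 = the current process
    return cores

print(sorted(pin_to_first_cores()))
```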
Some games are developed with nVidia in mind; using these types of dll injections and altering the settings may increase performance. Have a look online for 'performance settings for ENB Series' and apply them to the directory of the application in question. Hope this helps.
Maybe GeDoSaTo could help?
Why not send a message directly to Durante?
I am not being sarcastic.
Software downsampling on AMD cards is even slower than using the in-game SSAA settings in Train Simulator, which are already really slow, especially compared to nVidia (again...).
The weirdest thing is that my current R9 290X wasn't even that much faster in TS than the 6990 running on 1 GPU...
The issue seems similar to the current Fallout 4 issues, caused by draw calls being handled in an inefficient, heavily single-threaded way, something nVidia seems to excel at.
There's only so much AMD can do to alleviate this, but the same goes for Dovetail needing to completely overhaul the engine or change to a more efficient one (they chose the latter option).
I agree with your reply, and most issues should definitely be fixed on the game's core engine, but keep in mind that GeDoSaTo is not only about downsampling.
Yeah I know GeDos isn't just about downsampling. I'll check tonight if it has any additional useful new features compared to the pre-VSR days. Thanks for the tip mr. Fire!
Well, it sure is worth a try.
Contact Durante if you can. Maybe you'll get a useful pro opinion.