The Thunderbolt protocol is Intel proprietary, and building a new, similar protocol would be a waste of time, since the USB protocol is doing fairly well and will reach TB3 speeds in the coming years.
I mean, Intel said not so long ago that it would release the Thunderbolt architecture royalty-free to allow it to spread widely among the user base.
I still haven't heard anything about it since, so I'm not sure it is the main focus at Intel headquarters at the moment.
Put simply, if TB3 remains so poorly adopted compared to USB, one day you will not even need TB3, since USB will offer the same.
So Intel, IMO, needs to make a choice: continue Thunderbolt development and release it to the public, or let it die slowly against the USB protocol.
Thunderbolt is more like a fabric. Thunderbolt is not about speed. It is only 22Gb/s for PCIe (an artificial limitation, and nobody aside from Intel knows why). USB 3.2 is already at 20Gb/s, but that will look more like 12Gb/s after overheads, and USB is getting more and more inefficient. With Thunderbolt 3, you can actually hit 22Gb/s (2750MB/s) by running the dd command against an NVMe SSD, or by running BufferBandwidth.exe from the AMD APP SDK 3.0 against a Radeon card.
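As a rough sketch of the kind of dd throughput check mentioned above (the file path and size here are placeholders; on a real Thunderbolt-attached NVMe SSD you would point dd at a file or block device on that drive):

```shell
# Hedged sketch of a sequential throughput check with dd.
# /tmp/tb_test.bin stands in for a file on the Thunderbolt-attached NVMe drive.
dd if=/dev/zero of=/tmp/tb_test.bin bs=1M count=256 conv=fsync 2>/dev/null

# Read it back; dd prints a MB/s figure on stderr when it finishes.
# On a real TB3 NVMe SSD this is where figures approaching 2750MB/s
# would show up (page-cache effects aside).
dd if=/tmp/tb_test.bin of=/dev/null bs=1M
```

Note that reading a file you just wrote will largely hit the page cache, so for a genuine device measurement you would want a much larger file or direct I/O.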
Issues with USB 3.X:
- Overheads - USB 2.0 has a theoretical 60MB/s, but we all know the ~35MB/s limit when transferring files. The same proportion seems to apply to USB 3.X
- Poor latency for any real-time application - the proof is that people buy Thunderbolt audio interfaces, because it is all about latency, not speed
- That latency will destroy random performance on new solid state storage devices
- Sprays 2.4GHz interference from the connector, and if the cable is poor, from the cable too, wiping out Wi-Fi in that band. It was capable of bringing mine down from hundreds of Mb/s to well below 10Mb/s
- USB is only ever backed by PCIe (adding another layer), and most vendors choke the ports (many dual-port 10Gb/s cards have a PCIe 3.0 x1 uplink, which only provides 8Gb/s shared between both ports)
- There are a lot of devices that were never made for USB, such as graphics cards, some capture cards, 10GBASE-T NICs, etc
- I have seen people claim they can measure the difference in CPU usage and power consumption between USB and Thunderbolt 1GBASE-T Ethernet adapters (the Apple Thunderbolt Ethernet adapter vs its USB 3.X counterpart). If you can tell the difference in efficiency at 1Gb/s, then it will be 10x as obvious with hypothetical 10GbE adapters
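Taking the overhead point in the list at face value, here is the back-of-the-envelope arithmetic. Note the 35/60 ratio is the observed USB 2.0 figure quoted above, not an official spec number:

```shell
# Apply the observed USB 2.0 efficiency ratio (~35MB/s real out of
# 60MB/s theoretical) to nominal USB 3.x signalling rates.
awk 'BEGIN {
  ratio = 35 / 60
  printf "USB 3.1 Gen 2 (10 Gb/s): ~%.1f Gb/s effective\n", 10 * ratio
  printf "USB 3.2      (20 Gb/s): ~%.1f Gb/s effective\n", 20 * ratio
}'
```

The ~11.7 Gb/s result for USB 3.2 lines up with the "more like 12Gb/s after overheads" claim earlier in the thread.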
In conclusion, it is not all about the speed.
Yes, it would be much better if Thunderbolt were not proprietary. I do not like Intel at all, but Thunderbolt 3 has changed everything for me. It is not ideal, but it is by far the best we have got. We can only hope that Intel finally releases it as an open standard one day. If they keep controlling it, I guess we have OCuLink 2, which is ugly, bulky and non-reversible, but at least an open standard.
Titan Ridge is showing promise, with myself and others showing it can work in unsupported systems with absolutely no messing around (somebody even showed it working on AMD). The only catch is that hot plug will not work unless you use Linux - on Windows the devices need to be attached at boot. See the forum thread at egpu.io, "List of Intel Titan Ridge Thunderbolt 3 Devices", for a lot more on that. All it needs to work is a patch for the OS to reserve PCI bus numbers when it sees the card - which can be done in Linux because it is open source (although you actually do not even need to modify the source code). Windows is neither open source nor built with this kind of use in mind.
If it helps, Thunderbolt can encapsulate packets from just about anything - it just so happens to be PCIe and DisplayPort packets right now (both of which are routable). In the future, there could be native Thunderbolt devices that cut out the PCIe layer altogether, or perhaps a more modern standard will come forward and be adopted by Thunderbolt. Thunderbolt could route USB packets, but there is no point - it is much more efficient to route PCIe through to a USB controller at the other end. Anyway, PCIe is showing its age, and the severe delays to the PCIe 4.0 and 5.0 standards have emboldened the efforts of competing interconnects.
Right now it seems that Intel's plan is to slowly lower the bar for using Thunderbolt 3, instead of officially supporting AMD. I think it needed certification before in order to:
a) recoup R&D costs
b) manage the fact that it had to be embedded in the BIOS firmware
c) avoid the bad reputation that dodgy implementations would earn it, because so few people know what they are doing with stuff this low-level
It seems that only after so many years, they have convinced Microsoft and Linux to take them seriously enough to make the difficult changes required to offload the functionality from the BIOS into the operating system. It is much cleaner - I have played with the Titan Ridge add-in cards. Perhaps on the next iteration, they will clean up all of the loose ends with Windows and people will just learn they can insert them into any motherboard at all - without Intel ever acknowledging it.
If you want to help, tell Microsoft via the Feedback Hub (Windows 10), Facebook Messenger, forums and any other way you can think of, to provide a Windows option equivalent to the pci=hpbussize=N parameter in Linux, to reserve PCI hotplug bridges at boot. That would allow Titan Ridge add-in cards to work flawlessly on any system - provided you short two pins on the GPIO header, as shown on the egpu.io forum mentioned before.
Thunderbolt has always been about speed - from the first iteration in 2009, developed to suit Apple's needs, to nowadays.
I understand that for some random users that is a long time ago, but for folks who follow tech as a passion, memories do not fade so easily.
Thunderbolt was and will always be about speed; it is one of the major features advertised on the Intel site - the 40Gb/s TB3 tag is written almost everywhere the Thunderbolt certification is used.
I wonder how you could make such a casual statement, when from the beginning of Thunderbolt development it has always been about getting better I/O speed.
And about separating Apple from the other products, which at the time used the USB standard - obviously for a premium price!
If one looks closely, nowadays Thunderbolt has failed against USB, since Apple now uses a combo USB-C 3.1/Thunderbolt port!
Then, saying that USB is getting inefficient is like saying that the guys working at the USB-IF are incompetent engineers who failed at developing USB 3.1.
To me that is not the case: the USB protocol is getting better and better, always delivering more as an open standard - the new power management in USB 3.1 is a great example.
So again, I would not dare to throw out such a casual statement from nowhere, especially when comparing a free protocol used by millions to a proprietary one used by few.
Thunderbolt was developed initially to suit Apple's premium needs, and was then granted growth thanks to Apple's market coverage success!
The truth about Thunderbolt is simple: it has always been positioned as a premium protocol for fast and secure I/O, it still remains proprietary, and it requires decent silicon to do so.
Thunderbolt is destined to lose to USB, simply because the engineers at the USB-IF develop something for the masses, instead of a premium product for the few.
To be honest, I am not sure these guys would stop developing the USB standard because it is becoming inefficient - that is one of the most laughable jokes I have ever heard!
I am happy that Intel is finally getting past the Apple Thunderbolt lock-in, especially with the new Titan Ridge chip working more easily on other devices, like AMD motherboards.
But I would not call in Microsoft or any other company to fix Intel's own compatibility, firmware and general issues, since Intel is big enough and capable enough to release something that works if it wanted to. Still, Thunderbolt is going to die, IMO: the day USB reaches the same speeds, with a universal standard protocol and connector and no premium price, I do not think we will need Thunderbolt anymore. Until that day, Intel had better move forward fast, integrating more and more USB/Thunderbolt combo controllers for professional devices that need them, such as eGPUs.
Your post is quite audacious as always!
It is somewhat about speed. Not entirely. But if speed is everything, then why are there so many things that USB simply cannot do?
USB 3.2 is 20Gb/s (vs Thunderbolt 3 at 22Gb/s for data), and that does not mean it can host external graphics.
Unless something else comes along for external graphics, Thunderbolt is here to stay. External graphics is starting to catch on, and for people like me it works wonders.
I don't know why you are so keen for it to disappear, at least when there is simply no alternative. You at least need to specify an alternative or make the desire for Thunderbolt to disappear conditional on there being an alternative.
Maybe USB will work for all the things you do. But for others, it is extremely important - not just for external graphics. Number two is single-cable docking with video, and number three is low-latency audio interfaces for musos.
Edit: to address the 40Gb/s being advertised, that will have been because it is the easiest metric to appeal to the mainstream. Even people who buy external graphics probably mostly do not realise that the low latency is equally or more important than raw speed.
I can put my eGPU at the end of a Thunderbolt 3 daisy chain and measure the throughput. Even with three docks in between it and the computer, I can still measure 2750MB/s - no degradation. Yet my framerate drops in games - because the latency got worse. So actual workloads that require a round trip response before they can do the next thing will become slower because they are spending more time waiting for the response. If you try USB latency with a graphics card.... well let's not go there.
I am not keen for it to disappear at all; I am just being honest about long-term tech evolution, looking at what we have gotten over the past 10 years.
I like the Thunderbolt features a lot, but compared against the USB protocol, USB has given consumers, companies and technology in general so much!
So I am more keen to support the USB protocol, which has brought me so much over the years, even over a premium but very solid feature like Thunderbolt.
USB cannot replicate what Thunderbolt does, first of all because the Thunderbolt IP is heavily patented.
Also, I remember that the Thunderbolt protocol could also work over optical fiber, to overcome the 40Gb/s limit imposed by copper wires.
So as said above, Thunderbolt requires premium computation and signalling to work efficiently at high speeds; that is maybe why it is still limited to 40Gb/s and has not grown further.
Then if one compares it to the simpler computation and signalling required for USB 3.1, one can already appreciate the USB 3.1 speed feat.
Comparing USB 3.1 to what the premium option needs to deliver its advertised speeds, I would suppose that the actual USB signalling is already good enough and can be improved further.
That is why I am pretty confident the USB protocol will continue to grow and reach Thunderbolt speeds sooner or later.
I also agree with you about having an alternative, even a premium one - again, I like Thunderbolt technology and find it good.
But I like it less if it becomes a premium lock-in, especially for companies that would need this feature for their new products but refrain from using it because they struggle with the cost and restrictions.
About latencies: I wonder if it would make sense to integrate the Thunderbolt controller directly into the CPU SoC, as we actually do with USB controllers.
This would eliminate a lot of the latency, since the controller would be directly linked to the CPU's PCIe root and to the iGPU's DisplayPort.
It would certainly help with managing latency and bandwidth bottlenecks.
With a USB controller integrated into a CPU SoC used for eGPU purposes, I suppose performance would be lower and latency higher than what a PCIe 3.0 x1 link can deliver.
It is true, though, that I could not find any eGPU implementations using the USB 3.1 protocol, but I suppose there would be a way to feed PCIe into some number of USB controllers to build a PCIe x4 link.
There are various eGPU solutions available at the moment; not all use Thunderbolt 3.0/2.0, but most do.
I am interested in what is available right now.
I want to have the flexibility to purchase an eGPU box, add the GPU I want, and have it run on Thunderbolt 1.0, 2.0 or 3.0.
That way I can use it with laptop or PC.
I would like to be able to add a Thunderbolt 3.0/2.0/1.0 capable card to a PCIe3.0x4 slot on an AMD Ryzen or Threadripper motherboard so I can add more GPUs.
I do not want to have to use a full PCIe3.0x16 slot to add a Thunderbolt 2.0 eGPU - that just does not make sense.
I can run an additional GPU over a PCIe2.0x1 mining adapter; for some applications it works, but such low bandwidth is not ideal.
AMD XConnect in Windows 10 does work with Thunderbolt 2.0, but there are some limitations on PC, as I think it has been implemented with laptop use in mind. For example, Crossfire gets disabled once you connect the Thunderbolt eGPU, ReLive gets turned off, and the Radeon Overlay behavior seems to get even more random than usual.
At the moment there is no other choice for running a Thunderbolt 3 eGPU enclosure/PCIe card; you can visit the site egpu.io if you are interested - it reviews the available enclosures.
Thunderbolt 2 is now outdated and lacks a lot of the features available on Thunderbolt 3; that is why AMD XConnect does not work as intended, I suppose.
To sum up quickly, Thunderbolt 3.0 = 2x Thunderbolt 2.0 controllers fused together, when it comes to bandwidth capability.
So it is quite normal that you experience some limitations when using a Thunderbolt 2.0 adapter with Thunderbolt 3.0 and so on.
The best you can do is upgrade to a Thunderbolt 3.0 enclosure and an Alpine/Titan Ridge PCIe card; you will get a nice performance bump over Thunderbolt 2.0.
Check the forum I cited above if you are interested in this stuff.
For pure laptop eGPU enthusiasts, the M.2 to PCIe x4 implementation is the best for raw performance output, but it totally lacks wide compatibility and user friendliness.
That is why Thunderbolt 3 came in so handy, even if you lose some bandwidth capability.
RE: Thunderbolt 2 is now outdated and lacks a lot of the features available on Thunderbolt 3; that is why AMD XConnect does not work as intended, I suppose.
No, I don't think so. Thunderbolt 2 is PCIe2.0x4, and I can run Crossfire OK on Windows 8.1 64-bit, which does not have XConnect and just sees the Thunderbolt 2.0 eGPU as a normally connected GPU. It does provide a reasonable Crossfire scaling benefit in some games when used as a secondary card linked to a primary GPU on PCIe3.0x8.
I just think Crossfire is disabled in Windows 10 XConnect because - hey, who would be crossfiring on a laptop using an eGPU, and why would anyone connect an eGPU to a PC?
Or perhaps you can crossfire with XConnect on Thunderbolt 3 on a PC? Sure it is PCIe3.0x4 - likely to be of more benefit in Crossfire because of higher bandwidth. I do not know that answer yet. I was going to try to use a Thunderbolt 3.0 eGPU to connect an AIB RX Vega 64 to my PC because they are so big they would not fit in my PC case without having to remove other components in the PC. Problem is they were too big to fit in the eGPU case I wanted to use anyhow.
I own a few Thunderbolt 2.0 enclosures for eGPU & run them every day. I can connect them and run them from Laptop with Thunderbolt 1 port, for gaming if I want.
I know about the EGPUIO site thanks, I have been meaning to go back there to ask for some help / advice.
RE: As pure laptop eGPU enthusiasts, the 4x pci-e to M.2 SSD implementation is the best for raw performances output, but totally lack of wide compatibility and user friendliness.
Are you talking about the EXP GDC Beast that you can use by removing the wireless card from your laptop and connect via the wireless interface?
Built one of those as well. That wireless interface is much lower bandwidth than Thunderbolt 1/2/3. I am well aware they are a pain to get working. Getting them to run with > 2GB RAM is sometimes difficult.
Well, you are right - I am not sure Crossfire would work with XConnect; however, I am pretty sure the multi-GPU compute features work fine using an eGPU.
I meant that maybe XConnect would work better with the latest Thunderbolt 3.0 revision, since it is the main one used at the moment.
It was also just to point out that upgrading from Thunderbolt 2 to Thunderbolt 3 is somewhat worth it; there is a nice bandwidth performance gain.
But only if both the enclosure and the PC support the Thunderbolt 3 controller.
There is no point upgrading to a brand new Thunderbolt 3 enclosure if it is used with an old Thunderbolt 2 controller on the PC.
And yes, the other way is to use something like the EXP GDC, but connected to the M.2 SSD PCIe x4 port instead of the WiFi PCIe x1 slot.
Not all laptops have an M.2 SSD slot with an unlocked PCIe x4 link and an unlocked BIOS, unfortunately.
Still, the implementation is really not user friendly even when it works well, since you need a PCIe riser coming out of the M.2 slot.
I run an R9 270 using the EXP GDC with a 3630QM; it is enough to play AAA games decently at 720/1080p with low/medium settings.
Again, it depends what kind of laptop is used for the eGPU build; some are really easy to work with and some need more time spent figuring out the right way to do it.
In your case, maybe you could try using a simple PCIe x16 riser that comes out directly from the case.
That way you would build your own eGPU enclosure that fits your needs, able to sit nearby or on top of your small case.
Hi - RE: XConnect would work better with the latest Thunderbolt 3.0 revision. Yes, it should run better with Thunderbolt 3. Thunderbolt 2.0 is not officially supported by AMD XConnect on Windows 10 - only for experiment, proof of concept and testing. Overall it works fine for compute and for Thunderbolt 1.0/2.0-equipped laptops. My PC motherboards have Thunderbolt 1 or 2. I would need to purchase a Thunderbolt 3 card and enclosure, and again, yes, the performance will be better with the higher bandwidth.
RE: I run a R9 270 using the EXP GDC with a 3630QM, it's enough to play decently AAA games at 720/1080p with low/medium settings.
O.K. so I currently run a heavily modded laptop with i7920XM with HD7970OC 6GB OC or R9 280X 3GB OC.
I run from Low Bandwidth Wireless connector.
My laptop is fully enclosed - and looks as it did before I modded it, apart from a flat black tail exiting the rear with a connector for the EXP GDC Beast.
The gaming performance is surprisingly good.
The games take some time to load data into the VRAM, but once it's in there, it's good to go.
How much VRAM memory does your R9 270 have exactly? - Is it 2GB of VRAM?
You might want to see what happens if you connect a card with more VRAM.
I am pretty sure your setup runs well; I was also amazed by the overall performance and gameplay over a simple PCIe x1.
Yes, my R9 270 is 2GB, and if I were to upgrade I would go, like you, with a 280X; the R9 270 is not enough.
The CPU is enough to output a decent framerate if it is relieved a bit by a stronger GPU.
That is why, like you, I would go with something 280X-like; an RX570 would begin to be too much for the setup.
For an enthusiast, getting a simple M.2 x4 to PCIe x16 riser on a modern laptop lets you run and profit from a high-end GPU like a Vega or 1080 without issues.
Yes, eGPU on laptop needs to be made much easier. I don't think you necessarily need Thunderbolt 3.0 Bandwidth, although more bandwidth is better.
There should be other Plays.TV videos near to that link I sent to you, where I ran some other games and benchmarks using the EXP GDC Beast via laptop wireless port.
I also have a video showing an HD7970OC 6GB and/or R9 280XOC 3GB running Doom at Ultra/Nightmare settings somewhere - that is over a PCIe2.0-USB3-PCIe x16 mining adapter. Again, the performance is amazing given that low bandwidth and those old GPUs.
As long as enough game data gets loaded to VRAM at game launch, and you have enough VRAM, the games seem to run very well.
The game load times can be longer than running from a normal PCIe3.0/2.0x8 slot.
Anyhow I might see you on EGPUIO forum as I have a few questions about running EXPGDC Beast with upgraded to 16GB->32GB RAM on my laptop.
One issue is my laptop has a pre-GCN AMD discrete GPU, so Crimson/Crimson ReLive/Adrenalin will not install on the laptop and I am stuck running very old AMD drivers for gaming (~ the last AMD Catalyst driver).
There are a few examples of me running games and testing Plays.TV Video Beta Recorder - I was one of the beta testers for the recorder.
one example here:
colesdav - Test 13. Crysis 2. Plays.TV Beta Recorder on Customised Laptop with i7-920XM processor and Wireless Adapter c…
I did not record the FPS in the game because the Plays.TV CPU overhead was high at the time. Without running the Plays.TV recorder I was getting ~45-60 FPS in Crysis 2 with high settings.
You sound like you have hopes for improvements.
- At least Thunderbolt 3 is a superset and still offers all of the USB 3.1 and alt-modes that a regular port does, so it caters for the masses too.
- If efficiency means nothing to you then you are blind. Moore's Law is pretty much dead now, and to get faster, we need to become more efficient - otherwise there will be more and more overheads, bottlenecks and wasted power.
- Just like how Intel with massive 28-core single die CPUs starts to have a lot of issues with inter-core communication - it becomes a mess and starts to spend a higher proportion of energy on management than actual processing.
- I did not realise CPUs had USB integrated - only PCH, that I knew of.
- You are aware that Thunderbolt 3 is 2x 20Gb/s differentially signalled links? There is no way copper will allow for 40Gb/s over any appreciable distance with current and affordable technology. Even 40GbE DAC cables for QSFP are 4x 10Gb/s links. Intel should have stayed with Thunderbolt as Lightpeak. It is not like the extra cost could have been noticed with their vendors gouging prices anyway.
- To have USB, you need PCIe to back it - why not cut out the extra subsystem and simply deliver the PCIe?
- Why could Thunderbolt not become as ubiquitous as USB? There is a chance that Intel will eventually release the specification and allow for third-party controllers. If they do not, then Thunderbolt will eventually have to be replaced.
- Isn't it about time that they started from scratch? It does not have to be Thunderbolt, but I don't really enjoy using something advertised at 10Gb/s but actually being 6Gb/s after overheads. And that is if you are lucky - most controllers are not backed with enough PCIe, anyway.
- Say that Thunderbolt was actually released to the public domain - it would provide an excellent bridge between USB and Thunderbolt - as the Thunderbolt port supports USB, allowing people to slowly transition out of their USB devices whilst enjoying the new technology.
In my mind, the ideal scenario is PCIe redesigns to be hotplug friendly without the assistance of Thunderbolt. It would need to ensure IOMMU is always used and make it electrically safe. Designing it so the same signal can be routed through a different connector externally.
Part of my motivation is how we are PCH bottlenecked and I can feel this closing in - if nothing is done then we are going to be choking on I/O resources.
Also, there are signals that have the characteristics that they can deliver ultra-low latency over long distances (fiber) and have the throughput to drive ultra-high resolution displays (I am thinking InfiniBand). If we unified PCIe and DisplayPort to be the same thing, then every single computer becomes a video capture card for free. Then we can use laptops as portable monitors.
I might sound a little crazy, but if there is a possibility, I want it to be available. More choices and freedom - which, counterintuitively, comes from reducing the number of standards we use.
I know we cannot simply throw away USB - but that is why Thunderbolt 3 acts as USB - so we can have the best of both.
Also USB *could* replicate Thunderbolt (except that such a radical change would make it no longer USB). Thunderbolt simply implements a PCI bridge, defined in the PCI standard (while making it convenient and hot-pluggable). And most of the work Intel have done with Microsoft and Linux to support native PCI enumeration by the operating system (allows Titan Ridge to work on AMD) is not Thunderbolt-specific. It can be re-used. The Thunderbolt proprietary stuff is only the protocol between their controller chips. The rest is all standard.
My last point is to say that if it *is* all about speed as you assert, then here is why USB cannot win without becoming more efficient. Even if we assume all vendors actually back USB 3.2 with enough PCIe lanes (highly doubtful), then we get about 12Gb/s out of 20Gb/s after overheads. If USB doubled to 40Gb/s then we are getting 24Gb/s after overheads. So about in line with Thunderbolt's 22Gb/s for data - but then Thunderbolt has the rest to use for DisplayPort, which means Thunderbolt could be used as a single cable docking station, whereas USB has to use DisplayLink, which incurs CPU overhead, causes driver issues and is generally sub-par.
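The arithmetic in that comparison can be laid out explicitly. This uses the same assumed ~60% efficiency figure from earlier in the thread, and the DisplayPort remainder is simply the 40 Gb/s total minus the 22 Gb/s data ceiling:

```shell
# Compare a hypothetical doubled USB (40 Gb/s at the thread's assumed ~60%
# efficiency) with Thunderbolt 3's split of 22 Gb/s PCIe data plus the
# remainder available for DisplayPort.
awk 'BEGIN {
  usb_nominal = 40; efficiency = 0.6
  tb3_total = 40; tb3_data = 22
  printf "Hypothetical 40 Gb/s USB: ~%.0f Gb/s effective\n", usb_nominal * efficiency
  printf "Thunderbolt 3: %d Gb/s data + %d Gb/s left for DisplayPort\n", tb3_data, tb3_total - tb3_data
}'
```

So even a doubled USB only roughly matches Thunderbolt's data channel, with nothing left over for video.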
Yes, I agree that if Intel tries to control Thunderbolt forever then it will have to disappear. But for now, it is worth backing in the meantime. Either Intel releases the spec, or it gains enough traction to get a competing open standard published by PCI-SIG or the like.
And the actual point of this thread: go and buy the Gigabyte GC-TITAN RIDGE add-in card and put it in a Ryzen.
The reason hot plug on Windows does not work is not Thunderbolt-specific. Windows just does not handle PCI properly.
The default behaviour on Linux is to botch it like Windows (take the allocations from BIOS). But that can easily be overridden in Linux.
Well, the answer is simple: you are wall-texting the wrong person!
You can copy-paste this thread and post it on the Intel forum, hoping someone would even bother to read it, and you are done.
Again, when you speak about standards, I would like to remind you that standards are made to be adopted by anyone - that is, I guess, what we call a standard.
You cannot have a standard if it cannot be reached by anyone; I do not think you need to pay anything to use a USB 3.1 controller aside from what you paid for the chip.
I am not sure it is the same with the Thunderbolt specification - correct me if I am wrong here - but a standard cannot be adopted easily if you need to pay for it.
So again, in the real world, if a company wants a product to become a standard, it surely does not behave like Intel is behaving with Thunderbolt.
And again, if you ask my opinion, I would clearly go with USB 3.1 rather than the Thunderbolt alternative as it is now, unless I really, really, really needed it for a product.
Having the transmission protocol proprietary is not what I would call "only" a minor thing, when it comes to standards.
That's the thing. I am hopeful and believe they will release it. The thing is Thunderbolt has required a lot of changes to the PCIe handling of the OS and the like to support properly. We have gone from it being next to impossible for Thunderbolt to work on unsupported motherboards, to it almost working perfectly without any effort (Titan Ridge). This is no mistake - they know exactly what they are doing.
I could be naïve - but I think Intel knows that Thunderbolt will go the way of Firewire if it does not manage things correctly. Granted, Thunderbolt has a lot more going for it than Firewire ever did. If Intel is smart, then they will be looking to open the standard up at some point in the future and use it to bolster their flailing reputation - which is better for them than Thunderbolt slowly fizzling out like Firewire. Plus, Intel will likely be the sole supplier of Thunderbolt controllers for some time after, and because they will have much more experience, they might be perceived as superior.
If they do not release it, well there is no harm in enjoying its benefits until some open competitor comes along to rival Thunderbolt 3.
There is nothing wrong with USB, but sometimes there being nothing wrong is not enough. If something fails to innovate and keep up, that does not make it faulty, but it does mean it falls behind. Granted, the USB-C connector is the best innovation ever, and is part of the appeal of Thunderbolt 3. The USB-C connector will certainly keep USB relevant. But the question is whether devices continue to use the USB signal coming from it, or something else (not necessarily Thunderbolt; there is room for more alternate modes).
If I remember well, it was announced that Thunderbolt would be released from its chains and made royalty-free.
Speaking today, still believing what was said in that announcement, I am waiting for the final leap that finally makes Thunderbolt royalty-free.
Maybe you are right about Titan Ridge being a game changer, allowing further compatibility with more devices, but I think it is not enough.
Again, I am not saying that it could not be included in a standard; I said above that it could not be included in a standard as it is now.
I suppose I will keep waiting for the great leap; until it happens, IMO, I do not see how the industry as a whole would stop implementing and researching USB.
There are pretty smart people out there who could develop USB further, with or without Thunderbolt.
From your first post, you said: "To be honest, I am not sure these guys would stop developing the USB standard because it is becoming inefficient - that is one of the most laughable jokes I have ever heard!"
Some people might have interpreted what I said to mean "the USB people should keep developing the USB standard but make future iterations more efficient", or even "the USB people could start to incorporate characteristics of Thunderbolt into future iterations of USB". You are definitely a glass-completely-empty type of person.
It has been done before. Note how PCIe 2.0 to PCIe 3.0 cut down on encoding overheads significantly.
USB 3.1 (10Gb/s) still appears to have the same overhead proportion as USB 2.0 (480Mb/s), and USB 3.2 is just doubled-up USB 3.1 (two streams in one cable).
Again, it is rather simple.
If I knew how something was made and the work needed to make it happen, I would weigh my words really carefully before saying anything bad and unjustified about the work of others.
I understand that for some it does not matter and saying whatever feels OK, but in reality it is not.
So again, when I read that USB is inefficient, it makes me laugh hard.
Simply because I cannot find anything against USB at all that justifies calling it inefficient compared to Thunderbolt.
Since there is no point in reinventing the wheel, and USB 3.1 Gen 2 is only at around the speed of Thunderbolt 1.0, what we need is Thunderbolt 2.0 and/or 3.0 interface cards - and Thunderbolt headers (unless Intel drops the requirement for the headers) - on AMD motherboards.
Current AMD Crimson ReLive / Adrenalin drivers already work (but are not officially supported) with a Thunderbolt 2.0 eGPU connected to a PC on Windows 8.1 64-bit, where the connected AMD GPU is treated as if it were in a PCIe 2.0x4 slot on the motherboard. In this case you can actually Crossfire the card with any other AMD GPU in a standard PCIe x16 slot on the machine. AMD ReLive works. Unfortunately, AMD dropped driver support for Windows 8.1 64-bit around the time Vega was launched. You can install the latest Windows 7 drivers on Windows 8.1 64-bit and they 'seem to work', but I wouldn't rely on it and you do so at your own risk. The last WHQL driver for Windows 8.1 64-bit, 17.4.4, does work with Thunderbolt anyhow.
I am writing this on such a machine with a Thunderbolt 2.0 eGPU containing an R9 Nano connected to this PC at the moment. The PC also has a secondary R9 Nano and a Primary GTX 780Ti connected today.
In the case of Windows 10 64-bit, AMD XConnect Technology kicks in. This stops ReLive from running and prevents CrossFire from being enabled on the PC.
However you can still connect the eGPU to the PC and use it as an additional card.
I have already reported this to AMD via the AMD reporting form. It needs to be fixed for the case of using an eGPU with a PC.
Just in case you do not believe me, here is a picture of the type of eGPU I use here: Scarface
So what do I use it for?
Well, for a number of things.
I am about to start looking into the ROCm compute situation - a Thunderbolt-connected eGPU is experimental / work in progress at the moment, they are looking for volunteers for testing, and I may volunteer.
Blender MultiGPU Rendering.
Additional power for gaming on an ASUS G751JL Laptop.
A portable eGPU to take with me and attach to a Thunderbolt-enabled PC at a customer site if needed to demo work.
etc etc etc.
Regarding Linux - Thunderbolt support is turning up in recent versions of Fedora and Ubuntu but I admit I have not tested it yet.
I was just asking today about the Threadripper motherboard and Thunderbolt situation.
I will add more info about that next.
@jj^4884 (@OP) sorry we have gotten off topic somewhat. Here is a post explicitly addressing your concerns.
Titan Ridge on AMD Threadripper system with no modifications: https://egpu.io/forums/builds/thunderbolt-3-on-amd-x399-threadripper-rtx-208032gbps-tb3-razer-core-x-win10-1803-theitsag…
The owner's lspci output shows that the eGPU must have been attached at boot time - because there are only just enough bus numbers for the eGPU. If no devices are attached at boot with Windows and an unsupported motherboard, it does not reserve any bus numbers, and no PCIe devices can work. This is the only catch. All we need is a patch for Windows to override the BIOS allocations and reserve some specified number of PCI bus numbers. Such a patch would not be Thunderbolt-specific, and applies to the PCI subsystem that Thunderbolt is based on. The Linux option which does exist applies to PCI, not Thunderbolt - and makes Titan Ridge work perfectly on Linux on any system. For reference "pci=realloc,assign-busses,hpbussize=0x33" in the kernel boot parameters.
These GC-TITAN RIDGE cards were everything I had hoped for and more. One port delivers 100W (20V@5A), enough for a Surface Book 2 or MacBook Pro 15" at full speed, and the other port delivers 27W (9V@3A), enough for all smartphones and maybe some Chromebooks and ultra-ultralight laptops. Thunderbolt 10Gb/s networking works hot-plugged, provided you place a jumper over the GPIOs so that the card is awake at boot and gets allocated. The DisplayPort pass-through is fine. The USB 3.1 controller works fine for regular USB devices if the jumper is placed. This makes it worthwhile as a full-bandwidth USB-only card with high power delivery, even for people with no interest in Thunderbolt. It is definitely the best USB host card to ever exist, in my opinion.
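The power figures above are just volts × amps for the USB Power Delivery profile each port negotiates; a trivial check (the port labels are mine, not official names):

```python
# USB Power Delivery: delivered power = negotiated voltage x current.
profiles = {
    "GC-TITAN RIDGE port 1": (20.0, 5.0),  # 20V @ 5A
    "GC-TITAN RIDGE port 2": (9.0, 3.0),   # 9V @ 3A
}
for name, (volts, amps) in profiles.items():
    print(f"{name}: {volts * amps:.0f} W")  # 100 W and 27 W
```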
For full discussion about using this card, go here where I and others are active - we have already put a lot of information out, and will happily answer more questions.
From the PCI-express wikipedia page:
PCI Express 4.0 specs will also bring OCuLink-2, an alternative to the Thunderbolt connector. OCuLink version 2 will have up to 16 GT/s per lane (8 GB/s total for ×4 lanes), while the maximum bandwidth of a Thunderbolt 3 connector is 5 GB/s.
Hopefully this means manufacturers will have the possibility to add this connector to Zen 2 based systems, as it seems likely that the consumer-oriented processors will also feature PCIe 4.0 support like their server counterparts.
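For what it's worth, the quoted figures work out like this (assuming PCIe 4.0's 128b/130b encoding, and using the 40Gb/s link / ~22Gb/s PCIe cap for Thunderbolt 3 mentioned earlier in the thread):

```python
# OCuLink-2: four PCIe 4.0 lanes at 16 GT/s each, 128b/130b encoding.
lanes, gt_per_lane = 4, 16.0
oculink2_gBps = lanes * gt_per_lane * (128 / 130) / 8   # gigabytes per second
print(round(oculink2_gBps, 2))  # ~7.88 GB/s, quoted as "8 GB/s total for x4"

# Thunderbolt 3: 40 Gb/s link, but PCIe data capped around 22 Gb/s.
tb3_link_gBps = 40 / 8
tb3_pcie_gBps = 22 / 8
print(tb3_link_gBps, tb3_pcie_gBps)  # 5.0 GB/s link, 2.75 GB/s usable for PCIe
```

So on raw PCIe bandwidth, an x4 OCuLink-2 port would be well ahead of a Thunderbolt 3 port.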
Thank you for the info. I have to admit, honestly, it's the first time I've heard of it, or maybe I heard of it once.
So it means it's something hidden under the hood, or used exclusively in the pro and server fields, far from the consumer space.
There is for sure some activity around the specs on the PCI-SIG site; unfortunately I'm not registered, so I couldn't access the docs:
OCuLink already exists on server boards, but I'm not sure this tech, even if updated, will easily see the light of day in the consumer market.
I have seen AMD and Intel server motherboards already mounting the OCuLink port and sharing the controller between OCuLink and the other NVMe, SAS, and SATA protocols.
So I don't expect to see an OCuLink-2 port on consumer laptops or motherboards soon, alongside the next PCIe 4.0 release.
Even if AMD pushes for the tech, AMD alone surely doesn't carry enough weight to drive the whole industry with it, unless the whole industry really sticks to it.
Only at that point would I even consider OCuLink viable, and not just a dream reserved for the high-profile server field.
Though I truly hope to be wrong, and to see PCIe 4.0 dominate with this new OCuLink-2 port in the near future.
For sure it would open a whole new world as a high-speed, high-bandwidth I/O interface.