Hello, I am a Ph.D. student at the Silesian University of Technology in Poland, and I wanted to start a general discussion about an idea for improving classification on a video stream. I call it "Multi-GPU & Multi-SET". People commonly use multiple GPUs with gradient syncing to train Convolutional Neural Networks, but in my opinion nobody has tried a multi-GPU approach to the classification itself. So what is it about? It is about multi-GPU detection and combining the results. Imagine several You Only Look Once (YOLOv2 or YOLOv3) models: one trained only to detect cars and people, another trained on the COCO set, another on the VOC set, and so on. Once you have this separation of detection "specializations" across models, why not combine them and run them on multiple GPUs, one model per GPU? Many researchers work on making a single CNN handle as many classes as possible... but has anyone tried this multi-GPU approach? I think the answer is no. So, can you discuss this with me here?
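To make the idea concrete, here is a minimal sketch in plain Python. The `model_cars_people` and `model_coco` functions are hypothetical stand-ins for the specialized detectors; in a real system each would be a YOLO network loaded on its own GPU, and the call would be the framework's inference API. The point is only the dispatch-and-combine pattern:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the specialized detectors. In a real system,
# model_cars_people would run on GPU 0 and model_coco on GPU 1; here they
# just return fixed (label, confidence, bounding_box) tuples for illustration.
def model_cars_people(frame):
    return [("car", 0.91, (10, 10, 50, 40)), ("person", 0.88, (60, 20, 80, 70))]

def model_coco(frame):
    return [("dog", 0.75, (100, 30, 140, 80))]

def detect_combined(frame, models):
    """Run every specialized model on the same frame in parallel and
    merge their detections into a single list."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        results = pool.map(lambda m: m(frame), models)
    merged = []
    for detections in results:
        merged.extend(detections)
    # Sort by confidence. A production system would also need cross-model
    # non-maximum suppression to drop duplicate boxes for any classes
    # that several of the specialized models happen to share.
    return sorted(merged, key=lambda d: d[1], reverse=True)

detections = detect_combined(frame=None, models=[model_cars_people, model_coco])
for label, confidence, box in detections:
    print(label, confidence, box)
```

The interesting design question is the combining step: with fully disjoint class sets the merge is a plain concatenation, but as soon as two specialized models overlap (e.g. both COCO and VOC contain "person"), the combiner has to reconcile duplicate detections.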
There is one more thing... the GPUs should not be connected with AMD Infinity Fabric Link, because it combines the GPUs' memory, and then the OpenCL model of one context spanning multiple GPUs => multiple command queues => multiple kernels simply does not work.