
OpenCL

seanc4s
Adept I

Neural networks using fast transforms

I guess you have an FFT library.  You probably should write an efficient Walsh Hadamard transform library to allow people to better experiment with fast transform neural networks:

https://community.konduit.ai/t/fast-transform-neural-net-visual-example/497

6 Replies
seanc4s
Adept I

Since the Walsh Hadamard transform is just the additive part of the FFT, maybe you just need to take the multiplies out of your FFT code!!!  Then maybe you could provide the parametric functions that switch slope at zero.  That is all you need.
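To make that concrete, here is a small sketch in C (my own illustration, not code from any particular FFT library): a radix-2 FFT butterfly combines a pair of values through a complex twiddle multiply, while the corresponding Walsh Hadamard butterfly is just the sum and difference, so removing the twiddle multiplies from an FFT leaves the WHT structure behind.

#include <complex.h>

/* Radix-2 FFT butterfly: combine *a and *b through the twiddle factor w. */
void fft_butterfly(double complex *a, double complex *b, double complex w)
{
    double complex t = w * (*b);   /* the multiply the WHT does not need */
    *b = *a - t;
    *a = *a + t;
}

/* Walsh Hadamard butterfly: the same structure with the multiply removed. */
void wht_butterfly(double *a, double *b)
{
    double t = *b;
    *b = *a - t;
    *a = *a + t;
}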

As part of a ML framework, maybe do auto differentiation.


Yeah, I am sure you could use that FFT code if you wanted to experiment with fast transform neural networks.

The FFT would act as a collection of non-adjustable dot products.  Then you would use individually adjustable parametric activation functions.

I.e. f(x) = a*x for x >= 0, f(x) = b*x for x < 0, where each individual activation function has its own adjustable parameters a and b.
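To make that concrete, a minimal sketch of such a per-unit parametric activation in C (the function and parameter names are mine, just for illustration):

/* Per-unit two-slope activation: y = a*x for x >= 0, y = b*x for x < 0.
   The slopes a[i] and b[i] are the adjustable parameters of unit i;
   the transform feeding it stays fixed. */
void parametric_activate(const float *x, float *y,
                         const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = (x[i] >= 0.0f) ? a[i] * x[i] : b[i] * x[i];
}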

Is it acceptable to swap the roles in a neural network like that, so the dot products are fixed and the activation functions are the adjustable part?  Actually it is, especially if you understand ReLU as a switch.

On: f(x)=x  (connect)

Off: f(x)=0 (disconnect)

Think of a source-select switch on an audio amplifier: it only has two states, on and off.

However, when it is on it lets through a complicated audio signal.

Then a conventional ReLU neural network is just a switched composition of dot products.

A dot product of a number of dot products is still a dot product, since a linear combination of linear functions of the input is itself a linear function of the input.

If you think about it, for a particular input to a ReLU network the state of each switch is definitely decided.  Which dot products connect to which others is then known, and they can be condensed down into a simpler equivalent dot product.

Then the output vector is a simple matrix mapping of the input vector.  

Of course for each different input vector there is a different matrix; anyway, pretty interesting.
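As a toy sketch of that collapse in C (my own example, not from any library): for one fixed input the hidden switch states are decided, so a two-layer ReLU net reduces to a single equivalent matrix (here a single row) applied to the input.

#include <stdio.h>

/* Toy 2-layer ReLU net: 2 inputs -> 2 hidden units -> 1 output.
   For a fixed input x the switch states s[j] are decided, and the
   whole net collapses to one equivalent dot product m with x. */
int main(void)
{
    float W1[2][2] = {{1.0f, -2.0f}, {0.5f, 3.0f}};  /* first layer weights */
    float W2[2]    = {2.0f, -1.0f};                  /* second layer weights */
    float x[2]     = {1.0f, 0.5f};                   /* one fixed input */

    /* Ordinary forward pass, with ReLU written as a switch. */
    float h[2], s[2], y = 0.0f;
    for (int j = 0; j < 2; j++) {
        h[j] = W1[j][0] * x[0] + W1[j][1] * x[1];
        s[j] = (h[j] > 0.0f) ? 1.0f : 0.0f;
        y += W2[j] * s[j] * h[j];
    }

    /* Collapse: with s fixed, W2 * diag(s) * W1 is one matrix (one row here). */
    float m[2], y2 = 0.0f;
    for (int i = 0; i < 2; i++) {
        m[i] = W2[0] * s[0] * W1[0][i] + W2[1] * s[1] * W1[1][i];
        y2 += m[i] * x[i];
    }

    printf("forward = %f, collapsed = %f\n", y, y2);  /* the two agree */
    return 0;
}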

sowson
Adept II

Is this a sample that you can improve?

gpucomp/ex06.c at master · sowson/gpucomp · GitHub 

Is this the ReLU (from darknet/activation_kernels.cl at master · sowson/darknet · GitHub)?

float relu_activate_kernel(float x){return x*(x>0);}

and

float relu_gradient_kernel(float x){return (x>0);}

I think it will be possible with a special small CFG (network config file) on this:

GitHub - sowson/darknet: Convolutional Neural Networks on OpenCL on Intel & NVidia & AMD & Mali GPUs... 

Good Luck! ;-).

seanc4s
Adept I

Yeah, ReLU is very well known.

I have a version of the Walsh Hadamard transform that an auto-vectorizing compiler (e.g. gcc) may produce fast code from:

GitHub - S6Regen/AutoVectorizedWHT: Walsh Hadamard transform for auto-vectorizing compilers 

You would need to convert it to C, which should be easy.  I don't know if a GPU C compiler could produce efficient code from it.
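For reference, a minimal iterative in-place Walsh Hadamard transform in plain C (my own sketch of the same idea, not the code in the linked repository); the inner loop is unit-stride, which gives an auto-vectorizer a reasonable chance. n must be a power of two.

/* Unnormalized in-place Walsh Hadamard transform, n a power of two.
   Every pass is just sum/difference butterflies; no multiplies needed. */
void wht(float *x, int n)
{
    for (int h = 1; h < n; h *= 2) {
        for (int i = 0; i < n; i += 2 * h) {
            for (int j = i; j < i + h; j++) {   /* unit-stride inner loop */
                float a = x[j];
                float b = x[j + h];
                x[j]     = a + b;
                x[j + h] = a - b;
            }
        }
    }
}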

seanc4s
Adept I

Fast Transform (aka Fixed Filter Bank) Neural Networks (a minimal layer sketch follows the links below):

Evolution version:
https://s6regen.github.io/Fast-Transform-Neural-Network-Evolution/

Backpropagation version:
https://s6regen.github.io/Fast-Transform-Neural-Network-Backpropagation/


Basic LSH Associative Memory:
View:
https://editor.p5js.org/siobhan.491/present/MTPzfwYbo
Edit:
https://editor.p5js.org/siobhan.491/sketches/MTPzfwYbo


Block Vector LSH Associative Memory:
View:
https://editor.p5js.org/siobhan.491/present/tenYuNHNC
Edit:
https://editor.p5js.org/siobhan.491/sketches/tenYuNHNC


Hash Table LSH Associative Memory:
View:
https://editor.p5js.org/siobhan.491/present/zIANaDtiG
Edit:
https://editor.p5js.org/siobhan.491/sketches/zIANaDtiG
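Putting the pieces of the thread together, one fast transform (fixed filter bank) layer is just the fixed transform followed by the per-unit two-slope activation.  This is my own minimal sketch of such a layer in C, reusing the wht() sketched earlier; it is not the code behind the linked demos.

/* One fast transform (fixed filter bank) layer: a fixed WHT stands in for
   an adjustable weight matrix, followed by per-unit two-slope activations
   whose slopes a[] and b[] are the only trained parameters. */
void wht(float *x, int n);   /* the in-place WHT sketched earlier */

void fast_transform_layer(float *x, const float *a, const float *b, int n)
{
    wht(x, n);                      /* fixed, non-adjustable "dot products" */
    for (int i = 0; i < n; i++)     /* individually adjustable activations */
        x[i] = (x[i] >= 0.0f) ? a[i] * x[i] : b[i] * x[i];
}

A full network just stacks several of these layers, so the per-unit slopes are the only parameters that get evolved or backpropagated.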