AMD Community › Developers › OpenCL › Neural networks using fast transforms



05-25-2020 12:45 AM

Neural networks using fast transforms

I guess you have an FFT library. You should probably also write an efficient Walsh Hadamard transform library so that people can experiment more easily with fast transform neural networks:

https://community.konduit.ai/t/fast-transform-neural-net-visual-example/497

6 Replies


05-25-2020 02:55 AM

Since the Walsh Hadamard transform is just the additive part of the FFT, maybe you just need to take the multiplies out of your FFT code! Then maybe you could provide the switch-slope-at-zero parametric activation functions. That is all you need.

As part of an ML framework, maybe also provide automatic differentiation.
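To illustrate the "take the multiplies out" idea, here is a minimal in-place fast Walsh Hadamard transform in plain C (a sketch of my own, not from any AMD library): it has exactly the FFT butterfly structure, but with the twiddle-factor multiplies removed, only additions and subtractions remain.

```c
#include <stddef.h>

/* In-place fast Walsh Hadamard transform (unnormalized).
   n must be a power of two. This is the FFT butterfly pattern
   with the twiddle-factor multiplies removed: adds and subtracts only. */
void wht(float *x, size_t n) {
    for (size_t h = 1; h < n; h <<= 1) {          /* butterfly span */
        for (size_t i = 0; i < n; i += h << 1) {  /* block start */
            for (size_t j = i; j < i + h; j++) {  /* contiguous inner loop */
                float a = x[j];
                float b = x[j + h];
                x[j]     = a + b;
                x[j + h] = a - b;
            }
        }
    }
}
```

The contiguous inner loop is deliberate: it gives an auto-vectorizing compiler a clean stride-1 pattern to work with. Cost is n·log2(n) additions/subtractions and no multiplies.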


06-03-2020 03:24 AM

Hi seanc4s, have you checked clMathLibraries · GitHub? In it: GitHub - clMathLibraries/clFFT: a software library containing FFT functions written in OpenCL, and also GitHub - clMathLibraries/rocFFT: Next generation FFT implementation for ROCm.

Does that address your need?

Thanks!


06-03-2020 03:54 AM

Yeh, I am sure you could use that FFT code if you wanted to experiment with fast transform neural networks.

The FFT would act as a collection of non-adjustable dot products. Then you would use individually adjustable parametric activation functions.

I.e. f(x) = a·x for x >= 0, f(x) = b·x for x < 0, where each individual activation function has its own adjustable parameters a and b.
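That two-slope activation can be written directly in C (my own sketch of the function just described, with per-unit parameters a and b supplied by the caller):

```c
/* Two-slope parametric activation: f(x) = a*x for x >= 0, b*x for x < 0.
   Each unit in the network carries its own adjustable slopes a and b;
   the fixed transform supplies the dot products, these supply the learning. */
float param_act(float x, float a, float b) {
    return x >= 0.0f ? a * x : b * x;
}
```

With a = 1 and b = 0 this reduces to plain ReLU, which is why ReLU can be seen as the special case where the switch slopes are frozen.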

Is it acceptable to swap which parts of a neural network are adjustable, making the dot products fixed and the activation functions adjustable? Actually it is, especially if you understand ReLU as a switch.

On: f(x)=x (connect)

Off: f(x)=0 (disconnect)

Think of a source switch on an audio amplifier: it has only two states, on and off, yet when on it lets through a complicated audio signal.

Then a conventional ReLU neural network is just a switched composition of dot products.

The dot product of a number of dot products is still a dot product.

If you think about it, for a particular input to a ReLU network the state of each switch is definitely decided. Which dot products connect to which others is then fully known, and the whole composition can be condensed down into a single equivalent dot product.

Then the output vector is a simple matrix mapping of the input vector.

Of course, for each different input vector there is a different matrix. Pretty interesting, anyway.
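The condensation argument can be checked numerically. A toy C sketch (a hypothetical 2-2-1 ReLU network with made-up weights, my own construction): once an input fixes the switch states, the network output equals one equivalent dot product folded out of the same weights.

```c
#include <math.h>

/* Toy 2-2-1 ReLU network with fixed (made-up) weights. */
static const float W1[2][2] = {{1.0f, -2.0f}, {0.5f, 1.0f}};
static const float W2[2]    = {3.0f, -1.0f};

static float relu(float x) { return x > 0.0f ? x : 0.0f; }

/* Ordinary forward pass through the ReLU network. */
float net(const float x[2]) {
    float h0 = relu(W1[0][0]*x[0] + W1[0][1]*x[1]);
    float h1 = relu(W1[1][0]*x[0] + W1[1][1]*x[1]);
    return W2[0]*h0 + W2[1]*h1;
}

/* Condensed version: read off the switch states for this particular
   input, zero out the rows whose switches are off, fold W2 into W1,
   and apply the resulting single dot product. */
float net_condensed(const float x[2]) {
    float p0 = W1[0][0]*x[0] + W1[0][1]*x[1];
    float p1 = W1[1][0]*x[0] + W1[1][1]*x[1];
    float s0 = p0 > 0.0f ? 1.0f : 0.0f;   /* switch states */
    float s1 = p1 > 0.0f ? 1.0f : 0.0f;
    float w0 = W2[0]*s0*W1[0][0] + W2[1]*s1*W1[1][0];
    float w1 = W2[0]*s0*W1[0][1] + W2[1]*s1*W1[1][1];
    return w0*x[0] + w1*x[1];             /* one equivalent dot product */
}
```

The two functions agree for every input, but the condensed weight vector (w0, w1) changes whenever the input crosses a switching boundary, which is exactly the "different matrix for each input" point above.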


06-03-2020 04:03 AM

Is this a sample that you could improve?

gpucomp/ex06.c at master · sowson/gpucomp · GitHub

Is this the ReLU you mean (from darknet/activation_kernels.cl at master · sowson/darknet · GitHub)?

float relu_activate_kernel(float x){return x*(x>0);}

and

float relu_gradient_kernel(float x){return (x>0);}

I think it will be possible with a special small CFG for this.

Good Luck! ;-).


06-03-2020 08:20 AM

Yeh, ReLU is very well known.

I have a version of the Walsh Hadamard transform that an auto-vectorizing compiler (e.g. gcc) may produce fast code from:

GitHub - S6Regen/AutoVectorizedWHT: Walsh Hadamard transform for auto-vectorizing compilers

You would need to convert it to C, which should be easy. I don't know whether a GPU C compiler could produce efficient code from it.


06-06-2020 09:54 PM

Fast Transform (aka Fixed Filter Bank) Neural Networks:

Evolution version: https://s6regen.github.io/Fast-Transform-Neural-Network-Evolution/

Backpropagation version: https://s6regen.github.io/Fast-Transform-Neural-Network-Backpropagation/

Basic LSH Associative Memory:

- View: https://editor.p5js.org/siobhan.491/present/MTPzfwYbo
- Edit: https://editor.p5js.org/siobhan.491/sketches/MTPzfwYbo

Block Vector LSH Associative Memory:

- View: https://editor.p5js.org/siobhan.491/present/tenYuNHNC
- Edit: https://editor.p5js.org/siobhan.491/sketches/tenYuNHNC

Hash Table LSH Associative Memory:

- View: https://editor.p5js.org/siobhan.491/present/zIANaDtiG
- Edit: https://editor.p5js.org/siobhan.491/sketches/zIANaDtiG