General Discussions

Journeyman III

Disruptive Affordable AI

Vega Frontier GPU, $1,200.00

Threadripper 1920X, $700.00

Fractal Design R6, $170.00

Stereo Speakers, $100.00

280 GB Optane, $360.00

Motherboard, $400.00

MS Keyboard, $30.00

Headphones, $80.00

300 GB SSD, $100.00

Razer Mousepad, $50.00

Razer Mouse, $50.00

32 GB DDR4, $350.00

Optical Drive, $30.00

Liquid Cooling, $90.00

Power Supply, $150.00

Surge Protector, $100.00

Samsung 32" QLED, $600.00

Total: ~$4,500.00

I’m a Taoist, and some of us have a warped sense of humor. This rig is for experimenting with a cheap and disturbingly disruptive technology: up to 100 tiny FPGA arithmetic accelerators, or AI circuits, all running in parallel to form an inexpensive distributed computer. The goal is to fill an open-source video game engine with potentially thousands of interactive automations, leveraging the AI circuits and a near-zero-latency MindMaze headset.

The headset is made by a major medical research company and reads your brain waves, giving the computer 30-70 ms of warning before you can so much as move a muscle. That’s just unheard of. It effectively eliminates traditional lag problems, such as draw distances, and gives the AI circuits the extra time they need, whenever you move around in VR, to buffer more frames for running new animations. MindMaze is currently adding the ability to walk around in VR hands-free. The AI circuits commonly used today are DDR3-era parts and only provide 8-16 fps; however, given the demand, we should see dramatic increases in their rendering rates within five or ten years at the outside.

Even at such incredibly low frame rates, these tiny circuits, with as few as 120 transistors, produce compelling and complex animations. A single algorithm running on one can animate a bot to follow sidewalks and crosswalks; run, jump, crouch, and roll; avoid obstacles; step over a stick you throw down in its path; and pick itself back up and continue on if knocked down. Using a simple adaptive AI program, I’m hoping to run 64 interactive animations simultaneously within the player’s immediate proximity and field of view.
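As a rough software analogue of the idea, farming each bot's animation out to one of many small parallel accelerators might look like the sketch below, where a Python thread pool merely stands in for the hypothetical FPGA array, and all function and variable names are illustrative, not from any real accelerator API:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-in: a pool of workers plays the role of the many tiny
# parallel accelerators, each advancing one bot's animation state per tick.

def step_animation(bot_id, frame):
    """One accelerator's job: advance a single bot's animation by one frame.
    Here it just returns a placeholder (bot, next-frame) state."""
    return (bot_id, frame + 1)

def tick(bot_states, workers=100):
    """Advance every bot one frame, spread across the worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda s: step_animation(*s), bot_states))

states = [(i, 0) for i in range(64)]  # 64 bots near the player, at frame 0
states = tick(states)
print(states[:3])  # → [(0, 1), (1, 1), (2, 1)]
```

The point of the sketch is only the shape of the dispatch: each animation job is independent, so 64 (or 100) of them can be advanced concurrently each tick with no coordination between workers.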

The MindMaze headset gives the computer all the time it needs to load new animations as you move around, but I’m also attempting to leverage the greater bandwidth of the new AMD chips, Vega being the first of its kind, optimized around its geometry pipeline and HBM2 memory. Significantly faster arithmetic accelerators already exist and should become available within a few years at a reasonable price, hopefully pushing their frame rates up toward a reliable 30 fps or more. MindMaze provides eye tracking, gestures, and expressions, and OTOY has made it possible to convert any existing rasterized video game engine to incorporate real-time ray-traced lighting without taking an unreasonable hit on performance.

Between the information provided by the ray tracing and the headset, it becomes possible to program any bot to act naturally, as though it can actually see what you and all the other bots are doing, and to respond accordingly. That’s in addition to their automated behavior being handled by the AI circuits, with the main program able to take over animating anything at any time. For such rudimentary animations, these bots should be easy to program for complex interactions, including looking the player and each other in the eye and acting as if they recognize everyone’s gestures and expressions. The only thing the CPU has to handle is a simple adaptive AI program that animates the world as you move around, using a high/low buffer to batch-process 28-32 animations at a time, for 64 total going through the video card’s geometry pipeline at any given moment.
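The high/low double-buffer scheme described above could be sketched as follows; the batch size and the names (`make_batch`, `schedule`, `bot_0`, and so on) are my own placeholders, not anything from an actual engine:

```python
from collections import deque

BATCH_SIZE = 32  # the scheme above batches 28-32 animations at a time

def make_batch(pending):
    """Pull up to BATCH_SIZE animation jobs from the pending queue."""
    batch = []
    while pending and len(batch) < BATCH_SIZE:
        batch.append(pending.popleft())
    return batch

def schedule(pending):
    """Alternate 'high' and 'low' buffers: while one batch of animations
    goes through the pipeline, the other is refilled, so up to 64 jobs
    (two full buffers) are in flight at any given moment."""
    high, low = make_batch(pending), make_batch(pending)
    submitted = []
    while high:
        submitted.append(high)                # send the high buffer down the pipeline
        high, low = low, make_batch(pending)  # swap buffers; refill the empty one
    return submitted

jobs = deque(f"bot_{i}" for i in range(64))  # 64 animations near the player
batches = schedule(jobs)
print([len(b) for b in batches])  # → [32, 32]
```

With 64 jobs this yields two full batches; with a longer queue the swap-and-refill loop keeps two buffers alternating until the queue drains, which is the whole trick of a high/low buffer.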

My own interest is in attempting to translate the metaphoric logic of the Tao Te Ching into a virtual reality interface. It takes studying six to ten versions of the text for at least fifteen years to even be considered competent with its fuzzy logic, and an interactive virtual reality interface could make learning it not only more enjoyable, but quite a bit easier.
