Moore’s law is a term coined by Carver Mead, the Caltech professor who is also the father of neuromorphic engineering. It refers to the observation, now more hope than reality, that advances in silicon technology will double compute capability roughly every 18 months. Recent advances in highly parallel compute methods, loosely based on the neural systems in our brains, are changing how computation is done. These techniques, collectively termed deep learning networks, burst onto the world for one reason: the ability to perform massive numbers of parallel computations on graphics cards. However, it is in truly custom hardware, such as that pioneered by the neuromorphic community, that we will find the salvation of Moore’s law. By blending powerful compute techniques with custom silicon architectures, we can keep alive the hope of continuing to double the world’s compute capability.
If you work in deep learning, or have heard how GPUs have revolutionized high-performance computing, this talk will take you to the bleeding edge of that world.