A post about deep learning.

https://spectrum.ieee.org/amp/deep-lear ... 2655082754

**Moderators:** Site Moderators, FAHC Science Team

3 posts
• Page **1** of **1**

All very interesting, but the advent of quantum computing may make it irrelevant. And if we don't go to carbon-free power generation in a few years, the lack of machine learning support will be the least of our problems.

- JimF
**Posts:** 609 • **Joined:** Thu Jan 21, 2010 3:03 pm

The article has a link to an older article about AI replacing folding.

I'm hypothesizing here (and would be interested in the thoughts of some of the forum seniors and moderators who know more about this than I do):

Rather than using 32-bit numbers to assign atoms to absolute positions in a fixed space, AI could use the much quicker RT and AI cores (8- or 16-bit) to calculate each atom's change relative to its fixed position.

It's slightly less precise, but there's no reason to calculate the exact position of an atom in a cluster when we know that the atom will not move from its position by more than x nm per x ms.

So instead of trying to calculate each atom's exact position using 12 digits, like position (123456789012, 123456789012, 123456789012), they could forgo the first 8 or 9 digits if the atom can only move from, e.g., 123456789012 to 123456798765 (resulting in a maximum of (123456798765, 123456798765, 123456798765) in the 3D space of the WU).

In that case the first few digits don't need to be recalculated, and work can be assigned to calculate only the last few digits, using shaders that compute the difference.

All you need are the absolute coordinates of the start and the end.
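The delta idea can be sketched in NumPy (purely illustrative; none of the names or numbers below reflect actual FAH core internals). The point is that quantizing a small *displacement* to FP16 loses far less precision than quantizing the large absolute coordinate itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Absolute positions in nm, kept in full 64-bit precision (the "anchor").
base = rng.uniform(0.0, 100.0, size=(1000, 3)).astype(np.float64)

# A small per-step displacement, well under 0.01 nm per step.
delta = rng.normal(0.0, 1e-3, size=(1000, 3))

# Low-precision path: round only the displacement to FP16.
delta_fp16 = delta.astype(np.float16)
new_pos = base + delta_fp16.astype(np.float64)

# FP16 carries ~3 decimal digits relative to the value stored, so the
# error from quantizing the tiny delta is tiny in absolute terms.
err_delta = np.abs(new_pos - (base + delta)).max()

# Compare with naively storing the absolute position itself in FP16,
# where the same relative precision applies to a value near 100 nm.
err_absolute = np.abs(base.astype(np.float16).astype(np.float64) - base).max()

print(err_delta < err_absolute)  # → True: the delta scheme loses far less
```

This is exactly the "forgo the leading digits" trick above, expressed in floating point: the high-order digits live in the 64-bit anchor, and only the low-order change is computed at reduced precision.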

This process should speed up folding significantly! Especially now that most manufacturers are working on AI hardware rather than deep learning alone.

If you look at modern GPUs, the 3080 for instance, which has the following performance metrics:

- FP32: 29.8 TFLOPS
- FP16: 119 TFLOPS (4x faster)
- RT: 58.1 TFLOPS (2x faster)

Combine this with the fact that the RT cores (doing double the math the shaders do) can work simultaneously with the FP16 cores, and you get a roughly 8x faster workflow.
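To make the speedup arithmetic explicit, here is a back-of-envelope check using only the peak TFLOPS figures quoted above. Note the result depends on whether the two gains add (units running side by side on independent work) or multiply, and peak TFLOPS rarely translate directly into folding throughput:

```python
# Peak figures quoted above for a 3080; treat all ratios as upper bounds.
fp32 = 29.8   # TFLOPS, baseline 32-bit shader throughput
fp16 = 119.0  # TFLOPS, tensor-core FP16
rt = 58.1     # TFLOPS, attributed to the RT cores

fp16_speedup = fp16 / fp32   # ~4x
rt_speedup = rt / fp32       # ~2x

# If the FP16 and RT units run concurrently on independent work,
# their peak rates add:
additive = (fp16 + rt) / fp32               # ~6x

# The "roughly 8x" figure instead multiplies the two ratios, which would
# require the RT cores to double the *combined* FP16 output:
multiplicative = fp16_speedup * rt_speedup  # ~8x

print(f"{additive:.1f}x to {multiplicative:.1f}x")
```

So the theoretical ceiling sits somewhere between about 6x and 8x depending on how the units overlap.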

It's unclear how much power would be used, but it's entirely possible that running all the cores at 16-bit, plus the RT cores, consumes about the same amount of power as the 32-bit shaders currently do.

And there's a chance 16 and 32bit shaders can be used simultaneously on some GPUs.

Performance metrics would be quite interesting!

Considering that FAH was able to further increase GPU performance by about 10-15% (based on power consumption), resulting in a boost of 1.5-2x over Core 21's PPD;

Being able to boost a GPU by 7x should result in hundreds of millions, if not a billion, PPD on a single GPU like a 3080, leaving all prior PPD scores of people who have been folding for decades in the dust.

Food for thought!

- MeeLee
**Posts:** 1298 • **Joined:** Tue Feb 19, 2019 11:16 pm
