Do you have plans to enable distributed attention methods (Ring Attention, Striped Attention, Tree Attention, etc.) via a FlexAttention-like interface? Most of the functional implementations today are in JAX.
It should be possible to use FlexAttention as a component in many of these algorithms. We have a `return_lse` argument that returns the log-sum-exp of the attention scores per query, which is typically required to merge attention outputs computed over different KV chunks. We would accept and add an example showing how to do this.
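For reference, here is a minimal sketch of the LSE-based merge. The `merge_attention` helper and the concrete shapes are illustrative, not part of the FlexAttention API; only `flex_attention(..., return_lse=True)` is. Given the outputs and LSEs for two KV chunks, the merged softmax is a weighted sum where each chunk's weight is `exp(lse_chunk - lse_total)`:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

def merge_attention(out_a, lse_a, out_b, lse_b):
    # Merge two partial attention results computed over disjoint KV chunks.
    # out_*: [B, H, Q, D] attention outputs; lse_*: [B, H, Q] log-sum-exp
    # of the scores for each query against its chunk.
    lse = torch.logaddexp(lse_a, lse_b)            # normalizer over both chunks
    w_a = torch.exp(lse_a - lse).unsqueeze(-1)     # rescaling weight for chunk A
    w_b = torch.exp(lse_b - lse).unsqueeze(-1)     # rescaling weight for chunk B
    return out_a * w_a + out_b * w_b, lse

# Toy setup: split KV along the sequence dimension into two chunks.
B, H, S, D = 2, 8, 1024, 64
q = torch.randn(B, H, S, D, device="cuda")
k = torch.randn(B, H, S, D, device="cuda")
v = torch.randn(B, H, S, D, device="cuda")
k_a, k_b = k.chunk(2, dim=2)
v_a, v_b = v.chunk(2, dim=2)

out_a, lse_a = flex_attention(q, k_a, v_a, return_lse=True)
out_b, lse_b = flex_attention(q, k_b, v_b, return_lse=True)
out, _ = merge_attention(out_a, lse_a, out_b, lse_b)

# Sanity check against attention over the full KV.
out_full = flex_attention(q, k, v)
torch.testing.assert_close(out, out_full, rtol=1e-3, atol=1e-3)
```

This merge is associative, so partial results can be combined pairwise in any order, which is what ring- and tree-style schedules rely on.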
I am working on this. I am trying to integrate BurstAttention with an efficient sparse attention implementation.
(BurstAttention is essentially a RingAttention++: it adds Striped/ZigZag workload balancing, a novel optimization for backward-pass communication, and TreeAttention-like techniques to reduce communication overhead. A generic sketch of the underlying ring pattern follows below.)
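For context, the core ring pattern these methods build on can be sketched like this. This is a generic illustration under assumptions, not BurstAttention's actual implementation: it reuses the `merge_attention` helper from the earlier example, assumes an initialized `torch.distributed` process group, and uses plain point-to-point ops for the ring exchange:

```python
import torch
import torch.distributed as dist
from torch.nn.attention.flex_attention import flex_attention

def ring_shift(t: torch.Tensor, rank: int, world_size: int) -> torch.Tensor:
    # Send the local tensor to the next rank and receive the previous
    # rank's tensor, completing one step around the ring.
    recv = torch.empty_like(t)
    reqs = dist.batch_isend_irecv([
        dist.P2POp(dist.isend, t, (rank + 1) % world_size),
        dist.P2POp(dist.irecv, recv, (rank - 1) % world_size),
    ])
    for req in reqs:
        req.wait()
    return recv

def ring_attention(q, k, v, rank, world_size):
    # Each rank holds one query block and one KV chunk. KV chunks rotate
    # around the ring; partial results are merged via their LSEs using
    # merge_attention from the sketch above.
    out, lse = flex_attention(q, k, v, return_lse=True)
    for _ in range(world_size - 1):
        k = ring_shift(k, rank, world_size)
        v = ring_shift(v, rank, world_size)
        out_i, lse_i = flex_attention(q, k, v, return_lse=True)
        out, lse = merge_attention(out, lse, out_i, lse_i)
    return out
```

A plain ring like this leaves causal workloads unbalanced (early ranks finish their useful work sooner), which is exactly what the Striped/ZigZag scheduling mentioned above addresses.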