FFTs with posits #64
I believe the maxintfloats are:

```julia
Base.maxintfloat(::Type{Posit8})    = 8        # = 2^3
Base.maxintfloat(::Type{Posit16})   = 512      # = 2^9
Base.maxintfloat(::Type{Posit16_2}) = 1024     # = 2^10
Base.maxintfloat(::Type{Posit32})   = 8388608  # = 2^23
```
|
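These values follow the same pattern as the builtin floats: `maxintfloat` is the largest integer `n` up to which all integers are exactly representable. A sketch to check this empirically — `empirical_maxintfloat` is a hypothetical helper, shown with `Float16` so it runs without SoftPosit.jl:

```julia
# Hypothetical helper: find the first integer n for which n and n+1 can no
# longer be distinguished in type T. For the builtin floats this equals
# maxintfloat(T); the same brute-force search would apply to posit types.
function empirical_maxintfloat(::Type{T}) where {T<:Real}
    n = 1
    while T(n + 1) - T(n) == 1  # are consecutive integers still distinct?
        n += 1
    end
    return n
end

empirical_maxintfloat(Float16)  # 2048 = 2^11, same as maxintfloat(Float16)
```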
@tkgunaratne let me know if there's anything else needed that we can include in v0.5 for FFTs with posits |
Looks like they're converted to |
Oops didn't notice that :-(. Got excited as the FFT function worked. |
That must be an explicit conversion, as I deliberately do not define promotion between floats and posits:

```julia
julia> 1f0*Posit16(1)
ERROR: promotion of types Float32 and Posit16 failed to change any arguments
```
|
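The deliberate absence of a promotion rule is what produces this error. A minimal sketch with a toy type (`MyNum` is purely illustrative, not SoftPosit code): conversion is defined, `Base.promote_rule` is not, so mixed-type arithmetic fails loudly while explicit conversion still works.

```julia
# Toy wrapper type: conversion to/from Float32 is defined, but no
# Base.promote_rule, so Julia's mixed-type fallback cannot promote and
# raises "promotion of types ... failed to change any arguments".
struct MyNum <: Real
    x::Float32
end
MyNum(x::Real) = MyNum(Float32(x))
Base.Float32(a::MyNum) = a.x
Base.:*(a::MyNum, b::MyNum) = MyNum(a.x * b.x)

MyNum(2.0) * MyNum(3.0)    # works: same types, no promotion needed
# 1f0 * MyNum(2.0)         # ERROR: promotion failed to change any arguments
Float32(MyNum(2.0)) * 1f0  # explicit conversion makes the intent visible
```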
Defining

```julia
Base.sinpi(x::T) where {T<:AbstractPosit} = convert(T,sinpi(float(x)))
```

then gives

```julia
julia> using FastTransforms

julia> x = complex(Posit16.(randn(8)));

julia> FastTransforms.generic_fft_pow2(x)
8-element Vector{Complex{Posit16}}:
 Posit16(3.8544922) + Posit16(0.0)im
 Posit16(2.0048828) - Posit16(2.7148438)im
 Posit16(-3.2929688) + Posit16(0.78149414)im
 Posit16(1.4091797) - Posit16(1.3051758)im
 Posit16(3.6767578) + Posit16(0.0)im
 Posit16(1.4082031) + Posit16(1.3061523)im
 Posit16(-3.2929688) - Posit16(0.78149414)im
 Posit16(2.0039062) + Posit16(2.7148438)im
```
|
As a sanity check I usually do the round-trip fft -> ifft:

```julia
julia> using FastTransforms, SoftPosit

julia> Base.sinpi(x::T) where {T<:AbstractPosit} = convert(T,sinpi(float(x)))

julia> x = complex(Posit16.(randn(8)))
8-element Vector{Complex{Posit16}}:
 Posit16(-0.9309082) + Posit16(0.0)im
 Posit16(0.44580078) + Posit16(0.0)im
 Posit16(-2.0751953) + Posit16(0.0)im
 Posit16(-1.7192383) + Posit16(0.0)im
 Posit16(-2.0234375) + Posit16(0.0)im
 Posit16(-0.2477417) + Posit16(0.0)im
 Posit16(-1.5751953) + Posit16(0.0)im
 Posit16(-1.5771484) + Posit16(0.0)im

julia> FastTransforms.generic_ifft_pow2(FastTransforms.generic_fft_pow2(x))
8-element Vector{Complex{Posit16}}:
 Posit16(-0.93115234) + Posit16(0.0)im
 Posit16(0.44604492) - Posit16(0.0002746582)im
 Posit16(-2.0742188) + Posit16(0.0)im
 Posit16(-1.7197266) + Posit16(0.00022888184)im
 Posit16(-2.0234375) + Posit16(0.0)im
 Posit16(-0.24731445) - Posit16(0.00015258789)im
 Posit16(-1.5751953) + Posit16(0.0)im
 Posit16(-1.5771484) + Posit16(0.00019836426)im
```

Looks close enough. Note that |
Forgot to add: see SoftPosit.jl/src/arithmetics.jl, lines 45 to 49, at commit 7f32549.
|
We still can't do:

```julia
julia> using .SoftPosit

julia> using FastTransforms

julia> x = complex(Posit16.(randn(8)))
8-element Vector{Complex{Posit16}}:
 Posit16(2.196289) + Posit16(0.0)im
 Posit16(0.31066895) + Posit16(0.0)im
 Posit16(0.6977539) + Posit16(0.0)im
 Posit16(1.1914062) + Posit16(0.0)im
 Posit16(0.2701416) + Posit16(0.0)im
 Posit16(-0.9902344) + Posit16(0.0)im
 Posit16(0.47131348) + Posit16(0.0)im
 Posit16(0.2368164) + Posit16(0.0)im

julia> x2 = FastTransforms.generic_ifft_pow2(FastTransforms.generic_fft_pow2(x))
8-element Vector{Complex{Posit16}}:
 Posit16(2.1972656) + Posit16(0.0)im
 Posit16(0.31079102) + Posit16(6.67572e-6)im
 Posit16(0.6977539) + Posit16(0.0)im
 Posit16(1.1914062) - Posit16(0.00025081635)im
 Posit16(0.27026367) + Posit16(0.0)im
 Posit16(-0.98999023) + Posit16(0.00025081635)im
 Posit16(0.4711914) + Posit16(0.0)im
 Posit16(0.23693848) - Posit16(6.67572e-6)im

julia> x ≈ x2
ERROR: promotion of types Float64 and Posit16 failed to change any arguments
Stacktrace:
 [1] error(::String, ::String, ::String)
   @ Base ./error.jl:42
 [2] sametype_error(input::Tuple{Float64, Posit16})
   @ Base ./promotion.jl:374
 [3] not_sametype(x::Tuple{Float64, Posit16}, y::Tuple{Float64, Posit16})
   @ Base ./promotion.jl:368
 [4] promote
   @ ./promotion.jl:351 [inlined]
 [5] +(x::Float64, y::Posit16)
   @ Base ./promotion.jl:379
 [6] generic_norm2(x::Vector{Complex{Posit16}})
   @ LinearAlgebra /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/stdlib/v1.7/LinearAlgebra/src/generic.jl:507
```

it seems that the norm computation in `generic_norm2` is where the `Float64` sneaks in. Otherwise:

```julia
julia> a = Posit16(rand())

julia> a ≈ a
true

julia> complex(a) ≈ complex(a)
true
```
|
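Since the scalar `≈` works and only the array method fails (via LinearAlgebra's `generic_norm2`, which accumulates in `Float64`), one possible workaround is worth noting: `isapprox` on arrays accepts a `norm` keyword, so a norm that stays in the element type can be supplied. A sketch, with `Float16` standing in for `Posit16` so it runs without SoftPosit.jl:

```julia
# isapprox(x, y; norm=...) lets us bypass LinearAlgebra's generic_norm2,
# which mixes Float64 into the accumulation. A max-abs norm stays in the
# element type. Float16 stands in for Posit16 here.
x  = Complex{Float16}[1, 2, 3]
x2 = x .+ Complex{Float16}(1f-3)   # simulate small round-trip errors

isapprox(x, x2; norm = v -> maximum(abs, v), rtol = 1f-2)   # true
isapprox(x, x2; norm = v -> maximum(abs, v), rtol = 1f-5)   # false
```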
Could this line " |
It's already in #70 😉 |
I was just wondering whether any additional work has to be done to support non-power-of-2 ffts/iffts with SoftPosit? |
This https://github.com/JuliaDSP/FourierTransforms.jl looks like a generic, not-just-power2 FFT implementation to me. It hasn't been touched in two years though. I think in general we should probably try to push for a generic Julia FFT in a standalone package, as has been previously discussed in other places (JuliaApproximation/FastTransforms.jl#86). I'd be happy to help out, although as a first step I'd vote to make power2 FFTs fast before tackling non-power2s, but maybe that could all go hand in hand. |
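For reference, a generic power-of-2 FFT is only a handful of lines of Julia. A minimal, unoptimised recursive radix-2 sketch (not FastTransforms' or FourierTransforms.jl's actual implementation; for a posit element type, `T(π)` additionally needs a conversion from `Irrational`):

```julia
# Minimal recursive radix-2 Cooley-Tukey FFT, generic over the element
# type T. Assumes length(x) is a power of two. The inverse transform
# follows from conjugation: ifft(x) = conj(fft(conj(x))) / n.
function fft_pow2(x::AbstractVector{Complex{T}}) where T
    n = length(x)
    n == 1 && return copy(x)
    even = fft_pow2(x[1:2:end])   # DFT of even-indexed samples
    odd  = fft_pow2(x[2:2:end])   # DFT of odd-indexed samples
    tw = [exp(-2im * T(π) * (k - 1) / n) for k in 1:n÷2]  # twiddle factors
    return vcat(even .+ tw .* odd, even .- tw .* odd)
end

ifft_pow2(x) = conj.(fft_pow2(conj.(x))) ./ length(x)
```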
Having said that, we soon need a generic FFT for SpeedyWeather.jl which currently uses |
https://github.com/JuliaDSP/FourierTransforms.jl Edit: oh, it's the same one you mentioned above, but it has been touched just two weeks ago! |
I'm afraid I upgraded to Julia 1.8.0 and that seems to have broken the fft operation with posits :-(
|
I cannot reproduce this issue:

```julia
julia> using SoftPosit, GenericFFT

julia> fft(Posit16.(randn(8)))
8-element Vector{Complex{Posit16}}:
 Posit16(-0.24487305) + Posit16(0.0)im
 Posit16(4.1367188) - Posit16(0.7182617)im
 Posit16(0.8757324) - Posit16(1.0927734)im
 Posit16(-1.3613281) + Posit16(0.60791016)im
 Posit16(0.2409668) + Posit16(0.0)im
 Posit16(-1.3603516) - Posit16(0.6088867)im
 Posit16(0.8757324) + Posit16(1.0927734)im
 Posit16(4.138672) + Posit16(0.7192383)im
```

with Julia v1.8, SoftPosit v0.5, GenericFFT v0.1.1. Also it seems to have been triggered from `>=(x::Posit16, y::Float64)` in `sinpi(x::Posit16)`, which in Julia v1.8 is fine:

```julia
julia> sinpi(Posit16(1.5))
Posit16(-1.0)
```

Note in general that an error related to promotion is not due to SoftPosit.jl, as we deliberately don't implement promotion between floats and posits to flag where unintentional type conversions happen (users of this package are probably interested in doing all arithmetic in posits, not floats). |
I greatly appreciate your prompt response. It seems that I had been using SoftPosit version 0.4 :-(. It works fine now. I may take this opportunity to ask if there is any progress on non-power-of-2 FFTs.
|
It seems that

```julia
julia> using SoftPosit, GenericFFT

julia> fft(Posit16.(randn(10)))
ERROR: MethodError: no method matching Posit16(::Irrational{:π})
```

is missing, which can quickly be added by converting first to Float32/64:

```julia
julia> Posit16(x::Irrational) = Posit16(Base.floattype(Posit16)(x))
```

But then there's also

```julia
julia> fft(Posit16.(randn(10)))
ERROR: promotion of types Float64 and Posit16 failed to change any arguments
Stacktrace:
  [1] error(::String, ::String, ::String)
    @ Base ./error.jl:44
  [2] sametype_error(input::Tuple{Float64, Posit16})
    @ Base ./promotion.jl:383
  [3] not_sametype(x::Tuple{Float64, Posit16}, y::Tuple{Float64, Posit16})
    @ Base ./promotion.jl:377
  [4] promote
    @ ./promotion.jl:360 [inlined]
  [5] *(x::Float64, y::Posit16)
    @ Base ./promotion.jl:389
  [6] lerpi
    @ ./range.jl:958 [inlined]
  [7] unsafe_getindex(r::LinRange{Posit16, Int64}, i::Int64)
    @ Base ./range.jl:951
...
 [19] getindex
    @ ./broadcast.jl:597 [inlined]
 [20] macro expansion
    @ ./broadcast.jl:961 [inlined]
 [21] macro expansion
    @ ./simdloop.jl:77 [inlined]
 [22] copyto!
    @ ./broadcast.jl:960 [inlined]
 [23] copyto!
    @ ./broadcast.jl:913 [inlined]
 [24] copy
    @ ./broadcast.jl:885 [inlined]
 [25] materialize
    @ ./broadcast.jl:860 [inlined]
 [26] generic_fft(x::Vector{Complex{Posit16}})
    @ GenericFFT ~/.julia/packages/GenericFFT/Oa0Sx/src/fft.jl:52
 [27] generic_fft
    @ ~/.julia/packages/GenericFFT/Oa0Sx/src/fft.jl:22 [inlined]
```

which has to be addressed in GenericFFT.jl. Non-power-of-2 therein is implemented as a direct transform, so the complexity is worse than with FFTW, but it works (using Float16, which doesn't have the problems discussed above):

```julia
julia> GenericFFT.generic_fft(rand(Float16,8))
8-element Vector{ComplexF16}:
 Float16(5.895) + Float16(0.0)im
 Float16(-0.47) - Float16(0.3445)im
 Float16(-0.3438) - Float16(0.2461)im
 Float16(0.3042) + Float16(0.05566)im
 Float16(-0.7695) + Float16(0.0)im
 Float16(0.3044) - Float16(0.0569)im
 Float16(-0.3438) + Float16(0.2461)im
 Float16(-0.4688) + Float16(0.3457)im

julia> GenericFFT.generic_fft(rand(Float16,10))
10-element Vector{ComplexF16}:
 Float16(4.754) - Float16(0.001587)im
 Float16(0.6123) - Float16(0.1678)im
 Float16(-0.09766) + Float16(1.582)im
 Float16(0.28) - Float16(0.722)im
 Float16(-0.2092) - Float16(0.3606)im
 Float16(0.35) + Float16(0.00579)im
 Float16(-0.2206) + Float16(0.3496)im
 Float16(0.2737) + Float16(0.7065)im
 Float16(-0.05493) - Float16(1.603)im
 Float16(0.633) + Float16(0.1726)im
```

So if you are interested in getting this going, we should consider updating GenericFFT.jl. It shouldn't be hard; the package probably just needs some care. |
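The direct transform mentioned above is conceptually just the DFT sum evaluated bin by bin. A sketch (not GenericFFT.jl's actual code) of what such an O(n²) fallback looks like:

```julia
# Direct O(n^2) DFT for arbitrary lengths: X[k] = sum_j x[j] * w^(j*k)
# with w = exp(-2πi/n). No length restriction, but quadratic cost, which
# is why a recursive FFT algorithm matters for large n.
function direct_dft(x::AbstractVector{Complex{T}}) where T
    n = length(x)
    w = exp(-2im * T(π) / n)   # primitive n-th root of unity
    return [sum(x[j+1] * w^(j*k) for j in 0:n-1) for k in 0:n-1]
end
```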
Appreciate it Milan! However, when I tried
This opposed to
Also, I noticed a steep degradation of accuracy when compared with the Float64 implementation. Also, I got the following warning
|
Yes, don't forget that there are two things here. 1) Get a generic fft to return a We have almost ticked off 1), but 2) requires more work. I'm not surprised that for large vectors the direct FT may hit overflows with Float16, which I believe is because it's a long sum of elements that can easily go beyond floatmax(Float16). However, that would be important to know, and it clearly points towards the need for a recursive algorithm, as the FFT is |
I agree, I think some modification should be done to the FFT algorithm to cater for the Float16 and Posit16 formats: floatmax(Float16) = 65504 and max(abs(fft(rand(150)))) ~ 150. |
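That overflow argument is easy to demonstrate directly: the DC bin of an FFT is a plain sum of all n inputs, which grows with n while the Float16 range stays fixed. A minimal sketch, with plain sums standing in for the DC bin:

```julia
# The DC bin of a length-n FFT is sum(x), which grows with n while
# floatmax(Float16) = 65504. Summing in Float16 overflows; widening the
# accumulator (as a mixed-precision FFT might) avoids it.
sum(fill(Float16(100), 1000))           # Inf: partial sums exceed 65504
sum(Float32, fill(Float16(100), 1000))  # 100000.0f0: accumulated in Float32
```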
Getting back to the topic of non-power-of-two FFTs; |
Hi @milankl, just wondering whether it is too difficult to support non-power-of-2 ffts/iffts with posits :-D. |
`Base.maxintfloat(::Type{Posit})` has to be defined.