FFTs with posits #64

Open
milankl opened this issue May 7, 2022 · 26 comments
Labels
enhancement New feature or request

Comments

@milankl
Owner

milankl commented May 7, 2022

Base.maxintfloat(::Type{Posit}) has to be defined.

julia> using SoftPosit, FastTransforms

julia> x = Posit16.(rand(8))
8-element Vector{Posit16}:
 Posit16(0.6254883)
 Posit16(0.7780762)
 Posit16(0.8145752)
 Posit16(0.50598145)
 Posit16(0.74768066)
 Posit16(0.29797363)
 Posit16(0.89624023)
 Posit16(0.6437988)

julia> fft(x)
ERROR: MethodError: no method matching maxintfloat(::Type{Posit16})
@milankl
Owner Author

milankl commented May 10, 2022

I believe the maxintfloat values are:

Base.maxintfloat(::Type{Posit8}) = 8              # = 2^3
Base.maxintfloat(::Type{Posit16}) = 512           # = 2^9
Base.maxintfloat(::Type{Posit16_2}) = 1024        # = 2^10
Base.maxintfloat(::Type{Posit32}) = 8388608       # = 2^23
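
These can be sanity-checked empirically (a sketch; it assumes Float64(T(n)) round-trips exactly whenever the integer n is representable in T):

# the first integer that fails the round trip, minus one,
# should agree with maxintfloat(T)
firstgap(::Type{T}) where T = findfirst(n -> Float64(T(n)) != n, 1:10^7) - 1
firstgap(Posit16)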

milankl added the enhancement label May 10, 2022
@milankl
Owner Author

milankl commented Jun 10, 2022

Base.maxintfloat(::AbstractPosit) added with #68

@milankl
Owner Author

milankl commented Jun 10, 2022

@tkgunaratne let me know if there's anything else needed that we can include in v0.5 for FFTs with posits

@tkgunaratne

[screenshot of fft output]
success!!

Thanks a lot!! :-)

@giordano
Contributor

Looks like they're converted to Complex{Float32}?

@tkgunaratne

tkgunaratne commented Jun 10, 2022

Oops, didn't notice that :-(. Got excited that the fft function worked.

@milankl
Owner Author

milankl commented Jun 11, 2022

That must be an explicit conversion, as I deliberately do not define promotion between floats and posits:

julia> 1f0*Posit16(1)
ERROR: promotion of types Float32 and Posit16 failed to change any arguments
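
The intended pattern is an explicit conversion on one side or the other (a sketch):

julia> Posit16(1f0) * Posit16(1)   # convert the float to a posit
Posit16(1.0)

julia> 1f0 * Float32(Posit16(1))   # or the posit to a float
1.0f0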

@milankl
Owner Author

milankl commented Jun 11, 2022

sinpi is missing

Base.sinpi(x::T) where {T<:AbstractPosit} = convert(T,sinpi(float(x)))

Then fft still seems to convert to Float32, but calling generic_fft_pow2 directly does the job:

julia> using FastTransforms

julia> x = complex(Posit16.(randn(8)));

julia> FastTransforms.generic_fft_pow2(x)
8-element Vector{Complex{Posit16}}:
  Posit16(3.8544922) + Posit16(0.0)im
  Posit16(2.0048828) - Posit16(2.7148438)im
 Posit16(-3.2929688) + Posit16(0.78149414)im
  Posit16(1.4091797) - Posit16(1.3051758)im
  Posit16(3.6767578) + Posit16(0.0)im
  Posit16(1.4082031) + Posit16(1.3061523)im
 Posit16(-3.2929688) - Posit16(0.78149414)im
  Posit16(2.0039062) + Posit16(2.7148438)im
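
If other point-wise functions turn out to be missing as well, the same convert-compute-convert one-liner should apply, e.g. for cospi (an assumption; I haven't checked which functions the generic FFT actually calls):

Base.cospi(x::T) where {T<:AbstractPosit} = convert(T,cospi(float(x)))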

@giordano
Contributor

As a sanity check I usually do the round-trip fft->ifft:

julia> using FastTransforms, SoftPosit

julia> Base.sinpi(x::T) where {T<:AbstractPosit} = convert(T,sinpi(float(x)))

julia> x = complex(Posit16.(randn(8)))
8-element Vector{Complex{Posit16}}:
 Posit16(-0.9309082) + Posit16(0.0)im
 Posit16(0.44580078) + Posit16(0.0)im
 Posit16(-2.0751953) + Posit16(0.0)im
 Posit16(-1.7192383) + Posit16(0.0)im
 Posit16(-2.0234375) + Posit16(0.0)im
 Posit16(-0.2477417) + Posit16(0.0)im
 Posit16(-1.5751953) + Posit16(0.0)im
 Posit16(-1.5771484) + Posit16(0.0)im

julia> FastTransforms.generic_ifft_pow2(FastTransforms.generic_fft_pow2(x))
8-element Vector{Complex{Posit16}}:
 Posit16(-0.93115234) + Posit16(0.0)im
  Posit16(0.44604492) - Posit16(0.0002746582)im
  Posit16(-2.0742188) + Posit16(0.0)im
  Posit16(-1.7197266) + Posit16(0.00022888184)im
  Posit16(-2.0234375) + Posit16(0.0)im
 Posit16(-0.24731445) - Posit16(0.00015258789)im
  Posit16(-1.5751953) + Posit16(0.0)im
  Posit16(-1.5771484) + Posit16(0.00019836426)im

Looks close enough. Note that isapprox doesn't work on Posit16 because SoftPosit doesn't define eps.

@milankl
Owner Author

milankl commented Jun 13, 2022

Forgot to add Base. ...

eps(::Type{Posit8}) = reinterpret(Posit8,0x28)
eps(::Type{Posit16}) = reinterpret(Posit16,0x0a00)
eps(::Type{Posit16_1}) = reinterpret(Posit16_1,0x0100)
eps(::Type{Posit32}) = reinterpret(Posit32,0x00a0_0000)
eps(x::AbstractPosit) = max(x-prevfloat(x),nextfloat(x)-x)
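
If eps(T) is meant to mirror the IEEE convention of the spacing at one (an assumption), the bit patterns can be cross-checked against the generic definition:

# should hold if the bit pattern encodes the spacing at one
eps(Posit16) == nextfloat(one(Posit16)) - one(Posit16)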

@milankl
Owner Author

milankl commented Jun 13, 2022

We still can't do isapprox(::Vector{Posit16}, ::Vector{Posit16}) because

julia> using .SoftPosit
julia> using FastTransforms
julia> x = complex(Posit16.(randn(8)))
8-element Vector{Complex{Posit16}}:
   Posit16(2.196289) + Posit16(0.0)im
 Posit16(0.31066895) + Posit16(0.0)im
  Posit16(0.6977539) + Posit16(0.0)im
  Posit16(1.1914062) + Posit16(0.0)im
  Posit16(0.2701416) + Posit16(0.0)im
 Posit16(-0.9902344) + Posit16(0.0)im
 Posit16(0.47131348) + Posit16(0.0)im
  Posit16(0.2368164) + Posit16(0.0)im

julia> x2 = FastTransforms.generic_ifft_pow2(FastTransforms.generic_fft_pow2(x))
8-element Vector{Complex{Posit16}}:
   Posit16(2.1972656) + Posit16(0.0)im
  Posit16(0.31079102) + Posit16(6.67572e-6)im
   Posit16(0.6977539) + Posit16(0.0)im
   Posit16(1.1914062) - Posit16(0.00025081635)im
  Posit16(0.27026367) + Posit16(0.0)im
 Posit16(-0.98999023) + Posit16(0.00025081635)im
   Posit16(0.4711914) + Posit16(0.0)im
  Posit16(0.23693848) - Posit16(6.67572e-6)im

julia> x ≈ x2
ERROR: promotion of types Float64 and Posit16 failed to change any arguments
Stacktrace:
  [1] error(::String, ::String, ::String)
    @ Base ./error.jl:42
  [2] sametype_error(input::Tuple{Float64, Posit16})
    @ Base ./promotion.jl:374
  [3] not_sametype(x::Tuple{Float64, Posit16}, y::Tuple{Float64, Posit16})
    @ Base ./promotion.jl:368
  [4] promote
    @ ./promotion.jl:351 [inlined]
  [5] +(x::Float64, y::Posit16)
    @ Base ./promotion.jl:379
  [6] generic_norm2(x::Vector{Complex{Posit16}})
    @ LinearAlgebra /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/stdlib/v1.7/LinearAlgebra/src/generic.jl:507

It seems that LinearAlgebra.normInf(x::Posit16) returns a Float32, which gets promoted to Float64 here and causes a problem in an addition inside LinearAlgebra.generic_norm2.

Otherwise isapprox works fine

julia> a = Posit16(rand())
julia> a ≈ a
true

julia> complex(a) ≈ complex(a)
true
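
Until the norm issue is fixed upstream, a workaround sketch for vectors is to compare after an explicit conversion, with the tolerance loosened to Posit16-level accuracy:

julia> isapprox(ComplexF64.(x), ComplexF64.(x2), rtol=1e-3)
true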

@tkgunaratne

tkgunaratne commented Jun 13, 2022

Could this line "julia> Base.sinpi(x::T) where {T<:AbstractPosit} = convert(T,sinpi(float(x)))" be added to a file in SoftPosit/src?

@giordano
Contributor

It's already in #70 😉

@tkgunaratne

I was just wondering whether any additional work has to be done to support non-power-of-2 ffts/iffts with SoftPosit?

@milankl
Owner Author

milankl commented Jun 23, 2022

This https://github.com/JuliaDSP/FourierTransforms.jl looks like a generic, not-just-power-of-2 FFT implementation to me. It hasn't been touched in two years though. In general, I think we should probably try to push for a generic Julia FFT in a standalone package, as has been previously discussed in other places (JuliaApproximation/FastTransforms.jl#86). I'd be happy to help out, although as a first step I'd vote to make power-of-2 FFTs fast before tackling non-power-of-2s, but maybe that could all go hand in hand.

@milankl
Owner Author

milankl commented Jun 23, 2022

Having said that, we soon need a generic FFT for SpeedyWeather.jl, which currently uses FFTW.rFFTWPlan{NF} and FFTW.rFFTWPlan{Complex{NF}} from FFTW.jl, which only supports NF = Float32/Float64. But again, the focus here would be to extend the FFTW interface to generic types rather than to support non-power-of-2. If you are interested though, @tkgunaratne, we could have a call and see what's actually needed to create such a package.

@giordano
Contributor

giordano commented Jun 23, 2022

https://github.com/JuliaDSP/FourierTransforms.jl Edit: oh, it's the same one you mentioned above, but it has been touched just two weeks ago!

@tkgunaratne

I'm afraid that I upgraded to Julia 1.8.0 and that seems to have broken the fft operation with posits :-(

ERROR: promotion of types Float64 and Posit16 failed to change any arguments
Stacktrace:
  [1] error(::String, ::String, ::String)
    @ Base ./error.jl:44
  [2] sametype_error(input::Tuple{Float64, Posit16})
    @ Base ./promotion.jl:383
  [3] not_sametype(x::Tuple{Float64, Posit16}, y::Tuple{Float64, Posit16})
    @ Base ./promotion.jl:377
  [4] promote
    @ ./promotion.jl:360 [inlined]
  [5] <=(x::Float64, y::Posit16)
    @ Base ./promotion.jl:429
  [6] >=(x::Posit16, y::Float64)
    @ Base ./operators.jl:429
  [7] sinpi(x::Posit16)
    @ Base.Math ./special/trig.jl:758
  [8] generic_fft_pow2!(x::Vector{Posit16})
    @ GenericFFT ~/.julia/packages/GenericFFT/Oa0Sx/src/fft.jl:106
  [9] generic_fft_pow2(x::Vector{Complex{Posit16}})
    @ GenericFFT ~/.julia/packages/GenericFFT/Oa0Sx/src/fft.jl:126
 [10] generic_fft(x::Vector{Complex{Posit16}})
    @ GenericFFT ~/.julia/packages/GenericFFT/Oa0Sx/src/fft.jl:50
 [11] generic_fft
    @ ~/.julia/packages/GenericFFT/Oa0Sx/src/fft.jl:22 [inlined]
 [12] *
    @ ~/.julia/packages/GenericFFT/Oa0Sx/src/fft.jl:260 [inlined]
 [13] fft(x::Vector{Complex{Posit16}})
    @ AbstractFFTs ~/.julia/packages/AbstractFFTs/Wg2Yf/src/definitions.jl:62
 [14] top-level scope
    @ ~/Julia_Scripts/SoftPosit.jl/Examples/FFT_Example.jl:6

@milankl
Owner Author

milankl commented Aug 20, 2022

I cannot reproduce this issue

julia> using SoftPosit, GenericFFT

julia> fft(Posit16.(randn(8)))
8-element Vector{Complex{Posit16}}:
 Posit16(-0.24487305) + Posit16(0.0)im
   Posit16(4.1367188) - Posit16(0.7182617)im
   Posit16(0.8757324) - Posit16(1.0927734)im
  Posit16(-1.3613281) + Posit16(0.60791016)im
   Posit16(0.2409668) + Posit16(0.0)im
  Posit16(-1.3603516) - Posit16(0.6088867)im
   Posit16(0.8757324) + Posit16(1.0927734)im
    Posit16(4.138672) + Posit16(0.7192383)im

with Julia v1.8, SoftPosit v0.5, GenericFFT v0.1.1. Also, it seems to have been triggered from >=(x::Posit16, y::Float64) in sinpi(x::Posit16), which in Julia v1.8 works fine:

julia> sinpi(Posit16(1.5))
Posit16(-1.0)

Note, in general, that a promotion error is not a bug in SoftPosit.jl: we deliberately don't implement promotion between floats and posits in order to flag where unintentional type conversions happen (users of this package presumably want to do all arithmetic in posits, not floats).

@tkgunaratne

tkgunaratne commented Oct 11, 2022 via email

@milankl
Owner Author

milankl commented Oct 11, 2022

It seems that

julia> using SoftPosit, GenericFFT

julia> fft(Posit16.(randn(10)))
ERROR: MethodError: no method matching Posit16(::Irrational{:π})

is missing, which can quickly be added by converting to Float32/64 first:

julia> Posit16(x::Irrational) = Posit16(Base.floattype(Posit16)(x))
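
A generic variant covering all posit formats might look like this (a sketch, assuming Float64 is an acceptable intermediate):

(::Type{T})(x::Irrational) where {T<:AbstractPosit} = T(Float64(x))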

But then there's also

julia> fft(Posit16.(randn(10)))
ERROR: promotion of types Float64 and Posit16 failed to change any arguments
Stacktrace:
  [1] error(::String, ::String, ::String)
    @ Base ./error.jl:44
  [2] sametype_error(input::Tuple{Float64, Posit16})
    @ Base ./promotion.jl:383
  [3] not_sametype(x::Tuple{Float64, Posit16}, y::Tuple{Float64, Posit16})
    @ Base ./promotion.jl:377
  [4] promote
    @ ./promotion.jl:360 [inlined]
  [5] *(x::Float64, y::Posit16)
    @ Base ./promotion.jl:389
  [6] lerpi
    @ ./range.jl:958 [inlined]
  [7] unsafe_getindex(r::LinRange{Posit16, Int64}, i::Int64)
    @ Base ./range.jl:951
  ...
 [19] getindex
    @ ./broadcast.jl:597 [inlined]
 [20] macro expansion
    @ ./broadcast.jl:961 [inlined]
 [21] macro expansion
    @ ./simdloop.jl:77 [inlined]
 [22] copyto!
    @ ./broadcast.jl:960 [inlined]
 [23] copyto!
    @ ./broadcast.jl:913 [inlined]
 [24] copy
    @ ./broadcast.jl:885 [inlined]
 [25] materialize
    @ ./broadcast.jl:860 [inlined]
 [26] generic_fft(x::Vector{Complex{Posit16}})
    @ GenericFFT ~/.julia/packages/GenericFFT/Oa0Sx/src/fft.jl:52
 [27] generic_fft
    @ ~/.julia/packages/GenericFFT/Oa0Sx/src/fft.jl:22 [inlined]
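
Reading the trace, the failure should reproduce outside the FFT by indexing a LinRange of posits, since lerpi multiplies a Float64 interpolation weight with a Posit16 (a minimal sketch):

julia> LinRange(Posit16(0), Posit16(1), 5)[2]
ERROR: promotion of types Float64 and Posit16 failed to change any arguments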

This has to be addressed in GenericFFT.jl. Non-power-of-2 lengths are implemented there as a direct transform, so the complexity is worse than FFTW's, but it works (shown here with Float16, which doesn't suffer from the promotion problems discussed above):

julia> GenericFFT.generic_fft(rand(Float16,8))
8-element Vector{ComplexF16}:
   Float16(5.895) + Float16(0.0)im
   Float16(-0.47) - Float16(0.3445)im
 Float16(-0.3438) - Float16(0.2461)im
  Float16(0.3042) + Float16(0.05566)im
 Float16(-0.7695) + Float16(0.0)im
  Float16(0.3044) - Float16(0.0569)im
 Float16(-0.3438) + Float16(0.2461)im
 Float16(-0.4688) + Float16(0.3457)im

julia> GenericFFT.generic_fft(rand(Float16,10))
10-element Vector{ComplexF16}:
    Float16(4.754) - Float16(0.001587)im
   Float16(0.6123) - Float16(0.1678)im
 Float16(-0.09766) + Float16(1.582)im
     Float16(0.28) - Float16(0.722)im
  Float16(-0.2092) - Float16(0.3606)im
     Float16(0.35) + Float16(0.00579)im
  Float16(-0.2206) + Float16(0.3496)im
   Float16(0.2737) + Float16(0.7065)im
 Float16(-0.05493) - Float16(1.603)im
    Float16(0.633) + Float16(0.1726)im

So if you are interested in getting this going, we should consider updating GenericFFT.jl. It shouldn't be hard; the package probably just needs some care.

@tkgunaratne

tkgunaratne commented Oct 11, 2022

Appreciate it, Milan!

However, when I tried GenericFFT.generic_fft(rand(Float16,150)) I got the following:

150-element Vector{ComplexF16}:  
 NaN16 + NaN16*im  
 NaN16 + NaN16*im  
 NaN16 + NaN16*im  
 NaN16 + NaN16*im
 NaN16 + NaN16*im
 NaN16 + NaN16*im
 NaN16 + NaN16*im
       ⋮
 NaN16 + NaN16*im
 NaN16 + NaN16*im
 NaN16 + NaN16*im
 NaN16 + NaN16*im
 NaN16 + NaN16*im
 NaN16 + NaN16*im

This is as opposed to

julia> GenericFFT.generic_fft(rand(Float16,256))
256-element Vector{ComplexF16}:
  Float16(128.2) + Float16(0.0)im
   Float16(3.02) + Float16(4.0)im
 Float16(-5.164) - Float16(2.139)im
 Float16(0.3098) - Float16(1.306)im
  Float16(1.049) + Float16(1.336)im
 Float16(-1.275) - Float16(4.324)im
  Float16(3.809) - Float16(1.603)im
                 ⋮
  Float16(3.797) + Float16(1.619)im
 Float16(-1.293) + Float16(4.324)im
  Float16(1.053) - Float16(1.318)im
 Float16(0.3127) + Float16(1.313)im
  Float16(-5.18) + Float16(2.129)im
  Float16(3.031) - Float16(3.996)im

Also, I noticed a steep degradation of accuracy compared with the Float64 implementation.

Also, I got the following warning:

┌ Warning: Using generic fft for FFTW number type.
└ @ GenericFFT C:\Users\gunaratnet\.julia\packages\GenericFFT\Oa0Sx\src\fft.jl:48

@milankl
Owner Author

milankl commented Oct 13, 2022

Yes, don't forget that there are two things here: 1) get a generic fft to return a Vector{Complex{T}} for an any-length Vector{T} argument, and 2) address precision and range issues within that algorithm.

We have almost ticked off 1), but 2) requires more work. I'm not surprised that for large vectors the direct Fourier transform may hit overflows with Float16: I believe each output is a long sum of elements that can easily go beyond floatmax(Float16). That would be important to confirm, and it clearly points towards the need for a recursive algorithm, which the FFT is.
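
To illustrate the suspected mechanism (a sketch, not the actual code path in GenericFFT.jl): once a partial sum overflows to Inf16, later cancellation turns it into the NaNs seen above.

julia> s = Float16(6e4) + Float16(6e4)   # partial sum exceeds floatmax(Float16) = 65504
Inf16

julia> s - s   # Inf - Inf gives NaN
NaN16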

@tkgunaratne

I agree; I think some modification of the FFT algorithm is needed to cater for the Float16/Posit16 formats.

floatmax(Float16) = 65504, while maximum(abs, fft(rand(150))) ~ 150.

@tkgunaratne

Getting back to the topic of non-power-of-two FFTs:
MicroFloatingPoints.jl managed to facilitate non-power-of-two FFTs for both the GenericFFT.jl and FastTransforms.jl packages with a few simple modifications to its code. I suspect it was fairly straightforward because under the hood it is all IEEE 754-compatible floats.

@tkgunaratne

Hi @milankl, just wondering whether it is too difficult to support non-power-of-2 ffts/iffts with posits :-D
