Releases · LuxDL/Lux.jl
Lux v1.3.1
Lux v1.3.0
Merged pull requests:
- fix: init hidden state for reactant (#1026) (@avik-pal)
- fix for Zygote and ChainRules OneElement (#1038) (@CarloLucibello)
- CompatHelper: bump compat for Optimisers to 0.4 for package DDIM, (keep existing compat) (#1059) (@github-actions[bot])
- fix: gracefully handle `OneHotArrays` (#1064) (@avik-pal)
- chore: bump crate-ci/typos from 1.27.0 to 1.27.3 (#1065) (@dependabot[bot])
- fix: unsafe free for OneHotArrays (#1067) (@avik-pal)
- feat: update to Functors v0.5 (#1069) (@avik-pal)
- ci: generate tags for subdir projects (#1071) (@avik-pal)
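The Functors entries here (#1069, and issue #1061 below) move Lux onto Functors v0.5, which traverses arbitrary structs field-by-field by default. A minimal sketch of that `fmap` traversal, assuming only that Functors.jl ≥ 0.5 is installed:

```julia
# Minimal sketch (not from these release notes) of Functors v0.5-style traversal.
# `fmap` recurses through the structure and applies the function at each leaf.
using Functors

params = (weight = [1.0, 2.0], bias = [0.0], name = "layer")
scaled = fmap(x -> x isa AbstractArray ? 2 .* x : x, params)
# -> (weight = [2.0, 4.0], bias = [0.0], name = "layer")
```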
Closed issues:
- scalar indexing of gpu array in Zygote gradient (#1016)
- sending to devices tuples, named tuples and arrays does not keep track of identical objects (#1017)
- Compiling Recurrent Models with Reactant (#1025)
- Simplify recursive code with `Functors` v0.5 (#1061)
- `unsafe_free!` from MLDataDevices fails for OneHotArrays (#1066)
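Several entries in this release (#1016, #1064, #1066/#1067) concern how one-hot arrays behave under device transfer. A hedged sketch of the pattern involved, assuming OneHotArrays.jl and MLDataDevices.jl are installed; `gpu_device()` falls back to a CPU device when no GPU backend is loaded:

```julia
# Hedged sketch (not from these release notes) of the OneHotArrays/MLDataDevices
# interaction the fixes above address.
using OneHotArrays, MLDataDevices

labels = onehotbatch([1, 2, 3, 1], 1:3)  # 3x4 one-hot matrix
dev = gpu_device()                       # CPU fallback if no GPU backend loaded
labels_dev = dev(labels)                 # device transfer of a one-hot array
```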
MLDataDevices-v1.6.1
MLDataDevices-v1.6.0
Diff since MLDataDevices-v1.5.3
Merged pull requests:
- fix: init hidden state for reactant (#1026) (@avik-pal)
- feat: update to Functors v0.5 (#1069) (@avik-pal)
LuxTestUtils-v1.6.0
Merged pull requests:
- Rewrite (#7) (@avik-pal)
- Rename to Lux (#11) (@avik-pal)
- Initial Documentation (#14) (@avik-pal)
- Minor Updates (#15) (@avik-pal)
- Better CUDNN Dispatches (#16) (@avik-pal)
- Tutorials (#21) (@avik-pal)
- Proper dispatch for types not supported by CUDNN (#23) (@avik-pal)
- [WIP] Recurrent Neural Networks (#24) (@avik-pal)
- Fix math display in docs (#27) (@gdalle)
- Initial ViT Implementation & Pretrained ImageNet Models (#29) (@avik-pal)
- CompatHelper: bump compat for Setfield to 1, (keep existing compat) (#30) (@github-actions[bot])
- Code Formatting -- SciMLStyle (#31) (@avik-pal)
- Cleanup generated function style (#33) (@avik-pal)
- Update README.md (#37) (@zsz00)
- Fix doc for `PairwiseFusion` (#39) (@theabhirath)
- Extending `Scale` to allow for multiple dimension inputs (#40) (@theabhirath)
- Fix Zygote error caused due to `fill!` (#41) (@theabhirath)
- CompatHelper: bump compat for ComponentArrays to 0.12, (keep existing compat) (#43) (@github-actions[bot])
- Update JET tests to allow julia v1.6 (#47) (@avik-pal)
- Formatting updates and relax parameter type (#48) (@avik-pal)
- Enable doctests in CI (#51) (@avik-pal)
- fix quickstart example (#52) (@visr)
- Test on 1.8 (#54) (@avik-pal)
- Separate out testing unreleased julia versions (#55) (@avik-pal)
- Cleaner and Better Documentation (#56) (@avik-pal)
- Bump Pkg Compats (#66) (@avik-pal)
- CompatHelper: bump compat for MLDatasets to 0.7 for package examples, (keep existing compat) (#67) (@github-actions[bot])
- Manual to translate Flux to Lux (#69) (@avik-pal)
- Try codecov for doctests (#70) (@avik-pal)
- Add tests for utility functions (#74) (@avik-pal)
- Add tip to install packages (#76) (@Karthik-d-k)
- More Testing + Deprecate Nonsensical Functions + Better Naming for Kwargs (#80) (@avik-pal)
- CompatHelper: add new compat entry for Optimisers at version 0.2, (keep existing compat) (#82) (@github-actions[bot])
- Update rrules so that we can support Yota (#85) (@avik-pal)
- CompatHelper: bump compat for FluxMPI to 0.6 for package examples, (keep existing compat) (#86) (@github-actions[bot])
- Update comparison section in overview.md (#88) (@ToucheSir)
- Fix typos (#89) (@claforte)
- Fix minor typos in the docs (#93) (@gabrevaya)
- making x Float32 in migrate from Flux example (#97) (@gabrevaya)
- add init_hidden_state function (#101) (@gabrevaya)
- JLArray is now registered (#103) (@YichengDWu)
- [LuxTraining] Wrappers for less clunky training loops (#104) (@avik-pal)
- Use OneHotArrays (#105) (@YichengDWu)
- Fixes WeightNorm with zero Parameter bug (#106) (@avik-pal)
- fix state update in NeuralODE example (#107) (@gabrevaya)
- Deprecate `elementwise_*` and `applyactivation` (#113) (@avik-pal)
- Go through the dense bias deprecation (#114) (@avik-pal)
- Fix Scale's paramlength (#116) (@lungd)
- Trainable hidden states (#117) (@lungd)
- Rnn bias deprecation (#120) (@lungd)
- Add use_bias kwarg to LSTMCell and GRUCell (#121) (@lungd)
- Update docs for dense layer (#124) (@avik-pal)
- Upper bound ComponentArrays (#125) (@avik-pal)
- Relax ComponentArrays compat (#126) (@avik-pal)
- Layer Normalization Implementation (#127) (@avik-pal)
- LSTM docs: don't go over first element in sequence twice (#132) (@visr)
- fix PairwiseFusion docs (#133) (@YichengDWu)
- Generic recurrent cells (#136) (@jumerckx)
- relu tests with finite diff is too unreliable (#137) (@avik-pal)
- Add kaiming initialization (#138) (@YichengDWu)
- Remove Val in typeinfo of WeightNorm (#140) (@avik-pal)
- Named Layers inside Generic Containers (#143) (@avik-pal)
- Allow fmapping over the model (#144) (@avik-pal)
- Update Imagenet example (#147) (@avik-pal)
- Make normalization more AD friendly (Diffractor) (#148) (@avik-pal)
- Fix CuArray -> Array rrule (#149) (@avik-pal)
- Allow indexing into Chains (#150) (@avik-pal)
- API for freezing layers (#151) (@avik-pal)
- Allow controlling fast activation transformation (#153) (@avik-pal)
- Introducing LuxLib.jl: Effectively pullout some of the custom layer implementations from Lux.jl (#154) (@avik-pal)
- Try relaxing JET version (#155) (@avik-pal)
- Update to use LuxLib (#156) (@avik-pal)
- Allow dispatch using `Lux.apply` (#158) (@avik-pal)
- Mark non differentiable code paths (#160) (@avik-pal)
- Fix generic GN dispatch for non 4D arrays (#161) (@avik-pal)
- Add dispatch for subarray (#162) (@avik-pal)
- Add More Layers (#163) (@avik-pal)
- Fix type stability in normalization implementation (#164) (@avik-pal)
- Codecov for lib directories Take 2 (#165) (@avik-pal)
- Add freeze tests to runtests (#166) (@avik-pal)
- Precompile common workflows + check invalidations (#167) (@avik-pal)
- Make normalization typestable (#168) (@avik-pal)
- Add a manual page on precompilation (#169) (@avik-pal)
- Deprecate Lux.transform in favor of Flux2Lux.jl (#170) (@avik-pal)
- Remove dead code and improve var for Tracker.jl support (#171) (@avik-pal)
- Hyper Network Example (#172) (@avik-pal)
- Modify mkdocs settings (#173) (@avik-pal)
- Make ViT work on GPUs (#174) (@avik-pal)
- Add sensible recurrent layer wrappers (#175) (@avik-pal)
- `setup` only on AbstractRules (#176) (@avik-pal)
- Start using Flux2Lux (#177) (@avik-pal)
- Fix some displays (#178) (@avik-pal)
- Relax dropout types (#179) (@avik-pal)
- Add instancenorm and alpha_dropout implementations (#180) (@avik-pal)
- Add InstanceNorm and AlphaDropout (#181) (@avik-pal)
- CompatHelper: bump compat for MLUtils to 0.3 for package examples, (keep existing compat) (#184) (@github-actions[bot])
- remove convert rrule (#185) (@ArnoStrouwen)
- CompatHelper: bump compat for OneHotArrays to 0.2 for package examples, (keep existing compat) (#186) (@github-actions[bot])
- CompatHelper: bump compat for Turing to 0.22 for package examples, (keep existing compat) (#188) (@github-actions[bot])
- Fix layer_map for custom layers (#189) (@avik-pal)
- add example of DDIM implementation (#190) (@yng87)
- LuxCore.jl: Extremely light dependency for Lux Compatibility (#191) (@avik-pal)
- Revert github workflows for merged LuxCore.jl (#193) (@avik-pal)
- CompatHelper: bump compat for MLUtils to 0.3 for package ImageNet, (keep existing compat) (#194) (@github-actions[bot])
- CompatHelper: bump compat for Setfield to 1 for package ImageNet, (keep existing compat) (#195) (@github-actions[bot])
- CompatHelper: bump compat for OneHotArrays to 0.2 for package ImageNet, (keep existing compat) (#196) (@github-actions[bot])
- ADAM -> Adam (#197) (@cossio)
- CompatHelper: bump compat for Functors to 0.4, (keep existing compat) (#199) (@github-actions[bot])
- CompatHelper: bump compat for Functors to 0.4 for package examples, (keep existing compat) (#200) (@github-actions[bot])
- CompatHelper: bump compat for Functors to 0.4 for package ImageNet, (keep existing compat) (#201) (@github-actions[bot])
- Add easy tied weights/parameter sharing support (#202) (@avik-pal)
- CompatHelper: bump compat for Functors to 0.4 for package LuxCore, (keep existing compat) (#203) (@github-actions[bot])
- CompatHelper: add new compat entry for Zygote at version 0.6 for package DDIM, (keep existing compat) (#218) (@github-actions[bot])
- Update DDIM compat requirements (#219) (@avik-pal)
- Update examples (#221) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.23 for package examples, (keep existing compat) (#222) (@github-actions[bot])
- Fix docs (#223) (@avik-pal)
- CompatHelper: bump compat for MLUtils to 0.4 for package examples, (keep existing compat) (#226) (@github-actions[bot])
- CompatHelper: bump compat for MLUtils to 0.4 for package ImageNet, (keep existing compat) (#227) (@github-actions[bot])
- CompatHelper: bump compat for MLUtils to 0.4 for package DDIM, (keep existing compat) (#228) (@github-actions[bot])
- Functor ambiguity fix (#229) (@avik-pal)
- Add all compats together (#238) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.24 for package examples, (keep existing compat) (#241) (@github-actions[bot])
- CompatHelper: bump compat for JET to 0.7 for package test, (keep existing compat) (#251) (@github-actions[bot])
- [WIP] Use Extensions for Flux2Lux (#261) (@avik-pal)
- Cleaner test workflow (#262) (@avik-pal)
- Add a patch for #243 (#263) (@avik-pal)
- Update LuxLib dependencies (#265) (@avik-pal)
- Dropping Julia 1.6 support for Lux (#266) (@avik-pal)
- Purge unnecessary dependencies into weak dependencies (#267) (@avik-pal)
- Add ForwardDiff Extension: Dropout (#269) (@avik-pal)
- Add Tracker as an Extension (#272) (@avik-pal)
- CompatHelper: bump compat for AbstractDifferentiation to 0.5 for package examples, (keep existing compat) (#273) (@github-actions[bot])
- Some Improvements (#274) (@avik-pal)
- Tracker has some of the rules (#275) (@avik-pal)
- Temporary CA + Tracker Patches (#276) (@avik-pal)
- Add CUDA and AMDGPU trigger packages (#277) (@avik-pal)
- ReverseDiff Extension (#280) (@avik-pal)
- Bump peter-evans/create-pull-request from 3 to 4 (#283) (@dependabot[bot])
- Bump actions/cache from 1 to 3 (#284) (@dependabot[bot])
- Bump actions/checkout from 1 to 3 (#285) (@dependabot[bot])
- Return the history for Recurrence (#287) (@avik-pal)
- Truncate tuples and namedtuples (#290) (@avik-pal)
- [WIP] Remove projects from `lib` to `LuxDL` (#291) (@avik-pal)
- Patch freeze (#292) (@avik-pal)
- Add dispatch for no activation (#293) (@avik-pal)
- Remove weakdeps from deps (#295) (@avik-pal)
- Try restoring lts support (#296) (@avik-pal)
- Testing using LuxTestUtils.jl (#297) (@avik-pal)
- CompatHelper: bump compat for Boltz to 0.2 for package ImageNet, (kee… (#298) (@avik-pal)
- Bump peter-evans/create-pull-request from 4 to 5 (#299) (@dependabot[bot])
- remove Dataloaders (#300) (@avik-pal)
- Update docs (#301) (@avik-pal)
- Fix bug in recurrence ordering (#303) (@avik-pal)
- Update LuxComponentArraysExt.jl (#304) (@avik-pal)
- CompatHelper: bump compat for Turing to 0.25 for package examples, (keep existing compat) (#306) (@github-actions[bot])
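Many of the entries above (the Flux-to-Lux manual, #69; the deprecation of Lux.transform in favor of Flux2Lux.jl, #170) revolve around Lux's explicit-parameter design. A minimal sketch of that calling convention, assuming a current Lux.jl release:

```julia
# Minimal sketch (not from these release notes) of Lux's explicit parameters:
# parameters and state live outside the model and are passed to every call.
using Lux, Random

rng = Random.default_rng()
model = Chain(Dense(2 => 8, tanh), Dense(8 => 1))
ps, st = Lux.setup(rng, model)  # parameters and state, separate from the model
x = randn(rng, Float32, 2, 4)
y, st_new = model(x, ps, st)    # pure call: returns the output and new state
```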
LuxLib-v1.3.8
LuxCore-v1.2.0
MLDataDevices-v1.5.3
MLDataDevices-v1.5.2
Lux v1.2.3