diff --git a/README.md b/README.md
index e70392c..b05610f 100644
--- a/README.md
+++ b/README.md
@@ -134,6 +134,11 @@ Running the pre-trained models on sample images can the easily be done via:
 dune exec examples/pretrained/predict.exe path/to/resnet18.ot images/tiger.jpg
 ```
 
+## Internals
+
+ocaml-torch uses extensive code generation to produce bindings to thousands of torch C++ functions.
+Read [internals.md](./internals.md) for details.
+
 ## Acknowledgements
 
 Many thanks to [@LaurentMazare](https://github.com/LaurentMazare) for the [original
diff --git a/images/codegen_graph.dot b/images/codegen_graph.dot
new file mode 100644
index 0000000..a673e4f
--- /dev/null
+++ b/images/codegen_graph.dot
@@ -0,0 +1,59 @@
+digraph {
+    subgraph cluster_binding_gen {
+        style=filled
+        color=lightgrey
+        label="binding generation"
+        declarations [label="declarations (yaml)"]
+        bindinggen [label="binding gen exe"];
+    }
+    subgraph cluster_bindings {
+        style=filled
+        color=lightgrey
+        label="stub generation and bindings"
+        bindings [label="bindings (manual)"]
+        bindingsg [label="bindings (generated)"]
+        stubgen [label="ctypes stub gen exe"];
+    }
+    subgraph cluster_wrapper {
+        style=filled
+        color=lightgrey
+        label="wrapper"
+        {rank=same;
+            stubsml [label="OCaml stubs (manual)", group=g1];
+            stubsmlg [label="OCaml stubs (generated)", group=g2];
+        }
+        {rank=same;
+            stubsc [label="C stubs (manual)", group=g1];
+            stubscg [label="C stubs (generated)", group=g2];
+        }
+        {rank=same;
+            apiml [label="OCaml wrapper (manual)", group=g1];
+            apimlg [label="OCaml wrapper (generated)", group=g2];
+        }
+        {rank=same;
+            apic [label="C/C++ API (manual)", group=g1];
+            apicg [label="C/C++ API (generated)", group=g2];
+        }
+    }
+
+
+    // GENERATION
+    bindinggen -> bindingsg [penwidth=2];
+    bindinggen -> apimlg [penwidth=2];
+    bindinggen -> apicg [penwidth=2];
+    stubgen -> stubscg [penwidth=2];
+    stubgen -> stubsmlg [penwidth=2];
+
+    // DEPENDENCY
+    declarations -> bindinggen[style="dashed"];
+    bindings -> stubgen[style="dashed"];
+    bindingsg -> stubgen[style="dashed"];
+    apic -> stubsc[style="dashed"];
+    apic -> stubscg[style="dashed"];
+    apicg -> stubscg[style="dashed"];
+    stubsc -> stubsml[style="dashed"];
+    stubscg -> stubsmlg[style="dashed"];
+    stubsml -> apiml[style="dashed"];
+    stubsml -> apimlg[style="dashed"];
+    stubsmlg -> apimlg[style="dashed"];
+}
diff --git a/internals.md b/internals.md
new file mode 100644
index 0000000..6f4d0e6
--- /dev/null
+++ b/internals.md
@@ -0,0 +1,96 @@
+# ocaml-torch internals
+
+ocaml-torch faces several challenges, including:
+* binding to thousands of functions
+* avoiding even minor memory leaks in these functions
+* quickly cleaning up a tensor's memory allocations once OCaml is done with it
+
+To address these challenges, we use two steps of code generation. In the diagram below,
+solid arrows represent the code generation DAG and dashed arrows represent the code
+dependency DAG:
+
+![code generation DAGs](./images/codegen_graph.png)
+
+At a high level:
+
+* `Declarations.yaml` contains the function signatures for the whole Torch C++ API.
+* Custom binding generation reads all the declarations and, whenever possible, generates
+  * glue code for crossing between C and C++ (the generated C/C++ API),
+  * glue code for using the (yet to be generated) OCaml `foreign` functions in OCaml (the
+    generated OCaml wrapper),
+  * and `ctypes` bindings.
+* Stub generation uses the `ctypes` library, reading the bindings and generating C and
+  OCaml stubs. These are just glue code to handle the C/OCaml FFI. Note that some
+  manually-written C++ functions and bindings also get generated stubs.
+* An extremely small number of stubs (just one as of this writing) are written by hand
+  because ctypes cannot handle them.
+* Together, the generated OCaml wrapper and the manually written wrapper provide a usable
+  OCaml API. This is further built upon in the main library (not pictured).
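+
+As a concrete illustration, here is a simplified sketch of what the two generated layers
+might look like for a single unary function (the `abs` name is illustrative rather than
+verbatim generator output, and the `Cstubs.FOREIGN` functor plumbing from
+`torch_bindings.ml` is elided):
+
+```ocaml
+(* Generated ctypes binding: declares the C stub's signature. Note the
+   asymmetry: tensors are passed in as [gc_tensor] but returned as
+   [raw_tensor]. *)
+let stubs_abs = foreign "atg_abs" (gc_tensor @-> returning raw_tensor)
+
+(* Generated OCaml wrapper: converts the raw result into a GC-managed tensor
+   before surfacing it to users (see [with_tensor_gc] below). *)
+let abs t = stubs_abs t |> with_tensor_gc
+```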
+
+# Memory management
+
+A large part of this complexity is driven by memory management.
+
+## Avoiding memory leaks
+
+It is challenging to write manual FFI stubs without memory leaks or race conditions. We
+use `ctypes` to make sure we get this right on the vast majority of functions. Although it
+requires a second code generation step, this spares us from reinventing stub generation.
+
+## Cleaning up tensors
+
+We ensure that tensors are freed when OCaml garbage collects them. To do this, each tensor
+is equipped with a custom finalizer. This could be done on either the C++ or the OCaml
+side. However, the API for informing OCaml of a tensor's true size in memory only exists
+in C++ (the custom block API). Without it, OCaml would not know when to garbage collect on
+CPU and would easily run out of memory.
+
+Note that:
+
+* We have not yet informed OCaml of each tensor's true size, but this is coming soon.
+* OCaml is unaware of GPU memory usage. GPU users may need to manually free tensors or
+  manually trigger garbage collection.
+
+### Raw tensors and GC tensors
+
+One wrinkle in this setup is that ctypes cannot handle custom blocks. Since we want the
+bulk of our stubs to be generated by ctypes, we distinguish between `raw_tensor`s and
+`gc_tensor`s.
+
+|                    | raw tensor | GC tensor   |
+|--------------------|------------|-------------|
+| has finalizer?     | no         | yes         |
+| GC knows its size? | no         | coming soon |
+| FFI input for C?   | no         | yes         |
+| FFI output from C? | yes        | no          |
+| ctypes type        | void ptr   | void ptr    |
+
+The only way to convert a `raw_tensor` to a `gc_tensor` is via the hand-written,
+non-ctypes function `with_tensor_gc`. It is used copiously in the generated OCaml wrapper
+code to ensure that we only surface GC tensors to the user.
+
+The lifecycle of each tensor looks like this:
+
+1. Some wrapper function `let t = Tensor.foo ()` gets invoked, which makes its way into C++.
+2. C++ returns a `raw_tensor` that goes through a regular ctypes stub and makes its way
+   back to the OCaml `Tensor.foo` call.
+3. Still in `Tensor.foo`, `with_tensor_gc` gets invoked. This goes back into C++ and
+   copies the pointer (but not the data) of the tensor to a new custom block, which
+   carries a finalizer that frees the tensor's memory (and, soon, a declared off-heap
+   size). This gets returned to OCaml with the same memory layout ctypes uses, but without
+   going through ctypes.
+4. Now `let () = Tensor.bar t` gets invoked. This goes through the usual ctypes stubs,
+   since `t` looks just like a regular `void ptr` to ctypes.
+5. Eventually `t` gets garbage collected. OCaml reclaims its blocks, running the custom
+   block's finalizer and thereby freeing the tensor's data.
+
+The memory of each tensor (raw or GC) looks like this:
+
+```
+             block 1              block 2
+        ------------------       ---------
+root -> | ctypes fat ptr | ----> | void * | ----> tensor
+        ------------------       ---------
+```
+
+For GC tensors, block 2 is the custom block: it carries the finalizer and, once
+implemented, the reported off-heap size.
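+
+To make the lifecycle concrete, here is a small usage sketch. It assumes the high-level
+`Torch.Tensor` API from the main library (not the low-level bindings described above);
+the comments map each call to the lifecycle steps:
+
+```ocaml
+open Torch
+
+let () =
+  (* Steps 1-3: C++ allocates the tensor, a raw pointer travels back through a
+     ctypes stub, and [with_tensor_gc] wraps it in a custom block. *)
+  let t = Tensor.zeros [ 2; 2 ] in
+  (* Step 4: ordinary calls go through the usual ctypes stubs; [t] looks like
+     a plain [void ptr] to ctypes. *)
+  Tensor.print t;
+  (* Step 5: once [t] is unreachable, a major collection runs the custom
+     block's finalizer, which frees the underlying C++ tensor. *)
+  Gc.full_major ()
+```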
diff --git a/src/bindings/dune b/src/bindings/dune new file mode 100644 index 0000000..396ef14 --- /dev/null +++ b/src/bindings/dune @@ -0,0 +1,6 @@ +(library + (name torch_bindings) + (public_name torch.bindings) + (libraries ctypes.stubs) + (preprocess + (pps ppx_jane))) diff --git a/src/bindings/torch_bindings.ml b/src/bindings/torch_bindings.ml new file mode 100644 index 0000000..1dc33c5 --- /dev/null +++ b/src/bindings/torch_bindings.ml @@ -0,0 +1,284 @@ +open Ctypes +module Type_defs = Type_defs + +module C (F : Cstubs.FOREIGN) = struct + open Type_defs + open F + + let manual_seed = foreign "at_manual_seed" (int64_t @-> returning void) + let free = foreign "free" (ptr void @-> returning void) + let get_num_threads = foreign "at_get_num_threads" (void @-> returning int) + let set_num_threads = foreign "at_set_num_threads" (int @-> returning void) + + module Tensor = struct + let new_tensor = foreign "at_new_tensor" (void @-> returning raw_tensor) + + let tensor_of_data = + foreign + "at_tensor_of_data" + (ptr void + (* data *) + @-> ptr int64_t + (* dims *) + @-> int + (* ndims *) + @-> int + (* element size in bytes *) + @-> int + (* kind *) + @-> returning raw_tensor) + ;; + + let copy_data = + foreign + "at_copy_data" + (gc_tensor + (* tensor *) + @-> ptr void + (* data *) + @-> int64_t + (* numel *) + @-> int + (* element size in bytes *) + @-> returning void) + ;; + + let copy_ = + foreign "at_copy_" (gc_tensor (* dst *) @-> gc_tensor (* src *) @-> returning void) + ;; + + let set_data = + foreign + "at_set_data" + (gc_tensor (* dst *) @-> gc_tensor (* src *) @-> returning void) + ;; + + let float_vec = + foreign + "at_float_vec" + (ptr double (* values *) + @-> int (* num values *) + @-> int (* kind *) + @-> returning raw_tensor) + ;; + + let int_vec = + foreign + "at_int_vec" + (ptr int64_t + (* values *) + @-> int + (* num values *) + @-> int + (* kind *) + @-> returning raw_tensor) + ;; + + let device = foreign "at_device" (gc_tensor @-> returning int) + let defined = foreign "at_defined" (gc_tensor @-> returning bool) + let num_dims = foreign "at_dim" (gc_tensor @-> returning int) + let shape = foreign "at_shape" (gc_tensor @-> ptr int (* dims *) @-> returning void) + let scalar_type = foreign "at_scalar_type" (gc_tensor @-> returning int) + let backward = foreign "at_backward" (gc_tensor @-> int @-> int @-> returning void) + let requires_grad = foreign "at_requires_grad" (gc_tensor @-> returning int) + let grad_set_enabled = foreign "at_grad_set_enabled" (int @-> returning int) + let get = foreign "at_get" (gc_tensor @-> int @-> returning raw_tensor) + + let double_value = + foreign + "at_double_value_at_indexes" + (gc_tensor @-> ptr int @-> int @-> returning float) + ;; + + let int64_value = + foreign + "at_int64_value_at_indexes" + (gc_tensor @-> ptr int @-> int @-> returning int64_t) + ;; + + let double_value_set = + foreign + "at_set_double_value_at_indexes" + (gc_tensor @-> ptr int @-> int @-> float @-> returning void) + ;; + + let int64_value_set = + foreign + "at_set_int64_value_at_indexes" + (gc_tensor @-> ptr int @-> int @-> int64_t @-> returning void) + ;; + + let fill_double = foreign "at_fill_double" (gc_tensor @-> float @-> returning void) + let fill_int64 = foreign "at_fill_int64" (gc_tensor @-> int64_t @-> returning void) + let print = foreign "at_print" (gc_tensor @-> returning void) + let to_string = foreign "at_to_string" (gc_tensor @-> int @-> returning string) + let free = foreign "at_free" (gc_tensor @-> returning void) + + let run_backward = + foreign 
+ "at_run_backward" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr raw_tensor + @-> int + @-> int + @-> returning void) + ;; + end + + module Scalar = struct + let int = foreign "ats_int" (int64_t @-> returning scalar) + let float = foreign "ats_float" (float @-> returning scalar) + let free = foreign "ats_free" (scalar @-> returning void) + end + + module Serialize = struct + let save = foreign "at_save" (gc_tensor @-> string @-> returning void) + let load = foreign "at_load" (string @-> returning raw_tensor) + + let save_multi = + foreign + "at_save_multi" + (ptr gc_tensor @-> ptr (ptr char) @-> int @-> string @-> returning void) + ;; + + let load_multi = + foreign + "at_load_multi" + (ptr raw_tensor @-> ptr (ptr char) @-> int @-> string @-> returning void) + ;; + + let load_multi_ = + foreign + "at_load_multi_" + (ptr gc_tensor @-> ptr (ptr char) @-> int @-> string @-> returning void) + ;; + + let load_callback = + foreign + "at_load_callback" + (string + @-> static_funptr Ctypes.(string @-> raw_tensor @-> returning void) + @-> returning void) + ;; + end + + module Optimizer = struct + let adam = + foreign + "ato_adam" + (float @-> float @-> float @-> float @-> float @-> returning optimizer) + ;; + + let rmsprop = + foreign + "ato_rmsprop" + (float + (* learning rate *) + @-> float + (* alpha *) + @-> float + (* eps *) + @-> float + (* weight decay *) + @-> float + (* momentum *) + @-> int + (* centered *) + @-> returning optimizer) + ;; + + let sgd = + foreign + "ato_sgd" + (float + (* learning rate *) + @-> float + (* momentum *) + @-> float + (* dampening *) + @-> float + (* weight decay *) + @-> bool + (* nesterov *) + @-> returning optimizer) + ;; + + let add_parameters = + foreign "ato_add_parameters" (optimizer @-> ptr gc_tensor @-> int @-> returning void) + ;; + + let set_learning_rate = + foreign "ato_set_learning_rate" (optimizer @-> float @-> returning void) + ;; + + let set_momentum = foreign "ato_set_momentum" (optimizer @-> float @-> returning void) + let zero_grad = foreign "ato_zero_grad" (optimizer @-> returning void) + let step = foreign "ato_step" (optimizer @-> returning void) + let free = foreign "ato_free" (optimizer @-> returning void) + end + + module Cuda = struct + let device_count = foreign "atc_cuda_device_count" (void @-> returning int) + let is_available = foreign "atc_cuda_is_available" (void @-> returning int) + let cudnn_is_available = foreign "atc_cudnn_is_available" (void @-> returning int) + let set_benchmark_cudnn = foreign "atc_set_benchmark_cudnn" (int @-> returning void) + end + + module Ivalue = struct + let to_int64 = foreign "ati_to_int" (ivalue @-> returning int64_t) + let to_bool = foreign "ati_to_bool" (ivalue @-> returning int) + let to_double = foreign "ati_to_double" (ivalue @-> returning double) + let to_tensor = foreign "ati_to_tensor" (ivalue @-> returning raw_tensor) + let tuple_length = foreign "ati_tuple_length" (ivalue @-> returning int) + let list_length = foreign "ati_list_length" (ivalue @-> returning int) + + let to_tuple = + foreign "ati_to_tuple" (ivalue @-> ptr ivalue @-> int @-> returning void) + ;; + + let to_tensor_list = + foreign "ati_to_tensor_list" (ivalue @-> ptr raw_tensor @-> int @-> returning void) + ;; + + let to_generic_list = + foreign "ati_to_generic_list" (ivalue @-> ptr ivalue @-> int @-> returning void) + ;; + + let to_string = foreign "ati_to_string" (ivalue @-> returning string) + let none = foreign "ati_none" (void @-> returning ivalue) + let bool = foreign "ati_bool" (int @-> 
returning ivalue) + let tensor = foreign "ati_tensor" (gc_tensor @-> returning ivalue) + let int64 = foreign "ati_int" (int64_t @-> returning ivalue) + let double = foreign "ati_double" (float @-> returning ivalue) + let tuple = foreign "ati_tuple" (ptr ivalue @-> int @-> returning ivalue) + + let tensor_list = + foreign "ati_tensor_list" (ptr gc_tensor @-> int @-> returning ivalue) + ;; + + let string = foreign "ati_string" (string @-> returning ivalue) + let tag = foreign "ati_tag" (ivalue @-> returning int) + let free = foreign "ati_free" (ivalue @-> returning void) + end + + module Module = struct + let load = foreign "atm_load" (string @-> int @-> returning module_) + let load_str = foreign "atm_load_str" (string @-> int @-> int @-> returning module_) + + let forward = + foreign "atm_forward" (module_ @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let forward_ = + foreign "atm_forward_" (module_ @-> ptr ivalue @-> int @-> returning ivalue) + ;; + + let named_buffers = foreign "atm_named_buffers" (module_ @-> returning ivalue) + let free = foreign "atm_free" (module_ @-> returning void) + end + + module Generated = Torch_bindings_generated.C (F) +end diff --git a/src/bindings/torch_bindings_generated.ml b/src/bindings/torch_bindings_generated.ml new file mode 100644 index 0000000..ca88d0e --- /dev/null +++ b/src/bindings/torch_bindings_generated.ml @@ -0,0 +1,17947 @@ +(* THIS FILE IS AUTOMATICALLY GENERATED, DO NOT EDIT BY HAND! *) + +open Ctypes + +module C0 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs___and__ = foreign "atg___and__" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs___and__tensor_ = + foreign "atg___and__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___iand__ = + foreign "atg___iand__" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs___iand__tensor_ = + foreign "atg___iand__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___ilshift__ = + foreign "atg___ilshift__" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs___ilshift__tensor_ = + foreign "atg___ilshift__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___ior__ = foreign "atg___ior__" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs___ior__tensor_ = + foreign "atg___ior__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___irshift__ = + foreign "atg___irshift__" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs___irshift__tensor_ = + foreign "atg___irshift__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___ixor__ = + foreign "atg___ixor__" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs___ixor__tensor_ = + foreign "atg___ixor__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___lshift__ = + foreign "atg___lshift__" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs___lshift__scalar_out_ = + foreign + "atg___lshift__scalar_out_" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs___lshift__tensor_ = + foreign "atg___lshift__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___lshift__tensor_out_ = + foreign + "atg___lshift__tensor_out_" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___or__ = foreign "atg___or__" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs___or__tensor_ = + 
foreign "atg___or__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___rshift__ = + foreign "atg___rshift__" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs___rshift__scalar_out_ = + foreign + "atg___rshift__scalar_out_" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs___rshift__tensor_ = + foreign "atg___rshift__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___rshift__tensor_out_ = + foreign + "atg___rshift__tensor_out_" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs___xor__ = foreign "atg___xor__" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs___xor__tensor_ = + foreign "atg___xor__tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__adaptive_avg_pool2d = + foreign + "atg__adaptive_avg_pool2d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__adaptive_avg_pool2d_backward = + foreign + "atg__adaptive_avg_pool2d_backward" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__adaptive_avg_pool2d_backward_out = + foreign + "atg__adaptive_avg_pool2d_backward_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__adaptive_avg_pool2d_out = + foreign + "atg__adaptive_avg_pool2d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__adaptive_avg_pool3d = + foreign + "atg__adaptive_avg_pool3d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__adaptive_avg_pool3d_backward = + foreign + "atg__adaptive_avg_pool3d_backward" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__adaptive_avg_pool3d_backward_out = + foreign + "atg__adaptive_avg_pool3d_backward_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__adaptive_avg_pool3d_out = + foreign + "atg__adaptive_avg_pool3d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__add_batch_dim = + foreign + "atg__add_batch_dim" + (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__add_relu = + foreign "atg__add_relu" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__add_relu_ = + foreign "atg__add_relu_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__add_relu_out = + foreign + "atg__add_relu_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__add_relu_scalar = + foreign "atg__add_relu_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs__add_relu_scalar_ = + foreign "atg__add_relu_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs__add_relu_scalar_out = + foreign + "atg__add_relu_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs__addmm_activation = + foreign + "atg__addmm_activation" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__addmm_activation_out = + foreign + "atg__addmm_activation_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs__aminmax = + foreign "atg__aminmax" (ptr raw_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs__aminmax_dim = + foreign + "atg__aminmax_dim" + (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int @-> returning void) + ;; + + let 
stubs__aminmax_dim_out = + foreign + "atg__aminmax_dim_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__aminmax_out = + foreign + "atg__aminmax_out" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs__amp_update_scale = + foreign + "atg__amp_update_scale" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> int64_t + @-> returning void) + ;; + + let stubs__amp_update_scale_ = + foreign + "atg__amp_update_scale_" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__amp_update_scale_out = + foreign + "atg__amp_update_scale_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__assert_tensor_metadata = + foreign + "atg__assert_tensor_metadata" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs__autocast_to_full_precision = + foreign + "atg__autocast_to_full_precision" + (gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__autocast_to_reduced_precision = + foreign + "atg__autocast_to_reduced_precision" + (gc_tensor @-> int @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__cast_byte = + foreign "atg__cast_byte" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cast_char = + foreign "atg__cast_char" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cast_double = + foreign "atg__cast_double" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cast_float = + foreign "atg__cast_float" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cast_half = + foreign "atg__cast_half" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cast_int = + foreign "atg__cast_int" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cast_long = + foreign "atg__cast_long" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cast_short = + foreign "atg__cast_short" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cdist_backward = + foreign + "atg__cdist_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__cdist_backward_out = + foreign + "atg__cdist_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__cholesky_solve_helper = + foreign + "atg__cholesky_solve_helper" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cholesky_solve_helper_out = + foreign + "atg__cholesky_solve_helper_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__coalesce = foreign "atg__coalesce" (gc_tensor @-> returning raw_tensor) + + let stubs__coalesce_out = + foreign "atg__coalesce_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__coalesced = + foreign "atg__coalesced" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__coalesced_ = + foreign "atg__coalesced_" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__coalesced_out = + foreign "atg__coalesced_out" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__compute_linear_combination = + foreign + 
"atg__compute_linear_combination" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__compute_linear_combination_out = + foreign + "atg__compute_linear_combination_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__conj = foreign "atg__conj" (gc_tensor @-> returning raw_tensor) + let stubs__conj_copy = foreign "atg__conj_copy" (gc_tensor @-> returning raw_tensor) + + let stubs__conj_copy_out = + foreign "atg__conj_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__conj_physical = + foreign "atg__conj_physical" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__conj_physical_out = + foreign "atg__conj_physical_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__conv_depthwise2d = + foreign + "atg__conv_depthwise2d" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__conv_depthwise2d_out = + foreign + "atg__conv_depthwise2d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__convert_indices_from_coo_to_csr = + foreign + "atg__convert_indices_from_coo_to_csr" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__convert_indices_from_coo_to_csr_out = + foreign + "atg__convert_indices_from_coo_to_csr_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__convert_indices_from_csr_to_coo = + foreign + "atg__convert_indices_from_csr_to_coo" + (gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__convert_indices_from_csr_to_coo_out = + foreign + "atg__convert_indices_from_csr_to_coo_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__convolution = + foreign + "atg__convolution" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__convolution_deprecated = + foreign + "atg__convolution_deprecated" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__convolution_mode = + foreign + "atg__convolution_mode" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> string + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__convolution_out = + foreign + "atg__convolution_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__copy_from = + foreign "atg__copy_from" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__copy_from_and_resize = + foreign "atg__copy_from_and_resize" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__copy_from_and_resize_out = + foreign + 
"atg__copy_from_and_resize_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__copy_from_out = + foreign + "atg__copy_from_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__cslt_compress = + foreign "atg__cslt_compress" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__cslt_sparse_mm = + foreign + "atg__cslt_sparse_mm" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__ctc_loss = + foreign + "atg__ctc_loss" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__ctc_loss_backward = + foreign + "atg__ctc_loss_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__ctc_loss_backward_out = + foreign + "atg__ctc_loss_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__ctc_loss_backward_tensor = + foreign + "atg__ctc_loss_backward_tensor" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__ctc_loss_out = + foreign + "atg__ctc_loss_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__ctc_loss_tensor = + foreign + "atg__ctc_loss_tensor" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__ctc_loss_tensor_out = + foreign + "atg__ctc_loss_tensor_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__cudnn_ctc_loss = + foreign + "atg__cudnn_ctc_loss" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs__cudnn_ctc_loss_out = + foreign + "atg__cudnn_ctc_loss_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> returning void) + ;; +end + +module C1 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs__cudnn_ctc_loss_tensor = + foreign + "atg__cudnn_ctc_loss_tensor" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs__cudnn_init_dropout_state = + foreign + "atg__cudnn_init_dropout_state" + (double @-> int @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__cudnn_init_dropout_state_out = + foreign + "atg__cudnn_init_dropout_state_out" + (gc_tensor @-> double @-> int @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__cudnn_rnn = + foreign + "atg__cudnn_rnn" + (ptr raw_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> 
int64_t + @-> int64_t + @-> int + @-> double + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning void) + ;; + + let stubs__cudnn_rnn_flatten_weight = + foreign + "atg__cudnn_rnn_flatten_weight" + (ptr gc_tensor + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__cudnn_rnn_flatten_weight_out = + foreign + "atg__cudnn_rnn_flatten_weight_out" + (gc_tensor + @-> ptr gc_tensor + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__cudnn_rnn_out = + foreign + "atg__cudnn_rnn_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> double + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning void) + ;; + + let stubs__debug_has_internal_overlap = + foreign "atg__debug_has_internal_overlap" (gc_tensor @-> returning int64_t) + ;; + + let stubs__dim_arange = + foreign "atg__dim_arange" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__dimi = foreign "atg__dimi" (gc_tensor @-> returning int64_t) + let stubs__dimv = foreign "atg__dimv" (gc_tensor @-> returning int64_t) + + let stubs__dirichlet_grad = + foreign + "atg__dirichlet_grad" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__dirichlet_grad_out = + foreign + "atg__dirichlet_grad_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__efficient_attention_backward = + foreign + "atg__efficient_attention_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> double + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> double + @-> int + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__efficientzerotensor = + foreign + "atg__efficientzerotensor" + (ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__efficientzerotensor_out = + foreign + "atg__efficientzerotensor_out" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__embedding_bag = + foreign + "atg__embedding_bag" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int64_t + @-> int + @-> gc_tensor + @-> int + @-> int64_t + @-> returning void) + ;; + + let stubs__embedding_bag_backward = + foreign + "atg__embedding_bag_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__embedding_bag_dense_backward = + foreign + "atg__embedding_bag_dense_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__embedding_bag_dense_backward_out = + foreign + "atg__embedding_bag_dense_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> gc_tensor + @-> 
int64_t + @-> returning raw_tensor) + ;; + + let stubs__embedding_bag_forward_only = + foreign + "atg__embedding_bag_forward_only" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int64_t + @-> int + @-> gc_tensor + @-> int + @-> int64_t + @-> returning void) + ;; + + let stubs__embedding_bag_forward_only_out = + foreign + "atg__embedding_bag_forward_only_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int64_t + @-> int + @-> gc_tensor + @-> int + @-> int64_t + @-> returning void) + ;; + + let stubs__embedding_bag_out = + foreign + "atg__embedding_bag_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int64_t + @-> int + @-> gc_tensor + @-> int + @-> int64_t + @-> returning void) + ;; + + let stubs__embedding_bag_per_sample_weights_backward = + foreign + "atg__embedding_bag_per_sample_weights_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__embedding_bag_per_sample_weights_backward_out = + foreign + "atg__embedding_bag_per_sample_weights_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__embedding_bag_sparse_backward = + foreign + "atg__embedding_bag_sparse_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__empty_affine_quantized = + foreign + "atg__empty_affine_quantized" + (ptr int64_t @-> int @-> int @-> int @-> double @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__empty_affine_quantized_out = + foreign + "atg__empty_affine_quantized_out" + (gc_tensor @-> ptr int64_t @-> int @-> double @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__empty_per_channel_affine_quantized = + foreign + "atg__empty_per_channel_affine_quantized" + (ptr int64_t + @-> int + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__empty_per_channel_affine_quantized_out = + foreign + "atg__empty_per_channel_affine_quantized_out" + (gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__euclidean_dist = + foreign "atg__euclidean_dist" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__euclidean_dist_out = + foreign + "atg__euclidean_dist_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__fake_quantize_learnable_per_channel_affine = + foreign + "atg__fake_quantize_learnable_per_channel_affine" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs__fake_quantize_learnable_per_channel_affine_backward = + foreign + "atg__fake_quantize_learnable_per_channel_affine_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> double + @-> returning void) + ;; + + let stubs__fake_quantize_learnable_per_channel_affine_out = + foreign + "atg__fake_quantize_learnable_per_channel_affine_out" + (gc_tensor + @-> gc_tensor + @-> 
gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs__fake_quantize_learnable_per_tensor_affine = + foreign + "atg__fake_quantize_learnable_per_tensor_affine" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs__fake_quantize_learnable_per_tensor_affine_backward = + foreign + "atg__fake_quantize_learnable_per_tensor_affine_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> double + @-> returning void) + ;; + + let stubs__fake_quantize_learnable_per_tensor_affine_out = + foreign + "atg__fake_quantize_learnable_per_tensor_affine_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs__fake_quantize_per_tensor_affine_cachemask_tensor_qparams = + foreign + "atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning void) + ;; + + let stubs__fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out = + foreign + "atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning void) + ;; + + let stubs__fft_c2c = + foreign + "atg__fft_c2c" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__fft_c2c_out = + foreign + "atg__fft_c2c_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__fft_c2r = + foreign + "atg__fft_c2r" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__fft_c2r_out = + foreign + "atg__fft_c2r_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__fft_r2c = + foreign + "atg__fft_r2c" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__fft_r2c_out = + foreign + "atg__fft_r2c_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__fill_mem_eff_dropout_mask_ = + foreign + "atg__fill_mem_eff_dropout_mask_" + (gc_tensor @-> double @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__flash_attention_backward = + foreign + "atg__flash_attention_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> double + @-> int + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> returning void) + ;; + + let stubs__foobar = + foreign "atg__foobar" (gc_tensor @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__foobar_out = + foreign + "atg__foobar_out" + (gc_tensor @-> gc_tensor @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__functional_assert_async = + foreign + "atg__functional_assert_async" + (gc_tensor @-> string @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__functional_sym_constrain_range = + foreign + "atg__functional_sym_constrain_range" + (scalar + @-> int64_t + @-> int + @-> int64_t 
+ @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__functional_sym_constrain_range_for_size = + foreign + "atg__functional_sym_constrain_range_for_size" + (scalar + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__fused_adam = + foreign + "atg__fused_adam" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> double + @-> double + @-> double + @-> double + @-> double + @-> int + @-> int + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__fused_adam_ = + foreign + "atg__fused_adam_" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> double + @-> double + @-> double + @-> double + @-> double + @-> int + @-> int + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__fused_adam_tensor_lr_ = + foreign + "atg__fused_adam_tensor_lr_" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> double + @-> double + @-> double + @-> double + @-> int + @-> int + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__fused_adam_tensor_lr_out = + foreign + "atg__fused_adam_tensor_lr_out" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> double + @-> double + @-> double + @-> double + @-> int + @-> int + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__fused_adamw = + foreign + "atg__fused_adamw" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> double + @-> double + @-> double + @-> double + @-> double + @-> int + @-> int + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__fused_adamw_ = + foreign + "atg__fused_adamw_" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> double + @-> double + @-> double + @-> double + @-> double + @-> int + @-> int + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__fused_adamw_tensor_lr_ = + foreign + "atg__fused_adamw_tensor_lr_" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> double + @-> double + @-> double + @-> double + @-> int + @-> int + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__fused_adamw_tensor_lr_out = + foreign + "atg__fused_adamw_tensor_lr_out" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> double + @-> double + @-> double + @-> double + @-> int + @-> int + @-> gc_tensor + @-> gc_tensor + @-> returning void) + 
;; + + let stubs__fused_dropout = + foreign + "atg__fused_dropout" + (ptr raw_tensor @-> gc_tensor @-> double @-> returning void) + ;; + + let stubs__fused_dropout_out = + foreign + "atg__fused_dropout_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning void) + ;; + + let stubs__fused_moving_avg_obs_fq_helper = + foreign + "atg__fused_moving_avg_obs_fq_helper" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs__fused_moving_avg_obs_fq_helper_functional = + foreign + "atg__fused_moving_avg_obs_fq_helper_functional" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs__fused_moving_avg_obs_fq_helper_out = + foreign + "atg__fused_moving_avg_obs_fq_helper_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs__fused_sdp_choice = + foreign + "atg__fused_sdp_choice" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> double + @-> int + @-> returning int64_t) + ;; + + let stubs__fw_primal = + foreign "atg__fw_primal" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__fw_primal_copy = + foreign "atg__fw_primal_copy" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__fw_primal_copy_out = + foreign + "atg__fw_primal_copy_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__gather_sparse_backward = + foreign + "atg__gather_sparse_backward" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__grid_sampler_2d_cpu_fallback = + foreign + "atg__grid_sampler_2d_cpu_fallback" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__grid_sampler_2d_cpu_fallback_backward = + foreign + "atg__grid_sampler_2d_cpu_fallback_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__grid_sampler_2d_cpu_fallback_out = + foreign + "atg__grid_sampler_2d_cpu_fallback_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__has_compatible_shallow_copy_type = + foreign + "atg__has_compatible_shallow_copy_type" + (gc_tensor @-> gc_tensor @-> returning bool) + ;; + + let stubs__has_same_storage_numel = + foreign "atg__has_same_storage_numel" (gc_tensor @-> gc_tensor @-> returning bool) + ;; + + let stubs__histogramdd_bin_edges = + foreign + "atg__histogramdd_bin_edges" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr double + @-> int + @-> gc_tensor + @-> int + @-> returning (ptr raw_tensor)) + ;; + + let stubs__histogramdd_bin_edges_out = + foreign + "atg__histogramdd_bin_edges_out" + (ptr gc_tensor + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr double + @-> int + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs__histogramdd_from_bin_cts = + foreign + 
"atg__histogramdd_from_bin_cts" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr double + @-> int + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs__histogramdd_from_bin_cts_out = + foreign + "atg__histogramdd_from_bin_cts_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr double + @-> int + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs__histogramdd_from_bin_tensors = + foreign + "atg__histogramdd_from_bin_tensors" + (gc_tensor @-> ptr gc_tensor @-> int @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__histogramdd_from_bin_tensors_out = + foreign + "atg__histogramdd_from_bin_tensors_out" + (gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs__index_put_impl = + foreign + "atg__index_put_impl" + (gc_tensor + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__index_put_impl_ = + foreign + "atg__index_put_impl_" + (gc_tensor + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__index_put_impl_out = + foreign + "atg__index_put_impl_out" + (gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__indices = foreign "atg__indices" (gc_tensor @-> returning raw_tensor) + + let stubs__indices_copy = + foreign "atg__indices_copy" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__indices_copy_out = + foreign "atg__indices_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__int_mm = + foreign "atg__int_mm" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__int_mm_out = + foreign + "atg__int_mm_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__is_all_true = foreign "atg__is_all_true" (gc_tensor @-> returning raw_tensor) + let stubs__is_any_true = foreign "atg__is_any_true" (gc_tensor @-> returning raw_tensor) + let stubs__is_zerotensor = foreign "atg__is_zerotensor" (gc_tensor @-> returning bool) + + let stubs__linalg_check_errors = + foreign "atg__linalg_check_errors" (gc_tensor @-> string @-> int @-> returning void) + ;; + + let stubs__linalg_det = + foreign "atg__linalg_det" (ptr raw_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs__linalg_det_result = + foreign + "atg__linalg_det_result" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__linalg_eigh = + foreign + "atg__linalg_eigh" + (ptr raw_tensor @-> gc_tensor @-> string @-> int @-> returning void) + ;; + + let stubs__linalg_eigh_eigenvalues = + foreign + "atg__linalg_eigh_eigenvalues" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> string + @-> int + @-> returning void) + ;; + + let stubs__linalg_slogdet = + foreign "atg__linalg_slogdet" (ptr raw_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs__linalg_slogdet_sign = + foreign + "atg__linalg_slogdet_sign" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; +end + +module C2 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs__linalg_solve_ex = + foreign + "atg__linalg_solve_ex" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning void) + ;; + + let stubs__linalg_solve_ex_result = + foreign + 
"atg__linalg_solve_ex_result" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning void) + ;; + + let stubs__linalg_svd = + foreign + "atg__linalg_svd" + (ptr raw_tensor @-> gc_tensor @-> int @-> int @-> string @-> returning void) + ;; + + let stubs__linalg_svd_u = + foreign + "atg__linalg_svd_u" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> string + @-> returning void) + ;; + + let stubs__log_softmax = + foreign "atg__log_softmax" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__log_softmax_backward_data = + foreign + "atg__log_softmax_backward_data" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__log_softmax_backward_data_out = + foreign + "atg__log_softmax_backward_data_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__log_softmax_out = + foreign + "atg__log_softmax_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__logcumsumexp = + foreign "atg__logcumsumexp" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__logcumsumexp_out = + foreign + "atg__logcumsumexp_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__lstm_mps = + foreign + "atg__lstm_mps" + (ptr raw_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs__lstm_mps_out = + foreign + "atg__lstm_mps_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs__lu_with_info = + foreign + "atg__lu_with_info" + (ptr raw_tensor @-> gc_tensor @-> int @-> int @-> returning void) + ;; + + let stubs__make_dep_token = + foreign "atg__make_dep_token" (int @-> int @-> returning raw_tensor) + ;; + + let stubs__make_dual = + foreign "atg__make_dual" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__make_dual_copy = + foreign + "atg__make_dual_copy" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__make_dual_copy_out = + foreign + "atg__make_dual_copy_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__make_per_channel_quantized_tensor = + foreign + "atg__make_per_channel_quantized_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__make_per_channel_quantized_tensor_out = + foreign + "atg__make_per_channel_quantized_tensor_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__make_per_tensor_quantized_tensor = + foreign + "atg__make_per_tensor_quantized_tensor" + (gc_tensor @-> double @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__make_per_tensor_quantized_tensor_out = + foreign + "atg__make_per_tensor_quantized_tensor_out" + (gc_tensor @-> gc_tensor @-> double @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__masked_scale = + foreign + "atg__masked_scale" + (gc_tensor @-> gc_tensor @-> double @-> returning raw_tensor) + 
;; + + let stubs__masked_scale_out = + foreign + "atg__masked_scale_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs__masked_softmax = + foreign + "atg__masked_softmax" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__masked_softmax_backward = + foreign + "atg__masked_softmax_backward" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__masked_softmax_backward_out = + foreign + "atg__masked_softmax_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__masked_softmax_out = + foreign + "atg__masked_softmax_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__mkldnn_reshape = + foreign + "atg__mkldnn_reshape" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__mkldnn_reshape_out = + foreign + "atg__mkldnn_reshape_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__mkldnn_transpose = + foreign + "atg__mkldnn_transpose" + (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__mkldnn_transpose_ = + foreign + "atg__mkldnn_transpose_" + (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__mkldnn_transpose_out = + foreign + "atg__mkldnn_transpose_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__mps_convolution = + foreign + "atg__mps_convolution" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__mps_convolution_out = + foreign + "atg__mps_convolution_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__mps_convolution_transpose = + foreign + "atg__mps_convolution_transpose" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__mps_convolution_transpose_out = + foreign + "atg__mps_convolution_transpose_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__native_batch_norm_legit = + foreign + "atg__native_batch_norm_legit" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs__native_batch_norm_legit_functional = + foreign + "atg__native_batch_norm_legit_functional" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs__native_batch_norm_legit_no_stats = + foreign + "atg__native_batch_norm_legit_no_stats" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs__native_batch_norm_legit_no_stats_out = + foreign + 
"atg__native_batch_norm_legit_no_stats_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs__native_batch_norm_legit_no_training = + foreign + "atg__native_batch_norm_legit_no_training" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> returning void) + ;; + + let stubs__native_batch_norm_legit_no_training_out = + foreign + "atg__native_batch_norm_legit_no_training_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> returning void) + ;; + + let stubs__native_batch_norm_legit_out = + foreign + "atg__native_batch_norm_legit_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs__native_multi_head_attention = + foreign + "atg__native_multi_head_attention" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__native_multi_head_attention_out = + foreign + "atg__native_multi_head_attention_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__neg_view = foreign "atg__neg_view" (gc_tensor @-> returning raw_tensor) + + let stubs__neg_view_copy = + foreign "atg__neg_view_copy" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__neg_view_copy_out = + foreign "atg__neg_view_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__nested_from_padded = + foreign + "atg__nested_from_padded" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__nested_from_padded_and_nested_example = + foreign + "atg__nested_from_padded_and_nested_example" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__nested_from_padded_and_nested_example_out = + foreign + "atg__nested_from_padded_and_nested_example_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__nested_from_padded_out = + foreign + "atg__nested_from_padded_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__nested_select_backward = + foreign + "atg__nested_select_backward" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__nested_sum_backward = + foreign + "atg__nested_sum_backward" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__nested_view_from_buffer = + foreign + "atg__nested_view_from_buffer" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__nested_view_from_buffer_copy = + foreign + "atg__nested_view_from_buffer_copy" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__nested_view_from_buffer_copy_out = + foreign + 
"atg__nested_view_from_buffer_copy_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__new_zeros_with_same_feature_meta = + foreign + "atg__new_zeros_with_same_feature_meta" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__new_zeros_with_same_feature_meta_out = + foreign + "atg__new_zeros_with_same_feature_meta_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__nnpack_available = foreign "atg__nnpack_available" (void @-> returning bool) + + let stubs__nnpack_spatial_convolution = + foreign + "atg__nnpack_spatial_convolution" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__nnpack_spatial_convolution_out = + foreign + "atg__nnpack_spatial_convolution_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__nnz = foreign "atg__nnz" (gc_tensor @-> returning int64_t) + + let stubs__pack_padded_sequence = + foreign + "atg__pack_padded_sequence" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs__pack_padded_sequence_backward = + foreign + "atg__pack_padded_sequence_backward" + (gc_tensor @-> ptr int64_t @-> int @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__pack_padded_sequence_out = + foreign + "atg__pack_padded_sequence_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs__pad_circular = + foreign + "atg__pad_circular" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__pad_enum = + foreign + "atg__pad_enum" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__pad_packed_sequence = + foreign + "atg__pad_packed_sequence" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> scalar + @-> int64_t + @-> returning void) + ;; + + let stubs__pdist_backward = + foreign + "atg__pdist_backward" + (gc_tensor @-> gc_tensor @-> double @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__pdist_backward_out = + foreign + "atg__pdist_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__pin_memory = + foreign "atg__pin_memory" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__pin_memory_out = + foreign + "atg__pin_memory_out" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__prelu_kernel = + foreign "atg__prelu_kernel" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__prelu_kernel_backward = + foreign + "atg__prelu_kernel_backward" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs__propagate_xla_data = + foreign "atg__propagate_xla_data" (gc_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs__remove_batch_dim = + foreign + "atg__remove_batch_dim" + (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__reshape_alias = + foreign + "atg__reshape_alias" + (gc_tensor @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__reshape_alias_copy = + foreign + "atg__reshape_alias_copy" + (gc_tensor @-> 
ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__reshape_alias_copy_out = + foreign + "atg__reshape_alias_copy_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__reshape_copy = + foreign + "atg__reshape_copy" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__reshape_from_tensor = + foreign "atg__reshape_from_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__resize_output = + foreign + "atg__resize_output" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__resize_output_ = + foreign + "atg__resize_output_" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__resize_output_out = + foreign + "atg__resize_output_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__rowwise_prune = + foreign + "atg__rowwise_prune" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs__sample_dirichlet = + foreign "atg__sample_dirichlet" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__sample_dirichlet_out = + foreign "atg__sample_dirichlet_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__saturate_weight_to_fp16 = + foreign "atg__saturate_weight_to_fp16" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__scaled_dot_product_attention_math = + foreign + "atg__scaled_dot_product_attention_math" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> gc_tensor + @-> double + @-> int + @-> returning void) + ;; + + let stubs__scaled_dot_product_efficient_attention = + foreign + "atg__scaled_dot_product_efficient_attention" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning void) + ;; + + let stubs__scaled_dot_product_flash_attention_backward = + foreign + "atg__scaled_dot_product_flash_attention_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> double + @-> int + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> returning void) + ;; + + let stubs__scaled_mm = + foreign + "atg__scaled_mm" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__scaled_mm_out = + foreign + "atg__scaled_mm_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__scatter_reduce = + foreign + "atg__scatter_reduce" + (gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> string + @-> int + @-> returning raw_tensor) + ;; + + let stubs__scatter_reduce_ = + foreign + "atg__scatter_reduce_" + (gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> string + @-> int + @-> returning raw_tensor) + ;; + + let stubs__scatter_reduce_two_out = + foreign + "atg__scatter_reduce_two_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> string + @-> int + @-> returning raw_tensor) + ;; + + let stubs__segment_reduce_backward = + foreign + 
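+ (* Editorial note: the recurring [ptr int64_t @-> int] argument pairs
+    (see [atg__reshape_alias_copy] and [atg__resize_output] just above)
+    encode a C array together with its length, here int64 shape or stride
+    lists; [ptr double @-> int] and [ptr gc_tensor @-> int] follow the same
+    convention. A sketch of how a caller might marshal such a pair,
+    assuming nothing beyond the standard ctypes [CArray] API:
+
+      let with_int64_list xs f =
+        let arr = Ctypes.CArray.of_list Ctypes.int64_t xs in
+        f (Ctypes.CArray.start arr) (Ctypes.CArray.length arr)
+ *)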
"atg__segment_reduce_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> string + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs__segment_reduce_backward_out = + foreign + "atg__segment_reduce_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> string + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs__shape_as_tensor = + foreign "atg__shape_as_tensor" (gc_tensor @-> returning raw_tensor) + ;; +end + +module C3 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs__slow_conv2d_backward = + foreign + "atg__slow_conv2d_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning void) + ;; + + let stubs__sobol_engine_draw = + foreign + "atg__sobol_engine_draw" + (ptr raw_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs__sobol_engine_ff_ = + foreign + "atg__sobol_engine_ff_" + (gc_tensor + @-> int64_t + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__sobol_engine_initialize_state_ = + foreign + "atg__sobol_engine_initialize_state_" + (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__sobol_engine_scramble_ = + foreign + "atg__sobol_engine_scramble_" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__softmax = + foreign "atg__softmax" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__softmax_backward_data = + foreign + "atg__softmax_backward_data" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__softmax_backward_data_out = + foreign + "atg__softmax_backward_data_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__softmax_out = + foreign + "atg__softmax_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_addmm = + foreign + "atg__sparse_addmm" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__sparse_addmm_out = + foreign + "atg__sparse_addmm_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__sparse_broadcast_to = + foreign + "atg__sparse_broadcast_to" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_broadcast_to_copy = + foreign + "atg__sparse_broadcast_to_copy" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_broadcast_to_copy_out = + foreign + "atg__sparse_broadcast_to_copy_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_bsc_tensor_unsafe = + foreign + "atg__sparse_bsc_tensor_unsafe" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_bsr_tensor_unsafe = + foreign + "atg__sparse_bsr_tensor_unsafe" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_compressed_tensor_unsafe = + foreign + "atg__sparse_compressed_tensor_unsafe" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int 
+ @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_coo_tensor_unsafe = + foreign + "atg__sparse_coo_tensor_unsafe" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_coo_tensor_with_dims = + foreign + "atg__sparse_coo_tensor_with_dims" + (int64_t + @-> int64_t + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_coo_tensor_with_dims_and_tensors = + foreign + "atg__sparse_coo_tensor_with_dims_and_tensors" + (int64_t + @-> int64_t + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_coo_tensor_with_dims_and_tensors_out = + foreign + "atg__sparse_coo_tensor_with_dims_and_tensors_out" + (gc_tensor + @-> int64_t + @-> int64_t + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_coo_tensor_with_dims_out = + foreign + "atg__sparse_coo_tensor_with_dims_out" + (gc_tensor @-> int64_t @-> int64_t @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_csc_tensor_unsafe = + foreign + "atg__sparse_csc_tensor_unsafe" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_csr_prod = + foreign + "atg__sparse_csr_prod" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_csr_prod_dim_dtype_out = + foreign + "atg__sparse_csr_prod_dim_dtype_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_csr_sum = + foreign + "atg__sparse_csr_sum" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_csr_sum_dim_dtype_out = + foreign + "atg__sparse_csr_sum_dim_dtype_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_csr_tensor_unsafe = + foreign + "atg__sparse_csr_tensor_unsafe" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_log_softmax = + foreign + "atg__sparse_log_softmax" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_log_softmax_backward_data = + foreign + "atg__sparse_log_softmax_backward_data" + (gc_tensor @-> gc_tensor @-> int64_t @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__sparse_log_softmax_backward_data_out = + foreign + "atg__sparse_log_softmax_backward_data_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__sparse_log_softmax_int = + foreign + "atg__sparse_log_softmax_int" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_log_softmax_out = + foreign + "atg__sparse_log_softmax_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_mask_projection = + foreign + "atg__sparse_mask_projection" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_mask_projection_out = + foreign + "atg__sparse_mask_projection_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_mm = + foreign "atg__sparse_mm" (gc_tensor @-> 
gc_tensor @-> returning raw_tensor) + ;; + + let stubs__sparse_mm_reduce = + foreign + "atg__sparse_mm_reduce" + (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs__sparse_mm_reduce_impl = + foreign + "atg__sparse_mm_reduce_impl" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> string @-> returning void) + ;; + + let stubs__sparse_semi_structured_linear = + foreign + "atg__sparse_semi_structured_linear" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> string + @-> returning raw_tensor) + ;; + + let stubs__sparse_softmax = + foreign "atg__sparse_softmax" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_softmax_backward_data = + foreign + "atg__sparse_softmax_backward_data" + (gc_tensor @-> gc_tensor @-> int64_t @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__sparse_softmax_backward_data_out = + foreign + "atg__sparse_softmax_backward_data_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__sparse_softmax_int = + foreign + "atg__sparse_softmax_int" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_softmax_out = + foreign + "atg__sparse_softmax_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_sparse_matmul = + foreign "atg__sparse_sparse_matmul" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__sparse_sparse_matmul_out = + foreign + "atg__sparse_sparse_matmul_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__sparse_sum = foreign "atg__sparse_sum" (gc_tensor @-> returning raw_tensor) + + let stubs__sparse_sum_backward = + foreign + "atg__sparse_sum_backward" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_sum_backward_out = + foreign + "atg__sparse_sum_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__sparse_sum_dim = + foreign + "atg__sparse_sum_dim" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_sum_dim_dtype = + foreign + "atg__sparse_sum_dim_dtype" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_sum_dim_out = + foreign + "atg__sparse_sum_dim_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__sparse_sum_dtype = + foreign "atg__sparse_sum_dtype" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__spdiags = + foreign + "atg__spdiags" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__spdiags_out = + foreign + "atg__spdiags_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__stack = + foreign "atg__stack" (ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__stack_out = + foreign + "atg__stack_out" + (gc_tensor @-> ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__standard_gamma = + foreign "atg__standard_gamma" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__standard_gamma_grad = + foreign "atg__standard_gamma_grad" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__standard_gamma_grad_out = + foreign + "atg__standard_gamma_grad_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> 
returning raw_tensor) + ;; + + let stubs__standard_gamma_out = + foreign "atg__standard_gamma_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_ambiguous_defaults = + foreign + "atg__test_ambiguous_defaults" + (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__test_ambiguous_defaults_b = + foreign + "atg__test_ambiguous_defaults_b" + (gc_tensor @-> int64_t @-> string @-> returning raw_tensor) + ;; + + let stubs__test_autograd_multiple_dispatch = + foreign "atg__test_autograd_multiple_dispatch" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_autograd_multiple_dispatch_fullcoverage_out = + foreign + "atg__test_autograd_multiple_dispatch_fullcoverage_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_autograd_multiple_dispatch_ntonly = + foreign + "atg__test_autograd_multiple_dispatch_ntonly" + (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__test_autograd_multiple_dispatch_view = + foreign + "atg__test_autograd_multiple_dispatch_view" + (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_autograd_multiple_dispatch_view_copy = + foreign + "atg__test_autograd_multiple_dispatch_view_copy" + (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_autograd_multiple_dispatch_view_copy_out = + foreign + "atg__test_autograd_multiple_dispatch_view_copy_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_check_tensor = + foreign "atg__test_check_tensor" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_functorch_fallback = + foreign + "atg__test_functorch_fallback" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_functorch_fallback_out = + foreign + "atg__test_functorch_fallback_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_optional_filled_intlist = + foreign + "atg__test_optional_filled_intlist" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__test_optional_filled_intlist_out = + foreign + "atg__test_optional_filled_intlist_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__test_optional_floatlist = + foreign + "atg__test_optional_floatlist" + (gc_tensor @-> ptr double @-> int @-> returning raw_tensor) + ;; + + let stubs__test_optional_floatlist_out = + foreign + "atg__test_optional_floatlist_out" + (gc_tensor @-> gc_tensor @-> ptr double @-> int @-> returning raw_tensor) + ;; + + let stubs__test_optional_intlist = + foreign + "atg__test_optional_intlist" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__test_optional_intlist_out = + foreign + "atg__test_optional_intlist_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__test_serialization_subcmul = + foreign + "atg__test_serialization_subcmul" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_string_default = + foreign + "atg__test_string_default" + (gc_tensor @-> string @-> string @-> returning raw_tensor) + ;; + + let stubs__test_warn_in_autograd = + foreign "atg__test_warn_in_autograd" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs__test_warn_in_autograd_out = + foreign + "atg__test_warn_in_autograd_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__thnn_differentiable_gru_cell_backward = + foreign + "atg__thnn_differentiable_gru_cell_backward" + (ptr 
raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__thnn_differentiable_lstm_cell_backward = + foreign + "atg__thnn_differentiable_lstm_cell_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__thnn_fused_gru_cell = + foreign + "atg__thnn_fused_gru_cell" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__thnn_fused_gru_cell_backward = + foreign + "atg__thnn_fused_gru_cell_backward" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs__thnn_fused_gru_cell_backward_out = + foreign + "atg__thnn_fused_gru_cell_backward_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs__thnn_fused_gru_cell_out = + foreign + "atg__thnn_fused_gru_cell_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__thnn_fused_lstm_cell = + foreign + "atg__thnn_fused_lstm_cell" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__thnn_fused_lstm_cell_backward = + foreign + "atg__thnn_fused_lstm_cell_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs__thnn_fused_lstm_cell_backward_impl = + foreign + "atg__thnn_fused_lstm_cell_backward_impl" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs__thnn_fused_lstm_cell_backward_impl_out = + foreign + "atg__thnn_fused_lstm_cell_backward_impl_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs__thnn_fused_lstm_cell_out = + foreign + "atg__thnn_fused_lstm_cell_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs__to_copy = + foreign "atg__to_copy" (gc_tensor @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__to_copy_out = + foreign "atg__to_copy_out" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__to_cpu = + foreign "atg__to_cpu" (ptr gc_tensor @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs__to_dense = + foreign "atg__to_dense" (gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__to_dense_out = + foreign + "atg__to_dense_out" + (gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs__to_sparse_bsc = + foreign + "atg__to_sparse_bsc" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__to_sparse_bsc_out = + foreign + "atg__to_sparse_bsc_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; +end + +module C4 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let 
stubs__to_sparse_bsr = + foreign + "atg__to_sparse_bsr" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__to_sparse_bsr_out = + foreign + "atg__to_sparse_bsr_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__to_sparse_csc = + foreign "atg__to_sparse_csc" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__to_sparse_csc_out = + foreign + "atg__to_sparse_csc_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__to_sparse_csr = + foreign "atg__to_sparse_csr" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__to_sparse_csr_out = + foreign + "atg__to_sparse_csr_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__to_sparse_semi_structured = + foreign + "atg__to_sparse_semi_structured" + (ptr raw_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs__transform_bias_rescale_qkv = + foreign + "atg__transform_bias_rescale_qkv" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning void) + ;; + + let stubs__transform_bias_rescale_qkv_out = + foreign + "atg__transform_bias_rescale_qkv_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning void) + ;; + + let stubs__transformer_encoder_layer_fwd = + foreign + "atg__transformer_encoder_layer_fwd" + (gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> double + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__transformer_encoder_layer_fwd_out = + foreign + "atg__transformer_encoder_layer_fwd_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> double + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs__trilinear = + foreign + "atg__trilinear" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__trilinear_out = + foreign + "atg__trilinear_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs__triton_multi_head_attention = + foreign + "atg__triton_multi_head_attention" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__triton_multi_head_attention_out = + foreign + "atg__triton_multi_head_attention_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs__triton_scaled_dot_attention = + foreign + "atg__triton_scaled_dot_attention" + 
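+ (* Editorial note: tensor lists cross the FFI the same way as the int64
+    lists above, as a [ptr gc_tensor @-> int] pair (e.g. [atg__unsafe_index]
+    below). A few stubs, such as [atg__to_cpu] above, hand a list back as a
+    [ptr raw_tensor]; how many elements that result holds and who frees it
+    is knowledge the wrapper layer must supply, the declaration alone does
+    not say. Building the input pair is plain ctypes:
+
+      let tensor_list ts =
+        let arr = Ctypes.CArray.of_list gc_tensor ts in
+        Ctypes.CArray.start arr, Ctypes.CArray.length arr
+ *)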
(gc_tensor @-> gc_tensor @-> gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs__triton_scaled_dot_attention_out = + foreign + "atg__triton_scaled_dot_attention_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning raw_tensor) + ;; + + let stubs__unique = + foreign "atg__unique" (ptr raw_tensor @-> gc_tensor @-> int @-> int @-> returning void) + ;; + + let stubs__unique2 = + foreign + "atg__unique2" + (ptr raw_tensor @-> gc_tensor @-> int @-> int @-> int @-> returning void) + ;; + + let stubs__unique2_out = + foreign + "atg__unique2_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs__unique_out = + foreign + "atg__unique_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning void) + ;; + + let stubs__unpack_dual = + foreign + "atg__unpack_dual" + (ptr raw_tensor @-> gc_tensor @-> int64_t @-> returning void) + ;; + + let stubs__unsafe_index = + foreign + "atg__unsafe_index" + (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__unsafe_index_put = + foreign + "atg__unsafe_index_put" + (gc_tensor @-> ptr gc_tensor @-> int @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs__unsafe_view = + foreign "atg__unsafe_view" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__unsafe_view_out = + foreign + "atg__unsafe_view_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs__upsample_bicubic2d_aa = + foreign + "atg__upsample_bicubic2d_aa" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_bicubic2d_aa_backward = + foreign + "atg__upsample_bicubic2d_aa_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_bicubic2d_aa_backward_grad_input = + foreign + "atg__upsample_bicubic2d_aa_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_bicubic2d_aa_out = + foreign + "atg__upsample_bicubic2d_aa_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_bicubic2d_aa_vec = + foreign + "atg__upsample_bicubic2d_aa_vec" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> ptr double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_bilinear2d_aa = + foreign + "atg__upsample_bilinear2d_aa" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_bilinear2d_aa_backward = + foreign + "atg__upsample_bilinear2d_aa_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_bilinear2d_aa_backward_grad_input = + foreign + "atg__upsample_bilinear2d_aa_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning 
raw_tensor) + ;; + + let stubs__upsample_bilinear2d_aa_out = + foreign + "atg__upsample_bilinear2d_aa_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_bilinear2d_aa_vec = + foreign + "atg__upsample_bilinear2d_aa_vec" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> ptr double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact1d = + foreign + "atg__upsample_nearest_exact1d" + (gc_tensor @-> ptr int64_t @-> int @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact1d_backward = + foreign + "atg__upsample_nearest_exact1d_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact1d_backward_grad_input = + foreign + "atg__upsample_nearest_exact1d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact1d_out = + foreign + "atg__upsample_nearest_exact1d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact1d_vec = + foreign + "atg__upsample_nearest_exact1d_vec" + (gc_tensor @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact2d = + foreign + "atg__upsample_nearest_exact2d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact2d_backward = + foreign + "atg__upsample_nearest_exact2d_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact2d_backward_grad_input = + foreign + "atg__upsample_nearest_exact2d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact2d_out = + foreign + "atg__upsample_nearest_exact2d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact2d_vec = + foreign + "atg__upsample_nearest_exact2d_vec" + (gc_tensor @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact3d = + foreign + "atg__upsample_nearest_exact3d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact3d_backward = + foreign + "atg__upsample_nearest_exact3d_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact3d_backward_grad_input = + foreign + "atg__upsample_nearest_exact3d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact3d_out = + foreign + 
"atg__upsample_nearest_exact3d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> double + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs__upsample_nearest_exact3d_vec = + foreign + "atg__upsample_nearest_exact3d_vec" + (gc_tensor @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning raw_tensor) + ;; + + let stubs__use_cudnn_ctc_loss = + foreign + "atg__use_cudnn_ctc_loss" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning bool) + ;; + + let stubs__use_cudnn_ctc_loss_tensor = + foreign + "atg__use_cudnn_ctc_loss_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning bool) + ;; + + let stubs__use_cudnn_rnn_flatten_weight = + foreign "atg__use_cudnn_rnn_flatten_weight" (void @-> returning bool) + ;; + + let stubs__validate_compressed_sparse_indices = + foreign + "atg__validate_compressed_sparse_indices" + (int + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning void) + ;; + + let stubs__validate_sparse_bsc_tensor_args = + foreign + "atg__validate_sparse_bsc_tensor_args" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning void) + ;; + + let stubs__validate_sparse_bsr_tensor_args = + foreign + "atg__validate_sparse_bsr_tensor_args" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning void) + ;; + + let stubs__validate_sparse_csc_tensor_args = + foreign + "atg__validate_sparse_csc_tensor_args" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning void) + ;; + + let stubs__values = foreign "atg__values" (gc_tensor @-> returning raw_tensor) + let stubs__values_copy = foreign "atg__values_copy" (gc_tensor @-> returning raw_tensor) + + let stubs__values_copy_out = + foreign "atg__values_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs__version = foreign "atg__version" (gc_tensor @-> returning int64_t) + + let stubs__weight_norm = + foreign + "atg__weight_norm" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs__weight_norm_differentiable_backward = + foreign + "atg__weight_norm_differentiable_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning void) + ;; + + let stubs__weight_norm_interface = + foreign + "atg__weight_norm_interface" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning void) + ;; + + let stubs__weight_norm_interface_backward = + foreign + "atg__weight_norm_interface_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning void) + ;; + + let stubs__weight_norm_interface_backward_out = + foreign + "atg__weight_norm_interface_backward_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning void) + ;; + + let stubs__weight_norm_interface_out = + foreign + "atg__weight_norm_interface_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning void) + ;; + + let stubs_abs = foreign "atg_abs" (gc_tensor @-> returning raw_tensor) + let stubs_abs_ = foreign "atg_abs_" (gc_tensor @-> returning raw_tensor) + + let stubs_abs_out = + foreign "atg_abs_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_absolute = foreign 
"atg_absolute" (gc_tensor @-> returning raw_tensor) + let stubs_absolute_ = foreign "atg_absolute_" (gc_tensor @-> returning raw_tensor) + + let stubs_absolute_out = + foreign "atg_absolute_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_acos = foreign "atg_acos" (gc_tensor @-> returning raw_tensor) + let stubs_acos_ = foreign "atg_acos_" (gc_tensor @-> returning raw_tensor) + + let stubs_acos_out = + foreign "atg_acos_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_acosh = foreign "atg_acosh" (gc_tensor @-> returning raw_tensor) + let stubs_acosh_ = foreign "atg_acosh_" (gc_tensor @-> returning raw_tensor) + + let stubs_acosh_out = + foreign "atg_acosh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_adaptive_avg_pool1d = + foreign + "atg_adaptive_avg_pool1d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_adaptive_avg_pool2d = + foreign + "atg_adaptive_avg_pool2d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_adaptive_avg_pool2d_out = + foreign + "atg_adaptive_avg_pool2d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_adaptive_avg_pool3d = + foreign + "atg_adaptive_avg_pool3d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_adaptive_avg_pool3d_backward = + foreign + "atg_adaptive_avg_pool3d_backward" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_adaptive_avg_pool3d_out = + foreign + "atg_adaptive_avg_pool3d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_adaptive_max_pool1d = + foreign + "atg_adaptive_max_pool1d" + (ptr raw_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning void) + ;; + + let stubs_adaptive_max_pool2d = + foreign + "atg_adaptive_max_pool2d" + (ptr raw_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning void) + ;; + + let stubs_adaptive_max_pool2d_backward = + foreign + "atg_adaptive_max_pool2d_backward" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_adaptive_max_pool2d_backward_grad_input = + foreign + "atg_adaptive_max_pool2d_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_adaptive_max_pool2d_out = + foreign + "atg_adaptive_max_pool2d_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning void) + ;; + + let stubs_adaptive_max_pool3d = + foreign + "atg_adaptive_max_pool3d" + (ptr raw_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning void) + ;; + + let stubs_adaptive_max_pool3d_backward = + foreign + "atg_adaptive_max_pool3d_backward" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_adaptive_max_pool3d_backward_grad_input = + foreign + "atg_adaptive_max_pool3d_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_adaptive_max_pool3d_out = + foreign + "atg_adaptive_max_pool3d_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning void) + ;; + + let stubs_add = foreign "atg_add" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + let stubs_add_ = foreign "atg_add_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_add_out = + foreign "atg_add_out" (gc_tensor @-> gc_tensor @-> 
gc_tensor @-> returning raw_tensor) + ;; + + let stubs_add_scalar = + foreign "atg_add_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_add_scalar_ = + foreign "atg_add_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; +end + +module C5 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_add_scalar_out = + foreign + "atg_add_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_addbmm = + foreign "atg_addbmm" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addbmm_ = + foreign "atg_addbmm_" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addbmm_out = + foreign + "atg_addbmm_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addcdiv = + foreign "atg_addcdiv" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addcdiv_ = + foreign "atg_addcdiv_" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addcdiv_out = + foreign + "atg_addcdiv_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addcmul = + foreign "atg_addcmul" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addcmul_ = + foreign "atg_addcmul_" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addcmul_out = + foreign + "atg_addcmul_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addmm = + foreign "atg_addmm" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addmm_ = + foreign "atg_addmm_" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addmm_out = + foreign + "atg_addmm_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addmv = + foreign "atg_addmv" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addmv_ = + foreign "atg_addmv_" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addmv_out = + foreign + "atg_addmv_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addr = + foreign "atg_addr" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addr_ = + foreign "atg_addr_" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_addr_out = + foreign + "atg_addr_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_adjoint = foreign "atg_adjoint" (gc_tensor @-> returning raw_tensor) + + let stubs_affine_grid_generator = + foreign + "atg_affine_grid_generator" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_affine_grid_generator_backward = + foreign + "atg_affine_grid_generator_backward" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_affine_grid_generator_out = + foreign + "atg_affine_grid_generator_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_alias = foreign "atg_alias" (gc_tensor @-> returning raw_tensor) + let stubs_alias_copy = foreign "atg_alias_copy" (gc_tensor @-> returning raw_tensor) + + let stubs_alias_copy_out = + foreign "atg_alias_copy_out" (gc_tensor @-> gc_tensor @-> 
returning raw_tensor) + ;; + + let stubs_align_as = + foreign "atg_align_as" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_align_tensors = + foreign "atg_align_tensors" (ptr gc_tensor @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs_all = foreign "atg_all" (gc_tensor @-> returning raw_tensor) + + let stubs_all_all_out = + foreign "atg_all_all_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_all_dim = + foreign "atg_all_dim" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_all_out = + foreign + "atg_all_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_allclose = + foreign + "atg_allclose" + (gc_tensor @-> gc_tensor @-> double @-> double @-> int @-> returning bool) + ;; + + let stubs_alpha_dropout = + foreign "atg_alpha_dropout" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_alpha_dropout_ = + foreign "atg_alpha_dropout_" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_amax = + foreign "atg_amax" (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_amax_out = + foreign + "atg_amax_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_amin = + foreign "atg_amin" (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_amin_out = + foreign + "atg_amin_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_aminmax = + foreign + "atg_aminmax" + (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int @-> int @-> returning void) + ;; + + let stubs_aminmax_out = + foreign + "atg_aminmax_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs_angle = foreign "atg_angle" (gc_tensor @-> returning raw_tensor) + + let stubs_angle_out = + foreign "atg_angle_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_any = foreign "atg_any" (gc_tensor @-> returning raw_tensor) + + let stubs_any_all_out = + foreign "atg_any_all_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_any_dim = + foreign "atg_any_dim" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_any_out = + foreign + "atg_any_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_arange = foreign "atg_arange" (scalar @-> int @-> int @-> returning raw_tensor) + + let stubs_arange_start = + foreign "atg_arange_start" (scalar @-> scalar @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_arange_start_step = + foreign + "atg_arange_start_step" + (scalar @-> scalar @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_arccos = foreign "atg_arccos" (gc_tensor @-> returning raw_tensor) + let stubs_arccos_ = foreign "atg_arccos_" (gc_tensor @-> returning raw_tensor) + + let stubs_arccos_out = + foreign "atg_arccos_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_arccosh = foreign "atg_arccosh" (gc_tensor @-> returning raw_tensor) + let stubs_arccosh_ = foreign "atg_arccosh_" (gc_tensor @-> returning raw_tensor) + + let stubs_arccosh_out = + foreign "atg_arccosh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_arcsin = foreign "atg_arcsin" (gc_tensor @-> returning raw_tensor) + let stubs_arcsin_ = foreign "atg_arcsin_" (gc_tensor @-> 
returning raw_tensor) + + let stubs_arcsin_out = + foreign "atg_arcsin_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_arcsinh = foreign "atg_arcsinh" (gc_tensor @-> returning raw_tensor) + let stubs_arcsinh_ = foreign "atg_arcsinh_" (gc_tensor @-> returning raw_tensor) + + let stubs_arcsinh_out = + foreign "atg_arcsinh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_arctan = foreign "atg_arctan" (gc_tensor @-> returning raw_tensor) + + let stubs_arctan2 = + foreign "atg_arctan2" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_arctan2_ = + foreign "atg_arctan2_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_arctan2_out = + foreign + "atg_arctan2_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_arctan_ = foreign "atg_arctan_" (gc_tensor @-> returning raw_tensor) + + let stubs_arctan_out = + foreign "atg_arctan_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_arctanh = foreign "atg_arctanh" (gc_tensor @-> returning raw_tensor) + let stubs_arctanh_ = foreign "atg_arctanh_" (gc_tensor @-> returning raw_tensor) + + let stubs_arctanh_out = + foreign "atg_arctanh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_argmax = + foreign "atg_argmax" (gc_tensor @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_argmax_out = + foreign + "atg_argmax_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_argmin = + foreign "atg_argmin" (gc_tensor @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_argmin_out = + foreign + "atg_argmin_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_argsort = + foreign "atg_argsort" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_argsort_stable = + foreign + "atg_argsort_stable" + (gc_tensor @-> int @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_argsort_stable_out = + foreign + "atg_argsort_stable_out" + (gc_tensor @-> gc_tensor @-> int @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_argwhere = foreign "atg_argwhere" (gc_tensor @-> returning raw_tensor) + + let stubs_as_strided = + foreign + "atg_as_strided" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_as_strided_ = + foreign + "atg_as_strided_" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_as_strided_copy = + foreign + "atg_as_strided_copy" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_as_strided_copy_out = + foreign + "atg_as_strided_copy_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_as_strided_scatter = + foreign + "atg_as_strided_scatter" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_as_strided_scatter_out = + foreign + "atg_as_strided_scatter_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_asin = 
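+ (* Editorial note: C booleans show up in two guises here. Results use the
+    ctypes [bool] type directly (e.g. [atg_allclose] above), while boolean
+    arguments such as keepdim flags arrive as plain [int] (0 or 1). The
+    [int64_t @-> int] pairs in e.g. [atg_argmax] above plausibly carry an
+    optional scalar as a value plus a null flag; treat that reading as an
+    inference from the signatures, the binding generator is the source of
+    truth. *)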
foreign "atg_asin" (gc_tensor @-> returning raw_tensor) + let stubs_asin_ = foreign "atg_asin_" (gc_tensor @-> returning raw_tensor) + + let stubs_asin_out = + foreign "atg_asin_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_asinh = foreign "atg_asinh" (gc_tensor @-> returning raw_tensor) + let stubs_asinh_ = foreign "atg_asinh_" (gc_tensor @-> returning raw_tensor) + + let stubs_asinh_out = + foreign "atg_asinh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_atan = foreign "atg_atan" (gc_tensor @-> returning raw_tensor) + let stubs_atan2 = foreign "atg_atan2" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_atan2_ = + foreign "atg_atan2_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_atan2_out = + foreign + "atg_atan2_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_atan_ = foreign "atg_atan_" (gc_tensor @-> returning raw_tensor) + + let stubs_atan_out = + foreign "atg_atan_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_atanh = foreign "atg_atanh" (gc_tensor @-> returning raw_tensor) + let stubs_atanh_ = foreign "atg_atanh_" (gc_tensor @-> returning raw_tensor) + + let stubs_atanh_out = + foreign "atg_atanh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; +end + +module C6 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_atleast_1d = foreign "atg_atleast_1d" (gc_tensor @-> returning raw_tensor) + + let stubs_atleast_1d_sequence = + foreign + "atg_atleast_1d_sequence" + (ptr gc_tensor @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs_atleast_2d = foreign "atg_atleast_2d" (gc_tensor @-> returning raw_tensor) + + let stubs_atleast_2d_sequence = + foreign + "atg_atleast_2d_sequence" + (ptr gc_tensor @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs_atleast_3d = foreign "atg_atleast_3d" (gc_tensor @-> returning raw_tensor) + + let stubs_atleast_3d_sequence = + foreign + "atg_atleast_3d_sequence" + (ptr gc_tensor @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs_avg_pool1d = + foreign + "atg_avg_pool1d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_avg_pool2d = + foreign + "atg_avg_pool2d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_avg_pool2d_backward = + foreign + "atg_avg_pool2d_backward" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_avg_pool2d_backward_grad_input = + foreign + "atg_avg_pool2d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_avg_pool2d_out = + foreign + "atg_avg_pool2d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_avg_pool3d = + foreign + "atg_avg_pool3d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int64_t + @-> int 
+ @-> returning raw_tensor) + ;; + + let stubs_avg_pool3d_backward = + foreign + "atg_avg_pool3d_backward" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_avg_pool3d_backward_grad_input = + foreign + "atg_avg_pool3d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_avg_pool3d_out = + foreign + "atg_avg_pool3d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_baddbmm = + foreign "atg_baddbmm" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_baddbmm_ = + foreign "atg_baddbmm_" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_baddbmm_out = + foreign + "atg_baddbmm_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bartlett_window = + foreign "atg_bartlett_window" (int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_bartlett_window_out = + foreign "atg_bartlett_window_out" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_bartlett_window_periodic = + foreign + "atg_bartlett_window_periodic" + (int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_bartlett_window_periodic_out = + foreign + "atg_bartlett_window_periodic_out" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_batch_norm = + foreign + "atg_batch_norm" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_batch_norm_backward_elemt = + foreign + "atg_batch_norm_backward_elemt" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_batch_norm_backward_elemt_out = + foreign + "atg_batch_norm_backward_elemt_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_batch_norm_backward_reduce = + foreign + "atg_batch_norm_backward_reduce" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs_batch_norm_backward_reduce_out = + foreign + "atg_batch_norm_backward_reduce_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs_batch_norm_elemt = + foreign + "atg_batch_norm_elemt" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning raw_tensor) + ;; + + let stubs_batch_norm_elemt_out = + foreign + "atg_batch_norm_elemt_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning raw_tensor) + ;; + + let stubs_batch_norm_gather_stats = + foreign + 
"atg_batch_norm_gather_stats" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> int64_t + @-> returning void) + ;; + + let stubs_batch_norm_gather_stats_out = + foreign + "atg_batch_norm_gather_stats_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> int64_t + @-> returning void) + ;; + + let stubs_batch_norm_gather_stats_with_counts = + foreign + "atg_batch_norm_gather_stats_with_counts" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> gc_tensor + @-> returning void) + ;; + + let stubs_batch_norm_gather_stats_with_counts_out = + foreign + "atg_batch_norm_gather_stats_with_counts_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> gc_tensor + @-> returning void) + ;; + + let stubs_batch_norm_stats = + foreign + "atg_batch_norm_stats" + (ptr raw_tensor @-> gc_tensor @-> double @-> returning void) + ;; + + let stubs_batch_norm_stats_out = + foreign + "atg_batch_norm_stats_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning void) + ;; + + let stubs_batch_norm_update_stats = + foreign + "atg_batch_norm_update_stats" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning void) + ;; + + let stubs_batch_norm_update_stats_out = + foreign + "atg_batch_norm_update_stats_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning void) + ;; + + let stubs_bernoulli = foreign "atg_bernoulli" (gc_tensor @-> returning raw_tensor) + + let stubs_bernoulli_ = + foreign "atg_bernoulli_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bernoulli_float_ = + foreign "atg_bernoulli_float_" (gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_bernoulli_p = + foreign "atg_bernoulli_p" (gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_bernoulli_tensor = + foreign "atg_bernoulli_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bilinear = + foreign + "atg_bilinear" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_binary_cross_entropy = + foreign + "atg_binary_cross_entropy" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_binary_cross_entropy_backward = + foreign + "atg_binary_cross_entropy_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_binary_cross_entropy_backward_grad_input = + foreign + "atg_binary_cross_entropy_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_binary_cross_entropy_out = + foreign + "atg_binary_cross_entropy_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_binary_cross_entropy_with_logits = + foreign + "atg_binary_cross_entropy_with_logits" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_binary_cross_entropy_with_logits_out = + foreign + 
"atg_binary_cross_entropy_with_logits_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_bincount = + foreign "atg_bincount" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_bincount_out = + foreign + "atg_bincount_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_binomial = + foreign "atg_binomial" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_binomial_out = + foreign + "atg_binomial_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_and = + foreign "atg_bitwise_and" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_and_ = + foreign "atg_bitwise_and_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_and_scalar_out = + foreign + "atg_bitwise_and_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_and_scalar_tensor = + foreign "atg_bitwise_and_scalar_tensor" (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_and_scalar_tensor_out = + foreign + "atg_bitwise_and_scalar_tensor_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_and_tensor = + foreign "atg_bitwise_and_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_and_tensor_ = + foreign "atg_bitwise_and_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_and_tensor_out = + foreign + "atg_bitwise_and_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_left_shift = + foreign "atg_bitwise_left_shift" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_left_shift_ = + foreign "atg_bitwise_left_shift_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_left_shift_scalar_tensor = + foreign + "atg_bitwise_left_shift_scalar_tensor" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_left_shift_scalar_tensor_out = + foreign + "atg_bitwise_left_shift_scalar_tensor_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_left_shift_tensor_out = + foreign + "atg_bitwise_left_shift_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_left_shift_tensor_scalar = + foreign + "atg_bitwise_left_shift_tensor_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_left_shift_tensor_scalar_ = + foreign + "atg_bitwise_left_shift_tensor_scalar_" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_left_shift_tensor_scalar_out = + foreign + "atg_bitwise_left_shift_tensor_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_not = foreign "atg_bitwise_not" (gc_tensor @-> returning raw_tensor) + let stubs_bitwise_not_ = foreign "atg_bitwise_not_" (gc_tensor @-> returning raw_tensor) + + let stubs_bitwise_not_out = + foreign "atg_bitwise_not_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_or = + foreign "atg_bitwise_or" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_or_ = + foreign "atg_bitwise_or_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_or_scalar_out = + foreign + 
"atg_bitwise_or_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_or_scalar_tensor = + foreign "atg_bitwise_or_scalar_tensor" (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_or_scalar_tensor_out = + foreign + "atg_bitwise_or_scalar_tensor_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_or_tensor = + foreign "atg_bitwise_or_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_or_tensor_ = + foreign "atg_bitwise_or_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_or_tensor_out = + foreign + "atg_bitwise_or_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_right_shift = + foreign "atg_bitwise_right_shift" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_right_shift_ = + foreign "atg_bitwise_right_shift_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_right_shift_scalar_tensor = + foreign + "atg_bitwise_right_shift_scalar_tensor" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_right_shift_scalar_tensor_out = + foreign + "atg_bitwise_right_shift_scalar_tensor_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_right_shift_tensor_out = + foreign + "atg_bitwise_right_shift_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_right_shift_tensor_scalar = + foreign + "atg_bitwise_right_shift_tensor_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_right_shift_tensor_scalar_ = + foreign + "atg_bitwise_right_shift_tensor_scalar_" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_right_shift_tensor_scalar_out = + foreign + "atg_bitwise_right_shift_tensor_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_xor = + foreign "atg_bitwise_xor" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_xor_ = + foreign "atg_bitwise_xor_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_xor_scalar_out = + foreign + "atg_bitwise_xor_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_bitwise_xor_scalar_tensor = + foreign "atg_bitwise_xor_scalar_tensor" (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_xor_scalar_tensor_out = + foreign + "atg_bitwise_xor_scalar_tensor_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_xor_tensor = + foreign "atg_bitwise_xor_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_xor_tensor_ = + foreign "atg_bitwise_xor_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_bitwise_xor_tensor_out = + foreign + "atg_bitwise_xor_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_blackman_window = + foreign "atg_blackman_window" (int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_blackman_window_out = + foreign "atg_blackman_window_out" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_blackman_window_periodic = + foreign + "atg_blackman_window_periodic" + (int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let 
stubs_blackman_window_periodic_out = + foreign + "atg_blackman_window_periodic_out" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; +end + +module C7 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_block_diag = + foreign "atg_block_diag" (ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_block_diag_out = + foreign + "atg_block_diag_out" + (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_bmm = foreign "atg_bmm" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_bmm_out = + foreign "atg_bmm_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_broadcast_tensors = + foreign "atg_broadcast_tensors" (ptr gc_tensor @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs_broadcast_to = + foreign "atg_broadcast_to" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_bucketize = + foreign + "atg_bucketize" + (gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_bucketize_scalar = + foreign + "atg_bucketize_scalar" + (scalar @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_bucketize_scalar_out = + foreign + "atg_bucketize_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_bucketize_tensor_out = + foreign + "atg_bucketize_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_can_cast = foreign "atg_can_cast" (int @-> int @-> returning bool) + + let stubs_cartesian_prod = + foreign "atg_cartesian_prod" (ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_cat = + foreign "atg_cat" (ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_cat_out = + foreign + "atg_cat_out" + (gc_tensor @-> ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_cauchy = + foreign "atg_cauchy" (gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_cauchy_ = + foreign "atg_cauchy_" (gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_cauchy_out = + foreign + "atg_cauchy_out" + (gc_tensor @-> gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_ccol_indices = foreign "atg_ccol_indices" (gc_tensor @-> returning raw_tensor) + + let stubs_ccol_indices_copy = + foreign "atg_ccol_indices_copy" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ccol_indices_copy_out = + foreign "atg_ccol_indices_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_cdist = + foreign + "atg_cdist" + (gc_tensor @-> gc_tensor @-> double @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_ceil = foreign "atg_ceil" (gc_tensor @-> returning raw_tensor) + let stubs_ceil_ = foreign "atg_ceil_" (gc_tensor @-> returning raw_tensor) + + let stubs_ceil_out = + foreign "atg_ceil_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_celu = foreign "atg_celu" (gc_tensor @-> returning raw_tensor) + let stubs_celu_ = foreign "atg_celu_" (gc_tensor @-> returning raw_tensor) + + let stubs_celu_out = + foreign "atg_celu_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_chain_matmul = + foreign "atg_chain_matmul" (ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_chain_matmul_out = + foreign + "atg_chain_matmul_out" + (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + 
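(* Editorial note, not part of the generated output: the stub names in these modules follow the Torch overload convention visible throughout this file. A trailing underscore (e.g. stubs_ceil_) binds the in-place variant of an operation (stubs_ceil), and an _out suffix (stubs_ceil_out) binds the variant that writes into a caller-supplied output tensor, which shows up as the extra leading gc_tensor argument in the signature. *) + + 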
let stubs_chalf = foreign "atg_chalf" (gc_tensor @-> returning raw_tensor) + + let stubs_channel_shuffle = + foreign "atg_channel_shuffle" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_channel_shuffle_out = + foreign + "atg_channel_shuffle_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_cholesky = foreign "atg_cholesky" (gc_tensor @-> int @-> returning raw_tensor) + + let stubs_cholesky_inverse = + foreign "atg_cholesky_inverse" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_cholesky_inverse_out = + foreign + "atg_cholesky_inverse_out" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_cholesky_out = + foreign "atg_cholesky_out" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_cholesky_solve = + foreign "atg_cholesky_solve" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_cholesky_solve_out = + foreign + "atg_cholesky_solve_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_choose_qparams_optimized = + foreign + "atg_choose_qparams_optimized" + (ptr raw_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> double + @-> int64_t + @-> returning void) + ;; + + let stubs_chunk = + foreign "atg_chunk" (gc_tensor @-> int64_t @-> int64_t @-> returning (ptr raw_tensor)) + ;; + + let stubs_clamp = + foreign "atg_clamp" (gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clamp_ = + foreign "atg_clamp_" (gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clamp_max = + foreign "atg_clamp_max" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clamp_max_ = + foreign "atg_clamp_max_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clamp_max_out = + foreign + "atg_clamp_max_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clamp_max_tensor = + foreign "atg_clamp_max_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clamp_max_tensor_ = + foreign "atg_clamp_max_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clamp_max_tensor_out = + foreign + "atg_clamp_max_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clamp_min = + foreign "atg_clamp_min" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clamp_min_ = + foreign "atg_clamp_min_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clamp_min_out = + foreign + "atg_clamp_min_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clamp_min_tensor = + foreign "atg_clamp_min_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clamp_min_tensor_ = + foreign "atg_clamp_min_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clamp_min_tensor_out = + foreign + "atg_clamp_min_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clamp_out = + foreign + "atg_clamp_out" + (gc_tensor @-> gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clamp_tensor = + foreign + "atg_clamp_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clamp_tensor_ = + foreign + "atg_clamp_tensor_" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clamp_tensor_out = + foreign + 
"atg_clamp_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clip = + foreign "atg_clip" (gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clip_ = + foreign "atg_clip_" (gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clip_out = + foreign + "atg_clip_out" + (gc_tensor @-> gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_clip_tensor = + foreign + "atg_clip_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clip_tensor_ = + foreign + "atg_clip_tensor_" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clip_tensor_out = + foreign + "atg_clip_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_clone = foreign "atg_clone" (gc_tensor @-> returning raw_tensor) + + let stubs_clone_out = + foreign "atg_clone_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_coalesce = foreign "atg_coalesce" (gc_tensor @-> returning raw_tensor) + + let stubs_col2im = + foreign + "atg_col2im" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_col2im_out = + foreign + "atg_col2im_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_col_indices = foreign "atg_col_indices" (gc_tensor @-> returning raw_tensor) + + let stubs_col_indices_copy = + foreign "atg_col_indices_copy" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_col_indices_copy_out = + foreign "atg_col_indices_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_column_stack = + foreign "atg_column_stack" (ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_column_stack_out = + foreign + "atg_column_stack_out" + (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_combinations = + foreign "atg_combinations" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_complex = + foreign "atg_complex" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_complex_out = + foreign + "atg_complex_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_concat = + foreign "atg_concat" (ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_concat_out = + foreign + "atg_concat_out" + (gc_tensor @-> ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_concatenate = + foreign "atg_concatenate" (ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_concatenate_out = + foreign + "atg_concatenate_out" + (gc_tensor @-> ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_conj = foreign "atg_conj" (gc_tensor @-> returning raw_tensor) + + let stubs_conj_physical = + foreign "atg_conj_physical" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_conj_physical_ = + foreign "atg_conj_physical_" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_conj_physical_out = + foreign "atg_conj_physical_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_constant_pad_nd = + foreign + 
"atg_constant_pad_nd" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_constant_pad_nd_out = + foreign + "atg_constant_pad_nd_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_contiguous = foreign "atg_contiguous" (gc_tensor @-> returning raw_tensor) + + let stubs_conv1d = + foreign + "atg_conv1d" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_conv1d_padding = + foreign + "atg_conv1d_padding" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> string + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_conv2d = + foreign + "atg_conv2d" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_conv2d_padding = + foreign + "atg_conv2d_padding" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> string + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_conv3d = + foreign + "atg_conv3d" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_conv3d_padding = + foreign + "atg_conv3d_padding" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> string + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_conv_depthwise3d = + foreign + "atg_conv_depthwise3d" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_conv_depthwise3d_out = + foreign + "atg_conv_depthwise3d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_conv_tbc = + foreign + "atg_conv_tbc" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_conv_tbc_backward = + foreign + "atg_conv_tbc_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning void) + ;; + + let stubs_conv_tbc_out = + foreign + "atg_conv_tbc_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_conv_transpose1d = + foreign + "atg_conv_transpose1d" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; +end + +module C8 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_conv_transpose2d = + foreign + "atg_conv_transpose2d" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_conv_transpose3d = + foreign + "atg_conv_transpose3d" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> 
ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_convolution = + foreign + "atg_convolution" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_convolution_out = + foreign + "atg_convolution_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_convolution_overrideable = + foreign + "atg_convolution_overrideable" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_convolution_overrideable_out = + foreign + "atg_convolution_overrideable_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_copy = + foreign "atg_copy" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_copy_out = + foreign + "atg_copy_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_copy_sparse_to_sparse = + foreign + "atg_copy_sparse_to_sparse" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_copy_sparse_to_sparse_ = + foreign + "atg_copy_sparse_to_sparse_" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_copy_sparse_to_sparse_out = + foreign + "atg_copy_sparse_to_sparse_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_copysign = + foreign "atg_copysign" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_copysign_ = + foreign "atg_copysign_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_copysign_out = + foreign + "atg_copysign_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_copysign_scalar = + foreign "atg_copysign_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_copysign_scalar_ = + foreign "atg_copysign_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_copysign_scalar_out = + foreign + "atg_copysign_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_corrcoef = foreign "atg_corrcoef" (gc_tensor @-> returning raw_tensor) + let stubs_cos = foreign "atg_cos" (gc_tensor @-> returning raw_tensor) + let stubs_cos_ = foreign "atg_cos_" (gc_tensor @-> returning raw_tensor) + + let stubs_cos_out = + foreign "atg_cos_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_cosh = foreign "atg_cosh" (gc_tensor @-> returning raw_tensor) + let stubs_cosh_ = foreign "atg_cosh_" (gc_tensor @-> returning raw_tensor) + + let stubs_cosh_out = + foreign "atg_cosh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_cosine_embedding_loss = + foreign + "atg_cosine_embedding_loss" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_cosine_similarity = + foreign + "atg_cosine_similarity" + (gc_tensor @-> gc_tensor @-> int64_t @-> 
double @-> returning raw_tensor) + ;; + + let stubs_count_nonzero = + foreign + "atg_count_nonzero" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_count_nonzero_out = + foreign + "atg_count_nonzero_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_cov = + foreign + "atg_cov" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_cross = + foreign + "atg_cross" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_cross_entropy_loss = + foreign + "atg_cross_entropy_loss" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs_cross_out = + foreign + "atg_cross_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_crow_indices = foreign "atg_crow_indices" (gc_tensor @-> returning raw_tensor) + + let stubs_crow_indices_copy = + foreign "atg_crow_indices_copy" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_crow_indices_copy_out = + foreign "atg_crow_indices_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ctc_loss = + foreign + "atg_ctc_loss" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_ctc_loss_tensor = + foreign + "atg_ctc_loss_tensor" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_cudnn_affine_grid_generator = + foreign + "atg_cudnn_affine_grid_generator" + (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_cudnn_affine_grid_generator_backward = + foreign + "atg_cudnn_affine_grid_generator_backward" + (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_cudnn_affine_grid_generator_backward_out = + foreign + "atg_cudnn_affine_grid_generator_backward_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_cudnn_affine_grid_generator_out = + foreign + "atg_cudnn_affine_grid_generator_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_cudnn_batch_norm = + foreign + "atg_cudnn_batch_norm" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs_cudnn_batch_norm_backward = + foreign + "atg_cudnn_batch_norm_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> gc_tensor + @-> returning void) + ;; + + let stubs_cudnn_batch_norm_backward_out = + foreign + "atg_cudnn_batch_norm_backward_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> gc_tensor + @-> returning void) + ;; + + let stubs_cudnn_batch_norm_out = + foreign + "atg_cudnn_batch_norm_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> 
gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs_cudnn_convolution = + foreign + "atg_cudnn_convolution" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_cudnn_convolution_add_relu = + foreign + "atg_cudnn_convolution_add_relu" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_cudnn_convolution_add_relu_out = + foreign + "atg_cudnn_convolution_add_relu_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_cudnn_convolution_out = + foreign + "atg_cudnn_convolution_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_cudnn_convolution_relu = + foreign + "atg_cudnn_convolution_relu" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_cudnn_convolution_relu_out = + foreign + "atg_cudnn_convolution_relu_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_cudnn_convolution_transpose = + foreign + "atg_cudnn_convolution_transpose" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_cudnn_convolution_transpose_out = + foreign + "atg_cudnn_convolution_transpose_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_cudnn_grid_sampler = + foreign "atg_cudnn_grid_sampler" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_cudnn_grid_sampler_backward = + foreign + "atg_cudnn_grid_sampler_backward" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs_cudnn_grid_sampler_backward_out = + foreign + "atg_cudnn_grid_sampler_backward_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs_cudnn_grid_sampler_out = + foreign + "atg_cudnn_grid_sampler_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_cudnn_is_acceptable = + foreign "atg_cudnn_is_acceptable" (gc_tensor @-> returning bool) + ;; + + let stubs_cummax = + foreign "atg_cummax" (ptr raw_tensor @-> gc_tensor @-> int64_t @-> returning void) + ;; + + let stubs_cummax_out = + foreign + "atg_cummax_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning void) + ;; + + let stubs_cummaxmin_backward = + foreign + "atg_cummaxmin_backward" + (gc_tensor @-> 
gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_cummin = + foreign "atg_cummin" (ptr raw_tensor @-> gc_tensor @-> int64_t @-> returning void) + ;; + + let stubs_cummin_out = + foreign + "atg_cummin_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning void) + ;; + + let stubs_cumprod = + foreign "atg_cumprod" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_cumprod_ = + foreign "atg_cumprod_" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_cumprod_backward = + foreign + "atg_cumprod_backward" + (gc_tensor @-> gc_tensor @-> int64_t @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_cumprod_out = + foreign + "atg_cumprod_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_cumsum = + foreign "atg_cumsum" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_cumsum_ = + foreign "atg_cumsum_" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_cumsum_out = + foreign + "atg_cumsum_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_cumulative_trapezoid = + foreign "atg_cumulative_trapezoid" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_cumulative_trapezoid_x = + foreign + "atg_cumulative_trapezoid_x" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_data = foreign "atg_data" (gc_tensor @-> returning raw_tensor) + let stubs_deg2rad = foreign "atg_deg2rad" (gc_tensor @-> returning raw_tensor) + let stubs_deg2rad_ = foreign "atg_deg2rad_" (gc_tensor @-> returning raw_tensor) + + let stubs_deg2rad_out = + foreign "atg_deg2rad_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_dense_dim = foreign "atg_dense_dim" (gc_tensor @-> returning int64_t) + let stubs_dequantize = foreign "atg_dequantize" (gc_tensor @-> returning raw_tensor) + + let stubs_dequantize_self_out = + foreign "atg_dequantize_self_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_dequantize_tensors = + foreign "atg_dequantize_tensors" (ptr gc_tensor @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs_dequantize_tensors_out = + foreign + "atg_dequantize_tensors_out" + (ptr gc_tensor @-> int @-> ptr gc_tensor @-> int @-> returning void) + ;; + + let stubs_det = foreign "atg_det" (gc_tensor @-> returning raw_tensor) + let stubs_detach = foreign "atg_detach" (gc_tensor @-> returning raw_tensor) + let stubs_detach_ = foreign "atg_detach_" (gc_tensor @-> returning raw_tensor) + let stubs_detach_copy = foreign "atg_detach_copy" (gc_tensor @-> returning raw_tensor) + + let stubs_detach_copy_out = + foreign "atg_detach_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_diag = foreign "atg_diag" (gc_tensor @-> int64_t @-> returning raw_tensor) + + let stubs_diag_embed = + foreign + "atg_diag_embed" + (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_diag_embed_out = + foreign + "atg_diag_embed_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_diag_out = + foreign "atg_diag_out" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_diagflat = + foreign "atg_diagflat" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_diagonal = + foreign + "atg_diagonal" + (gc_tensor @-> 
int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_diagonal_backward = + foreign + "atg_diagonal_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_diagonal_backward_out = + foreign + "atg_diagonal_backward_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_diagonal_copy = + foreign + "atg_diagonal_copy" + (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_diagonal_copy_out = + foreign + "atg_diagonal_copy_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_diagonal_scatter = + foreign + "atg_diagonal_scatter" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_diagonal_scatter_out = + foreign + "atg_diagonal_scatter_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_diff = + foreign + "atg_diff" + (gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_diff_out = + foreign + "atg_diff_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; +end + +module C9 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_digamma = foreign "atg_digamma" (gc_tensor @-> returning raw_tensor) + let stubs_digamma_ = foreign "atg_digamma_" (gc_tensor @-> returning raw_tensor) + + let stubs_digamma_out = + foreign "atg_digamma_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_dist = foreign "atg_dist" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_dist_out = + foreign "atg_dist_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_div = foreign "atg_div" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + let stubs_div_ = foreign "atg_div_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_div_out = + foreign "atg_div_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_div_out_mode = + foreign + "atg_div_out_mode" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_div_scalar = + foreign "atg_div_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_div_scalar_ = + foreign "atg_div_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_div_scalar_mode = + foreign + "atg_div_scalar_mode" + (gc_tensor @-> scalar @-> string @-> returning raw_tensor) + ;; + + let stubs_div_scalar_mode_ = + foreign + "atg_div_scalar_mode_" + (gc_tensor @-> scalar @-> string @-> returning raw_tensor) + ;; + + let stubs_div_scalar_mode_out = + foreign + "atg_div_scalar_mode_out" + (gc_tensor @-> gc_tensor @-> scalar @-> string @-> returning raw_tensor) + ;; + + let stubs_div_scalar_out = + foreign + "atg_div_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_div_tensor_mode = + foreign + "atg_div_tensor_mode" + (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_div_tensor_mode_ = + foreign + "atg_div_tensor_mode_" + (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_divide = + foreign 
"atg_divide" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_divide_ = + foreign "atg_divide_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_divide_out = + foreign + "atg_divide_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_divide_out_mode = + foreign + "atg_divide_out_mode" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_divide_scalar = + foreign "atg_divide_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_divide_scalar_ = + foreign "atg_divide_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_divide_scalar_mode = + foreign + "atg_divide_scalar_mode" + (gc_tensor @-> scalar @-> string @-> returning raw_tensor) + ;; + + let stubs_divide_scalar_mode_ = + foreign + "atg_divide_scalar_mode_" + (gc_tensor @-> scalar @-> string @-> returning raw_tensor) + ;; + + let stubs_divide_tensor_mode = + foreign + "atg_divide_tensor_mode" + (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_divide_tensor_mode_ = + foreign + "atg_divide_tensor_mode_" + (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_dot = foreign "atg_dot" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_dot_out = + foreign "atg_dot_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_dropout = + foreign "atg_dropout" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_dropout_ = + foreign "atg_dropout_" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_dsplit = + foreign "atg_dsplit" (gc_tensor @-> int64_t @-> returning (ptr raw_tensor)) + ;; + + let stubs_dsplit_array = + foreign + "atg_dsplit_array" + (gc_tensor @-> ptr int64_t @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs_dstack = foreign "atg_dstack" (ptr gc_tensor @-> int @-> returning raw_tensor) + + let stubs_dstack_out = + foreign "atg_dstack_out" (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_einsum = + foreign + "atg_einsum" + (string @-> ptr gc_tensor @-> int @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_elu = foreign "atg_elu" (gc_tensor @-> returning raw_tensor) + let stubs_elu_ = foreign "atg_elu_" (gc_tensor @-> returning raw_tensor) + + let stubs_elu_backward = + foreign + "atg_elu_backward" + (gc_tensor + @-> scalar + @-> scalar + @-> scalar + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_elu_backward_grad_input = + foreign + "atg_elu_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> scalar + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_elu_out = + foreign "atg_elu_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_embedding = + foreign + "atg_embedding" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_embedding_backward = + foreign + "atg_embedding_backward" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_embedding_bag = + foreign + "atg_embedding_bag" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int64_t + @-> int + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs_embedding_bag_padding_idx = + foreign + "atg_embedding_bag_padding_idx" + (ptr raw_tensor + @-> gc_tensor + 
@-> gc_tensor + @-> gc_tensor + @-> int + @-> int64_t + @-> int + @-> gc_tensor + @-> int + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs_embedding_dense_backward = + foreign + "atg_embedding_dense_backward" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_embedding_dense_backward_out = + foreign + "atg_embedding_dense_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_embedding_out = + foreign + "atg_embedding_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_embedding_renorm = + foreign + "atg_embedding_renorm" + (gc_tensor @-> gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_embedding_renorm_ = + foreign + "atg_embedding_renorm_" + (gc_tensor @-> gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_embedding_renorm_out = + foreign + "atg_embedding_renorm_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> returning raw_tensor) + ;; + + let stubs_embedding_sparse_backward = + foreign + "atg_embedding_sparse_backward" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_empty = + foreign "atg_empty" (ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_empty_like = foreign "atg_empty_like" (gc_tensor @-> returning raw_tensor) + + let stubs_empty_like_out = + foreign "atg_empty_like_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_empty_out = + foreign "atg_empty_out" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_empty_permuted = + foreign + "atg_empty_permuted" + (ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_empty_permuted_out = + foreign + "atg_empty_permuted_out" + (gc_tensor @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_empty_quantized = + foreign + "atg_empty_quantized" + (ptr int64_t @-> int @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_empty_quantized_out = + foreign + "atg_empty_quantized_out" + (gc_tensor @-> ptr int64_t @-> int @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_empty_strided = + foreign + "atg_empty_strided" + (ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_empty_strided_out = + foreign + "atg_empty_strided_out" + (gc_tensor @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_eq = foreign "atg_eq" (gc_tensor @-> scalar @-> returning raw_tensor) + let stubs_eq_ = foreign "atg_eq_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_eq_scalar_out = + foreign + "atg_eq_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_eq_tensor = + foreign "atg_eq_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_eq_tensor_ = + foreign "atg_eq_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_eq_tensor_out = + foreign + "atg_eq_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_equal = foreign "atg_equal" (gc_tensor @-> gc_tensor @-> returning bool) + let stubs_erf = foreign "atg_erf" (gc_tensor @-> 
returning raw_tensor) + let stubs_erf_ = foreign "atg_erf_" (gc_tensor @-> returning raw_tensor) + + let stubs_erf_out = + foreign "atg_erf_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_erfc = foreign "atg_erfc" (gc_tensor @-> returning raw_tensor) + let stubs_erfc_ = foreign "atg_erfc_" (gc_tensor @-> returning raw_tensor) + + let stubs_erfc_out = + foreign "atg_erfc_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_erfinv = foreign "atg_erfinv" (gc_tensor @-> returning raw_tensor) + let stubs_erfinv_ = foreign "atg_erfinv_" (gc_tensor @-> returning raw_tensor) + + let stubs_erfinv_out = + foreign "atg_erfinv_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_exp = foreign "atg_exp" (gc_tensor @-> returning raw_tensor) + let stubs_exp2 = foreign "atg_exp2" (gc_tensor @-> returning raw_tensor) + let stubs_exp2_ = foreign "atg_exp2_" (gc_tensor @-> returning raw_tensor) + + let stubs_exp2_out = + foreign "atg_exp2_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_exp_ = foreign "atg_exp_" (gc_tensor @-> returning raw_tensor) + + let stubs_exp_out = + foreign "atg_exp_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_expand = + foreign + "atg_expand" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_expand_as = + foreign "atg_expand_as" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_expand_copy = + foreign + "atg_expand_copy" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_expand_copy_out = + foreign + "atg_expand_copy_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_expm1 = foreign "atg_expm1" (gc_tensor @-> returning raw_tensor) + let stubs_expm1_ = foreign "atg_expm1_" (gc_tensor @-> returning raw_tensor) + + let stubs_expm1_out = + foreign "atg_expm1_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_exponential = + foreign "atg_exponential" (gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_exponential_ = + foreign "atg_exponential_" (gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_exponential_out = + foreign + "atg_exponential_out" + (gc_tensor @-> gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_eye = foreign "atg_eye" (int64_t @-> int @-> int @-> returning raw_tensor) + + let stubs_eye_m = + foreign "atg_eye_m" (int64_t @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_eye_m_out = + foreign "atg_eye_m_out" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_eye_out = + foreign "atg_eye_out" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_fake_quantize_per_channel_affine = + foreign + "atg_fake_quantize_per_channel_affine" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_fake_quantize_per_channel_affine_cachemask = + foreign + "atg_fake_quantize_per_channel_affine_cachemask" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning void) + ;; +end + +module C10 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_fake_quantize_per_channel_affine_cachemask_backward = + foreign + "atg_fake_quantize_per_channel_affine_cachemask_backward" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + 
;; + + let stubs_fake_quantize_per_channel_affine_cachemask_out = + foreign + "atg_fake_quantize_per_channel_affine_cachemask_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning void) + ;; + + let stubs_fake_quantize_per_tensor_affine = + foreign + "atg_fake_quantize_per_tensor_affine" + (gc_tensor @-> double @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_fake_quantize_per_tensor_affine_cachemask = + foreign + "atg_fake_quantize_per_tensor_affine_cachemask" + (ptr raw_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning void) + ;; + + let stubs_fake_quantize_per_tensor_affine_cachemask_backward = + foreign + "atg_fake_quantize_per_tensor_affine_cachemask_backward" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fake_quantize_per_tensor_affine_cachemask_out = + foreign + "atg_fake_quantize_per_tensor_affine_cachemask_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning void) + ;; + + let stubs_fake_quantize_per_tensor_affine_tensor_qparams = + foreign + "atg_fake_quantize_per_tensor_affine_tensor_qparams" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_fbgemm_linear_fp16_weight = + foreign + "atg_fbgemm_linear_fp16_weight" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fbgemm_linear_fp16_weight_fp32_activation = + foreign + "atg_fbgemm_linear_fp16_weight_fp32_activation" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fbgemm_linear_int8_weight = + foreign + "atg_fbgemm_linear_int8_weight" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_fbgemm_linear_int8_weight_fp32_activation = + foreign + "atg_fbgemm_linear_int8_weight_fp32_activation" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_fbgemm_pack_gemm_matrix_fp16 = + foreign "atg_fbgemm_pack_gemm_matrix_fp16" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fbgemm_pack_quantized_matrix = + foreign "atg_fbgemm_pack_quantized_matrix" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fbgemm_pack_quantized_matrix_kn = + foreign + "atg_fbgemm_pack_quantized_matrix_kn" + (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_feature_alpha_dropout = + foreign + "atg_feature_alpha_dropout" + (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_feature_alpha_dropout_ = + foreign + "atg_feature_alpha_dropout_" + (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_feature_dropout = + foreign "atg_feature_dropout" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_feature_dropout_ = + foreign "atg_feature_dropout_" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_fft_fft = + foreign + "atg_fft_fft" + (gc_tensor @-> int64_t @-> int @-> int64_t @-> string @-> returning raw_tensor) + ;; + + let stubs_fft_fft2 = + foreign + "atg_fft_fft2" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_fft2_out = + 
foreign + "atg_fft_fft2_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_fft_out = + foreign + "atg_fft_fft_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_fftfreq = + foreign "atg_fft_fftfreq" (int64_t @-> double @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_fft_fftfreq_out = + foreign + "atg_fft_fftfreq_out" + (gc_tensor @-> int64_t @-> double @-> returning raw_tensor) + ;; + + let stubs_fft_fftn = + foreign + "atg_fft_fftn" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_fftn_out = + foreign + "atg_fft_fftn_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_fftshift = + foreign "atg_fft_fftshift" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_fft_hfft = + foreign + "atg_fft_hfft" + (gc_tensor @-> int64_t @-> int @-> int64_t @-> string @-> returning raw_tensor) + ;; + + let stubs_fft_hfft2 = + foreign + "atg_fft_hfft2" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_hfft2_out = + foreign + "atg_fft_hfft2_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_hfft_out = + foreign + "atg_fft_hfft_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_hfftn = + foreign + "atg_fft_hfftn" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_hfftn_out = + foreign + "atg_fft_hfftn_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_ifft = + foreign + "atg_fft_ifft" + (gc_tensor @-> int64_t @-> int @-> int64_t @-> string @-> returning raw_tensor) + ;; + + let stubs_fft_ifft2 = + foreign + "atg_fft_ifft2" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_ifft2_out = + foreign + "atg_fft_ifft2_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_ifft_out = + foreign + "atg_fft_ifft_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_ifftn = + foreign + "atg_fft_ifftn" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_ifftn_out = + foreign + "atg_fft_ifftn_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_ifftshift = + foreign + "atg_fft_ifftshift" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_fft_ihfft = + foreign + "atg_fft_ihfft" + (gc_tensor @-> int64_t @-> int @-> int64_t @-> string @-> returning raw_tensor) + ;; + + let stubs_fft_ihfft2 = + foreign + "atg_fft_ihfft2" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + 
@-> returning raw_tensor) + ;; + + let stubs_fft_ihfft2_out = + foreign + "atg_fft_ihfft2_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_ihfft_out = + foreign + "atg_fft_ihfft_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_ihfftn = + foreign + "atg_fft_ihfftn" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_ihfftn_out = + foreign + "atg_fft_ihfftn_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_irfft = + foreign + "atg_fft_irfft" + (gc_tensor @-> int64_t @-> int @-> int64_t @-> string @-> returning raw_tensor) + ;; + + let stubs_fft_irfft2 = + foreign + "atg_fft_irfft2" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_irfft2_out = + foreign + "atg_fft_irfft2_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_irfft_out = + foreign + "atg_fft_irfft_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_irfftn = + foreign + "atg_fft_irfftn" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_irfftn_out = + foreign + "atg_fft_irfftn_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_rfft = + foreign + "atg_fft_rfft" + (gc_tensor @-> int64_t @-> int @-> int64_t @-> string @-> returning raw_tensor) + ;; + + let stubs_fft_rfft2 = + foreign + "atg_fft_rfft2" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_rfft2_out = + foreign + "atg_fft_rfft2_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_rfft_out = + foreign + "atg_fft_rfft_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_rfftfreq = + foreign + "atg_fft_rfftfreq" + (int64_t @-> double @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_fft_rfftfreq_out = + foreign + "atg_fft_rfftfreq_out" + (gc_tensor @-> int64_t @-> double @-> returning raw_tensor) + ;; + + let stubs_fft_rfftn = + foreign + "atg_fft_rfftn" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fft_rfftn_out = + foreign + "atg_fft_rfftn_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_fill = foreign "atg_fill" (gc_tensor @-> scalar @-> returning raw_tensor) + let stubs_fill_ = foreign "atg_fill_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_fill_diagonal_ = + foreign "atg_fill_diagonal_" (gc_tensor @-> scalar @-> int @-> returning raw_tensor) + ;; + + let stubs_fill_scalar_out = + foreign + "atg_fill_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + 
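+ (* A note on the shape of these generated descriptions (observable from the
+    declarations above and below): each [foreign "atg_<name>" <signature>]
+    describes exactly one C function for ctypes, and the functor argument
+    [F : Cstubs.FOREIGN] determines whether that description is consumed to
+    emit the C stubs or the matching OCaml stubs. Within the naming scheme, a
+    trailing underscore ([stubs_erf_] vs. [stubs_erf]) marks the in-place
+    variant, [_out] variants take the destination tensor as their leading
+    [gc_tensor] argument, and functions with several tensor results (e.g.
+    [stubs_frexp], [stubs_kthvalue]) return [void] and instead write their
+    outputs into a caller-supplied [ptr raw_tensor] array. *)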
let stubs_fill_tensor = + foreign "atg_fill_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fill_tensor_ = + foreign "atg_fill_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fill_tensor_out = + foreign + "atg_fill_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fix = foreign "atg_fix" (gc_tensor @-> returning raw_tensor) + let stubs_fix_ = foreign "atg_fix_" (gc_tensor @-> returning raw_tensor) + + let stubs_fix_out = + foreign "atg_fix_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_flatten = + foreign "atg_flatten" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_flatten_dense_tensors = + foreign "atg_flatten_dense_tensors" (ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_flip = + foreign "atg_flip" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_flip_out = + foreign + "atg_flip_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_fliplr = foreign "atg_fliplr" (gc_tensor @-> returning raw_tensor) + let stubs_flipud = foreign "atg_flipud" (gc_tensor @-> returning raw_tensor) + + let stubs_float_power = + foreign "atg_float_power" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_float_power_ = + foreign "atg_float_power_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_float_power_scalar = + foreign "atg_float_power_scalar" (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_float_power_scalar_out = + foreign + "atg_float_power_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_float_power_tensor_ = + foreign "atg_float_power_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_float_power_tensor_scalar = + foreign "atg_float_power_tensor_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_float_power_tensor_scalar_out = + foreign + "atg_float_power_tensor_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_float_power_tensor_tensor_out = + foreign + "atg_float_power_tensor_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_floor = foreign "atg_floor" (gc_tensor @-> returning raw_tensor) + let stubs_floor_ = foreign "atg_floor_" (gc_tensor @-> returning raw_tensor) + + let stubs_floor_divide = + foreign "atg_floor_divide" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_floor_divide_ = + foreign "atg_floor_divide_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_floor_divide_out = + foreign + "atg_floor_divide_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_floor_divide_scalar = + foreign "atg_floor_divide_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_floor_divide_scalar_ = + foreign "atg_floor_divide_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_floor_out = + foreign "atg_floor_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fmax = foreign "atg_fmax" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_fmax_out = + foreign "atg_fmax_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fmin = foreign "atg_fmin" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let 
stubs_fmin_out = + foreign "atg_fmin_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fmod = foreign "atg_fmod" (gc_tensor @-> scalar @-> returning raw_tensor) + let stubs_fmod_ = foreign "atg_fmod_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_fmod_scalar_out = + foreign + "atg_fmod_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_fmod_tensor = + foreign "atg_fmod_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; +end + +module C11 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_fmod_tensor_ = + foreign "atg_fmod_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fmod_tensor_out = + foreign + "atg_fmod_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_frac = foreign "atg_frac" (gc_tensor @-> returning raw_tensor) + let stubs_frac_ = foreign "atg_frac_" (gc_tensor @-> returning raw_tensor) + + let stubs_frac_out = + foreign "atg_frac_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_fractional_max_pool2d = + foreign + "atg_fractional_max_pool2d" + (ptr raw_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning void) + ;; + + let stubs_fractional_max_pool2d_backward = + foreign + "atg_fractional_max_pool2d_backward" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_fractional_max_pool2d_backward_grad_input = + foreign + "atg_fractional_max_pool2d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_fractional_max_pool2d_output = + foreign + "atg_fractional_max_pool2d_output" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning void) + ;; + + let stubs_fractional_max_pool3d = + foreign + "atg_fractional_max_pool3d" + (ptr raw_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning void) + ;; + + let stubs_fractional_max_pool3d_backward = + foreign + "atg_fractional_max_pool3d_backward" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_fractional_max_pool3d_backward_grad_input = + foreign + "atg_fractional_max_pool3d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_fractional_max_pool3d_output = + foreign + "atg_fractional_max_pool3d_output" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning void) + ;; + + let stubs_frexp = foreign "atg_frexp" (ptr raw_tensor @-> gc_tensor @-> returning void) + + let stubs_frexp_tensor_out = + foreign + "atg_frexp_tensor_out" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs_frobenius_norm = + foreign + "atg_frobenius_norm" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_frobenius_norm_out = + foreign + "atg_frobenius_norm_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t 
@-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_from_file = + foreign + "atg_from_file" + (string @-> int @-> int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_from_file_out = + foreign + "atg_from_file_out" + (gc_tensor @-> string @-> int @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_full = + foreign + "atg_full" + (ptr int64_t @-> int @-> scalar @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_full_like = + foreign "atg_full_like" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_full_like_out = + foreign + "atg_full_like_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_full_out = + foreign + "atg_full_out" + (gc_tensor @-> ptr int64_t @-> int @-> scalar @-> returning raw_tensor) + ;; + + let stubs_fused_moving_avg_obs_fake_quant = + foreign + "atg_fused_moving_avg_obs_fake_quant" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_gather = + foreign + "atg_gather" + (gc_tensor @-> int64_t @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_gather_backward = + foreign + "atg_gather_backward" + (gc_tensor @-> gc_tensor @-> int64_t @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_gather_out = + foreign + "atg_gather_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_gcd = foreign "atg_gcd" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + let stubs_gcd_ = foreign "atg_gcd_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_gcd_out = + foreign "atg_gcd_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ge = foreign "atg_ge" (gc_tensor @-> scalar @-> returning raw_tensor) + let stubs_ge_ = foreign "atg_ge_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_ge_scalar_out = + foreign + "atg_ge_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_ge_tensor = + foreign "atg_ge_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ge_tensor_ = + foreign "atg_ge_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ge_tensor_out = + foreign + "atg_ge_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_gelu = foreign "atg_gelu" (gc_tensor @-> string @-> returning raw_tensor) + let stubs_gelu_ = foreign "atg_gelu_" (gc_tensor @-> string @-> returning raw_tensor) + + let stubs_gelu_backward = + foreign + "atg_gelu_backward" + (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_gelu_backward_grad_input = + foreign + "atg_gelu_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_gelu_out = + foreign "atg_gelu_out" (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_geometric = + foreign "atg_geometric" (gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_geometric_ = + foreign "atg_geometric_" (gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_geometric_out = + foreign + "atg_geometric_out" + (gc_tensor @-> gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_geqrf = foreign "atg_geqrf" (ptr raw_tensor @-> gc_tensor @-> returning void) + + let 
stubs_geqrf_a = + foreign + "atg_geqrf_a" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs_ger = foreign "atg_ger" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_ger_out = + foreign "atg_ger_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_glu = foreign "atg_glu" (gc_tensor @-> int64_t @-> returning raw_tensor) + + let stubs_glu_backward = + foreign + "atg_glu_backward" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_glu_backward_grad_input = + foreign + "atg_glu_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_glu_backward_jvp = + foreign + "atg_glu_backward_jvp" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_glu_backward_jvp_out = + foreign + "atg_glu_backward_jvp_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_glu_jvp = + foreign + "atg_glu_jvp" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_glu_jvp_out = + foreign + "atg_glu_jvp_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_glu_out = + foreign "atg_glu_out" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_grad = foreign "atg_grad" (gc_tensor @-> returning raw_tensor) + let stubs_greater = foreign "atg_greater" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_greater_ = + foreign "atg_greater_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_greater_equal = + foreign "atg_greater_equal" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_greater_equal_ = + foreign "atg_greater_equal_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_greater_equal_scalar_out = + foreign + "atg_greater_equal_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_greater_equal_tensor = + foreign "atg_greater_equal_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_greater_equal_tensor_ = + foreign "atg_greater_equal_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_greater_equal_tensor_out = + foreign + "atg_greater_equal_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_greater_scalar_out = + foreign + "atg_greater_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_greater_tensor = + foreign "atg_greater_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_greater_tensor_ = + foreign "atg_greater_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_greater_tensor_out = + foreign + "atg_greater_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_grid_sampler = + foreign + "atg_grid_sampler" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_grid_sampler_2d = + foreign + "atg_grid_sampler_2d" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_grid_sampler_2d_out = + foreign + "atg_grid_sampler_2d_out" + (gc_tensor + @-> gc_tensor + @-> 
gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_grid_sampler_3d = + foreign + "atg_grid_sampler_3d" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_grid_sampler_3d_out = + foreign + "atg_grid_sampler_3d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_group_norm = + foreign + "atg_group_norm" + (gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_gru = + foreign + "atg_gru" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs_gru_cell = + foreign + "atg_gru_cell" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_gru_data = + foreign + "atg_gru_data" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> returning void) + ;; + + let stubs_gt = foreign "atg_gt" (gc_tensor @-> scalar @-> returning raw_tensor) + let stubs_gt_ = foreign "atg_gt_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_gt_scalar_out = + foreign + "atg_gt_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_gt_tensor = + foreign "atg_gt_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_gt_tensor_ = + foreign "atg_gt_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_gt_tensor_out = + foreign + "atg_gt_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hamming_window = + foreign "atg_hamming_window" (int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_hamming_window_out = + foreign "atg_hamming_window_out" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_hamming_window_periodic = + foreign + "atg_hamming_window_periodic" + (int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_hamming_window_periodic_alpha = + foreign + "atg_hamming_window_periodic_alpha" + (int64_t @-> int @-> double @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_hamming_window_periodic_alpha_beta = + foreign + "atg_hamming_window_periodic_alpha_beta" + (int64_t @-> int @-> double @-> double @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_hamming_window_periodic_alpha_beta_out = + foreign + "atg_hamming_window_periodic_alpha_beta_out" + (gc_tensor @-> int64_t @-> int @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_hamming_window_periodic_alpha_out = + foreign + "atg_hamming_window_periodic_alpha_out" + (gc_tensor @-> int64_t @-> int @-> double @-> returning raw_tensor) + ;; + + let stubs_hamming_window_periodic_out = + foreign + "atg_hamming_window_periodic_out" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_hann_window = + foreign "atg_hann_window" (int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_hann_window_out = + foreign "atg_hann_window_out" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_hann_window_periodic = + foreign + "atg_hann_window_periodic" + (int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let 
stubs_hann_window_periodic_out = + foreign + "atg_hann_window_periodic_out" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_hardshrink = foreign "atg_hardshrink" (gc_tensor @-> returning raw_tensor) + + let stubs_hardshrink_backward = + foreign + "atg_hardshrink_backward" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_hardshrink_backward_grad_input = + foreign + "atg_hardshrink_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_hardshrink_out = + foreign "atg_hardshrink_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; +end + +module C12 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_hardsigmoid = foreign "atg_hardsigmoid" (gc_tensor @-> returning raw_tensor) + let stubs_hardsigmoid_ = foreign "atg_hardsigmoid_" (gc_tensor @-> returning raw_tensor) + + let stubs_hardsigmoid_backward = + foreign "atg_hardsigmoid_backward" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hardsigmoid_backward_grad_input = + foreign + "atg_hardsigmoid_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hardsigmoid_out = + foreign "atg_hardsigmoid_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hardswish = foreign "atg_hardswish" (gc_tensor @-> returning raw_tensor) + let stubs_hardswish_ = foreign "atg_hardswish_" (gc_tensor @-> returning raw_tensor) + + let stubs_hardswish_backward = + foreign "atg_hardswish_backward" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hardswish_backward_out = + foreign + "atg_hardswish_backward_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hardswish_out = + foreign "atg_hardswish_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hardtanh = foreign "atg_hardtanh" (gc_tensor @-> returning raw_tensor) + let stubs_hardtanh_ = foreign "atg_hardtanh_" (gc_tensor @-> returning raw_tensor) + + let stubs_hardtanh_backward = + foreign + "atg_hardtanh_backward" + (gc_tensor @-> gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_hardtanh_backward_grad_input = + foreign + "atg_hardtanh_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs_hardtanh_out = + foreign "atg_hardtanh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_heaviside = + foreign "atg_heaviside" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_heaviside_ = + foreign "atg_heaviside_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_heaviside_out = + foreign + "atg_heaviside_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hinge_embedding_loss = + foreign + "atg_hinge_embedding_loss" + (gc_tensor @-> gc_tensor @-> double @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_histc = foreign "atg_histc" (gc_tensor @-> int64_t @-> returning raw_tensor) + + let stubs_histc_out = + foreign "atg_histc_out" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_hsplit = + foreign "atg_hsplit" (gc_tensor @-> int64_t @-> returning (ptr raw_tensor)) + ;; + + let stubs_hsplit_array = + foreign + "atg_hsplit_array" + (gc_tensor @-> ptr int64_t @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs_hspmm = foreign "atg_hspmm" 
(gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_hspmm_out = + foreign + "atg_hspmm_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hstack = foreign "atg_hstack" (ptr gc_tensor @-> int @-> returning raw_tensor) + + let stubs_hstack_out = + foreign "atg_hstack_out" (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_huber_loss = + foreign + "atg_huber_loss" + (gc_tensor @-> gc_tensor @-> int64_t @-> double @-> returning raw_tensor) + ;; + + let stubs_huber_loss_backward = + foreign + "atg_huber_loss_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs_huber_loss_backward_out = + foreign + "atg_huber_loss_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs_huber_loss_out = + foreign + "atg_huber_loss_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs_hypot = foreign "atg_hypot" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_hypot_ = + foreign "atg_hypot_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_hypot_out = + foreign + "atg_hypot_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_i0 = foreign "atg_i0" (gc_tensor @-> returning raw_tensor) + let stubs_i0_ = foreign "atg_i0_" (gc_tensor @-> returning raw_tensor) + + let stubs_i0_out = + foreign "atg_i0_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_igamma = + foreign "atg_igamma" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_igamma_ = + foreign "atg_igamma_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_igamma_out = + foreign + "atg_igamma_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_igammac = + foreign "atg_igammac" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_igammac_ = + foreign "atg_igammac_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_igammac_out = + foreign + "atg_igammac_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_im2col = + foreign + "atg_im2col" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_im2col_out = + foreign + "atg_im2col_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_imag = foreign "atg_imag" (gc_tensor @-> returning raw_tensor) + + let stubs_index = + foreign "atg_index" (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_index_add = + foreign + "atg_index_add" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_index_add_ = + foreign + "atg_index_add_" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_index_add_out = + foreign + "atg_index_add_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_index_copy = + foreign + "atg_index_copy" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let 
stubs_index_copy_ = + foreign + "atg_index_copy_" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_index_copy_out = + foreign + "atg_index_copy_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_index_fill = + foreign + "atg_index_fill" + (gc_tensor @-> int64_t @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_index_fill_ = + foreign + "atg_index_fill_" + (gc_tensor @-> int64_t @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_index_fill_int_scalar_out = + foreign + "atg_index_fill_int_scalar_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs_index_fill_int_tensor = + foreign + "atg_index_fill_int_tensor" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_index_fill_int_tensor_ = + foreign + "atg_index_fill_int_tensor_" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_index_fill_int_tensor_out = + foreign + "atg_index_fill_int_tensor_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_index_put = + foreign + "atg_index_put" + (gc_tensor @-> ptr gc_tensor @-> int @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_index_put_ = + foreign + "atg_index_put_" + (gc_tensor @-> ptr gc_tensor @-> int @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_index_put_out = + foreign + "atg_index_put_out" + (gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs_index_reduce = + foreign + "atg_index_reduce" + (gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> string + @-> int + @-> returning raw_tensor) + ;; + + let stubs_index_reduce_ = + foreign + "atg_index_reduce_" + (gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> string + @-> int + @-> returning raw_tensor) + ;; + + let stubs_index_reduce_out = + foreign + "atg_index_reduce_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> string + @-> int + @-> returning raw_tensor) + ;; + + let stubs_index_select = + foreign + "atg_index_select" + (gc_tensor @-> int64_t @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_index_select_backward = + foreign + "atg_index_select_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_index_select_out = + foreign + "atg_index_select_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_index_tensor_out = + foreign + "atg_index_tensor_out" + (gc_tensor @-> gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_indices = foreign "atg_indices" (gc_tensor @-> returning raw_tensor) + let stubs_indices_copy = foreign "atg_indices_copy" (gc_tensor @-> returning raw_tensor) + + let stubs_indices_copy_out = + foreign "atg_indices_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_infinitely_differentiable_gelu_backward = + foreign + "atg_infinitely_differentiable_gelu_backward" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_inner = foreign "atg_inner" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_inner_out = + foreign + "atg_inner_out" + 
(gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_instance_norm = + foreign + "atg_instance_norm" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_int_repr = foreign "atg_int_repr" (gc_tensor @-> returning raw_tensor) + + let stubs_int_repr_out = + foreign "atg_int_repr_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_inverse = foreign "atg_inverse" (gc_tensor @-> returning raw_tensor) + + let stubs_inverse_out = + foreign "atg_inverse_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_is_coalesced = foreign "atg_is_coalesced" (gc_tensor @-> returning bool) + let stubs_is_complex = foreign "atg_is_complex" (gc_tensor @-> returning bool) + let stubs_is_conj = foreign "atg_is_conj" (gc_tensor @-> returning bool) + let stubs_is_distributed = foreign "atg_is_distributed" (gc_tensor @-> returning bool) + + let stubs_is_floating_point = + foreign "atg_is_floating_point" (gc_tensor @-> returning bool) + ;; + + let stubs_is_inference = foreign "atg_is_inference" (gc_tensor @-> returning bool) + let stubs_is_leaf = foreign "atg_is_leaf" (gc_tensor @-> returning bool) + let stubs_is_neg = foreign "atg_is_neg" (gc_tensor @-> returning bool) + let stubs_is_nonzero = foreign "atg_is_nonzero" (gc_tensor @-> returning bool) + let stubs_is_pinned = foreign "atg_is_pinned" (gc_tensor @-> int @-> returning bool) + + let stubs_is_same_size = + foreign "atg_is_same_size" (gc_tensor @-> gc_tensor @-> returning bool) + ;; + + let stubs_is_set_to = + foreign "atg_is_set_to" (gc_tensor @-> gc_tensor @-> returning bool) + ;; + + let stubs_is_signed = foreign "atg_is_signed" (gc_tensor @-> returning bool) + + let stubs_is_vulkan_available = + foreign "atg_is_vulkan_available" (void @-> returning bool) + ;; + + let stubs_isclose = + foreign + "atg_isclose" + (gc_tensor @-> gc_tensor @-> double @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_isfinite = foreign "atg_isfinite" (gc_tensor @-> returning raw_tensor) + + let stubs_isin = + foreign "atg_isin" (gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_isin_scalar_tensor = + foreign + "atg_isin_scalar_tensor" + (scalar @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_isin_scalar_tensor_out = + foreign + "atg_isin_scalar_tensor_out" + (gc_tensor @-> scalar @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_isin_tensor_scalar = + foreign + "atg_isin_tensor_scalar" + (gc_tensor @-> scalar @-> int @-> int @-> returning raw_tensor) + ;; +end + +module C13 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_isin_tensor_scalar_out = + foreign + "atg_isin_tensor_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_isin_tensor_tensor_out = + foreign + "atg_isin_tensor_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_isinf = foreign "atg_isinf" (gc_tensor @-> returning raw_tensor) + + let stubs_isinf_out = + foreign "atg_isinf_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_isnan = foreign "atg_isnan" (gc_tensor @-> returning raw_tensor) + + let stubs_isnan_out = + foreign "atg_isnan_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_isneginf = foreign "atg_isneginf" (gc_tensor @-> 
returning raw_tensor) + + let stubs_isneginf_out = + foreign "atg_isneginf_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_isposinf = foreign "atg_isposinf" (gc_tensor @-> returning raw_tensor) + + let stubs_isposinf_out = + foreign "atg_isposinf_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_isreal = foreign "atg_isreal" (gc_tensor @-> returning raw_tensor) + + let stubs_istft = + foreign + "atg_istft" + (gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> gc_tensor + @-> int + @-> int + @-> int + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_kaiser_window = + foreign "atg_kaiser_window" (int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_kaiser_window_beta = + foreign + "atg_kaiser_window_beta" + (int64_t @-> int @-> double @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_kaiser_window_beta_out = + foreign + "atg_kaiser_window_beta_out" + (gc_tensor @-> int64_t @-> int @-> double @-> returning raw_tensor) + ;; + + let stubs_kaiser_window_out = + foreign "atg_kaiser_window_out" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_kaiser_window_periodic = + foreign + "atg_kaiser_window_periodic" + (int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_kaiser_window_periodic_out = + foreign + "atg_kaiser_window_periodic_out" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_kl_div = + foreign + "atg_kl_div" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_kron = foreign "atg_kron" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_kron_out = + foreign "atg_kron_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_kthvalue = + foreign + "atg_kthvalue" + (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> int @-> returning void) + ;; + + let stubs_kthvalue_values = + foreign + "atg_kthvalue_values" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs_l1_loss = + foreign "atg_l1_loss" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_layer_norm = + foreign + "atg_layer_norm" + (gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_lcm = foreign "atg_lcm" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + let stubs_lcm_ = foreign "atg_lcm_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_lcm_out = + foreign "atg_lcm_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ldexp = foreign "atg_ldexp" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_ldexp_ = + foreign "atg_ldexp_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ldexp_out = + foreign + "atg_ldexp_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_le = foreign "atg_le" (gc_tensor @-> scalar @-> returning raw_tensor) + let stubs_le_ = foreign "atg_le_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_le_scalar_out = + foreign + "atg_le_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_le_tensor = + foreign "atg_le_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_le_tensor_ = + foreign "atg_le_tensor_" (gc_tensor 
@-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_le_tensor_out = + foreign + "atg_le_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_leaky_relu = foreign "atg_leaky_relu" (gc_tensor @-> returning raw_tensor) + let stubs_leaky_relu_ = foreign "atg_leaky_relu_" (gc_tensor @-> returning raw_tensor) + + let stubs_leaky_relu_backward = + foreign + "atg_leaky_relu_backward" + (gc_tensor @-> gc_tensor @-> scalar @-> int @-> returning raw_tensor) + ;; + + let stubs_leaky_relu_backward_grad_input = + foreign + "atg_leaky_relu_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> scalar @-> int @-> returning raw_tensor) + ;; + + let stubs_leaky_relu_out = + foreign "atg_leaky_relu_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lerp = + foreign "atg_lerp" (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_lerp_ = + foreign "atg_lerp_" (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_lerp_scalar_out = + foreign + "atg_lerp_scalar_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_lerp_tensor = + foreign + "atg_lerp_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lerp_tensor_ = + foreign + "atg_lerp_tensor_" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lerp_tensor_out = + foreign + "atg_lerp_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_less = foreign "atg_less" (gc_tensor @-> scalar @-> returning raw_tensor) + let stubs_less_ = foreign "atg_less_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_less_equal = + foreign "atg_less_equal" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_less_equal_ = + foreign "atg_less_equal_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_less_equal_scalar_out = + foreign + "atg_less_equal_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_less_equal_tensor = + foreign "atg_less_equal_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_less_equal_tensor_ = + foreign "atg_less_equal_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_less_equal_tensor_out = + foreign + "atg_less_equal_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_less_scalar_out = + foreign + "atg_less_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_less_tensor = + foreign "atg_less_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_less_tensor_ = + foreign "atg_less_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_less_tensor_out = + foreign + "atg_less_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lgamma = foreign "atg_lgamma" (gc_tensor @-> returning raw_tensor) + let stubs_lgamma_ = foreign "atg_lgamma_" (gc_tensor @-> returning raw_tensor) + + let stubs_lgamma_out = + foreign "atg_lgamma_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lift = foreign "atg_lift" (gc_tensor @-> returning raw_tensor) + let stubs_lift_fresh = foreign "atg_lift_fresh" (gc_tensor @-> returning raw_tensor) + + let stubs_lift_fresh_copy = + foreign "atg_lift_fresh_copy" (gc_tensor @-> 
returning raw_tensor) + ;; + + let stubs_lift_fresh_copy_out = + foreign "atg_lift_fresh_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lift_out = + foreign "atg_lift_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_cholesky = + foreign "atg_linalg_cholesky" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_cholesky_ex = + foreign + "atg_linalg_cholesky_ex" + (ptr raw_tensor @-> gc_tensor @-> int @-> int @-> returning void) + ;; + + let stubs_linalg_cholesky_ex_l = + foreign + "atg_linalg_cholesky_ex_l" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning void) + ;; + + let stubs_linalg_cholesky_out = + foreign + "atg_linalg_cholesky_out" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_cond = + foreign "atg_linalg_cond" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_linalg_cond_out = + foreign + "atg_linalg_cond_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_linalg_cond_p_str = + foreign "atg_linalg_cond_p_str" (gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_linalg_cond_p_str_out = + foreign + "atg_linalg_cond_p_str_out" + (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_linalg_cross = + foreign + "atg_linalg_cross" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_linalg_cross_out = + foreign + "atg_linalg_cross_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_linalg_det = foreign "atg_linalg_det" (gc_tensor @-> returning raw_tensor) + + let stubs_linalg_det_out = + foreign "atg_linalg_det_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_diagonal = + foreign + "atg_linalg_diagonal" + (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_linalg_eig = + foreign "atg_linalg_eig" (ptr raw_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs_linalg_eig_out = + foreign + "atg_linalg_eig_out" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs_linalg_eigh = + foreign "atg_linalg_eigh" (ptr raw_tensor @-> gc_tensor @-> string @-> returning void) + ;; + + let stubs_linalg_eigh_eigvals = + foreign + "atg_linalg_eigh_eigvals" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> string + @-> returning void) + ;; + + let stubs_linalg_eigvals = + foreign "atg_linalg_eigvals" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_eigvals_out = + foreign "atg_linalg_eigvals_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_eigvalsh = + foreign "atg_linalg_eigvalsh" (gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_linalg_eigvalsh_out = + foreign + "atg_linalg_eigvalsh_out" + (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_linalg_householder_product = + foreign + "atg_linalg_householder_product" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_householder_product_out = + foreign + "atg_linalg_householder_product_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_inv = foreign "atg_linalg_inv" (gc_tensor @-> returning raw_tensor) + + let stubs_linalg_inv_ex = + foreign "atg_linalg_inv_ex" (ptr raw_tensor @-> gc_tensor @-> int @-> 
returning void) + ;; + + let stubs_linalg_inv_ex_inverse = + foreign + "atg_linalg_inv_ex_inverse" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs_linalg_inv_out = + foreign "atg_linalg_inv_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_ldl_factor = + foreign + "atg_linalg_ldl_factor" + (ptr raw_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs_linalg_ldl_factor_ex = + foreign + "atg_linalg_ldl_factor_ex" + (ptr raw_tensor @-> gc_tensor @-> int @-> int @-> returning void) + ;; + + let stubs_linalg_ldl_factor_ex_out = + foreign + "atg_linalg_ldl_factor_ex_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning void) + ;; + + let stubs_linalg_ldl_factor_out = + foreign + "atg_linalg_ldl_factor_out" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs_linalg_ldl_solve = + foreign + "atg_linalg_ldl_solve" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; +end + +module C14 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_linalg_ldl_solve_out = + foreign + "atg_linalg_ldl_solve_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs_linalg_lstsq = + foreign + "atg_linalg_lstsq" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> string + @-> returning void) + ;; + + let stubs_linalg_lstsq_out = + foreign + "atg_linalg_lstsq_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> string + @-> returning void) + ;; + + let stubs_linalg_lu = + foreign "atg_linalg_lu" (ptr raw_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs_linalg_lu_factor = + foreign + "atg_linalg_lu_factor" + (ptr raw_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs_linalg_lu_factor_ex = + foreign + "atg_linalg_lu_factor_ex" + (ptr raw_tensor @-> gc_tensor @-> int @-> int @-> returning void) + ;; + + let stubs_linalg_lu_factor_ex_out = + foreign + "atg_linalg_lu_factor_ex_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning void) + ;; + + let stubs_linalg_lu_factor_out = + foreign + "atg_linalg_lu_factor_out" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs_linalg_lu_out = + foreign + "atg_linalg_lu_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs_linalg_lu_solve = + foreign + "atg_linalg_lu_solve" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_lu_solve_out = + foreign + "atg_linalg_lu_solve_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_linalg_matmul = + foreign "atg_linalg_matmul" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_matmul_out = + foreign + "atg_linalg_matmul_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_exp = + foreign "atg_linalg_matrix_exp" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_exp_out = + foreign "atg_linalg_matrix_exp_out" (gc_tensor 
@-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_power = + foreign "atg_linalg_matrix_power" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_power_out = + foreign + "atg_linalg_matrix_power_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_rank = + foreign + "atg_linalg_matrix_rank" + (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_rank_atol_rtol_float = + foreign + "atg_linalg_matrix_rank_atol_rtol_float" + (gc_tensor @-> double @-> int @-> double @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_rank_atol_rtol_float_out = + foreign + "atg_linalg_matrix_rank_atol_rtol_float_out" + (gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> double + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_rank_atol_rtol_tensor = + foreign + "atg_linalg_matrix_rank_atol_rtol_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_rank_atol_rtol_tensor_out = + foreign + "atg_linalg_matrix_rank_atol_rtol_tensor_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_rank_out = + foreign + "atg_linalg_matrix_rank_out" + (gc_tensor @-> gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_rank_out_tol_tensor = + foreign + "atg_linalg_matrix_rank_out_tol_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_matrix_rank_tol_tensor = + foreign + "atg_linalg_matrix_rank_tol_tensor" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_multi_dot = + foreign "atg_linalg_multi_dot" (ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_multi_dot_out = + foreign + "atg_linalg_multi_dot_out" + (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_pinv = + foreign "atg_linalg_pinv" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_pinv_atol_rtol_float = + foreign + "atg_linalg_pinv_atol_rtol_float" + (gc_tensor @-> double @-> int @-> double @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_pinv_atol_rtol_float_out = + foreign + "atg_linalg_pinv_atol_rtol_float_out" + (gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> double + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_linalg_pinv_atol_rtol_tensor = + foreign + "atg_linalg_pinv_atol_rtol_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_pinv_atol_rtol_tensor_out = + foreign + "atg_linalg_pinv_atol_rtol_tensor_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs_linalg_pinv_out = + foreign + "atg_linalg_pinv_out" + (gc_tensor @-> gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_pinv_out_rcond_tensor = + foreign + "atg_linalg_pinv_out_rcond_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_pinv_rcond_tensor = + foreign + "atg_linalg_pinv_rcond_tensor" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_qr = + foreign "atg_linalg_qr" (ptr raw_tensor @-> gc_tensor @-> string @-> returning void) + ;; + + let stubs_linalg_qr_out = + 
foreign + "atg_linalg_qr_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> string + @-> returning void) + ;; + + let stubs_linalg_slogdet = + foreign "atg_linalg_slogdet" (ptr raw_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs_linalg_slogdet_out = + foreign + "atg_linalg_slogdet_out" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs_linalg_solve = + foreign "atg_linalg_solve" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_solve_ex = + foreign + "atg_linalg_solve_ex" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning void) + ;; + + let stubs_linalg_solve_ex_out = + foreign + "atg_linalg_solve_ex_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning void) + ;; + + let stubs_linalg_solve_out = + foreign + "atg_linalg_solve_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_solve_triangular = + foreign + "atg_linalg_solve_triangular" + (gc_tensor @-> gc_tensor @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_solve_triangular_out = + foreign + "atg_linalg_solve_triangular_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_linalg_svd = + foreign + "atg_linalg_svd" + (ptr raw_tensor @-> gc_tensor @-> int @-> string @-> returning void) + ;; + + let stubs_linalg_svd_u = + foreign + "atg_linalg_svd_u" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> string + @-> returning void) + ;; + + let stubs_linalg_svdvals = + foreign "atg_linalg_svdvals" (gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_linalg_svdvals_out = + foreign + "atg_linalg_svdvals_out" + (gc_tensor @-> gc_tensor @-> string @-> returning raw_tensor) + ;; + + let stubs_linalg_tensorinv = + foreign "atg_linalg_tensorinv" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_linalg_tensorinv_out = + foreign + "atg_linalg_tensorinv_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_linalg_tensorsolve = + foreign + "atg_linalg_tensorsolve" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_tensorsolve_out = + foreign + "atg_linalg_tensorsolve_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_linalg_vander = + foreign "atg_linalg_vander" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_linalg_vecdot = + foreign + "atg_linalg_vecdot" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_linalg_vecdot_out = + foreign + "atg_linalg_vecdot_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_linear = + foreign "atg_linear" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linear_out = + foreign + "atg_linear_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_linspace = + foreign + "atg_linspace" + (scalar @-> scalar @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_linspace_out = + foreign + "atg_linspace_out" + (gc_tensor @-> scalar @-> scalar @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_log = foreign 
"atg_log" (gc_tensor @-> returning raw_tensor) + let stubs_log10 = foreign "atg_log10" (gc_tensor @-> returning raw_tensor) + let stubs_log10_ = foreign "atg_log10_" (gc_tensor @-> returning raw_tensor) + + let stubs_log10_out = + foreign "atg_log10_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_log1p = foreign "atg_log1p" (gc_tensor @-> returning raw_tensor) + let stubs_log1p_ = foreign "atg_log1p_" (gc_tensor @-> returning raw_tensor) + + let stubs_log1p_out = + foreign "atg_log1p_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_log2 = foreign "atg_log2" (gc_tensor @-> returning raw_tensor) + let stubs_log2_ = foreign "atg_log2_" (gc_tensor @-> returning raw_tensor) + + let stubs_log2_out = + foreign "atg_log2_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_log_ = foreign "atg_log_" (gc_tensor @-> returning raw_tensor) + + let stubs_log_normal = + foreign "atg_log_normal" (gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_log_normal_ = + foreign "atg_log_normal_" (gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_log_normal_out = + foreign + "atg_log_normal_out" + (gc_tensor @-> gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_log_out = + foreign "atg_log_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_log_sigmoid = foreign "atg_log_sigmoid" (gc_tensor @-> returning raw_tensor) + + let stubs_log_sigmoid_backward = + foreign + "atg_log_sigmoid_backward" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_log_sigmoid_backward_grad_input = + foreign + "atg_log_sigmoid_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_log_sigmoid_out = + foreign "atg_log_sigmoid_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_log_softmax = + foreign "atg_log_softmax" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_log_softmax_int_out = + foreign + "atg_log_softmax_int_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_logaddexp = + foreign "atg_logaddexp" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logaddexp2 = + foreign "atg_logaddexp2" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logaddexp2_out = + foreign + "atg_logaddexp2_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logaddexp_out = + foreign + "atg_logaddexp_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logcumsumexp = + foreign "atg_logcumsumexp" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_logcumsumexp_out = + foreign + "atg_logcumsumexp_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_logdet = foreign "atg_logdet" (gc_tensor @-> returning raw_tensor) + + let stubs_logical_and = + foreign "atg_logical_and" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logical_and_ = + foreign "atg_logical_and_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logical_and_out = + foreign + "atg_logical_and_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logical_not = foreign "atg_logical_not" (gc_tensor @-> returning raw_tensor) + let stubs_logical_not_ = foreign "atg_logical_not_" (gc_tensor 
@-> returning raw_tensor) + + let stubs_logical_not_out = + foreign "atg_logical_not_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logical_or = + foreign "atg_logical_or" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logical_or_ = + foreign "atg_logical_or_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logical_or_out = + foreign + "atg_logical_or_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logical_xor = + foreign "atg_logical_xor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logical_xor_ = + foreign "atg_logical_xor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_logical_xor_out = + foreign + "atg_logical_xor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; +end + +module C15 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_logit = + foreign "atg_logit" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_logit_ = + foreign "atg_logit_" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_logit_backward = + foreign + "atg_logit_backward" + (gc_tensor @-> gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_logit_backward_grad_input = + foreign + "atg_logit_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_logit_out = + foreign + "atg_logit_out" + (gc_tensor @-> gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_logspace = + foreign + "atg_logspace" + (scalar @-> scalar @-> int64_t @-> double @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_logspace_out = + foreign + "atg_logspace_out" + (gc_tensor @-> scalar @-> scalar @-> int64_t @-> double @-> returning raw_tensor) + ;; + + let stubs_logsumexp = + foreign + "atg_logsumexp" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_logsumexp_out = + foreign + "atg_logsumexp_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_lstm = + foreign + "atg_lstm" + (ptr raw_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs_lstm_cell = + foreign + "atg_lstm_cell" + (ptr raw_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning void) + ;; + + let stubs_lstm_data = + foreign + "atg_lstm_data" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> returning void) + ;; + + let stubs_lstm_mps_backward = + foreign + "atg_lstm_mps_backward" + (gc_tensor + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs_lt = foreign "atg_lt" (gc_tensor @-> scalar @-> returning raw_tensor) + let stubs_lt_ = foreign "atg_lt_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_lt_scalar_out = + foreign + "atg_lt_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> 
returning raw_tensor) + ;; + + let stubs_lt_tensor = + foreign "atg_lt_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lt_tensor_ = + foreign "atg_lt_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lt_tensor_out = + foreign + "atg_lt_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lu_solve = + foreign "atg_lu_solve" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lu_solve_out = + foreign + "atg_lu_solve_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_lu_unpack = + foreign + "atg_lu_unpack" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning void) + ;; + + let stubs_lu_unpack_out = + foreign + "atg_lu_unpack_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning void) + ;; + + let stubs_margin_ranking_loss = + foreign + "atg_margin_ranking_loss" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_masked_fill = + foreign "atg_masked_fill" (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_masked_fill_ = + foreign + "atg_masked_fill_" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_masked_fill_scalar_out = + foreign + "atg_masked_fill_scalar_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_masked_fill_tensor = + foreign + "atg_masked_fill_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_masked_fill_tensor_ = + foreign + "atg_masked_fill_tensor_" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_masked_fill_tensor_out = + foreign + "atg_masked_fill_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_masked_scatter = + foreign + "atg_masked_scatter" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_masked_scatter_ = + foreign + "atg_masked_scatter_" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_masked_scatter_out = + foreign + "atg_masked_scatter_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_masked_select = + foreign "atg_masked_select" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_masked_select_backward = + foreign + "atg_masked_select_backward" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_masked_select_out = + foreign + "atg_masked_select_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_matmul = + foreign "atg_matmul" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_matmul_out = + foreign + "atg_matmul_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_matrix_exp = foreign "atg_matrix_exp" (gc_tensor @-> returning raw_tensor) + + let stubs_matrix_exp_backward = + foreign "atg_matrix_exp_backward" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_matrix_h = foreign "atg_matrix_h" (gc_tensor @-> returning raw_tensor) + + let stubs_matrix_power = + foreign "atg_matrix_power" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let 
stubs_matrix_power_out = + foreign + "atg_matrix_power_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_max = foreign "atg_max" (gc_tensor @-> returning raw_tensor) + + let stubs_max_dim = + foreign + "atg_max_dim" + (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int @-> returning void) + ;; + + let stubs_max_dim_max = + foreign + "atg_max_dim_max" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs_max_other = + foreign "atg_max_other" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_max_out = + foreign "atg_max_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_max_pool1d = + foreign + "atg_max_pool1d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_max_pool1d_with_indices = + foreign + "atg_max_pool1d_with_indices" + (ptr raw_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs_max_pool2d = + foreign + "atg_max_pool2d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_max_pool2d_backward = + foreign + "atg_max_pool2d_backward" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_max_pool2d_backward_out = + foreign + "atg_max_pool2d_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_max_pool2d_with_indices = + foreign + "atg_max_pool2d_with_indices" + (ptr raw_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs_max_pool2d_with_indices_backward = + foreign + "atg_max_pool2d_with_indices_backward" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_max_pool2d_with_indices_backward_grad_input = + foreign + "atg_max_pool2d_with_indices_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_max_pool2d_with_indices_out = + foreign + "atg_max_pool2d_with_indices_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs_max_pool3d = + foreign + "atg_max_pool3d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_max_pool3d_with_indices = + foreign + "atg_max_pool3d_with_indices" + (ptr raw_tensor + @-> gc_tensor + @-> ptr int64_t + 
@-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs_max_pool3d_with_indices_backward = + foreign + "atg_max_pool3d_with_indices_backward" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_max_pool3d_with_indices_backward_grad_input = + foreign + "atg_max_pool3d_with_indices_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_max_pool3d_with_indices_out = + foreign + "atg_max_pool3d_with_indices_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning void) + ;; + + let stubs_max_unary_out = + foreign "atg_max_unary_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_max_unpool2d = + foreign + "atg_max_unpool2d" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_max_unpool2d_out = + foreign + "atg_max_unpool2d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_max_unpool3d = + foreign + "atg_max_unpool3d" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_max_unpool3d_out = + foreign + "atg_max_unpool3d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_maximum = + foreign "atg_maximum" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_maximum_out = + foreign + "atg_maximum_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mean = foreign "atg_mean" (gc_tensor @-> int @-> returning raw_tensor) + + let stubs_mean_dim = + foreign + "atg_mean_dim" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_mean_out = + foreign + "atg_mean_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_median = foreign "atg_median" (gc_tensor @-> returning raw_tensor) + + let stubs_median_dim = + foreign + "atg_median_dim" + (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int @-> returning void) + ;; + + let stubs_median_dim_values = + foreign + "atg_median_dim_values" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs_median_out = + foreign "atg_median_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_meshgrid = + foreign "atg_meshgrid" (ptr gc_tensor @-> int @-> returning (ptr raw_tensor)) + ;; + + let stubs_meshgrid_indexing = + foreign + "atg_meshgrid_indexing" + (ptr gc_tensor @-> int @-> string @-> returning (ptr raw_tensor)) + ;; + + let stubs_mh = foreign "atg_mh" (gc_tensor @-> returning raw_tensor) + let stubs_min = foreign "atg_min" (gc_tensor @-> returning raw_tensor) + + let stubs_min_dim = + foreign + "atg_min_dim" + (ptr raw_tensor @-> gc_tensor @-> 
int64_t @-> int @-> returning void) + ;; + + let stubs_min_dim_min = + foreign + "atg_min_dim_min" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs_min_other = + foreign "atg_min_other" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_min_out = + foreign "atg_min_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_min_unary_out = + foreign "atg_min_unary_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_minimum = + foreign "atg_minimum" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_minimum_out = + foreign + "atg_minimum_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_miopen_batch_norm = + foreign + "atg_miopen_batch_norm" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs_miopen_batch_norm_backward = + foreign + "atg_miopen_batch_norm_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning void) + ;; + + let stubs_miopen_batch_norm_backward_out = + foreign + "atg_miopen_batch_norm_backward_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning void) + ;; + + let stubs_miopen_batch_norm_out = + foreign + "atg_miopen_batch_norm_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs_miopen_convolution = + foreign + "atg_miopen_convolution" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_miopen_convolution_add_relu = + foreign + "atg_miopen_convolution_add_relu" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_miopen_convolution_out = + foreign + "atg_miopen_convolution_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_miopen_convolution_relu = + foreign + "atg_miopen_convolution_relu" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_miopen_convolution_transpose = + foreign + "atg_miopen_convolution_transpose" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_miopen_convolution_transpose_out = + foreign + "atg_miopen_convolution_transpose_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> 
int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_miopen_depthwise_convolution = + foreign + "atg_miopen_depthwise_convolution" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_miopen_depthwise_convolution_out = + foreign + "atg_miopen_depthwise_convolution_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_miopen_rnn = + foreign + "atg_miopen_rnn" + (ptr raw_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> double + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning void) + ;; +end + +module C16 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_miopen_rnn_out = + foreign + "atg_miopen_rnn_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> double + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> returning void) + ;; + + let stubs_mish = foreign "atg_mish" (gc_tensor @-> returning raw_tensor) + let stubs_mish_ = foreign "atg_mish_" (gc_tensor @-> returning raw_tensor) + + let stubs_mish_backward = + foreign "atg_mish_backward" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mish_out = + foreign "atg_mish_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mkldnn_adaptive_avg_pool2d = + foreign + "atg_mkldnn_adaptive_avg_pool2d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_mkldnn_adaptive_avg_pool2d_backward = + foreign + "atg_mkldnn_adaptive_avg_pool2d_backward" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mkldnn_adaptive_avg_pool2d_backward_out = + foreign + "atg_mkldnn_adaptive_avg_pool2d_backward_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mkldnn_adaptive_avg_pool2d_out = + foreign + "atg_mkldnn_adaptive_avg_pool2d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_mkldnn_convolution = + foreign + "atg_mkldnn_convolution" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_convolution_out = + foreign + "atg_mkldnn_convolution_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_linear = + foreign + "atg_mkldnn_linear" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mkldnn_linear_backward_input = + foreign + "atg_mkldnn_linear_backward_input" + (ptr int64_t @-> int @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mkldnn_linear_backward_input_out = + foreign + "atg_mkldnn_linear_backward_input_out" + (gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + 
@-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_linear_backward_weights = + foreign + "atg_mkldnn_linear_backward_weights" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs_mkldnn_linear_backward_weights_out = + foreign + "atg_mkldnn_linear_backward_weights_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs_mkldnn_linear_out = + foreign + "atg_mkldnn_linear_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mkldnn_max_pool2d = + foreign + "atg_mkldnn_max_pool2d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_max_pool2d_backward = + foreign + "atg_mkldnn_max_pool2d_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_max_pool2d_backward_out = + foreign + "atg_mkldnn_max_pool2d_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_max_pool2d_out = + foreign + "atg_mkldnn_max_pool2d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_max_pool3d = + foreign + "atg_mkldnn_max_pool3d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_max_pool3d_backward = + foreign + "atg_mkldnn_max_pool3d_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_max_pool3d_backward_out = + foreign + "atg_mkldnn_max_pool3d_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_max_pool3d_out = + foreign + "atg_mkldnn_max_pool3d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_reorder_conv2d_weight = + foreign + "atg_mkldnn_reorder_conv2d_weight" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_reorder_conv2d_weight_out = + foreign + "atg_mkldnn_reorder_conv2d_weight_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_reorder_conv3d_weight = + foreign + "atg_mkldnn_reorder_conv3d_weight" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr 
int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_reorder_conv3d_weight_out = + foreign + "atg_mkldnn_reorder_conv3d_weight_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_mkldnn_rnn_layer = + foreign + "atg_mkldnn_rnn_layer" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs_mkldnn_rnn_layer_backward = + foreign + "atg_mkldnn_rnn_layer_backward" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> gc_tensor + @-> returning void) + ;; + + let stubs_mkldnn_rnn_layer_backward_out = + foreign + "atg_mkldnn_rnn_layer_backward_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> gc_tensor + @-> returning void) + ;; + + let stubs_mkldnn_rnn_layer_out = + foreign + "atg_mkldnn_rnn_layer_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> int + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs_mm = foreign "atg_mm" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_mm_out = + foreign "atg_mm_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mode = + foreign + "atg_mode" + (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int @-> returning void) + ;; + + let stubs_mode_values = + foreign + "atg_mode_values" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs_moveaxis = + foreign + "atg_moveaxis" + (gc_tensor @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_moveaxis_int = + foreign "atg_moveaxis_int" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_movedim = + foreign + "atg_movedim" + (gc_tensor @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_movedim_int = + foreign "atg_movedim_int" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_mse_loss = + foreign "atg_mse_loss" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_mse_loss_backward = + foreign + "atg_mse_loss_backward" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_mse_loss_backward_grad_input = + foreign + "atg_mse_loss_backward_grad_input" + 
(gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_mse_loss_out = + foreign + "atg_mse_loss_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_msort = foreign "atg_msort" (gc_tensor @-> returning raw_tensor) + + let stubs_msort_out = + foreign "atg_msort_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mt = foreign "atg_mt" (gc_tensor @-> returning raw_tensor) + let stubs_mul = foreign "atg_mul" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + let stubs_mul_ = foreign "atg_mul_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_mul_out = + foreign "atg_mul_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mul_scalar = + foreign "atg_mul_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_mul_scalar_ = + foreign "atg_mul_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_mul_scalar_out = + foreign + "atg_mul_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_multi_margin_loss_backward = + foreign + "atg_multi_margin_loss_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_multi_margin_loss_backward_grad_input = + foreign + "atg_multi_margin_loss_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_multilabel_margin_loss = + foreign + "atg_multilabel_margin_loss" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_multilabel_margin_loss_backward = + foreign + "atg_multilabel_margin_loss_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_multilabel_margin_loss_backward_grad_input = + foreign + "atg_multilabel_margin_loss_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_multilabel_margin_loss_out = + foreign + "atg_multilabel_margin_loss_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_multinomial = + foreign "atg_multinomial" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_multinomial_out = + foreign + "atg_multinomial_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_multiply = + foreign "atg_multiply" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_multiply_ = + foreign "atg_multiply_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_multiply_out = + foreign + "atg_multiply_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_multiply_scalar = + foreign "atg_multiply_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_multiply_scalar_ = + foreign "atg_multiply_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_mv = foreign "atg_mv" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_mv_out = + foreign "atg_mv_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_mvlgamma = + foreign "atg_mvlgamma" (gc_tensor @-> int64_t @-> returning raw_tensor) + 
;; + + let stubs_mvlgamma_ = + foreign "atg_mvlgamma_" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_mvlgamma_out = + foreign + "atg_mvlgamma_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_nan_to_num = + foreign + "atg_nan_to_num" + (gc_tensor + @-> double + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_nan_to_num_ = + foreign + "atg_nan_to_num_" + (gc_tensor + @-> double + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_nan_to_num_out = + foreign + "atg_nan_to_num_out" + (gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_nanmean = + foreign + "atg_nanmean" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_nanmean_out = + foreign + "atg_nanmean_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_nanmedian = foreign "atg_nanmedian" (gc_tensor @-> returning raw_tensor) + + let stubs_nanmedian_dim = + foreign + "atg_nanmedian_dim" + (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int @-> returning void) + ;; + + let stubs_nanmedian_dim_values = + foreign + "atg_nanmedian_dim_values" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs_nanmedian_out = + foreign "atg_nanmedian_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_nanquantile = + foreign + "atg_nanquantile" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_nanquantile_out = + foreign + "atg_nanquantile_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_nanquantile_scalar = + foreign + "atg_nanquantile_scalar" + (gc_tensor + @-> double + @-> int64_t + @-> int + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_nanquantile_scalar_out = + foreign + "atg_nanquantile_scalar_out" + (gc_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> int + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_nansum = + foreign + "atg_nansum" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_nansum_out = + foreign + "atg_nansum_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_narrow = + foreign + "atg_narrow" + (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_narrow_copy = + foreign + "atg_narrow_copy" + (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_narrow_copy_out = + foreign + "atg_narrow_copy_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_narrow_tensor = + foreign + "atg_narrow_tensor" + (gc_tensor @-> int64_t @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_native_batch_norm = + foreign + "atg_native_batch_norm" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs_native_batch_norm_out = + foreign + "atg_native_batch_norm_out" + (ptr 
raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> double + @-> double + @-> returning void) + ;; + + let stubs_native_channel_shuffle = + foreign "atg_native_channel_shuffle" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_native_dropout = + foreign + "atg_native_dropout" + (ptr raw_tensor @-> gc_tensor @-> double @-> int @-> returning void) + ;; + + let stubs_native_dropout_backward = + foreign + "atg_native_dropout_backward" + (gc_tensor @-> gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_native_dropout_backward_out = + foreign + "atg_native_dropout_backward_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_native_dropout_out = + foreign + "atg_native_dropout_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> int + @-> returning void) + ;; + + let stubs_native_group_norm = + foreign + "atg_native_group_norm" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> double + @-> returning void) + ;; + + let stubs_native_group_norm_out = + foreign + "atg_native_group_norm_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> double + @-> returning void) + ;; +end + +module C17 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_native_layer_norm = + foreign + "atg_native_layer_norm" + (ptr raw_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning void) + ;; + + let stubs_native_layer_norm_out = + foreign + "atg_native_layer_norm_out" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> gc_tensor + @-> double + @-> returning void) + ;; + + let stubs_native_norm = foreign "atg_native_norm" (gc_tensor @-> returning raw_tensor) + + let stubs_native_norm_out = + foreign "atg_native_norm_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_native_norm_scalaropt_dim_dtype = + foreign + "atg_native_norm_scalaropt_dim_dtype" + (gc_tensor + @-> scalar + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_native_norm_scalaropt_dim_dtype_out = + foreign + "atg_native_norm_scalaropt_dim_dtype_out" + (gc_tensor + @-> gc_tensor + @-> scalar + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_ne = foreign "atg_ne" (gc_tensor @-> scalar @-> returning raw_tensor) + let stubs_ne_ = foreign "atg_ne_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_ne_scalar_out = + foreign + "atg_ne_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_ne_tensor = + foreign "atg_ne_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ne_tensor_ = + foreign "atg_ne_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ne_tensor_out = + foreign + "atg_ne_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_neg = foreign "atg_neg" (gc_tensor @-> returning raw_tensor) + let stubs_neg_ = foreign "atg_neg_" (gc_tensor @-> returning raw_tensor) + + let stubs_neg_out = + foreign "atg_neg_out" (gc_tensor @-> 
gc_tensor @-> returning raw_tensor) + ;; + + let stubs_negative = foreign "atg_negative" (gc_tensor @-> returning raw_tensor) + let stubs_negative_ = foreign "atg_negative_" (gc_tensor @-> returning raw_tensor) + + let stubs_negative_out = + foreign "atg_negative_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_nested_to_padded_tensor = + foreign + "atg_nested_to_padded_tensor" + (gc_tensor @-> double @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_new_empty = + foreign + "atg_new_empty" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_new_empty_out = + foreign + "atg_new_empty_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_new_empty_strided = + foreign + "atg_new_empty_strided" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_new_empty_strided_out = + foreign + "atg_new_empty_strided_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_new_full = + foreign + "atg_new_full" + (gc_tensor + @-> ptr int64_t + @-> int + @-> scalar + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_new_full_out = + foreign + "atg_new_full_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> scalar @-> returning raw_tensor) + ;; + + let stubs_new_ones = + foreign + "atg_new_ones" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_new_ones_out = + foreign + "atg_new_ones_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_new_zeros = + foreign + "atg_new_zeros" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_new_zeros_out = + foreign + "atg_new_zeros_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_nextafter = + foreign "atg_nextafter" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_nextafter_ = + foreign "atg_nextafter_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_nextafter_out = + foreign + "atg_nextafter_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_nll_loss = + foreign + "atg_nll_loss" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_nll_loss2d = + foreign + "atg_nll_loss2d" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_nll_loss2d_backward = + foreign + "atg_nll_loss2d_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_nll_loss2d_backward_grad_input = + foreign + "atg_nll_loss2d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_nll_loss2d_out = + foreign + "atg_nll_loss2d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_nll_loss_backward = + foreign + "atg_nll_loss_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> returning raw_tensor) + 
;; + + let stubs_nll_loss_backward_grad_input = + foreign + "atg_nll_loss_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_nll_loss_nd = + foreign + "atg_nll_loss_nd" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_nll_loss_out = + foreign + "atg_nll_loss_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_nonzero = foreign "atg_nonzero" (gc_tensor @-> returning raw_tensor) + + let stubs_nonzero_numpy = + foreign "atg_nonzero_numpy" (gc_tensor @-> returning (ptr raw_tensor)) + ;; + + let stubs_nonzero_out = + foreign "atg_nonzero_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_nonzero_static = + foreign + "atg_nonzero_static" + (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_nonzero_static_out = + foreign + "atg_nonzero_static_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_norm = foreign "atg_norm" (gc_tensor @-> returning raw_tensor) + + let stubs_norm_dtype_out = + foreign + "atg_norm_dtype_out" + (gc_tensor + @-> gc_tensor + @-> scalar + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_norm_except_dim = + foreign + "atg_norm_except_dim" + (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_norm_out = + foreign + "atg_norm_out" + (gc_tensor + @-> gc_tensor + @-> scalar + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_norm_scalar_out = + foreign "atg_norm_scalar_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_norm_scalaropt_dim = + foreign + "atg_norm_scalaropt_dim" + (gc_tensor @-> scalar @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_norm_scalaropt_dim_dtype = + foreign + "atg_norm_scalaropt_dim_dtype" + (gc_tensor + @-> scalar + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_norm_scalaropt_dtype = + foreign + "atg_norm_scalaropt_dtype" + (gc_tensor @-> scalar @-> int @-> returning raw_tensor) + ;; + + let stubs_norm_scalaropt_dtype_out = + foreign + "atg_norm_scalaropt_dtype_out" + (gc_tensor @-> gc_tensor @-> scalar @-> int @-> returning raw_tensor) + ;; + + let stubs_normal_ = + foreign "atg_normal_" (gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_normal_functional = + foreign + "atg_normal_functional" + (gc_tensor @-> double @-> double @-> returning raw_tensor) + ;; + + let stubs_not_equal = + foreign "atg_not_equal" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_not_equal_ = + foreign "atg_not_equal_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_not_equal_scalar_out = + foreign + "atg_not_equal_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_not_equal_tensor = + foreign "atg_not_equal_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_not_equal_tensor_ = + foreign "atg_not_equal_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_not_equal_tensor_out = + foreign + "atg_not_equal_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_nuclear_norm = + foreign 
"atg_nuclear_norm" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_nuclear_norm_dim = + foreign + "atg_nuclear_norm_dim" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_nuclear_norm_dim_out = + foreign + "atg_nuclear_norm_dim_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_nuclear_norm_out = + foreign + "atg_nuclear_norm_out" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_numpy_t = foreign "atg_numpy_t" (gc_tensor @-> returning raw_tensor) + + let stubs_one_hot = + foreign "atg_one_hot" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_ones = + foreign "atg_ones" (ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_ones_like = foreign "atg_ones_like" (gc_tensor @-> returning raw_tensor) + + let stubs_ones_like_out = + foreign "atg_ones_like_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ones_out = + foreign "atg_ones_out" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_orgqr = foreign "atg_orgqr" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_orgqr_out = + foreign + "atg_orgqr_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_ormqr = + foreign + "atg_ormqr" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_ormqr_out = + foreign + "atg_ormqr_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_outer = foreign "atg_outer" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_outer_out = + foreign + "atg_outer_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_output_nr = foreign "atg_output_nr" (gc_tensor @-> returning int64_t) + + let stubs_pad = + foreign + "atg_pad" + (gc_tensor + @-> ptr int64_t + @-> int + @-> string + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_pad_sequence = + foreign + "atg_pad_sequence" + (ptr gc_tensor @-> int @-> int @-> double @-> returning raw_tensor) + ;; + + let stubs_pairwise_distance = + foreign + "atg_pairwise_distance" + (gc_tensor @-> gc_tensor @-> double @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_pdist = foreign "atg_pdist" (gc_tensor @-> double @-> returning raw_tensor) + + let stubs_permute = + foreign "atg_permute" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_permute_copy = + foreign "atg_permute_copy" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_permute_copy_out = + foreign + "atg_permute_copy_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_pin_memory = + foreign "atg_pin_memory" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_pinverse = + foreign "atg_pinverse" (gc_tensor @-> double @-> returning raw_tensor) + ;; + + let stubs_pixel_shuffle = + foreign "atg_pixel_shuffle" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_pixel_shuffle_out = + foreign + "atg_pixel_shuffle_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_pixel_unshuffle = + foreign "atg_pixel_unshuffle" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_pixel_unshuffle_out = + foreign + "atg_pixel_unshuffle_out" + (gc_tensor @-> gc_tensor 
@-> int64_t @-> returning raw_tensor) + ;; + + let stubs_poisson = foreign "atg_poisson" (gc_tensor @-> returning raw_tensor) + + let stubs_poisson_nll_loss = + foreign + "atg_poisson_nll_loss" + (gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> double + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_poisson_out = + foreign "atg_poisson_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_polar = foreign "atg_polar" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_polar_out = + foreign + "atg_polar_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_polygamma = + foreign "atg_polygamma" (int64_t @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_polygamma_ = + foreign "atg_polygamma_" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; +end + +module C18 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_polygamma_out = + foreign + "atg_polygamma_out" + (gc_tensor @-> int64_t @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_positive = foreign "atg_positive" (gc_tensor @-> returning raw_tensor) + let stubs_pow = foreign "atg_pow" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + let stubs_pow_ = foreign "atg_pow_" (gc_tensor @-> scalar @-> returning raw_tensor) + + let stubs_pow_scalar = + foreign "atg_pow_scalar" (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_pow_scalar_out = + foreign + "atg_pow_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_pow_tensor_ = + foreign "atg_pow_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_pow_tensor_scalar = + foreign "atg_pow_tensor_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_pow_tensor_scalar_out = + foreign + "atg_pow_tensor_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_pow_tensor_tensor_out = + foreign + "atg_pow_tensor_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_prelu = foreign "atg_prelu" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + let stubs_prod = foreign "atg_prod" (gc_tensor @-> int @-> returning raw_tensor) + + let stubs_prod_dim_int = + foreign + "atg_prod_dim_int" + (gc_tensor @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_prod_int_out = + foreign + "atg_prod_int_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_prod_out = + foreign "atg_prod_out" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_put = + foreign + "atg_put" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_put_ = + foreign + "atg_put_" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_put_out = + foreign + "atg_put_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs_q_per_channel_axis = + foreign "atg_q_per_channel_axis" (gc_tensor @-> returning int64_t) + ;; + + let stubs_q_per_channel_scales = + foreign "atg_q_per_channel_scales" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_q_per_channel_scales_out = + foreign + "atg_q_per_channel_scales_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_q_per_channel_zero_points = + foreign "atg_q_per_channel_zero_points" (gc_tensor @-> returning raw_tensor) + ;; + + let 
stubs_q_per_channel_zero_points_out = + foreign + "atg_q_per_channel_zero_points_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_q_scale = foreign "atg_q_scale" (gc_tensor @-> returning double) + let stubs_q_zero_point = foreign "atg_q_zero_point" (gc_tensor @-> returning int64_t) + let stubs_qr = foreign "atg_qr" (ptr raw_tensor @-> gc_tensor @-> int @-> returning void) + + let stubs_qr_q = + foreign + "atg_qr_q" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning void) + ;; + + let stubs_quantile = + foreign + "atg_quantile" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_quantile_out = + foreign + "atg_quantile_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_quantile_scalar = + foreign + "atg_quantile_scalar" + (gc_tensor + @-> double + @-> int64_t + @-> int + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_quantile_scalar_out = + foreign + "atg_quantile_scalar_out" + (gc_tensor + @-> gc_tensor + @-> double + @-> int64_t + @-> int + @-> int + @-> string + @-> returning raw_tensor) + ;; + + let stubs_quantize_per_channel = + foreign + "atg_quantize_per_channel" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_quantize_per_channel_out = + foreign + "atg_quantize_per_channel_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_quantize_per_tensor = + foreign + "atg_quantize_per_tensor" + (gc_tensor @-> double @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_quantize_per_tensor_dynamic = + foreign + "atg_quantize_per_tensor_dynamic" + (gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_quantize_per_tensor_dynamic_out = + foreign + "atg_quantize_per_tensor_dynamic_out" + (gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_quantize_per_tensor_out = + foreign + "atg_quantize_per_tensor_out" + (gc_tensor @-> gc_tensor @-> double @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_quantize_per_tensor_tensor_qparams = + foreign + "atg_quantize_per_tensor_tensor_qparams" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_quantize_per_tensor_tensor_qparams_out = + foreign + "atg_quantize_per_tensor_tensor_qparams_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning raw_tensor) + ;; + + let stubs_quantize_per_tensor_tensors = + foreign + "atg_quantize_per_tensor_tensors" + (ptr gc_tensor + @-> int + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning (ptr raw_tensor)) + ;; + + let stubs_quantize_per_tensor_tensors_out = + foreign + "atg_quantize_per_tensor_tensors_out" + (ptr gc_tensor + @-> int + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> gc_tensor + @-> int + @-> returning void) + ;; + + let stubs_quantized_batch_norm = + foreign + "atg_quantized_batch_norm" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_quantized_batch_norm_out = + foreign + "atg_quantized_batch_norm_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> double + @-> double + @-> int64_t + @-> returning 
raw_tensor) + ;; + + let stubs_quantized_gru_cell = + foreign + "atg_quantized_gru_cell" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> scalar + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs_quantized_lstm_cell = + foreign + "atg_quantized_lstm_cell" + (ptr raw_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> scalar + @-> scalar + @-> returning void) + ;; + + let stubs_quantized_max_pool1d = + foreign + "atg_quantized_max_pool1d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_quantized_max_pool1d_out = + foreign + "atg_quantized_max_pool1d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_quantized_max_pool2d = + foreign + "atg_quantized_max_pool2d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_quantized_max_pool2d_out = + foreign + "atg_quantized_max_pool2d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_quantized_max_pool3d = + foreign + "atg_quantized_max_pool3d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_quantized_max_pool3d_out = + foreign + "atg_quantized_max_pool3d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_quantized_rnn_relu_cell = + foreign + "atg_quantized_rnn_relu_cell" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> scalar + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs_quantized_rnn_tanh_cell = + foreign + "atg_quantized_rnn_tanh_cell" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> scalar + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs_rad2deg = foreign "atg_rad2deg" (gc_tensor @-> returning raw_tensor) + let stubs_rad2deg_ = foreign "atg_rad2deg_" (gc_tensor @-> returning raw_tensor) + + let stubs_rad2deg_out = + foreign "atg_rad2deg_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_rand = + foreign "atg_rand" (ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_rand_like = foreign "atg_rand_like" (gc_tensor @-> returning raw_tensor) + + let stubs_rand_like_out = + foreign "atg_rand_like_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_rand_out = + foreign "atg_rand_out" (gc_tensor @-> ptr int64_t @-> int @-> returning 
raw_tensor) + ;; + + let stubs_randint = + foreign + "atg_randint" + (int64_t @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_randint_like = + foreign "atg_randint_like" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_randint_like_low_dtype = + foreign + "atg_randint_like_low_dtype" + (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_randint_like_low_dtype_out = + foreign + "atg_randint_like_low_dtype_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_randint_like_out = + foreign + "atg_randint_like_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_randint_low = + foreign + "atg_randint_low" + (int64_t + @-> int64_t + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_randint_low_out = + foreign + "atg_randint_low_out" + (gc_tensor @-> int64_t @-> int64_t @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_randint_out = + foreign + "atg_randint_out" + (gc_tensor @-> int64_t @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_randn = + foreign "atg_randn" (ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_randn_like = foreign "atg_randn_like" (gc_tensor @-> returning raw_tensor) + + let stubs_randn_like_out = + foreign "atg_randn_like_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_randn_out = + foreign "atg_randn_out" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_random = foreign "atg_random" (gc_tensor @-> returning raw_tensor) + let stubs_random_ = foreign "atg_random_" (gc_tensor @-> returning raw_tensor) + + let stubs_random_from = + foreign + "atg_random_from" + (gc_tensor @-> int64_t @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_random_from_ = + foreign + "atg_random_from_" + (gc_tensor @-> int64_t @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_random_from_out = + foreign + "atg_random_from_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_random_out = + foreign "atg_random_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_random_to = + foreign "atg_random_to" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_random_to_ = + foreign "atg_random_to_" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_random_to_out = + foreign + "atg_random_to_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_randperm = + foreign "atg_randperm" (int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_randperm_out = + foreign "atg_randperm_out" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_range = + foreign "atg_range" (scalar @-> scalar @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_range_out = + foreign "atg_range_out" (gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_range_out_ = + foreign "atg_range_out_" (gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_range_step = + foreign "atg_range_step" (scalar @-> scalar @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_ravel = foreign "atg_ravel" (gc_tensor @-> returning raw_tensor) + let stubs_real = foreign "atg_real" (gc_tensor @-> returning raw_tensor) + let stubs_reciprocal = foreign "atg_reciprocal" (gc_tensor 
@-> returning raw_tensor) + let stubs_reciprocal_ = foreign "atg_reciprocal_" (gc_tensor @-> returning raw_tensor) + + let stubs_reciprocal_out = + foreign "atg_reciprocal_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_reflection_pad1d = + foreign + "atg_reflection_pad1d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_reflection_pad1d_backward = + foreign + "atg_reflection_pad1d_backward" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_reflection_pad1d_backward_grad_input = + foreign + "atg_reflection_pad1d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_reflection_pad1d_out = + foreign + "atg_reflection_pad1d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_reflection_pad2d = + foreign + "atg_reflection_pad2d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_reflection_pad2d_backward = + foreign + "atg_reflection_pad2d_backward" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_reflection_pad2d_backward_grad_input = + foreign + "atg_reflection_pad2d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_reflection_pad2d_out = + foreign + "atg_reflection_pad2d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; +end + +module C19 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_reflection_pad3d = + foreign + "atg_reflection_pad3d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_reflection_pad3d_backward = + foreign + "atg_reflection_pad3d_backward" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_reflection_pad3d_backward_grad_input = + foreign + "atg_reflection_pad3d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_reflection_pad3d_out = + foreign + "atg_reflection_pad3d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_relu = foreign "atg_relu" (gc_tensor @-> returning raw_tensor) + let stubs_relu6 = foreign "atg_relu6" (gc_tensor @-> returning raw_tensor) + let stubs_relu6_ = foreign "atg_relu6_" (gc_tensor @-> returning raw_tensor) + let stubs_relu_ = foreign "atg_relu_" (gc_tensor @-> returning raw_tensor) + + let stubs_relu_out = + foreign "atg_relu_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_remainder = + foreign "atg_remainder" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_remainder_ = + foreign "atg_remainder_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_remainder_scalar_out = + foreign + "atg_remainder_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_remainder_scalar_tensor = + foreign "atg_remainder_scalar_tensor" (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_remainder_scalar_tensor_out = + foreign + "atg_remainder_scalar_tensor_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_remainder_tensor = + foreign "atg_remainder_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_remainder_tensor_ 
= + foreign "atg_remainder_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_remainder_tensor_out = + foreign + "atg_remainder_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_renorm = + foreign + "atg_renorm" + (gc_tensor @-> scalar @-> int64_t @-> scalar @-> returning raw_tensor) + ;; + + let stubs_renorm_ = + foreign + "atg_renorm_" + (gc_tensor @-> scalar @-> int64_t @-> scalar @-> returning raw_tensor) + ;; + + let stubs_renorm_out = + foreign + "atg_renorm_out" + (gc_tensor @-> gc_tensor @-> scalar @-> int64_t @-> scalar @-> returning raw_tensor) + ;; + + let stubs_repeat = + foreign "atg_repeat" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_repeat_interleave = + foreign + "atg_repeat_interleave" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_repeat_interleave_self_int = + foreign + "atg_repeat_interleave_self_int" + (gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_repeat_interleave_self_tensor = + foreign + "atg_repeat_interleave_self_tensor" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_repeat_interleave_tensor_out = + foreign + "atg_repeat_interleave_tensor_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_repeat_out = + foreign + "atg_repeat_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_replication_pad1d = + foreign + "atg_replication_pad1d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_replication_pad1d_backward = + foreign + "atg_replication_pad1d_backward" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_replication_pad1d_backward_grad_input = + foreign + "atg_replication_pad1d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_replication_pad1d_out = + foreign + "atg_replication_pad1d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_replication_pad2d = + foreign + "atg_replication_pad2d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_replication_pad2d_backward = + foreign + "atg_replication_pad2d_backward" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_replication_pad2d_backward_grad_input = + foreign + "atg_replication_pad2d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_replication_pad2d_out = + foreign + "atg_replication_pad2d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_replication_pad3d = + foreign + "atg_replication_pad3d" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_replication_pad3d_backward = + foreign + "atg_replication_pad3d_backward" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_replication_pad3d_backward_grad_input = + foreign + "atg_replication_pad3d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_replication_pad3d_out = + foreign + 
"atg_replication_pad3d_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_requires_grad_ = + foreign "atg_requires_grad_" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_reshape = + foreign "atg_reshape" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_reshape_as = + foreign "atg_reshape_as" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_resize = + foreign "atg_resize" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_resize_ = + foreign "atg_resize_" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_resize_as = + foreign "atg_resize_as" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_resize_as_ = + foreign "atg_resize_as_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_resize_as_out = + foreign + "atg_resize_as_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_resize_as_sparse = + foreign "atg_resize_as_sparse" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_resize_as_sparse_ = + foreign "atg_resize_as_sparse_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_resize_as_sparse_out = + foreign + "atg_resize_as_sparse_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_resize_out = + foreign + "atg_resize_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_resolve_conj = foreign "atg_resolve_conj" (gc_tensor @-> returning raw_tensor) + let stubs_resolve_neg = foreign "atg_resolve_neg" (gc_tensor @-> returning raw_tensor) + let stubs_retains_grad = foreign "atg_retains_grad" (gc_tensor @-> returning bool) + + let stubs_rnn_relu = + foreign + "atg_rnn_relu" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs_rnn_relu_cell = + foreign + "atg_rnn_relu_cell" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_rnn_relu_data = + foreign + "atg_rnn_relu_data" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> returning void) + ;; + + let stubs_rnn_tanh = + foreign + "atg_rnn_tanh" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> int + @-> returning void) + ;; + + let stubs_rnn_tanh_cell = + foreign + "atg_rnn_tanh_cell" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_rnn_tanh_data = + foreign + "atg_rnn_tanh_data" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr gc_tensor + @-> int + @-> int + @-> int64_t + @-> double + @-> int + @-> int + @-> returning void) + ;; + + let stubs_roll = + foreign + "atg_roll" + (gc_tensor @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_roll_out = + foreign + "atg_roll_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_rot90 = + foreign + "atg_rot90" + (gc_tensor @-> int64_t @-> ptr int64_t @-> int @-> returning 
raw_tensor) + ;; + + let stubs_rot90_out = + foreign + "atg_rot90_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_round = foreign "atg_round" (gc_tensor @-> returning raw_tensor) + let stubs_round_ = foreign "atg_round_" (gc_tensor @-> returning raw_tensor) + + let stubs_round_decimals = + foreign "atg_round_decimals" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_round_decimals_ = + foreign "atg_round_decimals_" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_round_decimals_out = + foreign + "atg_round_decimals_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_round_out = + foreign "atg_round_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_row_indices = foreign "atg_row_indices" (gc_tensor @-> returning raw_tensor) + + let stubs_row_indices_copy = + foreign "atg_row_indices_copy" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_row_indices_copy_out = + foreign "atg_row_indices_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_row_stack = + foreign "atg_row_stack" (ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_row_stack_out = + foreign + "atg_row_stack_out" + (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_rrelu = foreign "atg_rrelu" (gc_tensor @-> int @-> returning raw_tensor) + let stubs_rrelu_ = foreign "atg_rrelu_" (gc_tensor @-> int @-> returning raw_tensor) + + let stubs_rrelu_with_noise = + foreign + "atg_rrelu_with_noise" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_rrelu_with_noise_ = + foreign + "atg_rrelu_with_noise_" + (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_rrelu_with_noise_backward = + foreign + "atg_rrelu_with_noise_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_rrelu_with_noise_backward_out = + foreign + "atg_rrelu_with_noise_backward_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_rrelu_with_noise_out = + foreign + "atg_rrelu_with_noise_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_rsqrt = foreign "atg_rsqrt" (gc_tensor @-> returning raw_tensor) + let stubs_rsqrt_ = foreign "atg_rsqrt_" (gc_tensor @-> returning raw_tensor) + + let stubs_rsqrt_out = + foreign "atg_rsqrt_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_rsub = foreign "atg_rsub" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_rsub_scalar = + foreign "atg_rsub_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_rsub_scalar_out = + foreign + "atg_rsub_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_rsub_tensor_out = + foreign + "atg_rsub_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_scalar_tensor = + foreign "atg_scalar_tensor" (scalar @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_scalar_tensor_out = + foreign "atg_scalar_tensor_out" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_scaled_dot_product_attention = + foreign + "atg_scaled_dot_product_attention" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> 
gc_tensor + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_scatter = + foreign + "atg_scatter" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_scatter_ = + foreign + "atg_scatter_" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_scatter_add = + foreign + "atg_scatter_add" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_scatter_add_ = + foreign + "atg_scatter_add_" + (gc_tensor @-> int64_t @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_scatter_add_out = + foreign + "atg_scatter_add_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_scatter_reduce = + foreign + "atg_scatter_reduce" + (gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> string + @-> returning raw_tensor) + ;; + + let stubs_scatter_reduce_ = + foreign + "atg_scatter_reduce_" + (gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> string + @-> returning raw_tensor) + ;; + + let stubs_scatter_reduce_out = + foreign + "atg_scatter_reduce_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> string + @-> returning raw_tensor) + ;; + + let stubs_scatter_src_out = + foreign + "atg_scatter_src_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> gc_tensor + @-> returning raw_tensor) + ;; +end + +module C20 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_scatter_value = + foreign + "atg_scatter_value" + (gc_tensor @-> int64_t @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_scatter_value_ = + foreign + "atg_scatter_value_" + (gc_tensor @-> int64_t @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_scatter_value_out = + foreign + "atg_scatter_value_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs_scatter_value_reduce = + foreign + "atg_scatter_value_reduce" + (gc_tensor @-> int64_t @-> gc_tensor @-> scalar @-> string @-> returning raw_tensor) + ;; + + let stubs_scatter_value_reduce_ = + foreign + "atg_scatter_value_reduce_" + (gc_tensor @-> int64_t @-> gc_tensor @-> scalar @-> string @-> returning raw_tensor) + ;; + + let stubs_scatter_value_reduce_out = + foreign + "atg_scatter_value_reduce_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> gc_tensor + @-> scalar + @-> string + @-> returning raw_tensor) + ;; + + let stubs_searchsorted = + foreign + "atg_searchsorted" + (gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> string + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_searchsorted_scalar = + foreign + "atg_searchsorted_scalar" + (gc_tensor + @-> scalar + @-> int + @-> int + @-> string + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_searchsorted_scalar_out = + foreign + "atg_searchsorted_scalar_out" + (gc_tensor + @-> gc_tensor + @-> scalar + @-> int + @-> int + @-> string + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_searchsorted_tensor_out = + foreign + "atg_searchsorted_tensor_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> string + @-> gc_tensor + @-> returning raw_tensor) + ;; + + let stubs_segment_reduce = + foreign + "atg_segment_reduce" + (gc_tensor + @-> string + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> scalar + @-> 
returning raw_tensor) + ;; + + let stubs_segment_reduce_out = + foreign + "atg_segment_reduce_out" + (gc_tensor + @-> gc_tensor + @-> string + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs_select = + foreign "atg_select" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_select_backward = + foreign + "atg_select_backward" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_select_backward_out = + foreign + "atg_select_backward_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_select_copy = + foreign "atg_select_copy" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_select_copy_int_out = + foreign + "atg_select_copy_int_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_select_scatter = + foreign + "atg_select_scatter" + (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_select_scatter_out = + foreign + "atg_select_scatter_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_selu = foreign "atg_selu" (gc_tensor @-> returning raw_tensor) + let stubs_selu_ = foreign "atg_selu_" (gc_tensor @-> returning raw_tensor) + let stubs_set = foreign "atg_set" (gc_tensor @-> returning raw_tensor) + let stubs_set_ = foreign "atg_set_" (gc_tensor @-> returning raw_tensor) + + let stubs_set_out = + foreign "atg_set_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_set_requires_grad = + foreign "atg_set_requires_grad" (gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_set_source_tensor = + foreign "atg_set_source_tensor" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_set_source_tensor_ = + foreign "atg_set_source_tensor_" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_set_source_tensor_out = + foreign + "atg_set_source_tensor_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_set_source_tensor_storage_offset_ = + foreign + "atg_set_source_tensor_storage_offset_" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_sgn = foreign "atg_sgn" (gc_tensor @-> returning raw_tensor) + let stubs_sgn_ = foreign "atg_sgn_" (gc_tensor @-> returning raw_tensor) + + let stubs_sgn_out = + foreign "atg_sgn_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sigmoid = foreign "atg_sigmoid" (gc_tensor @-> returning raw_tensor) + let stubs_sigmoid_ = foreign "atg_sigmoid_" (gc_tensor @-> returning raw_tensor) + + let stubs_sigmoid_backward = + foreign "atg_sigmoid_backward" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sigmoid_backward_grad_input = + foreign + "atg_sigmoid_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sigmoid_out = + foreign "atg_sigmoid_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sign = foreign "atg_sign" (gc_tensor @-> returning raw_tensor) + let stubs_sign_ = foreign "atg_sign_" (gc_tensor @-> returning raw_tensor) + + let stubs_sign_out = + foreign "atg_sign_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + 
;; + + let stubs_signbit = foreign "atg_signbit" (gc_tensor @-> returning raw_tensor) + + let stubs_signbit_out = + foreign "atg_signbit_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_silu = foreign "atg_silu" (gc_tensor @-> returning raw_tensor) + let stubs_silu_ = foreign "atg_silu_" (gc_tensor @-> returning raw_tensor) + + let stubs_silu_backward = + foreign "atg_silu_backward" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_silu_backward_grad_input = + foreign + "atg_silu_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_silu_out = + foreign "atg_silu_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sin = foreign "atg_sin" (gc_tensor @-> returning raw_tensor) + let stubs_sin_ = foreign "atg_sin_" (gc_tensor @-> returning raw_tensor) + + let stubs_sin_out = + foreign "atg_sin_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sinc = foreign "atg_sinc" (gc_tensor @-> returning raw_tensor) + let stubs_sinc_ = foreign "atg_sinc_" (gc_tensor @-> returning raw_tensor) + + let stubs_sinc_out = + foreign "atg_sinc_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sinh = foreign "atg_sinh" (gc_tensor @-> returning raw_tensor) + let stubs_sinh_ = foreign "atg_sinh_" (gc_tensor @-> returning raw_tensor) + + let stubs_sinh_out = + foreign "atg_sinh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_size = foreign "atg_size" (gc_tensor @-> int64_t @-> returning int64_t) + + let stubs_slice = + foreign + "atg_slice" + (gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_slice_backward = + foreign + "atg_slice_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_slice_backward_out = + foreign + "atg_slice_backward_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_slice_copy = + foreign + "atg_slice_copy" + (gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_slice_copy_tensor_out = + foreign + "atg_slice_copy_tensor_out" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_slice_scatter = + foreign + "atg_slice_scatter" + (gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_slice_scatter_out = + foreign + "atg_slice_scatter_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int64_t + @-> int + @-> int64_t + @-> int + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_slogdet = + foreign "atg_slogdet" (ptr raw_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs_slogdet_out = + foreign + "atg_slogdet_out" + (ptr raw_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning void) + ;; + + let stubs_slow_conv3d = + foreign + "atg_slow_conv3d" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_slow_conv3d_out = + foreign + "atg_slow_conv3d_out" + (gc_tensor + @-> gc_tensor + @-> 
gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_slow_conv_dilated2d = + foreign + "atg_slow_conv_dilated2d" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_slow_conv_dilated2d_out = + foreign + "atg_slow_conv_dilated2d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_slow_conv_dilated3d = + foreign + "atg_slow_conv_dilated3d" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_slow_conv_dilated3d_out = + foreign + "atg_slow_conv_dilated3d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_slow_conv_transpose2d = + foreign + "atg_slow_conv_transpose2d" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_slow_conv_transpose2d_out = + foreign + "atg_slow_conv_transpose2d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_slow_conv_transpose3d = + foreign + "atg_slow_conv_transpose3d" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_slow_conv_transpose3d_out = + foreign + "atg_slow_conv_transpose3d_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> returning raw_tensor) + ;; + + let stubs_smm = foreign "atg_smm" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_smooth_l1_loss = + foreign + "atg_smooth_l1_loss" + (gc_tensor @-> gc_tensor @-> int64_t @-> double @-> returning raw_tensor) + ;; + + let stubs_smooth_l1_loss_backward = + foreign + "atg_smooth_l1_loss_backward" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs_smooth_l1_loss_backward_grad_input = + foreign + "atg_smooth_l1_loss_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs_smooth_l1_loss_out = + foreign + "atg_smooth_l1_loss_out" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> double + @-> returning raw_tensor) + ;; + + let stubs_soft_margin_loss = + foreign + "atg_soft_margin_loss" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_soft_margin_loss_backward = + foreign + "atg_soft_margin_loss_backward" + (gc_tensor @-> gc_tensor 
@-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_soft_margin_loss_backward_grad_input = + foreign + "atg_soft_margin_loss_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_soft_margin_loss_out = + foreign + "atg_soft_margin_loss_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_softmax = + foreign "atg_softmax" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_softmax_int_out = + foreign + "atg_softmax_int_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_softplus = foreign "atg_softplus" (gc_tensor @-> returning raw_tensor) + + let stubs_softplus_backward = + foreign + "atg_softplus_backward" + (gc_tensor @-> gc_tensor @-> scalar @-> scalar @-> returning raw_tensor) + ;; + + let stubs_softplus_backward_grad_input = + foreign + "atg_softplus_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> scalar + @-> scalar + @-> returning raw_tensor) + ;; + + let stubs_softplus_out = + foreign "atg_softplus_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_softshrink = foreign "atg_softshrink" (gc_tensor @-> returning raw_tensor) + + let stubs_softshrink_backward = + foreign + "atg_softshrink_backward" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_softshrink_backward_grad_input = + foreign + "atg_softshrink_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_softshrink_out = + foreign "atg_softshrink_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sort = + foreign + "atg_sort" + (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int @-> returning void) + ;; + + let stubs_sort_stable = + foreign + "atg_sort_stable" + (ptr raw_tensor @-> gc_tensor @-> int @-> int64_t @-> int @-> returning void) + ;; + + let stubs_sort_values = + foreign + "atg_sort_values" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs_sort_values_stable = + foreign + "atg_sort_values_stable" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int64_t + @-> int + @-> returning void) + ;; + + let stubs_sparse_bsc_tensor = + foreign + "atg_sparse_bsc_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; +end + +module C21 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_sparse_bsc_tensor_ccol_row_value_size = + foreign + "atg_sparse_bsc_tensor_ccol_row_value_size" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_sparse_bsr_tensor = + foreign + "atg_sparse_bsr_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_sparse_bsr_tensor_crow_col_value_size = + foreign + "atg_sparse_bsr_tensor_crow_col_value_size" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_sparse_compressed_tensor = + foreign + "atg_sparse_compressed_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_sparse_compressed_tensor_comp_plain_value_size = + foreign + 
"atg_sparse_compressed_tensor_comp_plain_value_size" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_sparse_coo_tensor = + foreign + "atg_sparse_coo_tensor" + (ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_sparse_coo_tensor_indices = + foreign + "atg_sparse_coo_tensor_indices" + (gc_tensor @-> gc_tensor @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_sparse_coo_tensor_indices_size = + foreign + "atg_sparse_coo_tensor_indices_size" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_sparse_coo_tensor_size_out = + foreign + "atg_sparse_coo_tensor_size_out" + (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_sparse_csc_tensor = + foreign + "atg_sparse_csc_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_sparse_csc_tensor_ccol_row_value_size = + foreign + "atg_sparse_csc_tensor_ccol_row_value_size" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_sparse_csr_tensor = + foreign + "atg_sparse_csr_tensor" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_sparse_csr_tensor_crow_col_value_size = + foreign + "atg_sparse_csr_tensor_crow_col_value_size" + (gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_sparse_dim = foreign "atg_sparse_dim" (gc_tensor @-> returning int64_t) + + let stubs_sparse_mask = + foreign "atg_sparse_mask" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sparse_mask_out = + foreign + "atg_sparse_mask_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sparse_resize = + foreign + "atg_sparse_resize" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_sparse_resize_ = + foreign + "atg_sparse_resize_" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_sparse_resize_and_clear = + foreign + "atg_sparse_resize_and_clear" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_sparse_resize_and_clear_ = + foreign + "atg_sparse_resize_and_clear_" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_sparse_resize_and_clear_out = + foreign + "atg_sparse_resize_and_clear_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_sparse_resize_out = + foreign + "atg_sparse_resize_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> int64_t + @-> returning raw_tensor) + ;; + + let stubs_sparse_sampled_addmm = + foreign + "atg_sparse_sampled_addmm" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_sparse_sampled_addmm_out = + foreign + "atg_sparse_sampled_addmm_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_airy_ai = + foreign "atg_special_airy_ai" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_airy_ai_out = + foreign "atg_special_airy_ai_out" (gc_tensor @-> 
gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_bessel_j0 = + foreign "atg_special_bessel_j0" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_bessel_j0_out = + foreign "atg_special_bessel_j0_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_bessel_j1 = + foreign "atg_special_bessel_j1" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_bessel_j1_out = + foreign "atg_special_bessel_j1_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_bessel_y0 = + foreign "atg_special_bessel_y0" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_bessel_y0_out = + foreign "atg_special_bessel_y0_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_bessel_y1 = + foreign "atg_special_bessel_y1" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_bessel_y1_out = + foreign "atg_special_bessel_y1_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_t = + foreign + "atg_special_chebyshev_polynomial_t" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_t_n_scalar = + foreign + "atg_special_chebyshev_polynomial_t_n_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_t_n_scalar_out = + foreign + "atg_special_chebyshev_polynomial_t_n_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_t_out = + foreign + "atg_special_chebyshev_polynomial_t_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_t_x_scalar = + foreign + "atg_special_chebyshev_polynomial_t_x_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_t_x_scalar_out = + foreign + "atg_special_chebyshev_polynomial_t_x_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_u = + foreign + "atg_special_chebyshev_polynomial_u" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_u_n_scalar = + foreign + "atg_special_chebyshev_polynomial_u_n_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_u_n_scalar_out = + foreign + "atg_special_chebyshev_polynomial_u_n_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_u_out = + foreign + "atg_special_chebyshev_polynomial_u_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_u_x_scalar = + foreign + "atg_special_chebyshev_polynomial_u_x_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_u_x_scalar_out = + foreign + "atg_special_chebyshev_polynomial_u_x_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_v = + foreign + "atg_special_chebyshev_polynomial_v" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_v_n_scalar = + foreign + "atg_special_chebyshev_polynomial_v_n_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_v_n_scalar_out = + foreign + "atg_special_chebyshev_polynomial_v_n_scalar_out" 
+ (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_v_out = + foreign + "atg_special_chebyshev_polynomial_v_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_v_x_scalar = + foreign + "atg_special_chebyshev_polynomial_v_x_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_v_x_scalar_out = + foreign + "atg_special_chebyshev_polynomial_v_x_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_w = + foreign + "atg_special_chebyshev_polynomial_w" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_w_n_scalar = + foreign + "atg_special_chebyshev_polynomial_w_n_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_w_n_scalar_out = + foreign + "atg_special_chebyshev_polynomial_w_n_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_w_out = + foreign + "atg_special_chebyshev_polynomial_w_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_w_x_scalar = + foreign + "atg_special_chebyshev_polynomial_w_x_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_chebyshev_polynomial_w_x_scalar_out = + foreign + "atg_special_chebyshev_polynomial_w_x_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_digamma = + foreign "atg_special_digamma" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_digamma_out = + foreign "atg_special_digamma_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_entr = foreign "atg_special_entr" (gc_tensor @-> returning raw_tensor) + + let stubs_special_entr_out = + foreign "atg_special_entr_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_erf = foreign "atg_special_erf" (gc_tensor @-> returning raw_tensor) + + let stubs_special_erf_out = + foreign "atg_special_erf_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_erfc = foreign "atg_special_erfc" (gc_tensor @-> returning raw_tensor) + + let stubs_special_erfc_out = + foreign "atg_special_erfc_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_erfcx = + foreign "atg_special_erfcx" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_erfcx_out = + foreign "atg_special_erfcx_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_erfinv = + foreign "atg_special_erfinv" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_erfinv_out = + foreign "atg_special_erfinv_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_exp2 = foreign "atg_special_exp2" (gc_tensor @-> returning raw_tensor) + + let stubs_special_exp2_out = + foreign "atg_special_exp2_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_expit = + foreign "atg_special_expit" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_expit_out = + foreign "atg_special_expit_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_expm1 = + foreign "atg_special_expm1" (gc_tensor @-> returning raw_tensor) + ;; + + let 
stubs_special_expm1_out = + foreign "atg_special_expm1_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_gammainc = + foreign "atg_special_gammainc" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_gammainc_out = + foreign + "atg_special_gammainc_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_gammaincc = + foreign "atg_special_gammaincc" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_gammaincc_out = + foreign + "atg_special_gammaincc_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_gammaln = + foreign "atg_special_gammaln" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_gammaln_out = + foreign "atg_special_gammaln_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_h = + foreign + "atg_special_hermite_polynomial_h" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_h_n_scalar = + foreign + "atg_special_hermite_polynomial_h_n_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_h_n_scalar_out = + foreign + "atg_special_hermite_polynomial_h_n_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_h_out = + foreign + "atg_special_hermite_polynomial_h_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_h_x_scalar = + foreign + "atg_special_hermite_polynomial_h_x_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_h_x_scalar_out = + foreign + "atg_special_hermite_polynomial_h_x_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_he = + foreign + "atg_special_hermite_polynomial_he" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_he_n_scalar = + foreign + "atg_special_hermite_polynomial_he_n_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_he_n_scalar_out = + foreign + "atg_special_hermite_polynomial_he_n_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_he_out = + foreign + "atg_special_hermite_polynomial_he_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_he_x_scalar = + foreign + "atg_special_hermite_polynomial_he_x_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_hermite_polynomial_he_x_scalar_out = + foreign + "atg_special_hermite_polynomial_he_x_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_i0 = foreign "atg_special_i0" (gc_tensor @-> returning raw_tensor) + + let stubs_special_i0_out = + foreign "atg_special_i0_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_i0e = foreign "atg_special_i0e" (gc_tensor @-> returning raw_tensor) + + let stubs_special_i0e_out = + foreign "atg_special_i0e_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_i1 = foreign "atg_special_i1" (gc_tensor @-> returning raw_tensor) + + let stubs_special_i1_out = + foreign "atg_special_i1_out" (gc_tensor @-> gc_tensor 
@-> returning raw_tensor) + ;; +end + +module C22 (F : Cstubs.FOREIGN) = struct + open F + open Type_defs + + let stubs_special_i1e = foreign "atg_special_i1e" (gc_tensor @-> returning raw_tensor) + + let stubs_special_i1e_out = + foreign "atg_special_i1e_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_laguerre_polynomial_l = + foreign + "atg_special_laguerre_polynomial_l" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_laguerre_polynomial_l_n_scalar = + foreign + "atg_special_laguerre_polynomial_l_n_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_laguerre_polynomial_l_n_scalar_out = + foreign + "atg_special_laguerre_polynomial_l_n_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_laguerre_polynomial_l_out = + foreign + "atg_special_laguerre_polynomial_l_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_laguerre_polynomial_l_x_scalar = + foreign + "atg_special_laguerre_polynomial_l_x_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_laguerre_polynomial_l_x_scalar_out = + foreign + "atg_special_laguerre_polynomial_l_x_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_legendre_polynomial_p = + foreign + "atg_special_legendre_polynomial_p" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_legendre_polynomial_p_n_scalar = + foreign + "atg_special_legendre_polynomial_p_n_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_legendre_polynomial_p_n_scalar_out = + foreign + "atg_special_legendre_polynomial_p_n_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_legendre_polynomial_p_out = + foreign + "atg_special_legendre_polynomial_p_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_legendre_polynomial_p_x_scalar = + foreign + "atg_special_legendre_polynomial_p_x_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_legendre_polynomial_p_x_scalar_out = + foreign + "atg_special_legendre_polynomial_p_x_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_log1p = + foreign "atg_special_log1p" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_log1p_out = + foreign "atg_special_log1p_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_log_ndtr = + foreign "atg_special_log_ndtr" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_log_ndtr_out = + foreign "atg_special_log_ndtr_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_log_softmax = + foreign + "atg_special_log_softmax" + (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_special_logit = + foreign "atg_special_logit" (gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_special_logit_out = + foreign + "atg_special_logit_out" + (gc_tensor @-> gc_tensor @-> double @-> int @-> returning raw_tensor) + ;; + + let stubs_special_logsumexp = + foreign + "atg_special_logsumexp" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_special_logsumexp_out = + foreign + "atg_special_logsumexp_out" + (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> int @-> 
returning raw_tensor) + ;; + + let stubs_special_modified_bessel_i0 = + foreign "atg_special_modified_bessel_i0" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_modified_bessel_i0_out = + foreign + "atg_special_modified_bessel_i0_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_modified_bessel_i1 = + foreign "atg_special_modified_bessel_i1" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_modified_bessel_i1_out = + foreign + "atg_special_modified_bessel_i1_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_modified_bessel_k0 = + foreign "atg_special_modified_bessel_k0" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_modified_bessel_k0_out = + foreign + "atg_special_modified_bessel_k0_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_modified_bessel_k1 = + foreign "atg_special_modified_bessel_k1" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_modified_bessel_k1_out = + foreign + "atg_special_modified_bessel_k1_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_multigammaln = + foreign "atg_special_multigammaln" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_special_multigammaln_out = + foreign + "atg_special_multigammaln_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_special_ndtr = foreign "atg_special_ndtr" (gc_tensor @-> returning raw_tensor) + + let stubs_special_ndtr_out = + foreign "atg_special_ndtr_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_ndtri = + foreign "atg_special_ndtri" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_ndtri_out = + foreign "atg_special_ndtri_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_polygamma = + foreign "atg_special_polygamma" (int64_t @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_polygamma_out = + foreign + "atg_special_polygamma_out" + (gc_tensor @-> int64_t @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_psi = foreign "atg_special_psi" (gc_tensor @-> returning raw_tensor) + + let stubs_special_psi_out = + foreign "atg_special_psi_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_round = + foreign "atg_special_round" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_special_round_out = + foreign + "atg_special_round_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_special_scaled_modified_bessel_k0 = + foreign "atg_special_scaled_modified_bessel_k0" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_scaled_modified_bessel_k0_out = + foreign + "atg_special_scaled_modified_bessel_k0_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_scaled_modified_bessel_k1 = + foreign "atg_special_scaled_modified_bessel_k1" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_scaled_modified_bessel_k1_out = + foreign + "atg_special_scaled_modified_bessel_k1_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_shifted_chebyshev_polynomial_t = + foreign + "atg_special_shifted_chebyshev_polynomial_t" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_shifted_chebyshev_polynomial_t_n_scalar = + foreign + "atg_special_shifted_chebyshev_polynomial_t_n_scalar" + (gc_tensor @-> 
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_t_n_scalar_out =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_t_n_scalar_out"
+      (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_t_out =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_t_out"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_t_x_scalar =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_t_x_scalar"
+      (scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_t_x_scalar_out =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_t_x_scalar_out"
+      (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_u =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_u"
+      (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_u_n_scalar =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_u_n_scalar"
+      (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_u_n_scalar_out =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_u_n_scalar_out"
+      (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_u_out =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_u_out"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_u_x_scalar =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_u_x_scalar"
+      (scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_u_x_scalar_out =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_u_x_scalar_out"
+      (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_v =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_v"
+      (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_v_n_scalar =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_v_n_scalar"
+      (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_v_n_scalar_out =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_v_n_scalar_out"
+      (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_v_out =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_v_out"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_v_x_scalar =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_v_x_scalar"
+      (scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_v_x_scalar_out =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_v_x_scalar_out"
+      (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_w =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_w"
+      (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_w_n_scalar =
+    foreign
+      "atg_special_shifted_chebyshev_polynomial_w_n_scalar"
+      (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_shifted_chebyshev_polynomial_w_n_scalar_out =
+    foreign
+ "atg_special_shifted_chebyshev_polynomial_w_n_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_shifted_chebyshev_polynomial_w_out = + foreign + "atg_special_shifted_chebyshev_polynomial_w_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_shifted_chebyshev_polynomial_w_x_scalar = + foreign + "atg_special_shifted_chebyshev_polynomial_w_x_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_shifted_chebyshev_polynomial_w_x_scalar_out = + foreign + "atg_special_shifted_chebyshev_polynomial_w_x_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_sinc = foreign "atg_special_sinc" (gc_tensor @-> returning raw_tensor) + + let stubs_special_sinc_out = + foreign "atg_special_sinc_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_softmax = + foreign "atg_special_softmax" (gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_special_spherical_bessel_j0 = + foreign "atg_special_spherical_bessel_j0" (gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_spherical_bessel_j0_out = + foreign + "atg_special_spherical_bessel_j0_out" + (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_xlog1py = + foreign "atg_special_xlog1py" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_xlog1py_other_scalar = + foreign + "atg_special_xlog1py_other_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_xlog1py_other_scalar_out = + foreign + "atg_special_xlog1py_other_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_xlog1py_out = + foreign + "atg_special_xlog1py_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_xlog1py_self_scalar = + foreign + "atg_special_xlog1py_self_scalar" + (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_xlog1py_self_scalar_out = + foreign + "atg_special_xlog1py_self_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_xlogy = + foreign "atg_special_xlogy" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_xlogy_other_scalar = + foreign + "atg_special_xlogy_other_scalar" + (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_xlogy_other_scalar_out = + foreign + "atg_special_xlogy_other_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_xlogy_out = + foreign + "atg_special_xlogy_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_xlogy_self_scalar = + foreign "atg_special_xlogy_self_scalar" (scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_xlogy_self_scalar_out = + foreign + "atg_special_xlogy_self_scalar_out" + (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_zeta = + foreign "atg_special_zeta" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_special_zeta_other_scalar = + foreign "atg_special_zeta_other_scalar" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_zeta_other_scalar_out = + foreign + "atg_special_zeta_other_scalar_out" + (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_special_zeta_out = + 
+      "atg_special_zeta_out"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_zeta_self_scalar =
+    foreign "atg_special_zeta_self_scalar" (scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_special_zeta_self_scalar_out =
+    foreign
+      "atg_special_zeta_self_scalar_out"
+      (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_split =
+    foreign "atg_split" (gc_tensor @-> int64_t @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_split_copy =
+    foreign
+      "atg_split_copy"
+      (gc_tensor @-> int64_t @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_split_copy_tensor_out =
+    foreign
+      "atg_split_copy_tensor_out"
+      (ptr gc_tensor @-> int @-> gc_tensor @-> int64_t @-> int64_t @-> returning void)
+  ;;
+
+  let stubs_split_sizes =
+    foreign
+      "atg_split_sizes"
+      (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_split_with_sizes =
+    foreign
+      "atg_split_with_sizes"
+      (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_split_with_sizes_copy =
+    foreign
+      "atg_split_with_sizes_copy"
+      (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+end
+
+module C23 (F : Cstubs.FOREIGN) = struct
+  open F
+  open Type_defs
+
+  let stubs_split_with_sizes_copy_out =
+    foreign
+      "atg_split_with_sizes_copy_out"
+      (ptr gc_tensor
+       @-> int
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int64_t
+       @-> returning void)
+  ;;
+
+  let stubs_sqrt = foreign "atg_sqrt" (gc_tensor @-> returning raw_tensor)
+  let stubs_sqrt_ = foreign "atg_sqrt_" (gc_tensor @-> returning raw_tensor)
+
+  let stubs_sqrt_out =
+    foreign "atg_sqrt_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_square = foreign "atg_square" (gc_tensor @-> returning raw_tensor)
+  let stubs_square_ = foreign "atg_square_" (gc_tensor @-> returning raw_tensor)
+
+  let stubs_square_out =
+    foreign "atg_square_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_squeeze = foreign "atg_squeeze" (gc_tensor @-> returning raw_tensor)
+  let stubs_squeeze_ = foreign "atg_squeeze_" (gc_tensor @-> returning raw_tensor)
+  let stubs_squeeze_copy = foreign "atg_squeeze_copy" (gc_tensor @-> returning raw_tensor)
+
+  let stubs_squeeze_copy_dim =
+    foreign "atg_squeeze_copy_dim" (gc_tensor @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_squeeze_copy_dim_out =
+    foreign
+      "atg_squeeze_copy_dim_out"
+      (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_squeeze_copy_dims =
+    foreign
+      "atg_squeeze_copy_dims"
+      (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_squeeze_copy_dims_out =
+    foreign
+      "atg_squeeze_copy_dims_out"
+      (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_squeeze_copy_out =
+    foreign "atg_squeeze_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_squeeze_dim =
+    foreign "atg_squeeze_dim" (gc_tensor @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_squeeze_dim_ =
+    foreign "atg_squeeze_dim_" (gc_tensor @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_squeeze_dims =
+    foreign "atg_squeeze_dims" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_squeeze_dims_ =
+    foreign
+      "atg_squeeze_dims_"
+      (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_sspaddmm =
+    foreign "atg_sspaddmm" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_sspaddmm_out =
+    foreign
+      "atg_sspaddmm_out"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_stack =
+    foreign "atg_stack" (ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_stack_out =
+    foreign
+      "atg_stack_out"
+      (gc_tensor @-> ptr gc_tensor @-> int @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_std = foreign "atg_std" (gc_tensor @-> int @-> returning raw_tensor)
+
+  let stubs_std_correction =
+    foreign
+      "atg_std_correction"
+      (gc_tensor @-> ptr int64_t @-> int @-> scalar @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_std_correction_out =
+    foreign
+      "atg_std_correction_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> scalar
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_std_dim =
+    foreign
+      "atg_std_dim"
+      (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_std_mean =
+    foreign "atg_std_mean" (ptr raw_tensor @-> gc_tensor @-> int @-> returning void)
+  ;;
+
+  let stubs_std_mean_correction =
+    foreign
+      "atg_std_mean_correction"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> scalar
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_std_mean_correction_out =
+    foreign
+      "atg_std_mean_correction_out"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> scalar
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_std_mean_dim =
+    foreign
+      "atg_std_mean_dim"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_std_out =
+    foreign
+      "atg_std_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_stft =
+    foreign
+      "atg_stft"
+      (gc_tensor
+       @-> int64_t
+       @-> int64_t
+       @-> int
+       @-> int64_t
+       @-> int
+       @-> gc_tensor
+       @-> int
+       @-> int
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_stft_center =
+    foreign
+      "atg_stft_center"
+      (gc_tensor
+       @-> int64_t
+       @-> int64_t
+       @-> int
+       @-> int64_t
+       @-> int
+       @-> gc_tensor
+       @-> int
+       @-> string
+       @-> int
+       @-> int
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_stride = foreign "atg_stride" (gc_tensor @-> int64_t @-> returning int64_t)
+  let stubs_sub = foreign "atg_sub" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  let stubs_sub_ = foreign "atg_sub_" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+
+  let stubs_sub_out =
+    foreign "atg_sub_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_sub_scalar =
+    foreign "atg_sub_scalar" (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_sub_scalar_ =
+    foreign "atg_sub_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_sub_scalar_out =
+    foreign
+      "atg_sub_scalar_out"
+      (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_subtract =
+    foreign "atg_subtract" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_subtract_ =
+    foreign "atg_subtract_" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_subtract_out =
+    foreign
+      "atg_subtract_out"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_subtract_scalar =
+    foreign "atg_subtract_scalar" (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_subtract_scalar_ =
+    foreign "atg_subtract_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor)
"atg_subtract_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor) + ;; + + let stubs_sum = foreign "atg_sum" (gc_tensor @-> int @-> returning raw_tensor) + + let stubs_sum_dim_intlist = + foreign + "atg_sum_dim_intlist" + (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor) + ;; + + let stubs_sum_intlist_out = + foreign + "atg_sum_intlist_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> int + @-> returning raw_tensor) + ;; + + let stubs_sum_out = + foreign "atg_sum_out" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor) + ;; + + let stubs_sum_to_size = + foreign "atg_sum_to_size" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_svd = + foreign "atg_svd" (ptr raw_tensor @-> gc_tensor @-> int @-> int @-> returning void) + ;; + + let stubs_svd_u = + foreign + "atg_svd_u" + (ptr raw_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> gc_tensor + @-> int + @-> int + @-> returning void) + ;; + + let stubs_swapaxes = + foreign "atg_swapaxes" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_swapaxes_ = + foreign "atg_swapaxes_" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_swapdims = + foreign "atg_swapdims" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_swapdims_ = + foreign "atg_swapdims_" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_tr = foreign "atg_t" (gc_tensor @-> returning raw_tensor) + let stubs_t_ = foreign "atg_t_" (gc_tensor @-> returning raw_tensor) + let stubs_t_copy = foreign "atg_t_copy" (gc_tensor @-> returning raw_tensor) + + let stubs_t_copy_out = + foreign "atg_t_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_take = foreign "atg_take" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + + let stubs_take_along_dim = + foreign + "atg_take_along_dim" + (gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_take_along_dim_out = + foreign + "atg_take_along_dim_out" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> int64_t @-> int @-> returning raw_tensor) + ;; + + let stubs_take_out = + foreign "atg_take_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_tan = foreign "atg_tan" (gc_tensor @-> returning raw_tensor) + let stubs_tan_ = foreign "atg_tan_" (gc_tensor @-> returning raw_tensor) + + let stubs_tan_out = + foreign "atg_tan_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_tanh = foreign "atg_tanh" (gc_tensor @-> returning raw_tensor) + let stubs_tanh_ = foreign "atg_tanh_" (gc_tensor @-> returning raw_tensor) + + let stubs_tanh_backward = + foreign "atg_tanh_backward" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_tanh_backward_grad_input = + foreign + "atg_tanh_backward_grad_input" + (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_tanh_out = + foreign "atg_tanh_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor) + ;; + + let stubs_tensor_split = + foreign + "atg_tensor_split" + (gc_tensor @-> int64_t @-> int64_t @-> returning (ptr raw_tensor)) + ;; + + let stubs_tensor_split_indices = + foreign + "atg_tensor_split_indices" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr raw_tensor)) + ;; + + let stubs_tensor_split_tensor_indices_or_sections = + foreign + "atg_tensor_split_tensor_indices_or_sections" + (gc_tensor @-> gc_tensor @-> 
+  ;;
+
+  let stubs_tensordot =
+    foreign
+      "atg_tensordot"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_tensordot_out =
+    foreign
+      "atg_tensordot_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_threshold =
+    foreign "atg_threshold" (gc_tensor @-> scalar @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_threshold_ =
+    foreign "atg_threshold_" (gc_tensor @-> scalar @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_threshold_backward =
+    foreign
+      "atg_threshold_backward"
+      (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_threshold_backward_grad_input =
+    foreign
+      "atg_threshold_backward_grad_input"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_threshold_out =
+    foreign
+      "atg_threshold_out"
+      (gc_tensor @-> gc_tensor @-> scalar @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_tile =
+    foreign "atg_tile" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_ = foreign "atg_to" (gc_tensor @-> int @-> returning raw_tensor)
+
+  let stubs_to_dense =
+    foreign "atg_to_dense" (gc_tensor @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_dense_backward =
+    foreign
+      "atg_to_dense_backward"
+      (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_device =
+    foreign
+      "atg_to_device"
+      (gc_tensor @-> int @-> int @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_dtype =
+    foreign "atg_to_dtype" (gc_tensor @-> int @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_dtype_layout =
+    foreign
+      "atg_to_dtype_layout"
+      (gc_tensor @-> int @-> int @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_mkldnn =
+    foreign "atg_to_mkldnn" (gc_tensor @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_mkldnn_backward =
+    foreign "atg_to_mkldnn_backward" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_mkldnn_out =
+    foreign "atg_to_mkldnn_out" (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_other =
+    foreign
+      "atg_to_other"
+      (gc_tensor @-> gc_tensor @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_padded_tensor =
+    foreign
+      "atg_to_padded_tensor"
+      (gc_tensor @-> double @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_to_padded_tensor_out =
+    foreign
+      "atg_to_padded_tensor_out"
+      (gc_tensor @-> gc_tensor @-> double @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_topk =
+    foreign
+      "atg_topk"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> int64_t
+       @-> int64_t
+       @-> int
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_topk_values =
+    foreign
+      "atg_topk_values"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> int64_t
+       @-> int64_t
+       @-> int
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_totype = foreign "atg_totype" (gc_tensor @-> int @-> returning raw_tensor)
+  let stubs_trace = foreign "atg_trace" (gc_tensor @-> returning raw_tensor)
+end
+
+module C24 (F : Cstubs.FOREIGN) = struct
+  open F
+  open Type_defs
+
+  let stubs_trace_backward =
+    foreign
+      "atg_trace_backward"
+      (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_trace_out =
+    foreign "atg_trace_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_transpose =
+    foreign "atg_transpose" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_transpose_ =
+    foreign "atg_transpose_" (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_transpose_copy =
+    foreign
+      "atg_transpose_copy"
+      (gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_transpose_copy_int_out =
+    foreign
+      "atg_transpose_copy_int_out"
+      (gc_tensor @-> gc_tensor @-> int64_t @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_trapezoid =
+    foreign "atg_trapezoid" (gc_tensor @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_trapezoid_x =
+    foreign
+      "atg_trapezoid_x"
+      (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_trapz =
+    foreign "atg_trapz" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_trapz_dx =
+    foreign "atg_trapz_dx" (gc_tensor @-> double @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_triangular_solve =
+    foreign
+      "atg_triangular_solve"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> int
+       @-> int
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_triangular_solve_x =
+    foreign
+      "atg_triangular_solve_x"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> int
+       @-> int
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_tril = foreign "atg_tril" (gc_tensor @-> int64_t @-> returning raw_tensor)
+  let stubs_tril_ = foreign "atg_tril_" (gc_tensor @-> int64_t @-> returning raw_tensor)
+
+  let stubs_tril_indices =
+    foreign
+      "atg_tril_indices"
+      (int64_t @-> int64_t @-> int64_t @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_tril_indices_out =
+    foreign
+      "atg_tril_indices_out"
+      (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_tril_out =
+    foreign "atg_tril_out" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_triplet_margin_loss =
+    foreign
+      "atg_triplet_margin_loss"
+      (gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> double
+       @-> double
+       @-> double
+       @-> int
+       @-> int64_t
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_triu = foreign "atg_triu" (gc_tensor @-> int64_t @-> returning raw_tensor)
+  let stubs_triu_ = foreign "atg_triu_" (gc_tensor @-> int64_t @-> returning raw_tensor)
+
+  let stubs_triu_indices =
+    foreign
+      "atg_triu_indices"
+      (int64_t @-> int64_t @-> int64_t @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_triu_indices_out =
+    foreign
+      "atg_triu_indices_out"
+      (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_triu_out =
+    foreign "atg_triu_out" (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_true_divide =
+    foreign "atg_true_divide" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_true_divide_ =
+    foreign "atg_true_divide_" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_true_divide_out =
+    foreign
+      "atg_true_divide_out"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_true_divide_scalar =
+    foreign "atg_true_divide_scalar" (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_true_divide_scalar_ =
+    foreign "atg_true_divide_scalar_" (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_trunc = foreign "atg_trunc" (gc_tensor @-> returning raw_tensor)
+  let stubs_trunc_ = foreign "atg_trunc_" (gc_tensor @-> returning raw_tensor)
+
+  let stubs_trunc_out =
+    foreign "atg_trunc_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_type_as =
+    foreign "atg_type_as" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_unbind =
+    foreign "atg_unbind" (gc_tensor @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_unbind_copy =
+    foreign "atg_unbind_copy" (gc_tensor @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_unbind_copy_int_out =
+    foreign
+      "atg_unbind_copy_int_out"
+      (ptr gc_tensor @-> int @-> gc_tensor @-> int64_t @-> returning void)
+  ;;
+
+  let stubs_unflatten =
+    foreign
+      "atg_unflatten"
+      (gc_tensor @-> int64_t @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_unflatten_dense_tensors =
+    foreign
+      "atg_unflatten_dense_tensors"
+      (gc_tensor @-> ptr gc_tensor @-> int @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_unfold =
+    foreign
+      "atg_unfold"
+      (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_unfold_backward =
+    foreign
+      "atg_unfold_backward"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int64_t
+       @-> int64_t
+       @-> int64_t
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_unfold_backward_out =
+    foreign
+      "atg_unfold_backward_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int64_t
+       @-> int64_t
+       @-> int64_t
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_unfold_copy =
+    foreign
+      "atg_unfold_copy"
+      (gc_tensor @-> int64_t @-> int64_t @-> int64_t @-> returning raw_tensor)
+  ;;
+
+  let stubs_unfold_copy_out =
+    foreign
+      "atg_unfold_copy_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> int64_t
+       @-> int64_t
+       @-> int64_t
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_uniform =
+    foreign "atg_uniform" (gc_tensor @-> double @-> double @-> returning raw_tensor)
+  ;;
+
+  let stubs_uniform_ =
+    foreign "atg_uniform_" (gc_tensor @-> double @-> double @-> returning raw_tensor)
+  ;;
+
+  let stubs_uniform_out =
+    foreign
+      "atg_uniform_out"
+      (gc_tensor @-> gc_tensor @-> double @-> double @-> returning raw_tensor)
+  ;;
+
+  let stubs_unique_consecutive =
+    foreign
+      "atg_unique_consecutive"
+      (ptr raw_tensor @-> gc_tensor @-> int @-> int @-> int64_t @-> int @-> returning void)
+  ;;
+
+  let stubs_unique_consecutive_out =
+    foreign
+      "atg_unique_consecutive_out"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> int
+       @-> int
+       @-> int64_t
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_unique_dim =
+    foreign
+      "atg_unique_dim"
+      (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int @-> int @-> int @-> returning void)
+  ;;
+
+  let stubs_unique_dim_consecutive =
+    foreign
+      "atg_unique_dim_consecutive"
+      (ptr raw_tensor @-> gc_tensor @-> int64_t @-> int @-> int @-> returning void)
+  ;;
+
+  let stubs_unique_dim_consecutive_out =
+    foreign
+      "atg_unique_dim_consecutive_out"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> int64_t
+       @-> int
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_unique_dim_out =
+    foreign
+      "atg_unique_dim_out"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> int64_t
+       @-> int
+       @-> int
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_unsafe_chunk =
+    foreign
+      "atg_unsafe_chunk"
+      (gc_tensor @-> int64_t @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_unsafe_split =
+    foreign
+      "atg_unsafe_split"
+      (gc_tensor @-> int64_t @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_unsafe_split_tensor_out =
+    foreign
+      "atg_unsafe_split_tensor_out"
"atg_unsafe_split_tensor_out" + (ptr gc_tensor @-> int @-> gc_tensor @-> int64_t @-> int64_t @-> returning void) + ;; + + let stubs_unsafe_split_with_sizes = + foreign + "atg_unsafe_split_with_sizes" + (gc_tensor @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr raw_tensor)) + ;; + + let stubs_unsafe_split_with_sizes_out = + foreign + "atg_unsafe_split_with_sizes_out" + (ptr gc_tensor + @-> int + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int64_t + @-> returning void) + ;; + + let stubs_unsqueeze = + foreign "atg_unsqueeze" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_unsqueeze_ = + foreign "atg_unsqueeze_" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_unsqueeze_copy = + foreign "atg_unsqueeze_copy" (gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_unsqueeze_copy_out = + foreign + "atg_unsqueeze_copy_out" + (gc_tensor @-> gc_tensor @-> int64_t @-> returning raw_tensor) + ;; + + let stubs_upsample_bicubic2d = + foreign + "atg_upsample_bicubic2d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_bicubic2d_backward = + foreign + "atg_upsample_bicubic2d_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_bicubic2d_backward_grad_input = + foreign + "atg_upsample_bicubic2d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_bicubic2d_out = + foreign + "atg_upsample_bicubic2d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_bicubic2d_vec = + foreign + "atg_upsample_bicubic2d_vec" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> ptr double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_bilinear2d = + foreign + "atg_upsample_bilinear2d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_bilinear2d_backward = + foreign + "atg_upsample_bilinear2d_backward" + (gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_bilinear2d_backward_grad_input = + foreign + "atg_upsample_bilinear2d_backward_grad_input" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_bilinear2d_out = + foreign + "atg_upsample_bilinear2d_out" + (gc_tensor + @-> gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_bilinear2d_vec = + foreign + "atg_upsample_bilinear2d_vec" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> ptr double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_linear1d = + foreign + "atg_upsample_linear1d" + (gc_tensor + @-> ptr int64_t + @-> int + @-> int + @-> double + @-> int + @-> returning raw_tensor) + ;; + + let stubs_upsample_linear1d_backward = + foreign + "atg_upsample_linear1d_backward" + (gc_tensor 
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_linear1d_backward_grad_input =
+    foreign
+      "atg_upsample_linear1d_backward_grad_input"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_linear1d_out =
+    foreign
+      "atg_upsample_linear1d_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_linear1d_vec =
+    foreign
+      "atg_upsample_linear1d_vec"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> ptr double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest1d =
+    foreign
+      "atg_upsample_nearest1d"
+      (gc_tensor @-> ptr int64_t @-> int @-> double @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest1d_backward =
+    foreign
+      "atg_upsample_nearest1d_backward"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest1d_backward_grad_input =
+    foreign
+      "atg_upsample_nearest1d_backward_grad_input"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest1d_out =
+    foreign
+      "atg_upsample_nearest1d_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest1d_vec =
+    foreign
+      "atg_upsample_nearest1d_vec"
+      (gc_tensor @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest2d =
+    foreign
+      "atg_upsample_nearest2d"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest2d_backward =
+    foreign
+      "atg_upsample_nearest2d_backward"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest2d_backward_grad_input =
+    foreign
+      "atg_upsample_nearest2d_backward_grad_input"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest2d_out =
+    foreign
+      "atg_upsample_nearest2d_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest2d_vec =
+    foreign
+      "atg_upsample_nearest2d_vec"
+      (gc_tensor @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest3d =
+    foreign
+      "atg_upsample_nearest3d"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest3d_backward =
+    foreign
+      "atg_upsample_nearest3d_backward"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest3d_backward_grad_input =
+    foreign
+      "atg_upsample_nearest3d_backward_grad_input"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest3d_out =
+    foreign
+      "atg_upsample_nearest3d_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_nearest3d_vec =
+    foreign
+      "atg_upsample_nearest3d_vec"
+      (gc_tensor @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_trilinear3d =
+    foreign
+      "atg_upsample_trilinear3d"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_trilinear3d_backward =
+    foreign
+      "atg_upsample_trilinear3d_backward"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_trilinear3d_backward_grad_input =
+    foreign
+      "atg_upsample_trilinear3d_backward_grad_input"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_trilinear3d_out =
+    foreign
+      "atg_upsample_trilinear3d_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_upsample_trilinear3d_vec =
+    foreign
+      "atg_upsample_trilinear3d_vec"
+      (gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> ptr double
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_value_selecting_reduction_backward =
+    foreign
+      "atg_value_selecting_reduction_backward"
+      (gc_tensor
+       @-> int64_t
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_values = foreign "atg_values" (gc_tensor @-> returning raw_tensor)
+  let stubs_values_copy = foreign "atg_values_copy" (gc_tensor @-> returning raw_tensor)
+
+  let stubs_values_copy_out =
+    foreign "atg_values_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_vander =
+    foreign "atg_vander" (gc_tensor @-> int64_t @-> int @-> int @-> returning raw_tensor)
+  ;;
+end
+
+module C25 (F : Cstubs.FOREIGN) = struct
+  open F
+  open Type_defs
+
+  let stubs_var = foreign "atg_var" (gc_tensor @-> int @-> returning raw_tensor)
+
+  let stubs_var_correction =
+    foreign
+      "atg_var_correction"
+      (gc_tensor @-> ptr int64_t @-> int @-> scalar @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_var_correction_out =
+    foreign
+      "atg_var_correction_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> scalar
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_var_dim =
+    foreign
+      "atg_var_dim"
+      (gc_tensor @-> ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_var_mean =
+    foreign "atg_var_mean" (ptr raw_tensor @-> gc_tensor @-> int @-> returning void)
+  ;;
+
+  let stubs_var_mean_correction =
+    foreign
+      "atg_var_mean_correction"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> scalar
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_var_mean_correction_out =
+    foreign
+      "atg_var_mean_correction_out"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> scalar
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_var_mean_dim =
+    foreign
+      "atg_var_mean_dim"
+      (ptr raw_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> int
+       @-> returning void)
+  ;;
+
+  let stubs_var_out =
+    foreign
+      "atg_var_out"
+      (gc_tensor
+       @-> gc_tensor
+       @-> ptr int64_t
+       @-> int
+       @-> int
+       @-> int
+       @-> returning raw_tensor)
+  ;;
+
+  let stubs_vdot = foreign "atg_vdot" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+
+  let stubs_vdot_out =
+    foreign "atg_vdot_out" (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_view =
+    foreign "atg_view" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_as =
+    foreign "atg_view_as" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_as_complex =
+    foreign "atg_view_as_complex" (gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_as_complex_copy =
+    foreign "atg_view_as_complex_copy" (gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_as_complex_copy_out =
+    foreign
+      "atg_view_as_complex_copy_out"
+      (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_as_real = foreign "atg_view_as_real" (gc_tensor @-> returning raw_tensor)
+
+  let stubs_view_as_real_copy =
+    foreign "atg_view_as_real_copy" (gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_as_real_copy_out =
+    foreign "atg_view_as_real_copy_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_copy =
+    foreign "atg_view_copy" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_copy_dtype =
+    foreign "atg_view_copy_dtype" (gc_tensor @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_copy_dtype_out =
+    foreign
+      "atg_view_copy_dtype_out"
+      (gc_tensor @-> gc_tensor @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_copy_out =
+    foreign
+      "atg_view_copy_out"
+      (gc_tensor @-> gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_view_dtype =
+    foreign "atg_view_dtype" (gc_tensor @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_vsplit =
+    foreign "atg_vsplit" (gc_tensor @-> int64_t @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_vsplit_array =
+    foreign
+      "atg_vsplit_array"
+      (gc_tensor @-> ptr int64_t @-> int @-> returning (ptr raw_tensor))
+  ;;
+
+  let stubs_vstack = foreign "atg_vstack" (ptr gc_tensor @-> int @-> returning raw_tensor)
+
+  let stubs_vstack_out =
+    foreign "atg_vstack_out" (gc_tensor @-> ptr gc_tensor @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_where = foreign "atg_where" (gc_tensor @-> returning (ptr raw_tensor))
+
+  let stubs_where_scalar =
+    foreign "atg_where_scalar" (gc_tensor @-> scalar @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_where_scalarother =
+    foreign
+      "atg_where_scalarother"
+      (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_where_scalarself =
+    foreign
+      "atg_where_scalarself"
+      (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_where_self =
+    foreign
+      "atg_where_self"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_where_self_out =
+    foreign
+      "atg_where_self_out"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_xlogy = foreign "atg_xlogy" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+
+  let stubs_xlogy_ =
+    foreign "atg_xlogy_" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_xlogy_outscalar_other =
+    foreign
+      "atg_xlogy_outscalar_other"
+      (gc_tensor @-> gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_xlogy_outscalar_self =
+    foreign
+      "atg_xlogy_outscalar_self"
+      (gc_tensor @-> scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_xlogy_outtensor =
+    foreign
+      "atg_xlogy_outtensor"
+      (gc_tensor @-> gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_xlogy_scalar_other =
+    foreign "atg_xlogy_scalar_other" (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_xlogy_scalar_other_ =
+    foreign "atg_xlogy_scalar_other_" (gc_tensor @-> scalar @-> returning raw_tensor)
+  ;;
+
+  let stubs_xlogy_scalar_self =
+    foreign "atg_xlogy_scalar_self" (scalar @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_zero = foreign "atg_zero" (gc_tensor @-> returning raw_tensor)
+  let stubs_zero_ = foreign "atg_zero_" (gc_tensor @-> returning raw_tensor)
+
+  let stubs_zero_out =
+    foreign "atg_zero_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_zeros =
+    foreign "atg_zeros" (ptr int64_t @-> int @-> int @-> int @-> returning raw_tensor)
+  ;;
+
+  let stubs_zeros_like = foreign "atg_zeros_like" (gc_tensor @-> returning raw_tensor)
+
+  let stubs_zeros_like_out =
+    foreign "atg_zeros_like_out" (gc_tensor @-> gc_tensor @-> returning raw_tensor)
+  ;;
+
+  let stubs_zeros_out =
+    foreign "atg_zeros_out" (gc_tensor @-> ptr int64_t @-> int @-> returning raw_tensor)
+  ;;
+end
+
+module C (F : Cstubs.FOREIGN) = struct
+  include C0 (F)
+  include C1 (F)
+  include C2 (F)
+  include C3 (F)
+  include C4 (F)
+  include C5 (F)
+  include C6 (F)
+  include C7 (F)
+  include C8 (F)
+  include C9 (F)
+  include C10 (F)
+  include C11 (F)
+  include C12 (F)
+  include C13 (F)
+  include C14 (F)
+  include C15 (F)
+  include C16 (F)
+  include C17 (F)
+  include C18 (F)
+  include C19 (F)
+  include C20 (F)
+  include C21 (F)
+  include C22 (F)
+  include C23 (F)
+  include C24 (F)
+  include C25 (F)
+end
diff --git a/src/bindings/type_defs.ml b/src/bindings/type_defs.ml
new file mode 100644
index 0000000..0fdcb48
--- /dev/null
+++ b/src/bindings/type_defs.ml
@@ -0,0 +1,23 @@
+open Ctypes
+
+type raw_tensor = unit ptr
+type gc_tensor = unit ptr
+type ivalue = unit ptr
+type module_ = unit ptr
+type optimizer = unit ptr
+type scalar = unit ptr
+
+let raw_tensor : raw_tensor typ = ptr void
+let gc_tensor : gc_tensor typ = ptr void
+let ivalue : ivalue typ = ptr void
+let module_ : module_ typ = ptr void
+let optimizer : optimizer typ = ptr void
+let scalar : scalar typ = ptr void
+let none_gc_tensor = null
+let gc_tensor_of_voidp t = Cstubs_internals.make_ptr void t
+let is_none_raw_tensor t = is_null t
+
+let fatptr_of_raw_tensor (t : raw_tensor) =
+  let (CPointer fatptr) = t in
+  fatptr
+;;
diff --git a/src/bindings/type_defs.mli b/src/bindings/type_defs.mli
new file mode 100644
index 0000000..e95e576
--- /dev/null
+++ b/src/bindings/type_defs.mli
@@ -0,0 +1,19 @@
+open Ctypes
+
+type raw_tensor
+type gc_tensor
+type ivalue
+type module_
+type optimizer
+type scalar
+
+val raw_tensor : raw_tensor typ
+val gc_tensor : gc_tensor typ
+val ivalue : ivalue typ
+val module_ : module_ typ
+val optimizer : optimizer typ
+val scalar : scalar typ
+val none_gc_tensor : gc_tensor
+val gc_tensor_of_voidp : Ctypes_ptr.voidp -> gc_tensor
+val is_none_raw_tensor : raw_tensor -> bool
+val fatptr_of_raw_tensor : raw_tensor -> (Obj.t option, unit) Cstubs_internals.fatptr
diff --git a/src/gen_bindings/dune b/src/gen_bindings/dune
new file mode 100644
index 0000000..ff8bc33
--- /dev/null
+++ b/src/gen_bindings/dune
@@ -0,0 +1,6 @@
+(executables
+ (modes byte exe)
+ (names gen)
+ (libraries core.command core_unix.command_unix stdio yaml)
+ (preprocess
+  (pps ppx_let ppx_string)))
diff --git a/src/gen/gen.ml b/src/gen_bindings/gen.ml
similarity index 83%
rename from src/gen/gen.ml
rename to src/gen_bindings/gen.ml
index ac86e84..7f3412b 100644
--- a/src/gen/gen.ml
+++ b/src/gen_bindings/gen.ml
@@ -4,6 +4,10 @@
 open Base
 open Stdio
 
+let cpp_filename = "torch_api_generated"
+let bindings_filename = "torch_bindings_generated.ml"
+let wrapper_filename = "wrapper_generated"
+
 let excluded_functions =
   Set.of_list
     (module String)
@@ -193,7 +197,7 @@
         Printf.sprintf "int64_t *%s_data, int %s_len" arg_name arg_name
      | DoubleList -> Printf.sprintf "double *%s_data, int %s_len" arg_name arg_name
      | TensorOptList | TensorList ->
-        Printf.sprintf "tensor *%s_data, int %s_len" arg_name arg_name
+        Printf.sprintf "gc_tensor *%s_data, int %s_len" arg_name arg_name
      | TensorOptions -> Printf.sprintf "int %s_kind, int %s_device" arg_name arg_name
      | Int64Option -> Printf.sprintf "int64_t %s_v, int %s_null" arg_name arg_name
      | DoubleOption -> Printf.sprintf "double %s_v, int %s_null" arg_name arg_name
@@ -203,8 +207,7 @@
     | Bool -> "int"
     | Int64 -> "int64_t"
     | Double -> "double"
-    | Tensor -> "tensor"
-    | TensorOption -> "tensor"
+    | Tensor | TensorOption -> "gc_tensor"
     | ScalarType -> "int"
     | Device -> "int"
     | Scalar -> "scalar"
@@ -225,8 +228,10 @@
   let c_args_list args =
     List.map args ~f:(fun { arg_name; arg_type; _ } ->
      match arg_type with
-      | Scalar | Tensor -> "*" ^ arg_name
-      | TensorOption -> [%string "(%{arg_name} ? *%{arg_name} : torch::Tensor())"]
+      | Scalar -> "*" ^ arg_name
+      | Tensor -> [%string "*tensor_ptr_from_ocaml(%{arg_name})"]
+      | TensorOption ->
+        [%string "(%{arg_name} ? tensor_from_ocaml(%{arg_name}) : torch::Tensor())"]
      | Bool -> "(bool)" ^ arg_name
      | IntList -> [%string "torch::IntArrayRef(%{arg_name}_data, %{arg_name}_len)"]
      | IntListOption ->
@@ -258,7 +263,13 @@
     | `function_ -> [%string "torch::%{t.name}(%{c_args_list t.args})"]
     | `method_ ->
       (match t.args with
-       | head :: tail -> [%string "%{head.arg_name}->%{t.name}(%{c_args_list tail})"]
+       | head :: tail ->
+         let obj =
+           match head.arg_type with
+           | Tensor -> [%string "tensor_ptr_from_ocaml(%{head.arg_name})"]
+           | _ -> head.arg_name
+         in
+         [%string "%{obj}->%{t.name}(%{c_args_list tail})"]
        | [] ->
         failwith [%string "Method calls should have at least one argument %{t.name}"])
   ;;
@@ -273,7 +284,7 @@
     | other -> other
   ;;
 
-  let stubs_signature t =
+  let bindings_signature t =
     let args =
       List.concat_map t.args ~f:(fun arg ->
         match arg.arg_type with
@@ -282,30 +293,30 @@
         | Int64Option -> [ "int64_t"; "int" ]
        | Double -> [ "double" ]
        | DoubleOption -> [ "double"; "int" ]
-        | Tensor -> [ "t" ]
-        | TensorOption -> [ "t" ]
+        | Tensor | TensorOption -> [ "gc_tensor" ]
        | TensorOptions -> [ "int"; "int" ]
        | ScalarType -> [ "int" ]
        | Device -> [ "int" ]
        | IntList | IntListOption -> [ "ptr int64_t"; "int" ]
        | DoubleList -> [ "ptr double"; "int" ]
-        | TensorOptList | TensorList -> [ "ptr t"; "int" ]
+        | TensorOptList | TensorList -> [ "ptr gc_tensor"; "int" ]
        | String -> [ "string" ]
        | Scalar -> [ "scalar" ])
      |> String.concat ~sep:" @-> "
    in
-    let simple_stub args return_type =
+    let simple_binding args return_type =
      if String.length args > 0
      then [%string "%{args} @-> returning %{return_type}"]
      else [%string "void @-> returning %{return_type}"]
    in
    match t.returns with
-    | `fixed _ -> [%string "ptr t @-> %{args} @-> returning void"]
-    | `dynamic -> [%string "%{args} @-> returning (ptr t)"]
-    | `nothing -> simple_stub args "void"
-    | `bool -> simple_stub args "bool"
-    | `int64_t -> simple_stub args "int64_t"
-    | `double -> simple_stub args "double"
+    | `fixed 1 -> [%string "%{args} @-> returning raw_tensor"]
+    | `fixed _ -> [%string "ptr raw_tensor @-> %{args} @-> returning void"]
+    | `dynamic -> [%string "%{args} @-> returning (ptr raw_tensor)"]
+    | `nothing -> simple_binding args "void"
+    | `bool -> simple_binding args "bool"
+    | `int64_t -> simple_binding args "int64_t"
+    | `double -> simple_binding args "double"
  ;;
 
  let replace_map =
@@ -343,11 +354,12 @@
       [%string
         {|(%{name} |> CArray.of_list double |> CArray.start) (List.length %{name})|}]
     | TensorList ->
-      [%string "(CArray.of_list t %{name} |> CArray.start) (List.length %{name})"]
+      [%string
+        "(CArray.of_list gc_tensor %{name} |> CArray.start) (List.length %{name})"]
     | TensorOptList ->
       [%string
-        "(List.map (function Some x -> x | None -> null) %{name} |> CArray.of_list t \
-         |> CArray.start) (List.length %{name})"]
+        "(List.map (function Some x -> x | None -> none_gc_tensor) %{name} |> \
+         CArray.of_list gc_tensor |> CArray.start) (List.length %{name})"]
     | Bool -> [%string "(if %{name} then 1 else 0)"]
     | ScalarType -> [%string "(Kind.packed_to_int %{name})"]
     | TensorOptions ->
@@ -357,7 +369,8 @@
       if String.( = ) name "reduction"
       then "(Reduction.to_int reduction |> Int64.of_int)"
       else [%string "(Int64.of_int %{name})"]
-    | TensorOption -> [%string "(match %{name} with | Some v -> v | None -> null)"]
+    | TensorOption ->
+      [%string "(match %{name} with | Some v -> v | None -> none_gc_tensor)"]
     | Double | String | Scalar | Tensor -> name)
   |> String.concat ~sep:" "
~sep:" " ;; @@ -466,7 +479,7 @@ let p out_channel s = ;; let write_cpp funcs filename = - Out_channel.with_file (filename ^ ".cpp.h") ~f:(fun out_cpp -> + Out_channel.with_file (filename ^ ".cpp") ~f:(fun out_cpp -> Out_channel.with_file (filename ^ ".h") ~f:(fun out_h -> let pc s = p out_cpp s in let ph s = p out_h s in @@ -478,22 +491,20 @@ let write_cpp funcs filename = let c_typed_args_list = Func.c_typed_args_list func in match func.returns with | `dynamic -> - pc "tensor *atg_%s(%s) {" exported_name c_typed_args_list; + pc "raw_tensor *atg_%s(%s) {" exported_name c_typed_args_list; pc " PROTECT("; - pc " auto outputs__ = %s;" (Func.c_call func); + pc " auto results__ = %s;" (Func.c_call func); (* the returned type is a C++ vector of tensors *) - pc " int sz = outputs__.size();"; - pc - " torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * \ - sizeof(torch::Tensor*));"; + pc " int sz = results__.size();"; + pc " raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));"; pc " for (int i = 0; i < sz; ++i)"; - pc " out__[i] = new torch::Tensor(outputs__[i]);"; + pc " out__[i] = tensor_to_ocaml(results__[i]);"; pc " out__[sz] = nullptr;"; pc " return out__;"; pc " )"; pc "}"; pc ""; - ph "tensor *atg_%s(%s);" exported_name c_typed_args_list + ph "raw_tensor *atg_%s(%s);" exported_name c_typed_args_list | `nothing -> pc "void atg_%s(%s) {" exported_name c_typed_args_list; pc " PROTECT("; @@ -502,20 +513,26 @@ let write_cpp funcs filename = pc "}"; pc ""; ph "void atg_%s(%s);" exported_name c_typed_args_list + | `fixed 1 -> + pc "raw_tensor atg_%s(%s) {" exported_name c_typed_args_list; + pc " PROTECT("; + pc " torch::Tensor results__ = %s;" (Func.c_call func); + pc " return tensor_to_ocaml(results__);"; + pc " )"; + pc "}"; + pc ""; + ph "raw_tensor atg_%s(%s);" exported_name c_typed_args_list | `fixed ntensors -> - pc "void atg_%s(tensor *out__, %s) {" exported_name c_typed_args_list; + pc "void atg_%s(raw_tensor *out__, %s) {" exported_name c_typed_args_list; pc " PROTECT("; - pc " auto outputs__ = %s;" (Func.c_call func); - if ntensors = 1 - then pc " out__[0] = new torch::Tensor(outputs__);" - else - for i = 0 to ntensors - 1 do - pc " out__[%d] = new torch::Tensor(std::get<%d>(outputs__));" i i - done; + pc " auto results__ = %s;" (Func.c_call func); + for i = 0 to ntensors - 1 do + pc " out__[%d] = tensor_to_ocaml(std::get<%d>(results__));" i i + done; pc " )"; pc "}"; pc ""; - ph "void atg_%s(tensor *, %s);" exported_name c_typed_args_list + ph "void atg_%s(raw_tensor *, %s);" exported_name c_typed_args_list | (`bool | `int64_t | `double) as returns -> let c_type = match returns with @@ -533,7 +550,7 @@ let write_cpp funcs filename = ph "%s atg_%s(%s);" c_type exported_name c_typed_args_list))) ;; -let write_stubs funcs filename = +let write_bindings funcs filename = Out_channel.with_file filename ~f:(fun out_channel -> let p s = p out_channel s in p "(* THIS FILE IS AUTOMATICALLY GENERATED, DO NOT EDIT BY HAND! 
*)"; @@ -544,14 +561,11 @@ let write_stubs funcs filename = List.iteri funcs ~f:(fun idx funcs -> p "module C%d(F: Cstubs.FOREIGN) = struct" idx; p " open F"; - p " type t = unit ptr"; - p " let t : t typ = ptr void"; - p " type scalar = unit ptr"; - p " let scalar : scalar typ = ptr void"; + p " open Type_defs"; List.iter funcs ~f:(fun (exported_name, func) -> p " let stubs_%s =" (Func.caml_name exported_name); p " foreign \"atg_%s\"" exported_name; - p " (%s)" (Func.stubs_signature func); + p " (%s)" (Func.bindings_signature func); p ""); p "end"); p "module C(F: Cstubs.FOREIGN) = struct"; @@ -567,23 +581,9 @@ let write_wrapper funcs filename = pm "(* THIS FILE IS AUTOMATICALLY GENERATED, DO NOT EDIT BY HAND! *)"; pm ""; pm "open Ctypes"; - pm ""; - pm "module C = Torch_bindings.C(Torch_generated)"; - pm "open C.TensorG"; - pm ""; - pm "let to_tensor_list ptr ="; - pm " let rec loop ptr acc ="; - pm " let tensor = !@ptr in"; - pm " if is_null tensor"; - pm " then acc"; - pm " else begin"; - pm " Gc.finalise C.Tensor.free tensor;"; - pm " loop (ptr +@ 1) (tensor :: acc)"; - pm " end"; - pm " in"; - pm " let result = loop ptr [] in"; - pm " C.free (to_voidp ptr);"; - pm " List.rev result"; + pm "open Torch_bindings.Type_defs"; + pm "open Torch_stubs"; + pm "open C.Generated"; pm ""; pi "(* THIS FILE IS AUTOMATICALLY GENERATED, DO NOT EDIT BY HAND! *)"; pi ""; @@ -597,15 +597,16 @@ let write_wrapper funcs filename = (match func.returns with | `nothing | `bool | `int64_t | `double -> pm " stubs_%s %s" caml_name (Func.caml_binding_args func) + | `fixed 1 -> + pm " stubs_%s %s |> with_tensor_gc" caml_name (Func.caml_binding_args func) | `fixed ntensors -> - pm " let out__ = CArray.make t %d in" ntensors; + pm " let out__ = CArray.make raw_tensor %d in" ntensors; pm " stubs_%s (CArray.start out__) %s;" caml_name (Func.caml_binding_args func); for i = 0 to ntensors - 1 do - pm " let t%d = CArray.get out__ %d in" i i; - pm " Gc.finalise C.Tensor.free t%d;" i + pm " let t%d = CArray.get out__ %d |> with_tensor_gc in" i i done; pm " %s" @@ -659,8 +660,8 @@ let methods = ] ;; -let run ~yaml_filename ~cpp_filename ~stubs_filename ~wrapper_filename = - let funcs = read_yaml yaml_filename in +let run ~declarations_filename ~gen_bindings ~gen_wrappers = + let funcs = read_yaml declarations_filename in let funcs = methods @ funcs in printf "Generating code for %d functions.\n%!" (List.length funcs); (* Generate some unique names for overloaded functions. 
@@ -659,8 +660,8 @@ let methods =
   ]
 ;;
 
-let run ~yaml_filename ~cpp_filename ~stubs_filename ~wrapper_filename =
-  let funcs = read_yaml yaml_filename in
+let run ~declarations_filename ~gen_bindings ~gen_wrappers =
+  let funcs = read_yaml declarations_filename in
   let funcs = methods @ funcs in
   printf "Generating code for %d functions.\n%!" (List.length funcs);
   (* Generate some unique names for overloaded functions. *)
@@ -694,15 +695,24 @@
            name, func))
     |> Map.of_alist_exn (module String)
   in
-  write_cpp funcs cpp_filename;
-  write_stubs funcs stubs_filename;
-  write_wrapper funcs wrapper_filename
+  if gen_bindings then write_bindings funcs bindings_filename;
+  if gen_wrappers
+  then (
+    write_cpp funcs cpp_filename;
+    write_wrapper funcs wrapper_filename)
 ;;
 
-let () =
-  run
-    ~yaml_filename:"third_party/pytorch/Declarations-v2.1.0.yaml"
-    ~cpp_filename:"src/wrapper/torch_api_generated"
-    ~stubs_filename:"src/stubs/torch_bindings_generated.ml"
-    ~wrapper_filename:"src/wrapper/wrapper_generated"
+let command =
+  Command.basic
+    ~summary:"generate bindings or wrapper code for torch functions"
+    (let%map_open.Command declarations_filename =
+       flag "declarations" (required string) ~doc:"PATH path to Declarations.yaml"
+     and gen_bindings =
+       flag "bindings" no_arg ~doc:"if passed in, generate ctypes bindings OCaml code"
+     and gen_wrappers =
+       flag "wrappers" no_arg ~doc:"if passed in, generate wrapper C++ and OCaml code"
+     in
+     fun () -> run ~declarations_filename ~gen_bindings ~gen_wrappers)
 ;;
+
+let () = Command_unix.run command
diff --git a/src/gen/dune b/src/gen_stubs/dune
similarity index 52%
rename from src/gen/dune
rename to src/gen_stubs/dune
index 41d2ddf..7d0e753 100644
--- a/src/gen/dune
+++ b/src/gen_stubs/dune
@@ -1,6 +1,6 @@
 (executables
  (modes byte exe)
- (names gen)
- (libraries base stdio yaml)
+ (names gen_stubs)
+ (libraries ctypes.stubs torch_bindings)
  (preprocess
   (pps ppx_string)))
diff --git a/src/stubs/torch_gen.ml b/src/gen_stubs/gen_stubs.ml
similarity index 74%
rename from src/stubs/torch_gen.ml
rename to src/gen_stubs/gen_stubs.ml
index f8d4d17..2a3067e 100644
--- a/src/stubs/torch_gen.ml
+++ b/src/gen_stubs/gen_stubs.ml
@@ -1,9 +1,9 @@
 let () =
   let fmt file = Format.formatter_of_out_channel (open_out file) in
-  let fmt_c = fmt "torch_stubs.c" in
+  let fmt_c = fmt "torch_stubs_generated.c" in
   Format.fprintf fmt_c "#include \"torch_api.h\"@.";
   Cstubs.write_c fmt_c ~prefix:"caml_" (module Torch_bindings.C);
-  let fmt_ml = fmt "torch_generated.ml" in
+  let fmt_ml = fmt "torch_stubs_generated.ml" in
   Cstubs.write_ml fmt_ml ~prefix:"caml_" (module Torch_bindings.C);
   flush_all ()
 ;;
diff --git a/src/stubs/dune b/src/stubs/dune
deleted file mode 100644
index e3dc775..0000000
--- a/src/stubs/dune
+++ /dev/null
@@ -1,6 +0,0 @@
-(executables
- (modes byte exe)
- (names torch_gen)
- (libraries ctypes.stubs)
- (preprocess
-  (pps ppx_jane)))
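Both the renamed stub generator and the generated code above lean on a `Type_defs` module that is not itself shown in this diff. Assuming both tensor flavours are plain void pointers on the ctypes side, a minimal sketch could look like this, mirroring the per-module `type t = unit ptr` pattern of the hand-written bindings deleted below:

```ocaml
(* Hypothetical sketch of Torch_bindings.Type_defs. Both tensor flavours are
   void pointers at the C boundary; the raw/GC distinction lives only in the
   OCaml types. *)
open Ctypes

type raw_tensor = unit ptr
type gc_tensor = unit ptr
type scalar = unit ptr

let raw_tensor : raw_tensor typ = ptr void
let gc_tensor : gc_tensor typ = ptr void
let scalar : scalar typ = ptr void
```

In a real implementation the two tensor types would presumably be abstract, or carry a phantom tag, so that the type checker prevents a raw tensor from being passed where a GC tensor is expected.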
diff --git a/src/stubs/torch_bindings.ml b/src/stubs/torch_bindings.ml
deleted file mode 100644
index b854d7b..0000000
--- a/src/stubs/torch_bindings.ml
+++ /dev/null
@@ -1,269 +0,0 @@
-open Ctypes
-
-module C (F : Cstubs.FOREIGN) = struct
-  open F
-
-  let manual_seed = foreign "at_manual_seed" (int64_t @-> returning void)
-  let free = foreign "free" (ptr void @-> returning void)
-  let get_num_threads = foreign "at_get_num_threads" (void @-> returning int)
-  let set_num_threads = foreign "at_set_num_threads" (int @-> returning void)
-
-  module Tensor = struct
-    type t = unit ptr
-
-    let t : t typ = ptr void
-    let new_tensor = foreign "at_new_tensor" (void @-> returning t)
-
-    let tensor_of_data =
-      foreign
-        "at_tensor_of_data"
-        (ptr void
-         (* data *)
-         @-> ptr int64_t
-         (* dims *)
-         @-> int
-         (* ndims *)
-         @-> int
-         (* element size in bytes *)
-         @-> int
-         (* kind *)
-         @-> returning t)
-    ;;
-
-    let copy_data =
-      foreign
-        "at_copy_data"
-        (t
-         (* tensor *)
-         @-> ptr void
-         (* data *)
-         @-> int64_t
-         (* numel *)
-         @-> int
-         (* element size in bytes *)
-         @-> returning void)
-    ;;
-
-    let copy_ = foreign "at_copy_" (t (* dst *) @-> t (* src *) @-> returning void)
-    let set_data = foreign "at_set_data" (t (* dst *) @-> t (* src *) @-> returning void)
-
-    let float_vec =
-      foreign
-        "at_float_vec"
-        (ptr double (* values *)
-         @-> int (* num values *)
-         @-> int (* kind *)
-         @-> returning t)
-    ;;
-
-    let int_vec =
-      foreign
-        "at_int_vec"
-        (ptr int64_t
-         (* values *)
-         @-> int
-         (* num values *)
-         @-> int
-         (* kind *)
-         @-> returning t)
-    ;;
-
-    let device = foreign "at_device" (t @-> returning int)
-    let defined = foreign "at_defined" (t @-> returning bool)
-    let num_dims = foreign "at_dim" (t @-> returning int)
-    let shape = foreign "at_shape" (t @-> ptr int (* dims *) @-> returning void)
-    let scalar_type = foreign "at_scalar_type" (t @-> returning int)
-    let backward = foreign "at_backward" (t @-> int @-> int @-> returning void)
-    let requires_grad = foreign "at_requires_grad" (t @-> returning int)
-    let grad_set_enabled = foreign "at_grad_set_enabled" (int @-> returning int)
-    let get = foreign "at_get" (t @-> int @-> returning t)
-
-    let double_value =
-      foreign "at_double_value_at_indexes" (t @-> ptr int @-> int @-> returning float)
-    ;;
-
-    let int64_value =
-      foreign "at_int64_value_at_indexes" (t @-> ptr int @-> int @-> returning int64_t)
-    ;;
-
-    let double_value_set =
-      foreign
-        "at_set_double_value_at_indexes"
-        (t @-> ptr int @-> int @-> float @-> returning void)
-    ;;
-
-    let int64_value_set =
-      foreign
-        "at_set_int64_value_at_indexes"
-        (t @-> ptr int @-> int @-> int64_t @-> returning void)
-    ;;
-
-    let fill_double = foreign "at_fill_double" (t @-> float @-> returning void)
-    let fill_int64 = foreign "at_fill_int64" (t @-> int64_t @-> returning void)
-    let print = foreign "at_print" (t @-> returning void)
-    let to_string = foreign "at_to_string" (t @-> int @-> returning string)
-    let free = foreign "at_free" (t @-> returning void)
-
-    let run_backward =
-      foreign
-        "at_run_backward"
-        (ptr t @-> int @-> ptr t @-> int @-> ptr t @-> int @-> int @-> returning void)
-    ;;
-  end
-
-  module Scalar = struct
-    type t = unit ptr
-
-    let t : t typ = ptr void
-    let int = foreign "ats_int" (int64_t @-> returning t)
-    let float = foreign "ats_float" (float @-> returning t)
-    let free = foreign "ats_free" (t @-> returning void)
-  end
-
-  module Serialize = struct
-    let t = Tensor.t
-    let save = foreign "at_save" (t @-> string @-> returning void)
-    let load = foreign "at_load" (string @-> returning t)
-
-    let save_multi =
-      foreign
-        "at_save_multi"
-        (ptr t @-> ptr (ptr char) @-> int @-> string @-> returning void)
-    ;;
-
-    let load_multi =
-      foreign
-        "at_load_multi"
-        (ptr t @-> ptr (ptr char) @-> int @-> string @-> returning void)
-    ;;
-
-    let load_multi_ =
-      foreign
-        "at_load_multi_"
-        (ptr t @-> ptr (ptr char) @-> int @-> string @-> returning void)
-    ;;
-
-    let load_callback =
-      foreign
-        "at_load_callback"
-        (string
-         @-> static_funptr Ctypes.(string @-> t @-> returning void)
-         @-> returning void)
-    ;;
-  end
-
-  module Optimizer = struct
-    type t = unit ptr
-
-    let t : t typ = ptr void
-
-    let adam =
-      foreign "ato_adam" (float @-> float @-> float @-> float @-> float @-> returning t)
-    ;;
-
-    let rmsprop =
-      foreign
-        "ato_rmsprop"
-        (float
-         (* learning rate *)
-         @-> float
-         (* alpha *)
-         @-> float
-         (* eps *)
-         @-> float
-         (* weight decay *)
-         @-> float
-         (* momentum *)
-         @-> int
-         (* centered *)
-         @-> returning t)
-    ;;
-
-    let sgd =
-      foreign
-        "ato_sgd"
-        (float
-         (* learning rate *)
-         @-> float
-         (* momentum *)
-         @-> float
-         (* dampening *)
-         @-> float
-         (* weight decay *)
-         @-> bool
-         (* nesterov *)
-         @-> returning t)
-    ;;
-
-    let add_parameters =
-      foreign "ato_add_parameters" (t @-> ptr Tensor.t @-> int @-> returning void)
-    ;;
-
-    let set_learning_rate =
-      foreign "ato_set_learning_rate" (t @-> float @-> returning void)
-    ;;
-
-    let set_momentum = foreign "ato_set_momentum" (t @-> float @-> returning void)
-    let zero_grad = foreign "ato_zero_grad" (t @-> returning void)
-    let step = foreign "ato_step" (t @-> returning void)
-    let free = foreign "ato_free" (t @-> returning void)
-  end
-
-  module Cuda = struct
-    let device_count = foreign "atc_cuda_device_count" (void @-> returning int)
-    let is_available = foreign "atc_cuda_is_available" (void @-> returning int)
-    let cudnn_is_available = foreign "atc_cudnn_is_available" (void @-> returning int)
-    let set_benchmark_cudnn = foreign "atc_set_benchmark_cudnn" (int @-> returning void)
-  end
-
-  module Ivalue = struct
-    type t = unit ptr
-
-    let t : t typ = ptr void
-    let to_int64 = foreign "ati_to_int" (t @-> returning int64_t)
-    let to_bool = foreign "ati_to_bool" (t @-> returning int)
-    let to_double = foreign "ati_to_double" (t @-> returning double)
-    let to_tensor = foreign "ati_to_tensor" (t @-> returning Tensor.t)
-    let tuple_length = foreign "ati_tuple_length" (t @-> returning int)
-    let list_length = foreign "ati_list_length" (t @-> returning int)
-    let to_tuple = foreign "ati_to_tuple" (t @-> ptr t @-> int @-> returning void)
-
-    let to_tensor_list =
-      foreign "ati_to_tensor_list" (t @-> ptr Tensor.t @-> int @-> returning void)
-    ;;
-
-    let to_generic_list =
-      foreign "ati_to_generic_list" (t @-> ptr t @-> int @-> returning void)
-    ;;
-
-    let to_string = foreign "ati_to_string" (t @-> returning string)
-    let none = foreign "ati_none" (void @-> returning t)
-    let bool = foreign "ati_bool" (int @-> returning t)
-    let tensor = foreign "ati_tensor" (Tensor.t @-> returning t)
-    let int64 = foreign "ati_int" (int64_t @-> returning t)
-    let double = foreign "ati_double" (float @-> returning t)
-    let tuple = foreign "ati_tuple" (ptr t @-> int @-> returning t)
-    let tensor_list = foreign "ati_tensor_list" (ptr Tensor.t @-> int @-> returning t)
-    let string = foreign "ati_string" (string @-> returning t)
-    let tag = foreign "ati_tag" (t @-> returning int)
-    let free = foreign "ati_free" (t @-> returning void)
-  end
-
-  module Module = struct
-    type t = unit ptr
-
-    let t : t typ = ptr void
-    let load = foreign "atm_load" (string @-> int @-> returning t)
-    let load_str = foreign "atm_load_str" (string @-> int @-> int @-> returning t)
-    let forward = foreign "atm_forward" (t @-> ptr Tensor.t @-> int @-> returning Tensor.t)
-
-    let forward_ =
-      foreign "atm_forward_" (t @-> ptr Ivalue.t @-> int @-> returning Ivalue.t)
-    ;;
-
-    let named_buffers = foreign "atm_named_buffers" (t @-> returning Ivalue.t)
-    let free = foreign "atm_free" (t @-> returning void)
-  end
-
-  module TensorG = Torch_bindings_generated.C (F)
-end
diff --git a/src/stubs/torch_bindings_generated.ml b/src/stubs/torch_bindings_generated.ml
deleted file mode 100644
index ecbf933..0000000
--- a/src/stubs/torch_bindings_generated.ml
+++ /dev/null
@@ -1,15190 +0,0 @@
-(* THIS FILE IS AUTOMATICALLY GENERATED, DO NOT EDIT BY HAND!
*) - -open Ctypes - -module C0 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - let stubs___and__ = foreign "atg___and__" (ptr t @-> t @-> scalar @-> returning void) - - let stubs___and__tensor_ = - foreign "atg___and__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs___iand__ = foreign "atg___iand__" (ptr t @-> t @-> scalar @-> returning void) - - let stubs___iand__tensor_ = - foreign "atg___iand__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs___ilshift__ = - foreign "atg___ilshift__" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs___ilshift__tensor_ = - foreign "atg___ilshift__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs___ior__ = foreign "atg___ior__" (ptr t @-> t @-> scalar @-> returning void) - - let stubs___ior__tensor_ = - foreign "atg___ior__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs___irshift__ = - foreign "atg___irshift__" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs___irshift__tensor_ = - foreign "atg___irshift__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs___ixor__ = foreign "atg___ixor__" (ptr t @-> t @-> scalar @-> returning void) - - let stubs___ixor__tensor_ = - foreign "atg___ixor__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs___lshift__ = - foreign "atg___lshift__" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs___lshift__scalar_out_ = - foreign "atg___lshift__scalar_out_" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs___lshift__tensor_ = - foreign "atg___lshift__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs___lshift__tensor_out_ = - foreign "atg___lshift__tensor_out_" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs___or__ = foreign "atg___or__" (ptr t @-> t @-> scalar @-> returning void) - - let stubs___or__tensor_ = - foreign "atg___or__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs___rshift__ = - foreign "atg___rshift__" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs___rshift__scalar_out_ = - foreign "atg___rshift__scalar_out_" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs___rshift__tensor_ = - foreign "atg___rshift__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs___rshift__tensor_out_ = - foreign "atg___rshift__tensor_out_" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs___xor__ = foreign "atg___xor__" (ptr t @-> t @-> scalar @-> returning void) - - let stubs___xor__tensor_ = - foreign "atg___xor__tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__adaptive_avg_pool2d = - foreign - "atg__adaptive_avg_pool2d" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__adaptive_avg_pool2d_backward = - foreign "atg__adaptive_avg_pool2d_backward" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__adaptive_avg_pool2d_backward_out = - foreign - "atg__adaptive_avg_pool2d_backward_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__adaptive_avg_pool2d_out = - foreign - "atg__adaptive_avg_pool2d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__adaptive_avg_pool3d = - foreign - "atg__adaptive_avg_pool3d" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__adaptive_avg_pool3d_backward = - foreign 
"atg__adaptive_avg_pool3d_backward" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__adaptive_avg_pool3d_backward_out = - foreign - "atg__adaptive_avg_pool3d_backward_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__adaptive_avg_pool3d_out = - foreign - "atg__adaptive_avg_pool3d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__add_batch_dim = - foreign "atg__add_batch_dim" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__add_relu = foreign "atg__add_relu" (ptr t @-> t @-> t @-> returning void) - let stubs__add_relu_ = foreign "atg__add_relu_" (ptr t @-> t @-> t @-> returning void) - - let stubs__add_relu_out = - foreign "atg__add_relu_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__add_relu_scalar = - foreign "atg__add_relu_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs__add_relu_scalar_ = - foreign "atg__add_relu_scalar_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs__add_relu_scalar_out = - foreign "atg__add_relu_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs__addmm_activation = - foreign "atg__addmm_activation" (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__addmm_activation_out = - foreign - "atg__addmm_activation_out" - (ptr t @-> t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__aminmax = foreign "atg__aminmax" (ptr t @-> t @-> returning void) - - let stubs__aminmax_dim = - foreign "atg__aminmax_dim" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__aminmax_dim_out = - foreign - "atg__aminmax_dim_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__aminmax_out = - foreign "atg__aminmax_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__amp_update_scale = - foreign - "atg__amp_update_scale" - (ptr t @-> t @-> t @-> t @-> double @-> double @-> int64_t @-> returning void) - ;; - - let stubs__amp_update_scale_ = - foreign - "atg__amp_update_scale_" - (ptr t @-> t @-> t @-> t @-> double @-> double @-> int64_t @-> returning void) - ;; - - let stubs__amp_update_scale_out = - foreign - "atg__amp_update_scale_out" - (ptr t @-> t @-> t @-> t @-> t @-> double @-> double @-> int64_t @-> returning void) - ;; - - let stubs__assert_tensor_metadata = - foreign - "atg__assert_tensor_metadata" - (t @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs__autocast_to_full_precision = - foreign - "atg__autocast_to_full_precision" - (ptr t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__autocast_to_reduced_precision = - foreign - "atg__autocast_to_reduced_precision" - (ptr t @-> t @-> int @-> int @-> int @-> int @-> returning void) - ;; - - let stubs__cast_byte = foreign "atg__cast_byte" (ptr t @-> t @-> int @-> returning void) - let stubs__cast_char = foreign "atg__cast_char" (ptr t @-> t @-> int @-> returning void) - - let stubs__cast_double = - foreign "atg__cast_double" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs__cast_float = - foreign "atg__cast_float" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs__cast_half = foreign "atg__cast_half" (ptr t @-> t @-> int @-> returning void) - let stubs__cast_int = foreign "atg__cast_int" (ptr t @-> t @-> int @-> returning void) - let stubs__cast_long = foreign "atg__cast_long" (ptr t @-> t @-> int @-> returning void) - - let stubs__cast_short = - foreign 
"atg__cast_short" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs__cdist_backward = - foreign - "atg__cdist_backward" - (ptr t @-> t @-> t @-> t @-> double @-> t @-> returning void) - ;; - - let stubs__cdist_backward_out = - foreign - "atg__cdist_backward_out" - (ptr t @-> t @-> t @-> t @-> t @-> double @-> t @-> returning void) - ;; - - let stubs__cholesky_solve_helper = - foreign "atg__cholesky_solve_helper" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__cholesky_solve_helper_out = - foreign - "atg__cholesky_solve_helper_out" - (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__coalesce = foreign "atg__coalesce" (ptr t @-> t @-> returning void) - - let stubs__coalesce_out = - foreign "atg__coalesce_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__coalesced = foreign "atg__coalesced" (ptr t @-> t @-> int @-> returning void) - - let stubs__coalesced_ = - foreign "atg__coalesced_" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs__coalesced_out = - foreign "atg__coalesced_out" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__compute_linear_combination = - foreign "atg__compute_linear_combination" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__compute_linear_combination_out = - foreign - "atg__compute_linear_combination_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__conj = foreign "atg__conj" (ptr t @-> t @-> returning void) - let stubs__conj_copy = foreign "atg__conj_copy" (ptr t @-> t @-> returning void) - - let stubs__conj_copy_out = - foreign "atg__conj_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__conj_physical = foreign "atg__conj_physical" (ptr t @-> t @-> returning void) - - let stubs__conj_physical_out = - foreign "atg__conj_physical_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__conv_depthwise2d = - foreign - "atg__conv_depthwise2d" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs__conv_depthwise2d_out = - foreign - "atg__conv_depthwise2d_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs__convert_indices_from_coo_to_csr = - foreign - "atg__convert_indices_from_coo_to_csr" - (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__convert_indices_from_coo_to_csr_out = - foreign - "atg__convert_indices_from_coo_to_csr_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__convert_indices_from_csr_to_coo = - foreign - "atg__convert_indices_from_csr_to_coo" - (ptr t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__convert_indices_from_csr_to_coo_out = - foreign - "atg__convert_indices_from_csr_to_coo_out" - (ptr t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__convolution = - foreign - "atg__convolution" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs__convolution_deprecated = - foreign - "atg__convolution_deprecated" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - 
@-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs__convolution_mode = - foreign - "atg__convolution_mode" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> string - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs__convolution_out = - foreign - "atg__convolution_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs__copy_from = - foreign "atg__copy_from" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__copy_from_and_resize = - foreign "atg__copy_from_and_resize" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__copy_from_and_resize_out = - foreign "atg__copy_from_and_resize_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__copy_from_out = - foreign "atg__copy_from_out" (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__cslt_compress = foreign "atg__cslt_compress" (ptr t @-> t @-> returning void) - - let stubs__cslt_sparse_mm = - foreign "atg__cslt_sparse_mm" (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__ctc_loss = - foreign - "atg__ctc_loss" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs__ctc_loss_backward = - foreign - "atg__ctc_loss_backward" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> t - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs__ctc_loss_backward_out = - foreign - "atg__ctc_loss_backward_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> t - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs__ctc_loss_backward_tensor = - foreign - "atg__ctc_loss_backward_tensor" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs__ctc_loss_out = - foreign - "atg__ctc_loss_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs__ctc_loss_tensor = - foreign - "atg__ctc_loss_tensor" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__ctc_loss_tensor_out = - foreign - "atg__ctc_loss_tensor_out" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__cudnn_ctc_loss = - foreign - "atg__cudnn_ctc_loss" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs__cudnn_ctc_loss_out = - foreign - "atg__cudnn_ctc_loss_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> int - @-> returning void) - ;; -end - -module C1 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs__cudnn_ctc_loss_tensor = - foreign - "atg__cudnn_ctc_loss_tensor" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs__cudnn_init_dropout_state = - foreign - 
"atg__cudnn_init_dropout_state" - (ptr t @-> double @-> int @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs__cudnn_init_dropout_state_out = - foreign - "atg__cudnn_init_dropout_state_out" - (ptr t @-> t @-> double @-> int @-> int64_t @-> returning void) - ;; - - let stubs__cudnn_rnn = - foreign - "atg__cudnn_rnn" - (ptr t - @-> t - @-> ptr t - @-> int - @-> int64_t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int - @-> double - @-> int - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs__cudnn_rnn_flatten_weight = - foreign - "atg__cudnn_rnn_flatten_weight" - (ptr t - @-> ptr t - @-> int - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs__cudnn_rnn_flatten_weight_out = - foreign - "atg__cudnn_rnn_flatten_weight_out" - (ptr t - @-> t - @-> ptr t - @-> int - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs__cudnn_rnn_out = - foreign - "atg__cudnn_rnn_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> ptr t - @-> int - @-> int64_t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int - @-> double - @-> int - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs__debug_has_internal_overlap = - foreign "atg__debug_has_internal_overlap" (t @-> returning int64_t) - ;; - - let stubs__dim_arange = - foreign "atg__dim_arange" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__dimi = foreign "atg__dimi" (t @-> returning int64_t) - let stubs__dimv = foreign "atg__dimv" (t @-> returning int64_t) - - let stubs__dirichlet_grad = - foreign "atg__dirichlet_grad" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__dirichlet_grad_out = - foreign "atg__dirichlet_grad_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__efficient_attention_backward = - foreign - "atg__efficient_attention_backward" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> t - @-> double - @-> t - @-> t - @-> int64_t - @-> int - @-> double - @-> int - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs__efficientzerotensor = - foreign - "atg__efficientzerotensor" - (ptr t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs__efficientzerotensor_out = - foreign - "atg__efficientzerotensor_out" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__embedding_bag = - foreign - "atg__embedding_bag" - (ptr t - @-> t - @-> t - @-> t - @-> int - @-> int64_t - @-> int - @-> t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs__embedding_bag_backward = - foreign - "atg__embedding_bag_backward" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int - @-> int64_t - @-> int - @-> t - @-> int64_t - @-> returning void) - ;; - - let stubs__embedding_bag_dense_backward = - foreign - "atg__embedding_bag_dense_backward" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int - @-> int64_t - @-> t - @-> int64_t - @-> returning void) - ;; - - let stubs__embedding_bag_dense_backward_out = - foreign - "atg__embedding_bag_dense_backward_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int - @-> int64_t - @-> t - @-> int64_t - @-> returning void) 
- ;; - - let stubs__embedding_bag_forward_only = - foreign - "atg__embedding_bag_forward_only" - (ptr t - @-> t - @-> t - @-> t - @-> int - @-> int64_t - @-> int - @-> t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs__embedding_bag_forward_only_out = - foreign - "atg__embedding_bag_forward_only_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> int64_t - @-> int - @-> t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs__embedding_bag_out = - foreign - "atg__embedding_bag_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> int64_t - @-> int - @-> t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs__embedding_bag_per_sample_weights_backward = - foreign - "atg__embedding_bag_per_sample_weights_backward" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__embedding_bag_per_sample_weights_backward_out = - foreign - "atg__embedding_bag_per_sample_weights_backward_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs__embedding_bag_sparse_backward = - foreign - "atg__embedding_bag_sparse_backward" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int - @-> int64_t - @-> t - @-> int64_t - @-> returning void) - ;; - - let stubs__empty_affine_quantized = - foreign - "atg__empty_affine_quantized" - (ptr t - @-> ptr int64_t - @-> int - @-> int - @-> int - @-> double - @-> int64_t - @-> returning void) - ;; - - let stubs__empty_affine_quantized_out = - foreign - "atg__empty_affine_quantized_out" - (ptr t @-> t @-> ptr int64_t @-> int @-> double @-> int64_t @-> returning void) - ;; - - let stubs__empty_per_channel_affine_quantized = - foreign - "atg__empty_per_channel_affine_quantized" - (ptr t - @-> ptr int64_t - @-> int - @-> t - @-> t - @-> int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs__empty_per_channel_affine_quantized_out = - foreign - "atg__empty_per_channel_affine_quantized_out" - (ptr t @-> t @-> ptr int64_t @-> int @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__euclidean_dist = - foreign "atg__euclidean_dist" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__euclidean_dist_out = - foreign "atg__euclidean_dist_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__fake_quantize_learnable_per_channel_affine = - foreign - "atg__fake_quantize_learnable_per_channel_affine" - (ptr t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int64_t - @-> double - @-> returning void) - ;; - - let stubs__fake_quantize_learnable_per_channel_affine_backward = - foreign - "atg__fake_quantize_learnable_per_channel_affine_backward" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int64_t - @-> double - @-> returning void) - ;; - - let stubs__fake_quantize_learnable_per_channel_affine_out = - foreign - "atg__fake_quantize_learnable_per_channel_affine_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int64_t - @-> double - @-> returning void) - ;; - - let stubs__fake_quantize_learnable_per_tensor_affine = - foreign - "atg__fake_quantize_learnable_per_tensor_affine" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> double @-> returning void) - ;; - - let stubs__fake_quantize_learnable_per_tensor_affine_backward = - foreign - "atg__fake_quantize_learnable_per_tensor_affine_backward" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t 
@-> int64_t @-> double @-> returning void) - ;; - - let stubs__fake_quantize_learnable_per_tensor_affine_out = - foreign - "atg__fake_quantize_learnable_per_tensor_affine_out" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> double @-> returning void) - ;; - - let stubs__fake_quantize_per_tensor_affine_cachemask_tensor_qparams = - foreign - "atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out = - foreign - "atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs__fft_c2c = - foreign - "atg__fft_c2c" - (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs__fft_c2c_out = - foreign - "atg__fft_c2c_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs__fft_c2r = - foreign - "atg__fft_c2r" - (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__fft_c2r_out = - foreign - "atg__fft_c2r_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs__fft_r2c = - foreign - "atg__fft_r2c" - (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs__fft_r2c_out = - foreign - "atg__fft_r2c_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs__fill_mem_eff_dropout_mask_ = - foreign - "atg__fill_mem_eff_dropout_mask_" - (ptr t @-> t @-> double @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__flash_attention_backward = - foreign - "atg__flash_attention_backward" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> double - @-> int - @-> t - @-> t - @-> double - @-> int - @-> returning void) - ;; - - let stubs__foobar = - foreign "atg__foobar" (ptr t @-> t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs__foobar_out = - foreign - "atg__foobar_out" - (ptr t @-> t @-> t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs__functional_assert_async = - foreign - "atg__functional_assert_async" - (ptr t @-> t @-> string @-> t @-> returning void) - ;; - - let stubs__functional_sym_constrain_range = - foreign - "atg__functional_sym_constrain_range" - (ptr t @-> scalar @-> int64_t @-> int @-> int64_t @-> int @-> t @-> returning void) - ;; - - let stubs__functional_sym_constrain_range_for_size = - foreign - "atg__functional_sym_constrain_range_for_size" - (ptr t @-> scalar @-> int64_t @-> int @-> int64_t @-> int @-> t @-> returning void) - ;; - - let stubs__fused_adam = - foreign - "atg__fused_adam" - (ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> double - @-> double - @-> double - @-> double - @-> double - @-> int - @-> int - @-> t - @-> t - @-> returning void) - ;; - - let stubs__fused_adam_ = - foreign - "atg__fused_adam_" - (ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> double - @-> double - @-> double - @-> double - @-> double - @-> int - @-> int - @-> t - @-> t - @-> returning void) - ;; - - let 
stubs__fused_adam_tensor_lr_ = - foreign - "atg__fused_adam_tensor_lr_" - (ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> t - @-> double - @-> double - @-> double - @-> double - @-> int - @-> int - @-> t - @-> t - @-> returning void) - ;; - - let stubs__fused_adam_tensor_lr_out = - foreign - "atg__fused_adam_tensor_lr_out" - (ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> t - @-> double - @-> double - @-> double - @-> double - @-> int - @-> int - @-> t - @-> t - @-> returning void) - ;; - - let stubs__fused_adamw = - foreign - "atg__fused_adamw" - (ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> double - @-> double - @-> double - @-> double - @-> double - @-> int - @-> int - @-> t - @-> t - @-> returning void) - ;; - - let stubs__fused_adamw_ = - foreign - "atg__fused_adamw_" - (ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> double - @-> double - @-> double - @-> double - @-> double - @-> int - @-> int - @-> t - @-> t - @-> returning void) - ;; - - let stubs__fused_adamw_tensor_lr_ = - foreign - "atg__fused_adamw_tensor_lr_" - (ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> t - @-> double - @-> double - @-> double - @-> double - @-> int - @-> int - @-> t - @-> t - @-> returning void) - ;; - - let stubs__fused_adamw_tensor_lr_out = - foreign - "atg__fused_adamw_tensor_lr_out" - (ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> t - @-> double - @-> double - @-> double - @-> double - @-> int - @-> int - @-> t - @-> t - @-> returning void) - ;; - - let stubs__fused_dropout = - foreign "atg__fused_dropout" (ptr t @-> t @-> double @-> returning void) - ;; - - let stubs__fused_dropout_out = - foreign - "atg__fused_dropout_out" - (ptr t @-> t @-> t @-> t @-> double @-> returning void) - ;; - - let stubs__fused_moving_avg_obs_fq_helper = - foreign - "atg__fused_moving_avg_obs_fq_helper" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> int64_t - @-> int64_t - @-> int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs__fused_moving_avg_obs_fq_helper_functional = - foreign - "atg__fused_moving_avg_obs_fq_helper_functional" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> int64_t - @-> int64_t - @-> int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs__fused_moving_avg_obs_fq_helper_out = - foreign - "atg__fused_moving_avg_obs_fq_helper_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> int64_t - @-> int64_t - @-> int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs__fused_sdp_choice = - foreign - "atg__fused_sdp_choice" - (t @-> t @-> t @-> t @-> double @-> int @-> double @-> int @-> returning int64_t) - ;; - - let stubs__fw_primal = - foreign "atg__fw_primal" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__fw_primal_copy = - foreign "atg__fw_primal_copy" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__fw_primal_copy_out = - foreign 
"atg__fw_primal_copy_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__gather_sparse_backward = - foreign - "atg__gather_sparse_backward" - (ptr t @-> t @-> int64_t @-> t @-> t @-> returning void) - ;; - - let stubs__grid_sampler_2d_cpu_fallback = - foreign - "atg__grid_sampler_2d_cpu_fallback" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__grid_sampler_2d_cpu_fallback_backward = - foreign - "atg__grid_sampler_2d_cpu_fallback_backward" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__grid_sampler_2d_cpu_fallback_out = - foreign - "atg__grid_sampler_2d_cpu_fallback_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__has_compatible_shallow_copy_type = - foreign "atg__has_compatible_shallow_copy_type" (t @-> t @-> returning bool) - ;; - - let stubs__has_same_storage_numel = - foreign "atg__has_same_storage_numel" (t @-> t @-> returning bool) - ;; - - let stubs__histogramdd_bin_edges = - foreign - "atg__histogramdd_bin_edges" - (t - @-> ptr int64_t - @-> int - @-> ptr double - @-> int - @-> t - @-> int - @-> returning (ptr t)) - ;; - - let stubs__histogramdd_bin_edges_out = - foreign - "atg__histogramdd_bin_edges_out" - (ptr t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr double - @-> int - @-> t - @-> int - @-> returning void) - ;; - - let stubs__histogramdd_from_bin_cts = - foreign - "atg__histogramdd_from_bin_cts" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr double - @-> int - @-> t - @-> int - @-> returning void) - ;; - - let stubs__histogramdd_from_bin_cts_out = - foreign - "atg__histogramdd_from_bin_cts_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr double - @-> int - @-> t - @-> int - @-> returning void) - ;; - - let stubs__histogramdd_from_bin_tensors = - foreign - "atg__histogramdd_from_bin_tensors" - (ptr t @-> t @-> ptr t @-> int @-> t @-> int @-> returning void) - ;; - - let stubs__histogramdd_from_bin_tensors_out = - foreign - "atg__histogramdd_from_bin_tensors_out" - (ptr t @-> t @-> t @-> ptr t @-> int @-> t @-> int @-> returning void) - ;; - - let stubs__index_put_impl = - foreign - "atg__index_put_impl" - (ptr t @-> t @-> ptr t @-> int @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__index_put_impl_ = - foreign - "atg__index_put_impl_" - (ptr t @-> t @-> ptr t @-> int @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__index_put_impl_out = - foreign - "atg__index_put_impl_out" - (ptr t @-> t @-> t @-> ptr t @-> int @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__indices = foreign "atg__indices" (ptr t @-> t @-> returning void) - let stubs__indices_copy = foreign "atg__indices_copy" (ptr t @-> t @-> returning void) - - let stubs__indices_copy_out = - foreign "atg__indices_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__int_mm = foreign "atg__int_mm" (ptr t @-> t @-> t @-> returning void) - - let stubs__int_mm_out = - foreign "atg__int_mm_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__is_all_true = foreign "atg__is_all_true" (ptr t @-> t @-> returning void) - let stubs__is_any_true = foreign "atg__is_any_true" (ptr t @-> t @-> returning void) - let stubs__is_zerotensor = foreign "atg__is_zerotensor" (t @-> returning bool) - - let stubs__linalg_check_errors = - foreign "atg__linalg_check_errors" (t @-> string @-> int @-> returning void) - ;; - - let stubs__linalg_det = foreign 
"atg__linalg_det" (ptr t @-> t @-> returning void) - - let stubs__linalg_det_result = - foreign "atg__linalg_det_result" (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__linalg_eigh = - foreign "atg__linalg_eigh" (ptr t @-> t @-> string @-> int @-> returning void) - ;; - - let stubs__linalg_eigh_eigenvalues = - foreign - "atg__linalg_eigh_eigenvalues" - (ptr t @-> t @-> t @-> t @-> string @-> int @-> returning void) - ;; - - let stubs__linalg_slogdet = - foreign "atg__linalg_slogdet" (ptr t @-> t @-> returning void) - ;; - - let stubs__linalg_slogdet_sign = - foreign - "atg__linalg_slogdet_sign" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; -end - -module C2 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs__linalg_solve_ex = - foreign "atg__linalg_solve_ex" (ptr t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__linalg_solve_ex_result = - foreign - "atg__linalg_solve_ex_result" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__linalg_svd = - foreign "atg__linalg_svd" (ptr t @-> t @-> int @-> int @-> string @-> returning void) - ;; - - let stubs__linalg_svd_u = - foreign - "atg__linalg_svd_u" - (ptr t @-> t @-> t @-> t @-> t @-> int @-> int @-> string @-> returning void) - ;; - - let stubs__log_softmax = - foreign "atg__log_softmax" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__log_softmax_backward_data = - foreign - "atg__log_softmax_backward_data" - (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__log_softmax_backward_data_out = - foreign - "atg__log_softmax_backward_data_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__log_softmax_out = - foreign - "atg__log_softmax_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__logcumsumexp = - foreign "atg__logcumsumexp" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__logcumsumexp_out = - foreign "atg__logcumsumexp_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__lstm_mps = - foreign - "atg__lstm_mps" - (ptr t - @-> t - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> int - @-> int64_t - @-> double - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs__lstm_mps_out = - foreign - "atg__lstm_mps_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> ptr t - @-> int - @-> ptr t - @-> int - @-> int - @-> int64_t - @-> double - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs__lu_with_info = - foreign "atg__lu_with_info" (ptr t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__make_dep_token = - foreign "atg__make_dep_token" (ptr t @-> int @-> int @-> returning void) - ;; - - let stubs__make_dual = - foreign "atg__make_dual" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__make_dual_copy = - foreign "atg__make_dual_copy" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__make_dual_copy_out = - foreign - "atg__make_dual_copy_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__make_per_channel_quantized_tensor = - foreign - "atg__make_per_channel_quantized_tensor" - (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__make_per_channel_quantized_tensor_out = - foreign - 
"atg__make_per_channel_quantized_tensor_out" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__make_per_tensor_quantized_tensor = - foreign - "atg__make_per_tensor_quantized_tensor" - (ptr t @-> t @-> double @-> int64_t @-> returning void) - ;; - - let stubs__make_per_tensor_quantized_tensor_out = - foreign - "atg__make_per_tensor_quantized_tensor_out" - (ptr t @-> t @-> t @-> double @-> int64_t @-> returning void) - ;; - - let stubs__masked_scale = - foreign "atg__masked_scale" (ptr t @-> t @-> t @-> double @-> returning void) - ;; - - let stubs__masked_scale_out = - foreign "atg__masked_scale_out" (ptr t @-> t @-> t @-> t @-> double @-> returning void) - ;; - - let stubs__masked_softmax = - foreign - "atg__masked_softmax" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs__masked_softmax_backward = - foreign - "atg__masked_softmax_backward" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__masked_softmax_backward_out = - foreign - "atg__masked_softmax_backward_out" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__masked_softmax_out = - foreign - "atg__masked_softmax_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs__mkldnn_reshape = - foreign "atg__mkldnn_reshape" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__mkldnn_reshape_out = - foreign - "atg__mkldnn_reshape_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__mkldnn_transpose = - foreign - "atg__mkldnn_transpose" - (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__mkldnn_transpose_ = - foreign - "atg__mkldnn_transpose_" - (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__mkldnn_transpose_out = - foreign - "atg__mkldnn_transpose_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__mps_convolution = - foreign - "atg__mps_convolution" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs__mps_convolution_out = - foreign - "atg__mps_convolution_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs__mps_convolution_transpose = - foreign - "atg__mps_convolution_transpose" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs__mps_convolution_transpose_out = - foreign - "atg__mps_convolution_transpose_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs__native_batch_norm_legit = - foreign - "atg__native_batch_norm_legit" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> double - @-> double - @-> returning void) - ;; - - let stubs__native_batch_norm_legit_functional = - foreign - "atg__native_batch_norm_legit_functional" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> double - @-> double - @-> returning void) - ;; - - let stubs__native_batch_norm_legit_no_stats = - foreign - 
"atg__native_batch_norm_legit_no_stats" - (ptr t @-> t @-> t @-> t @-> int @-> double @-> double @-> returning void) - ;; - - let stubs__native_batch_norm_legit_no_stats_out = - foreign - "atg__native_batch_norm_legit_no_stats_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> double - @-> double - @-> returning void) - ;; - - let stubs__native_batch_norm_legit_no_training = - foreign - "atg__native_batch_norm_legit_no_training" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs__native_batch_norm_legit_no_training_out = - foreign - "atg__native_batch_norm_legit_no_training_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> double - @-> returning void) - ;; - - let stubs__native_batch_norm_legit_out = - foreign - "atg__native_batch_norm_legit_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> double - @-> double - @-> returning void) - ;; - - let stubs__native_multi_head_attention = - foreign - "atg__native_multi_head_attention" - (ptr t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> int - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs__native_multi_head_attention_out = - foreign - "atg__native_multi_head_attention_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> int - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs__neg_view = foreign "atg__neg_view" (ptr t @-> t @-> returning void) - let stubs__neg_view_copy = foreign "atg__neg_view_copy" (ptr t @-> t @-> returning void) - - let stubs__neg_view_copy_out = - foreign "atg__neg_view_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__nested_from_padded = - foreign "atg__nested_from_padded" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__nested_from_padded_and_nested_example = - foreign - "atg__nested_from_padded_and_nested_example" - (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__nested_from_padded_and_nested_example_out = - foreign - "atg__nested_from_padded_and_nested_example_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__nested_from_padded_out = - foreign - "atg__nested_from_padded_out" - (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__nested_select_backward = - foreign - "atg__nested_select_backward" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__nested_sum_backward = - foreign - "atg__nested_sum_backward" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs__nested_view_from_buffer = - foreign - "atg__nested_view_from_buffer" - (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__nested_view_from_buffer_copy = - foreign - "atg__nested_view_from_buffer_copy" - (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__nested_view_from_buffer_copy_out = - foreign - "atg__nested_view_from_buffer_copy_out" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__new_zeros_with_same_feature_meta = - foreign - "atg__new_zeros_with_same_feature_meta" - (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__new_zeros_with_same_feature_meta_out = - foreign - "atg__new_zeros_with_same_feature_meta_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - 
- let stubs__nnpack_available = foreign "atg__nnpack_available" (void @-> returning bool) - - let stubs__nnpack_spatial_convolution = - foreign - "atg__nnpack_spatial_convolution" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs__nnpack_spatial_convolution_out = - foreign - "atg__nnpack_spatial_convolution_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs__nnz = foreign "atg__nnz" (t @-> returning int64_t) - - let stubs__pack_padded_sequence = - foreign "atg__pack_padded_sequence" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__pack_padded_sequence_backward = - foreign - "atg__pack_padded_sequence_backward" - (ptr t @-> t @-> ptr int64_t @-> int @-> t @-> int @-> returning void) - ;; - - let stubs__pack_padded_sequence_out = - foreign - "atg__pack_padded_sequence_out" - (ptr t @-> t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__pad_circular = - foreign "atg__pad_circular" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__pad_enum = - foreign - "atg__pad_enum" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> double - @-> int - @-> returning void) - ;; - - let stubs__pad_packed_sequence = - foreign - "atg__pad_packed_sequence" - (ptr t @-> t @-> t @-> int @-> scalar @-> int64_t @-> returning void) - ;; - - let stubs__pdist_backward = - foreign "atg__pdist_backward" (ptr t @-> t @-> t @-> double @-> t @-> returning void) - ;; - - let stubs__pdist_backward_out = - foreign - "atg__pdist_backward_out" - (ptr t @-> t @-> t @-> t @-> double @-> t @-> returning void) - ;; - - let stubs__pin_memory = - foreign "atg__pin_memory" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs__pin_memory_out = - foreign "atg__pin_memory_out" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__prelu_kernel = - foreign "atg__prelu_kernel" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__prelu_kernel_backward = - foreign "atg__prelu_kernel_backward" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__propagate_xla_data = - foreign "atg__propagate_xla_data" (t @-> t @-> returning void) - ;; - - let stubs__remove_batch_dim = - foreign - "atg__remove_batch_dim" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__reshape_alias = - foreign - "atg__reshape_alias" - (ptr t @-> t @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__reshape_alias_copy = - foreign - "atg__reshape_alias_copy" - (ptr t @-> t @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__reshape_alias_copy_out = - foreign - "atg__reshape_alias_copy_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs__reshape_copy = - foreign "atg__reshape_copy" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__reshape_from_tensor = - foreign "atg__reshape_from_tensor" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__resize_output = - foreign - "atg__resize_output" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs__resize_output_ = - foreign - "atg__resize_output_" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs__resize_output_out = - foreign - "atg__resize_output_out" - (ptr t @-> t @-> 
t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs__rowwise_prune = - foreign "atg__rowwise_prune" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__sample_dirichlet = - foreign "atg__sample_dirichlet" (ptr t @-> t @-> returning void) - ;; - - let stubs__sample_dirichlet_out = - foreign "atg__sample_dirichlet_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__saturate_weight_to_fp16 = - foreign "atg__saturate_weight_to_fp16" (ptr t @-> t @-> returning void) - ;; - - let stubs__scaled_dot_product_attention_math = - foreign - "atg__scaled_dot_product_attention_math" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> int - @-> t - @-> double - @-> int - @-> returning void) - ;; - - let stubs__scaled_dot_product_efficient_attention = - foreign - "atg__scaled_dot_product_efficient_attention" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs__scaled_dot_product_flash_attention_backward = - foreign - "atg__scaled_dot_product_flash_attention_backward" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> double - @-> int - @-> t - @-> t - @-> double - @-> int - @-> returning void) - ;; - - let stubs__scaled_mm = - foreign - "atg__scaled_mm" - (ptr t @-> t @-> t @-> t @-> int @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__scaled_mm_out = - foreign - "atg__scaled_mm_out" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> int @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__scatter_reduce = - foreign - "atg__scatter_reduce" - (ptr t @-> t @-> int64_t @-> t @-> t @-> string @-> int @-> returning void) - ;; - - let stubs__scatter_reduce_ = - foreign - "atg__scatter_reduce_" - (ptr t @-> t @-> int64_t @-> t @-> t @-> string @-> int @-> returning void) - ;; - - let stubs__scatter_reduce_two_out = - foreign - "atg__scatter_reduce_two_out" - (ptr t @-> t @-> t @-> int64_t @-> t @-> t @-> string @-> int @-> returning void) - ;; - - let stubs__segment_reduce_backward = - foreign - "atg__segment_reduce_backward" - (ptr t - @-> t - @-> t - @-> t - @-> string - @-> t - @-> t - @-> int64_t - @-> scalar - @-> returning void) - ;; - - let stubs__segment_reduce_backward_out = - foreign - "atg__segment_reduce_backward_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> string - @-> t - @-> t - @-> int64_t - @-> scalar - @-> returning void) - ;; - - let stubs__shape_as_tensor = - foreign "atg__shape_as_tensor" (ptr t @-> t @-> returning void) - ;; -end - -module C3 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs__slow_conv2d_backward = - foreign - "atg__slow_conv2d_backward" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs__sobol_engine_draw = - foreign - "atg__sobol_engine_draw" - (ptr t @-> t @-> int64_t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs__sobol_engine_ff_ = - foreign - "atg__sobol_engine_ff_" - (ptr t @-> t @-> int64_t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs__sobol_engine_initialize_state_ = - foreign - "atg__sobol_engine_initialize_state_" - (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs__sobol_engine_scramble_ = - foreign - "atg__sobol_engine_scramble_" - (ptr t @-> 
-  ;;
-
-  let stubs__softmax =
-    foreign "atg__softmax" (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__softmax_backward_data =
-    foreign
-      "atg__softmax_backward_data"
-      (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__softmax_backward_data_out =
-    foreign
-      "atg__softmax_backward_data_out"
-      (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__softmax_out =
-    foreign "atg__softmax_out" (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_addmm =
-    foreign "atg__sparse_addmm" (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__sparse_addmm_out =
-    foreign "atg__sparse_addmm_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__sparse_broadcast_to =
-    foreign
-      "atg__sparse_broadcast_to"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_broadcast_to_copy =
-    foreign
-      "atg__sparse_broadcast_to_copy"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_broadcast_to_copy_out =
-    foreign
-      "atg__sparse_broadcast_to_copy_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_bsc_tensor_unsafe =
-    foreign
-      "atg__sparse_bsc_tensor_unsafe"
-      (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_bsr_tensor_unsafe =
-    foreign
-      "atg__sparse_bsr_tensor_unsafe"
-      (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_compressed_tensor_unsafe =
-    foreign
-      "atg__sparse_compressed_tensor_unsafe"
-      (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_coo_tensor_unsafe =
-    foreign
-      "atg__sparse_coo_tensor_unsafe"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__sparse_coo_tensor_with_dims =
-    foreign
-      "atg__sparse_coo_tensor_with_dims"
-      (ptr t
-       @-> int64_t
-       @-> int64_t
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__sparse_coo_tensor_with_dims_and_tensors =
-    foreign
-      "atg__sparse_coo_tensor_with_dims_and_tensors"
-      (ptr t
-       @-> int64_t
-       @-> int64_t
-       @-> ptr int64_t
-       @-> int
-       @-> t
-       @-> t
-       @-> int
-       @-> int
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__sparse_coo_tensor_with_dims_and_tensors_out =
-    foreign
-      "atg__sparse_coo_tensor_with_dims_and_tensors_out"
-      (ptr t
-       @-> t
-       @-> int64_t
-       @-> int64_t
-       @-> ptr int64_t
-       @-> int
-       @-> t
-       @-> t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__sparse_coo_tensor_with_dims_out =
-    foreign
-      "atg__sparse_coo_tensor_with_dims_out"
-      (ptr t @-> t @-> int64_t @-> int64_t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_csc_tensor_unsafe =
-    foreign
-      "atg__sparse_csc_tensor_unsafe"
-      (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_csr_prod =
-    foreign
-      "atg__sparse_csr_prod"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_csr_prod_dim_dtype_out =
-    foreign
-      "atg__sparse_csr_prod_dim_dtype_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_csr_sum =
-    foreign
-      "atg__sparse_csr_sum"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_csr_sum_dim_dtype_out =
-    foreign
-      "atg__sparse_csr_sum_dim_dtype_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_csr_tensor_unsafe =
-    foreign
-      "atg__sparse_csr_tensor_unsafe"
-      (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_log_softmax =
-    foreign "atg__sparse_log_softmax" (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_log_softmax_backward_data =
-    foreign
-      "atg__sparse_log_softmax_backward_data"
-      (ptr t @-> t @-> t @-> int64_t @-> t @-> returning void)
-  ;;
-
-  let stubs__sparse_log_softmax_backward_data_out =
-    foreign
-      "atg__sparse_log_softmax_backward_data_out"
-      (ptr t @-> t @-> t @-> t @-> int64_t @-> t @-> returning void)
-  ;;
-
-  let stubs__sparse_log_softmax_int =
-    foreign
-      "atg__sparse_log_softmax_int"
-      (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_log_softmax_out =
-    foreign
-      "atg__sparse_log_softmax_out"
-      (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_mask_projection =
-    foreign "atg__sparse_mask_projection" (ptr t @-> t @-> t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_mask_projection_out =
-    foreign
-      "atg__sparse_mask_projection_out"
-      (ptr t @-> t @-> t @-> t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_mm = foreign "atg__sparse_mm" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs__sparse_mm_reduce =
-    foreign "atg__sparse_mm_reduce" (ptr t @-> t @-> t @-> string @-> returning void)
-  ;;
-
-  let stubs__sparse_mm_reduce_impl =
-    foreign "atg__sparse_mm_reduce_impl" (ptr t @-> t @-> t @-> string @-> returning void)
-  ;;
-
-  let stubs__sparse_semi_structured_linear =
-    foreign
-      "atg__sparse_semi_structured_linear"
-      (ptr t @-> t @-> t @-> t @-> t @-> string @-> returning void)
-  ;;
-
-  let stubs__sparse_softmax =
-    foreign "atg__sparse_softmax" (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_softmax_backward_data =
-    foreign
-      "atg__sparse_softmax_backward_data"
-      (ptr t @-> t @-> t @-> int64_t @-> t @-> returning void)
-  ;;
-
-  let stubs__sparse_softmax_backward_data_out =
-    foreign
-      "atg__sparse_softmax_backward_data_out"
-      (ptr t @-> t @-> t @-> t @-> int64_t @-> t @-> returning void)
-  ;;
-
-  let stubs__sparse_softmax_int =
-    foreign "atg__sparse_softmax_int" (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_softmax_out =
-    foreign
-      "atg__sparse_softmax_out"
-      (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_sparse_matmul =
-    foreign "atg__sparse_sparse_matmul" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__sparse_sparse_matmul_out =
-    foreign "atg__sparse_sparse_matmul_out" (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__sparse_sum = foreign "atg__sparse_sum" (ptr t @-> t @-> returning void)
-
-  let stubs__sparse_sum_backward =
-    foreign
-      "atg__sparse_sum_backward"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_sum_backward_out =
-    foreign
-      "atg__sparse_sum_backward_out"
-      (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_sum_dim =
-    foreign "atg__sparse_sum_dim" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_sum_dim_dtype =
-    foreign
-      "atg__sparse_sum_dim_dtype"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_sum_dim_out =
-    foreign
-      "atg__sparse_sum_dim_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__sparse_sum_dtype =
-    foreign "atg__sparse_sum_dtype" (ptr t @-> t @-> int @-> returning void)
-  ;;
-
-  let stubs__spdiags =
-    foreign "atg__spdiags" (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__spdiags_out =
-    foreign
-      "atg__spdiags_out"
-      (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__stack =
-    foreign "atg__stack" (ptr t @-> ptr t @-> int @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__stack_out =
-    foreign "atg__stack_out" (ptr t @-> t @-> ptr t @-> int @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__standard_gamma =
-    foreign "atg__standard_gamma" (ptr t @-> t @-> returning void)
-  ;;
-
-  let stubs__standard_gamma_grad =
-    foreign "atg__standard_gamma_grad" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__standard_gamma_grad_out =
-    foreign "atg__standard_gamma_grad_out" (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__standard_gamma_out =
-    foreign "atg__standard_gamma_out" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__test_ambiguous_defaults =
-    foreign
-      "atg__test_ambiguous_defaults"
-      (ptr t @-> t @-> int64_t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__test_ambiguous_defaults_b =
-    foreign
-      "atg__test_ambiguous_defaults_b"
-      (ptr t @-> t @-> int64_t @-> string @-> returning void)
-  ;;
-
-  let stubs__test_autograd_multiple_dispatch =
-    foreign "atg__test_autograd_multiple_dispatch" (ptr t @-> t @-> returning void)
-  ;;
-
-  let stubs__test_autograd_multiple_dispatch_fullcoverage_out =
-    foreign
-      "atg__test_autograd_multiple_dispatch_fullcoverage_out"
-      (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__test_autograd_multiple_dispatch_ntonly =
-    foreign
-      "atg__test_autograd_multiple_dispatch_ntonly"
-      (ptr t @-> t @-> int @-> returning void)
-  ;;
-
-  let stubs__test_autograd_multiple_dispatch_view =
-    foreign "atg__test_autograd_multiple_dispatch_view" (ptr t @-> t @-> returning void)
-  ;;
-
-  let stubs__test_autograd_multiple_dispatch_view_copy =
-    foreign
-      "atg__test_autograd_multiple_dispatch_view_copy"
-      (ptr t @-> t @-> returning void)
-  ;;
-
-  let stubs__test_autograd_multiple_dispatch_view_copy_out =
-    foreign
-      "atg__test_autograd_multiple_dispatch_view_copy_out"
-      (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__test_check_tensor =
-    foreign "atg__test_check_tensor" (ptr t @-> t @-> returning void)
-  ;;
-
-  let stubs__test_functorch_fallback =
-    foreign "atg__test_functorch_fallback" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__test_functorch_fallback_out =
-    foreign "atg__test_functorch_fallback_out" (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__test_optional_filled_intlist =
-    foreign
-      "atg__test_optional_filled_intlist"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__test_optional_filled_intlist_out =
-    foreign
-      "atg__test_optional_filled_intlist_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__test_optional_floatlist =
-    foreign
-      "atg__test_optional_floatlist"
-      (ptr t @-> t @-> ptr double @-> int @-> returning void)
-  ;;
-
-  let stubs__test_optional_floatlist_out =
-    foreign
-      "atg__test_optional_floatlist_out"
-      (ptr t @-> t @-> t @-> ptr double @-> int @-> returning void)
-  ;;
-
-  let stubs__test_optional_intlist =
-    foreign
"atg__test_optional_intlist" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__test_optional_intlist_out = - foreign - "atg__test_optional_intlist_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs__test_serialization_subcmul = - foreign "atg__test_serialization_subcmul" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__test_string_default = - foreign - "atg__test_string_default" - (ptr t @-> t @-> string @-> string @-> returning void) - ;; - - let stubs__test_warn_in_autograd = - foreign "atg__test_warn_in_autograd" (ptr t @-> t @-> returning void) - ;; - - let stubs__test_warn_in_autograd_out = - foreign "atg__test_warn_in_autograd_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs__thnn_differentiable_gru_cell_backward = - foreign - "atg__thnn_differentiable_gru_cell_backward" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__thnn_differentiable_lstm_cell_backward = - foreign - "atg__thnn_differentiable_lstm_cell_backward" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__thnn_fused_gru_cell = - foreign - "atg__thnn_fused_gru_cell" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__thnn_fused_gru_cell_backward = - foreign - "atg__thnn_fused_gru_cell_backward" - (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__thnn_fused_gru_cell_backward_out = - foreign - "atg__thnn_fused_gru_cell_backward_out" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__thnn_fused_gru_cell_out = - foreign - "atg__thnn_fused_gru_cell_out" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__thnn_fused_lstm_cell = - foreign - "atg__thnn_fused_lstm_cell" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__thnn_fused_lstm_cell_backward = - foreign - "atg__thnn_fused_lstm_cell_backward" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__thnn_fused_lstm_cell_backward_impl = - foreign - "atg__thnn_fused_lstm_cell_backward_impl" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__thnn_fused_lstm_cell_backward_impl_out = - foreign - "atg__thnn_fused_lstm_cell_backward_impl_out" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__thnn_fused_lstm_cell_out = - foreign - "atg__thnn_fused_lstm_cell_out" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs__to_copy = - foreign "atg__to_copy" (ptr t @-> t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs__to_copy_out = - foreign "atg__to_copy_out" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs__to_cpu = foreign "atg__to_cpu" (ptr t @-> int @-> returning (ptr t)) - - let stubs__to_dense = - foreign "atg__to_dense" (ptr t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__to_dense_out = - foreign "atg__to_dense_out" (ptr t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs__to_sparse_bsc = - foreign - "atg__to_sparse_bsc" - (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs__to_sparse_bsc_out = - foreign - "atg__to_sparse_bsc_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; -end - -module C4 (F : Cstubs.FOREIGN) = struct - open F - - type t = 
-
-  let t : t typ = ptr void
-
-  type scalar = unit ptr
-
-  let scalar : scalar typ = ptr void
-
-  let stubs__to_sparse_bsr =
-    foreign
-      "atg__to_sparse_bsr"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__to_sparse_bsr_out =
-    foreign
-      "atg__to_sparse_bsr_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__to_sparse_csc =
-    foreign "atg__to_sparse_csc" (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__to_sparse_csc_out =
-    foreign
-      "atg__to_sparse_csc_out"
-      (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__to_sparse_csr =
-    foreign "atg__to_sparse_csr" (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__to_sparse_csr_out =
-    foreign
-      "atg__to_sparse_csr_out"
-      (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__to_sparse_semi_structured =
-    foreign "atg__to_sparse_semi_structured" (ptr t @-> t @-> returning void)
-  ;;
-
-  let stubs__transform_bias_rescale_qkv =
-    foreign
-      "atg__transform_bias_rescale_qkv"
-      (ptr t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__transform_bias_rescale_qkv_out =
-    foreign
-      "atg__transform_bias_rescale_qkv_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__transformer_encoder_layer_fwd =
-    foreign
-      "atg__transformer_encoder_layer_fwd"
-      (ptr t
-       @-> t
-       @-> int64_t
-       @-> int64_t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> int
-       @-> int
-       @-> double
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__transformer_encoder_layer_fwd_out =
-    foreign
-      "atg__transformer_encoder_layer_fwd_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> int64_t
-       @-> int64_t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> int
-       @-> int
-       @-> double
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__trilinear =
-    foreign
-      "atg__trilinear"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int64_t
-       @-> returning void)
-  ;;
-
-  let stubs__trilinear_out =
-    foreign
-      "atg__trilinear_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int64_t
-       @-> returning void)
-  ;;
-
-  let stubs__triton_multi_head_attention =
-    foreign
-      "atg__triton_multi_head_attention"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> int64_t
-       @-> int64_t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> returning void)
-  ;;
-
-  let stubs__triton_multi_head_attention_out =
-    foreign
-      "atg__triton_multi_head_attention_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> int64_t
-       @-> int64_t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> returning void)
-  ;;
-
-  let stubs__triton_scaled_dot_attention =
-    foreign
-      "atg__triton_scaled_dot_attention"
-      (ptr t @-> t @-> t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs__triton_scaled_dot_attention_out =
-    foreign
-      "atg__triton_scaled_dot_attention_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs__unique =
-    foreign "atg__unique" (ptr t @-> t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__unique2 =
-    foreign "atg__unique2" (ptr t @-> t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__unique2_out =
-    foreign
-      "atg__unique2_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__unique_out =
-    foreign "atg__unique_out" (ptr t @-> t @-> t @-> t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs__unpack_dual =
-    foreign "atg__unpack_dual" (ptr t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__unsafe_index =
-    foreign "atg__unsafe_index" (ptr t @-> t @-> ptr t @-> int @-> returning void)
-  ;;
-
-  let stubs__unsafe_index_put =
-    foreign
-      "atg__unsafe_index_put"
-      (ptr t @-> t @-> ptr t @-> int @-> t @-> int @-> returning void)
-  ;;
-
-  let stubs__unsafe_view =
-    foreign "atg__unsafe_view" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__unsafe_view_out =
-    foreign
-      "atg__unsafe_view_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__upsample_bicubic2d_aa =
-    foreign
-      "atg__upsample_bicubic2d_aa"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_bicubic2d_aa_backward =
-    foreign
-      "atg__upsample_bicubic2d_aa_backward"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_bicubic2d_aa_backward_grad_input =
-    foreign
-      "atg__upsample_bicubic2d_aa_backward_grad_input"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_bicubic2d_aa_out =
-    foreign
-      "atg__upsample_bicubic2d_aa_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_bicubic2d_aa_vec =
-    foreign
-      "atg__upsample_bicubic2d_aa_vec"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> ptr double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_bilinear2d_aa =
-    foreign
-      "atg__upsample_bilinear2d_aa"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_bilinear2d_aa_backward =
-    foreign
-      "atg__upsample_bilinear2d_aa_backward"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_bilinear2d_aa_backward_grad_input =
-    foreign
-      "atg__upsample_bilinear2d_aa_backward_grad_input"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_bilinear2d_aa_out =
-    foreign
-      "atg__upsample_bilinear2d_aa_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_bilinear2d_aa_vec =
-    foreign
-      "atg__upsample_bilinear2d_aa_vec"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> ptr double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact1d =
-    foreign
-      "atg__upsample_nearest_exact1d"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> double @-> int @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact1d_backward =
-    foreign
-      "atg__upsample_nearest_exact1d_backward"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact1d_backward_grad_input =
-    foreign
-      "atg__upsample_nearest_exact1d_backward_grad_input"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact1d_out =
-    foreign
-      "atg__upsample_nearest_exact1d_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> double @-> int @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact1d_vec =
-    foreign
-      "atg__upsample_nearest_exact1d_vec"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact2d =
-    foreign
-      "atg__upsample_nearest_exact2d"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact2d_backward =
-    foreign
-      "atg__upsample_nearest_exact2d_backward"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact2d_backward_grad_input =
-    foreign
-      "atg__upsample_nearest_exact2d_backward_grad_input"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact2d_out =
-    foreign
-      "atg__upsample_nearest_exact2d_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact2d_vec =
-    foreign
-      "atg__upsample_nearest_exact2d_vec"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact3d =
-    foreign
-      "atg__upsample_nearest_exact3d"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact3d_backward =
-    foreign
-      "atg__upsample_nearest_exact3d_backward"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact3d_backward_grad_input =
-    foreign
-      "atg__upsample_nearest_exact3d_backward_grad_input"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact3d_out =
-    foreign
-      "atg__upsample_nearest_exact3d_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs__upsample_nearest_exact3d_vec =
-    foreign
-      "atg__upsample_nearest_exact3d_vec"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning void)
-  ;;
-
-  let stubs__use_cudnn_ctc_loss =
-    foreign
-      "atg__use_cudnn_ctc_loss"
-      (t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int64_t
-       @-> returning bool)
-  ;;
-
-  let stubs__use_cudnn_ctc_loss_tensor =
-    foreign
-      "atg__use_cudnn_ctc_loss_tensor"
-      (t @-> t @-> t @-> t @-> int64_t @-> returning bool)
-  ;;
-
-  let stubs__use_cudnn_rnn_flatten_weight =
-    foreign "atg__use_cudnn_rnn_flatten_weight" (void @-> returning bool)
-  ;;
-
-  let stubs__validate_compressed_sparse_indices =
-    foreign
-      "atg__validate_compressed_sparse_indices"
-      (int @-> t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__validate_sparse_bsc_tensor_args =
-    foreign
-      "atg__validate_sparse_bsc_tensor_args"
-      (t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__validate_sparse_bsr_tensor_args =
-    foreign
-      "atg__validate_sparse_bsr_tensor_args"
-      (t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__validate_sparse_csc_tensor_args =
-    foreign
-      "atg__validate_sparse_csc_tensor_args"
-      (t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs__values = foreign "atg__values" (ptr t @-> t @-> returning void)
-  let stubs__values_copy = foreign "atg__values_copy" (ptr t @-> t @-> returning void)
-
-  let stubs__values_copy_out =
-    foreign "atg__values_copy_out" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs__version = foreign "atg__version" (t @-> returning int64_t)
-
-  let stubs__weight_norm =
-    foreign "atg__weight_norm" (ptr t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__weight_norm_differentiable_backward =
-    foreign
-      "atg__weight_norm_differentiable_backward"
-      (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__weight_norm_interface =
-    foreign "atg__weight_norm_interface" (ptr t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__weight_norm_interface_backward =
-    foreign
-      "atg__weight_norm_interface_backward"
-      (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__weight_norm_interface_backward_out =
-    foreign
-      "atg__weight_norm_interface_backward_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs__weight_norm_interface_out =
-    foreign
-      "atg__weight_norm_interface_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_abs = foreign "atg_abs" (ptr t @-> t @-> returning void)
-  let stubs_abs_ = foreign "atg_abs_" (ptr t @-> t @-> returning void)
-  let stubs_abs_out = foreign "atg_abs_out" (ptr t @-> t @-> t @-> returning void)
-  let stubs_absolute = foreign "atg_absolute" (ptr t @-> t @-> returning void)
-  let stubs_absolute_ = foreign "atg_absolute_" (ptr t @-> t @-> returning void)
-
-  let stubs_absolute_out =
-    foreign "atg_absolute_out" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_acos = foreign "atg_acos" (ptr t @-> t @-> returning void)
-  let stubs_acos_ = foreign "atg_acos_" (ptr t @-> t @-> returning void)
-  let stubs_acos_out = foreign "atg_acos_out" (ptr t @-> t @-> t @-> returning void)
-  let stubs_acosh = foreign "atg_acosh" (ptr t @-> t @-> returning void)
-  let stubs_acosh_ = foreign "atg_acosh_" (ptr t @-> t @-> returning void)
-  let stubs_acosh_out = foreign "atg_acosh_out" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs_adaptive_avg_pool1d =
-    foreign
-      "atg_adaptive_avg_pool1d"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_adaptive_avg_pool2d =
-    foreign
-      "atg_adaptive_avg_pool2d"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_adaptive_avg_pool2d_out =
-    foreign
-      "atg_adaptive_avg_pool2d_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_adaptive_avg_pool3d =
-    foreign
-      "atg_adaptive_avg_pool3d"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_adaptive_avg_pool3d_backward =
-    foreign "atg_adaptive_avg_pool3d_backward" (ptr t @-> t @-> t @-> t @-> returning void)
"atg_adaptive_avg_pool3d_backward" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_adaptive_avg_pool3d_out = - foreign - "atg_adaptive_avg_pool3d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_adaptive_max_pool1d = - foreign - "atg_adaptive_max_pool1d" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_adaptive_max_pool2d = - foreign - "atg_adaptive_max_pool2d" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_adaptive_max_pool2d_backward = - foreign "atg_adaptive_max_pool2d_backward" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_adaptive_max_pool2d_backward_grad_input = - foreign - "atg_adaptive_max_pool2d_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_adaptive_max_pool2d_out = - foreign - "atg_adaptive_max_pool2d_out" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_adaptive_max_pool3d = - foreign - "atg_adaptive_max_pool3d" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_adaptive_max_pool3d_backward = - foreign "atg_adaptive_max_pool3d_backward" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_adaptive_max_pool3d_backward_grad_input = - foreign - "atg_adaptive_max_pool3d_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_adaptive_max_pool3d_out = - foreign - "atg_adaptive_max_pool3d_out" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_add = foreign "atg_add" (ptr t @-> t @-> t @-> returning void) - let stubs_add_ = foreign "atg_add_" (ptr t @-> t @-> t @-> returning void) - let stubs_add_out = foreign "atg_add_out" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_add_scalar = - foreign "atg_add_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_add_scalar_ = - foreign "atg_add_scalar_" (ptr t @-> t @-> scalar @-> returning void) - ;; -end - -module C5 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_add_scalar_out = - foreign "atg_add_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_addbmm = foreign "atg_addbmm" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_addbmm_ = foreign "atg_addbmm_" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_addbmm_out = - foreign "atg_addbmm_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_addcdiv = foreign "atg_addcdiv" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_addcdiv_ = foreign "atg_addcdiv_" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_addcdiv_out = - foreign "atg_addcdiv_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_addcmul = foreign "atg_addcmul" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_addcmul_ = foreign "atg_addcmul_" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_addcmul_out = - foreign "atg_addcmul_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_addmm = foreign "atg_addmm" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_addmm_ = foreign "atg_addmm_" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_addmm_out = - foreign "atg_addmm_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_addmv = foreign "atg_addmv" (ptr t @-> t @-> t 
-  let stubs_addmv_ = foreign "atg_addmv_" (ptr t @-> t @-> t @-> t @-> returning void)
-
-  let stubs_addmv_out =
-    foreign "atg_addmv_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_addr = foreign "atg_addr" (ptr t @-> t @-> t @-> t @-> returning void)
-  let stubs_addr_ = foreign "atg_addr_" (ptr t @-> t @-> t @-> t @-> returning void)
-
-  let stubs_addr_out =
-    foreign "atg_addr_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_adjoint = foreign "atg_adjoint" (ptr t @-> t @-> returning void)
-
-  let stubs_affine_grid_generator =
-    foreign
-      "atg_affine_grid_generator"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_affine_grid_generator_backward =
-    foreign
-      "atg_affine_grid_generator_backward"
-      (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_affine_grid_generator_out =
-    foreign
-      "atg_affine_grid_generator_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_alias = foreign "atg_alias" (ptr t @-> t @-> returning void)
-  let stubs_alias_copy = foreign "atg_alias_copy" (ptr t @-> t @-> returning void)
-
-  let stubs_alias_copy_out =
-    foreign "atg_alias_copy_out" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_align_as = foreign "atg_align_as" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs_align_tensors =
-    foreign "atg_align_tensors" (ptr t @-> int @-> returning (ptr t))
-  ;;
-
-  let stubs_all = foreign "atg_all" (ptr t @-> t @-> returning void)
-  let stubs_all_all_out = foreign "atg_all_all_out" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs_all_dim =
-    foreign "atg_all_dim" (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_all_out =
-    foreign "atg_all_out" (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_allclose =
-    foreign "atg_allclose" (t @-> t @-> double @-> double @-> int @-> returning bool)
-  ;;
-
-  let stubs_alpha_dropout =
-    foreign "atg_alpha_dropout" (ptr t @-> t @-> double @-> int @-> returning void)
-  ;;
-
-  let stubs_alpha_dropout_ =
-    foreign "atg_alpha_dropout_" (ptr t @-> t @-> double @-> int @-> returning void)
-  ;;
-
-  let stubs_amax =
-    foreign "atg_amax" (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_amax_out =
-    foreign
-      "atg_amax_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_amin =
-    foreign "atg_amin" (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_amin_out =
-    foreign
-      "atg_amin_out"
-      (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_aminmax =
-    foreign "atg_aminmax" (ptr t @-> t @-> int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_aminmax_out =
-    foreign
-      "atg_aminmax_out"
-      (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_angle = foreign "atg_angle" (ptr t @-> t @-> returning void)
-  let stubs_angle_out = foreign "atg_angle_out" (ptr t @-> t @-> t @-> returning void)
-  let stubs_any = foreign "atg_any" (ptr t @-> t @-> returning void)
-  let stubs_any_all_out = foreign "atg_any_all_out" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs_any_dim =
-    foreign "atg_any_dim" (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_any_out =
-    foreign "atg_any_out" (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_arange =
-    foreign "atg_arange" (ptr t @-> scalar @-> int @-> int @-> returning void)
"atg_arange" (ptr t @-> scalar @-> int @-> int @-> returning void) - ;; - - let stubs_arange_start = - foreign - "atg_arange_start" - (ptr t @-> scalar @-> scalar @-> int @-> int @-> returning void) - ;; - - let stubs_arange_start_step = - foreign - "atg_arange_start_step" - (ptr t @-> scalar @-> scalar @-> int @-> int @-> returning void) - ;; - - let stubs_arccos = foreign "atg_arccos" (ptr t @-> t @-> returning void) - let stubs_arccos_ = foreign "atg_arccos_" (ptr t @-> t @-> returning void) - let stubs_arccos_out = foreign "atg_arccos_out" (ptr t @-> t @-> t @-> returning void) - let stubs_arccosh = foreign "atg_arccosh" (ptr t @-> t @-> returning void) - let stubs_arccosh_ = foreign "atg_arccosh_" (ptr t @-> t @-> returning void) - let stubs_arccosh_out = foreign "atg_arccosh_out" (ptr t @-> t @-> t @-> returning void) - let stubs_arcsin = foreign "atg_arcsin" (ptr t @-> t @-> returning void) - let stubs_arcsin_ = foreign "atg_arcsin_" (ptr t @-> t @-> returning void) - let stubs_arcsin_out = foreign "atg_arcsin_out" (ptr t @-> t @-> t @-> returning void) - let stubs_arcsinh = foreign "atg_arcsinh" (ptr t @-> t @-> returning void) - let stubs_arcsinh_ = foreign "atg_arcsinh_" (ptr t @-> t @-> returning void) - let stubs_arcsinh_out = foreign "atg_arcsinh_out" (ptr t @-> t @-> t @-> returning void) - let stubs_arctan = foreign "atg_arctan" (ptr t @-> t @-> returning void) - let stubs_arctan2 = foreign "atg_arctan2" (ptr t @-> t @-> t @-> returning void) - let stubs_arctan2_ = foreign "atg_arctan2_" (ptr t @-> t @-> t @-> returning void) - - let stubs_arctan2_out = - foreign "atg_arctan2_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_arctan_ = foreign "atg_arctan_" (ptr t @-> t @-> returning void) - let stubs_arctan_out = foreign "atg_arctan_out" (ptr t @-> t @-> t @-> returning void) - let stubs_arctanh = foreign "atg_arctanh" (ptr t @-> t @-> returning void) - let stubs_arctanh_ = foreign "atg_arctanh_" (ptr t @-> t @-> returning void) - let stubs_arctanh_out = foreign "atg_arctanh_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_argmax = - foreign "atg_argmax" (ptr t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_argmax_out = - foreign - "atg_argmax_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_argmin = - foreign "atg_argmin" (ptr t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_argmin_out = - foreign - "atg_argmin_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_argsort = - foreign "atg_argsort" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_argsort_stable = - foreign - "atg_argsort_stable" - (ptr t @-> t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs_argsort_stable_out = - foreign - "atg_argsort_stable_out" - (ptr t @-> t @-> t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs_argwhere = foreign "atg_argwhere" (ptr t @-> t @-> returning void) - - let stubs_as_strided = - foreign - "atg_as_strided" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs_as_strided_ = - foreign - "atg_as_strided_" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs_as_strided_copy = - foreign - "atg_as_strided_copy" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - 
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_as_strided_copy_out =
-    foreign
-      "atg_as_strided_copy_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_as_strided_scatter =
-    foreign
-      "atg_as_strided_scatter"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_as_strided_scatter_out =
-    foreign
-      "atg_as_strided_scatter_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_asin = foreign "atg_asin" (ptr t @-> t @-> returning void)
-  let stubs_asin_ = foreign "atg_asin_" (ptr t @-> t @-> returning void)
-  let stubs_asin_out = foreign "atg_asin_out" (ptr t @-> t @-> t @-> returning void)
-  let stubs_asinh = foreign "atg_asinh" (ptr t @-> t @-> returning void)
-  let stubs_asinh_ = foreign "atg_asinh_" (ptr t @-> t @-> returning void)
-  let stubs_asinh_out = foreign "atg_asinh_out" (ptr t @-> t @-> t @-> returning void)
-  let stubs_atan = foreign "atg_atan" (ptr t @-> t @-> returning void)
-  let stubs_atan2 = foreign "atg_atan2" (ptr t @-> t @-> t @-> returning void)
-  let stubs_atan2_ = foreign "atg_atan2_" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs_atan2_out =
-    foreign "atg_atan2_out" (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_atan_ = foreign "atg_atan_" (ptr t @-> t @-> returning void)
-  let stubs_atan_out = foreign "atg_atan_out" (ptr t @-> t @-> t @-> returning void)
-  let stubs_atanh = foreign "atg_atanh" (ptr t @-> t @-> returning void)
-  let stubs_atanh_ = foreign "atg_atanh_" (ptr t @-> t @-> returning void)
-  let stubs_atanh_out = foreign "atg_atanh_out" (ptr t @-> t @-> t @-> returning void)
-end
-
-module C6 (F : Cstubs.FOREIGN) = struct
-  open F
-
-  type t = unit ptr
-
-  let t : t typ = ptr void
-
-  type scalar = unit ptr
-
-  let scalar : scalar typ = ptr void
-  let stubs_atleast_1d = foreign "atg_atleast_1d" (ptr t @-> t @-> returning void)
-
-  let stubs_atleast_1d_sequence =
-    foreign "atg_atleast_1d_sequence" (ptr t @-> int @-> returning (ptr t))
-  ;;
-
-  let stubs_atleast_2d = foreign "atg_atleast_2d" (ptr t @-> t @-> returning void)
-
-  let stubs_atleast_2d_sequence =
-    foreign "atg_atleast_2d_sequence" (ptr t @-> int @-> returning (ptr t))
-  ;;
-
-  let stubs_atleast_3d = foreign "atg_atleast_3d" (ptr t @-> t @-> returning void)
-
-  let stubs_atleast_3d_sequence =
-    foreign "atg_atleast_3d_sequence" (ptr t @-> int @-> returning (ptr t))
-  ;;
-
-  let stubs_avg_pool1d =
-    foreign
-      "atg_avg_pool1d"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_avg_pool2d =
-    foreign
-      "atg_avg_pool2d"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_avg_pool2d_backward =
-    foreign
-      "atg_avg_pool2d_backward"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_avg_pool2d_backward_grad_input =
-    foreign
-      "atg_avg_pool2d_backward_grad_input"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_avg_pool2d_out =
-    foreign
-      "atg_avg_pool2d_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_avg_pool3d =
-    foreign
-      "atg_avg_pool3d"
-      (ptr t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_avg_pool3d_backward =
-    foreign
-      "atg_avg_pool3d_backward"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_avg_pool3d_backward_grad_input =
-    foreign
-      "atg_avg_pool3d_backward_grad_input"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_avg_pool3d_out =
-    foreign
-      "atg_avg_pool3d_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> ptr int64_t
-       @-> int
-       @-> int
-       @-> int
-       @-> int64_t
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_baddbmm = foreign "atg_baddbmm" (ptr t @-> t @-> t @-> t @-> returning void)
-  let stubs_baddbmm_ = foreign "atg_baddbmm_" (ptr t @-> t @-> t @-> t @-> returning void)
-
-  let stubs_baddbmm_out =
-    foreign "atg_baddbmm_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_bartlett_window =
-    foreign "atg_bartlett_window" (ptr t @-> int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_bartlett_window_out =
-    foreign "atg_bartlett_window_out" (ptr t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_bartlett_window_periodic =
-    foreign
-      "atg_bartlett_window_periodic"
-      (ptr t @-> int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_bartlett_window_periodic_out =
-    foreign
-      "atg_bartlett_window_periodic_out"
-      (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_batch_norm =
-    foreign
-      "atg_batch_norm"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> int
-       @-> double
-       @-> double
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_batch_norm_backward_elemt =
-    foreign
-      "atg_batch_norm_backward_elemt"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_batch_norm_backward_elemt_out =
-    foreign
-      "atg_batch_norm_backward_elemt_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_batch_norm_backward_reduce =
-    foreign
-      "atg_batch_norm_backward_reduce"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_batch_norm_backward_reduce_out =
-    foreign
-      "atg_batch_norm_backward_reduce_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> int
-       @-> int
-       @-> int
-       @-> returning void)
-  ;;
-
-  let stubs_batch_norm_elemt =
-    foreign
-      "atg_batch_norm_elemt"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs_batch_norm_elemt_out =
-    foreign
-      "atg_batch_norm_elemt_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs_batch_norm_gather_stats =
-    foreign
-      "atg_batch_norm_gather_stats"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> double
-       @-> double
-       @-> int64_t
-       @-> returning void)
-  ;;
-
-  let stubs_batch_norm_gather_stats_out =
-    foreign
-      "atg_batch_norm_gather_stats_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> double
-       @-> double
-       @-> int64_t
-       @-> returning void)
-  ;;
-
-  let stubs_batch_norm_gather_stats_with_counts =
-    foreign
-      "atg_batch_norm_gather_stats_with_counts"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> double @-> double @-> t @-> returning void)
-  ;;
-
-  let stubs_batch_norm_gather_stats_with_counts_out =
-    foreign
-      "atg_batch_norm_gather_stats_with_counts_out"
-      (ptr t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> t
-       @-> double
-       @-> double
-       @-> t
-       @-> returning void)
-  ;;
-
-  let stubs_batch_norm_stats =
-    foreign "atg_batch_norm_stats" (ptr t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs_batch_norm_stats_out =
-    foreign
-      "atg_batch_norm_stats_out"
-      (ptr t @-> t @-> t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs_batch_norm_update_stats =
-    foreign
-      "atg_batch_norm_update_stats"
-      (ptr t @-> t @-> t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs_batch_norm_update_stats_out =
-    foreign
-      "atg_batch_norm_update_stats_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs_bernoulli = foreign "atg_bernoulli" (ptr t @-> t @-> returning void)
-  let stubs_bernoulli_ = foreign "atg_bernoulli_" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs_bernoulli_float_ =
-    foreign "atg_bernoulli_float_" (ptr t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs_bernoulli_p =
-    foreign "atg_bernoulli_p" (ptr t @-> t @-> double @-> returning void)
-  ;;
-
-  let stubs_bernoulli_tensor =
-    foreign "atg_bernoulli_tensor" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_bilinear =
-    foreign "atg_bilinear" (ptr t @-> t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_binary_cross_entropy =
-    foreign
-      "atg_binary_cross_entropy"
-      (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_binary_cross_entropy_backward =
-    foreign
-      "atg_binary_cross_entropy_backward"
-      (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_binary_cross_entropy_backward_grad_input =
-    foreign
-      "atg_binary_cross_entropy_backward_grad_input"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_binary_cross_entropy_out =
-    foreign
-      "atg_binary_cross_entropy_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_binary_cross_entropy_with_logits =
-    foreign
-      "atg_binary_cross_entropy_with_logits"
-      (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_binary_cross_entropy_with_logits_out =
-    foreign
-      "atg_binary_cross_entropy_with_logits_out"
-      (ptr t @-> t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_bincount =
-    foreign "atg_bincount" (ptr t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_bincount_out =
-    foreign "atg_bincount_out" (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_binomial = foreign "atg_binomial" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs_binomial_out =
-    foreign "atg_binomial_out" (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_bitwise_and =
-    foreign "atg_bitwise_and" (ptr t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_bitwise_and_ =
-    foreign "atg_bitwise_and_" (ptr t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_bitwise_and_scalar_out =
-    foreign "atg_bitwise_and_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void)
"atg_bitwise_and_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_bitwise_and_scalar_tensor = - foreign "atg_bitwise_and_scalar_tensor" (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_bitwise_and_scalar_tensor_out = - foreign - "atg_bitwise_and_scalar_tensor_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_bitwise_and_tensor = - foreign "atg_bitwise_and_tensor" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_and_tensor_ = - foreign "atg_bitwise_and_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_and_tensor_out = - foreign "atg_bitwise_and_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_left_shift = - foreign "atg_bitwise_left_shift" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_left_shift_ = - foreign "atg_bitwise_left_shift_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_left_shift_scalar_tensor = - foreign - "atg_bitwise_left_shift_scalar_tensor" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_bitwise_left_shift_scalar_tensor_out = - foreign - "atg_bitwise_left_shift_scalar_tensor_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_bitwise_left_shift_tensor_out = - foreign - "atg_bitwise_left_shift_tensor_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_left_shift_tensor_scalar = - foreign - "atg_bitwise_left_shift_tensor_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_bitwise_left_shift_tensor_scalar_ = - foreign - "atg_bitwise_left_shift_tensor_scalar_" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_bitwise_left_shift_tensor_scalar_out = - foreign - "atg_bitwise_left_shift_tensor_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_bitwise_not = foreign "atg_bitwise_not" (ptr t @-> t @-> returning void) - let stubs_bitwise_not_ = foreign "atg_bitwise_not_" (ptr t @-> t @-> returning void) - - let stubs_bitwise_not_out = - foreign "atg_bitwise_not_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_or = - foreign "atg_bitwise_or" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_bitwise_or_ = - foreign "atg_bitwise_or_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_bitwise_or_scalar_out = - foreign "atg_bitwise_or_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_bitwise_or_scalar_tensor = - foreign "atg_bitwise_or_scalar_tensor" (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_bitwise_or_scalar_tensor_out = - foreign - "atg_bitwise_or_scalar_tensor_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_bitwise_or_tensor = - foreign "atg_bitwise_or_tensor" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_or_tensor_ = - foreign "atg_bitwise_or_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_or_tensor_out = - foreign "atg_bitwise_or_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_right_shift = - foreign "atg_bitwise_right_shift" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_right_shift_ = - foreign "atg_bitwise_right_shift_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_bitwise_right_shift_scalar_tensor = - foreign - "atg_bitwise_right_shift_scalar_tensor" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let 
-    foreign
-      "atg_bitwise_right_shift_scalar_tensor_out"
-      (ptr t @-> t @-> scalar @-> t @-> returning void)
-  ;;
-
-  let stubs_bitwise_right_shift_tensor_out =
-    foreign
-      "atg_bitwise_right_shift_tensor_out"
-      (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_bitwise_right_shift_tensor_scalar =
-    foreign
-      "atg_bitwise_right_shift_tensor_scalar"
-      (ptr t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_bitwise_right_shift_tensor_scalar_ =
-    foreign
-      "atg_bitwise_right_shift_tensor_scalar_"
-      (ptr t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_bitwise_right_shift_tensor_scalar_out =
-    foreign
-      "atg_bitwise_right_shift_tensor_scalar_out"
-      (ptr t @-> t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_bitwise_xor =
-    foreign "atg_bitwise_xor" (ptr t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_bitwise_xor_ =
-    foreign "atg_bitwise_xor_" (ptr t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_bitwise_xor_scalar_out =
-    foreign "atg_bitwise_xor_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_bitwise_xor_scalar_tensor =
-    foreign "atg_bitwise_xor_scalar_tensor" (ptr t @-> scalar @-> t @-> returning void)
-  ;;
-
-  let stubs_bitwise_xor_scalar_tensor_out =
-    foreign
-      "atg_bitwise_xor_scalar_tensor_out"
-      (ptr t @-> t @-> scalar @-> t @-> returning void)
-  ;;
-
-  let stubs_bitwise_xor_tensor =
-    foreign "atg_bitwise_xor_tensor" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_bitwise_xor_tensor_ =
-    foreign "atg_bitwise_xor_tensor_" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_bitwise_xor_tensor_out =
-    foreign "atg_bitwise_xor_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_blackman_window =
-    foreign "atg_blackman_window" (ptr t @-> int64_t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_blackman_window_out =
-    foreign "atg_blackman_window_out" (ptr t @-> t @-> int64_t @-> returning void)
-  ;;
-
-  let stubs_blackman_window_periodic =
-    foreign
-      "atg_blackman_window_periodic"
-      (ptr t @-> int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_blackman_window_periodic_out =
-    foreign
-      "atg_blackman_window_periodic_out"
-      (ptr t @-> t @-> int64_t @-> int @-> returning void)
-  ;;
-end
-
-module C7 (F : Cstubs.FOREIGN) = struct
-  open F
-
-  type t = unit ptr
-
-  let t : t typ = ptr void
-
-  type scalar = unit ptr
-
-  let scalar : scalar typ = ptr void
-
-  let stubs_block_diag =
-    foreign "atg_block_diag" (ptr t @-> ptr t @-> int @-> returning void)
-  ;;
-
-  let stubs_block_diag_out =
-    foreign "atg_block_diag_out" (ptr t @-> t @-> ptr t @-> int @-> returning void)
-  ;;
-
-  let stubs_bmm = foreign "atg_bmm" (ptr t @-> t @-> t @-> returning void)
-  let stubs_bmm_out = foreign "atg_bmm_out" (ptr t @-> t @-> t @-> t @-> returning void)
-
-  let stubs_broadcast_tensors =
-    foreign "atg_broadcast_tensors" (ptr t @-> int @-> returning (ptr t))
-  ;;
-
-  let stubs_broadcast_to =
-    foreign "atg_broadcast_to" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-
-  let stubs_bucketize =
-    foreign "atg_bucketize" (ptr t @-> t @-> t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_bucketize_scalar =
-    foreign
-      "atg_bucketize_scalar"
-      (ptr t @-> scalar @-> t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_bucketize_scalar_out =
-    foreign
-      "atg_bucketize_scalar_out"
-      (ptr t @-> t @-> scalar @-> t @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_bucketize_tensor_out =
foreign - "atg_bucketize_tensor_out" - (ptr t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_can_cast = foreign "atg_can_cast" (int @-> int @-> returning bool) - - let stubs_cartesian_prod = - foreign "atg_cartesian_prod" (ptr t @-> ptr t @-> int @-> returning void) - ;; - - let stubs_cat = - foreign "atg_cat" (ptr t @-> ptr t @-> int @-> int64_t @-> returning void) - ;; - - let stubs_cat_out = - foreign "atg_cat_out" (ptr t @-> t @-> ptr t @-> int @-> int64_t @-> returning void) - ;; - - let stubs_cauchy = - foreign "atg_cauchy" (ptr t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_cauchy_ = - foreign "atg_cauchy_" (ptr t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_cauchy_out = - foreign "atg_cauchy_out" (ptr t @-> t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_ccol_indices = foreign "atg_ccol_indices" (ptr t @-> t @-> returning void) - - let stubs_ccol_indices_copy = - foreign "atg_ccol_indices_copy" (ptr t @-> t @-> returning void) - ;; - - let stubs_ccol_indices_copy_out = - foreign "atg_ccol_indices_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_cdist = - foreign - "atg_cdist" - (ptr t @-> t @-> t @-> double @-> int64_t @-> int @-> returning void) - ;; - - let stubs_ceil = foreign "atg_ceil" (ptr t @-> t @-> returning void) - let stubs_ceil_ = foreign "atg_ceil_" (ptr t @-> t @-> returning void) - let stubs_ceil_out = foreign "atg_ceil_out" (ptr t @-> t @-> t @-> returning void) - let stubs_celu = foreign "atg_celu" (ptr t @-> t @-> returning void) - let stubs_celu_ = foreign "atg_celu_" (ptr t @-> t @-> returning void) - let stubs_celu_out = foreign "atg_celu_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_chain_matmul = - foreign "atg_chain_matmul" (ptr t @-> ptr t @-> int @-> returning void) - ;; - - let stubs_chain_matmul_out = - foreign "atg_chain_matmul_out" (ptr t @-> t @-> ptr t @-> int @-> returning void) - ;; - - let stubs_chalf = foreign "atg_chalf" (ptr t @-> t @-> returning void) - - let stubs_channel_shuffle = - foreign "atg_channel_shuffle" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_channel_shuffle_out = - foreign "atg_channel_shuffle_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_cholesky = foreign "atg_cholesky" (ptr t @-> t @-> int @-> returning void) - - let stubs_cholesky_inverse = - foreign "atg_cholesky_inverse" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs_cholesky_inverse_out = - foreign "atg_cholesky_inverse_out" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_cholesky_out = - foreign "atg_cholesky_out" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_cholesky_solve = - foreign "atg_cholesky_solve" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_cholesky_solve_out = - foreign "atg_cholesky_solve_out" (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_choose_qparams_optimized = - foreign - "atg_choose_qparams_optimized" - (ptr t @-> t @-> int64_t @-> int64_t @-> double @-> int64_t @-> returning void) - ;; - - let stubs_chunk = foreign "atg_chunk" (t @-> int64_t @-> int64_t @-> returning (ptr t)) - - let stubs_clamp = - foreign "atg_clamp" (ptr t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_clamp_ = - foreign "atg_clamp_" (ptr t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_clamp_max = foreign "atg_clamp_max" (ptr t @-> t @-> scalar @-> returning void) - - let 
stubs_clamp_max_ = - foreign "atg_clamp_max_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_clamp_max_out = - foreign "atg_clamp_max_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_clamp_max_tensor = - foreign "atg_clamp_max_tensor" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_clamp_max_tensor_ = - foreign "atg_clamp_max_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_clamp_max_tensor_out = - foreign "atg_clamp_max_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_clamp_min = foreign "atg_clamp_min" (ptr t @-> t @-> scalar @-> returning void) - - let stubs_clamp_min_ = - foreign "atg_clamp_min_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_clamp_min_out = - foreign "atg_clamp_min_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_clamp_min_tensor = - foreign "atg_clamp_min_tensor" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_clamp_min_tensor_ = - foreign "atg_clamp_min_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_clamp_min_tensor_out = - foreign "atg_clamp_min_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_clamp_out = - foreign "atg_clamp_out" (ptr t @-> t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_clamp_tensor = - foreign "atg_clamp_tensor" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_clamp_tensor_ = - foreign "atg_clamp_tensor_" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_clamp_tensor_out = - foreign "atg_clamp_tensor_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_clip = - foreign "atg_clip" (ptr t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_clip_ = - foreign "atg_clip_" (ptr t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_clip_out = - foreign "atg_clip_out" (ptr t @-> t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_clip_tensor = - foreign "atg_clip_tensor" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_clip_tensor_ = - foreign "atg_clip_tensor_" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_clip_tensor_out = - foreign "atg_clip_tensor_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_clone = foreign "atg_clone" (ptr t @-> t @-> returning void) - let stubs_clone_out = foreign "atg_clone_out" (ptr t @-> t @-> t @-> returning void) - let stubs_coalesce = foreign "atg_coalesce" (ptr t @-> t @-> returning void) - - let stubs_col2im = - foreign - "atg_col2im" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_col2im_out = - foreign - "atg_col2im_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_col_indices = foreign "atg_col_indices" (ptr t @-> t @-> returning void) - - let stubs_col_indices_copy = - foreign "atg_col_indices_copy" (ptr t @-> t @-> returning void) - ;; - - let stubs_col_indices_copy_out = - foreign "atg_col_indices_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_column_stack = - foreign "atg_column_stack" (ptr t @-> ptr t @-> int @-> returning void) - ;; - - let stubs_column_stack_out = - foreign "atg_column_stack_out" (ptr t @-> t @-> ptr t 
@-> int @-> returning void) - ;; - - let stubs_combinations = - foreign "atg_combinations" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_complex = foreign "atg_complex" (ptr t @-> t @-> t @-> returning void) - - let stubs_complex_out = - foreign "atg_complex_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_concat = - foreign "atg_concat" (ptr t @-> ptr t @-> int @-> int64_t @-> returning void) - ;; - - let stubs_concat_out = - foreign "atg_concat_out" (ptr t @-> t @-> ptr t @-> int @-> int64_t @-> returning void) - ;; - - let stubs_concatenate = - foreign "atg_concatenate" (ptr t @-> ptr t @-> int @-> int64_t @-> returning void) - ;; - - let stubs_concatenate_out = - foreign - "atg_concatenate_out" - (ptr t @-> t @-> ptr t @-> int @-> int64_t @-> returning void) - ;; - - let stubs_conj = foreign "atg_conj" (ptr t @-> t @-> returning void) - let stubs_conj_physical = foreign "atg_conj_physical" (ptr t @-> t @-> returning void) - let stubs_conj_physical_ = foreign "atg_conj_physical_" (ptr t @-> t @-> returning void) - - let stubs_conj_physical_out = - foreign "atg_conj_physical_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_constant_pad_nd = - foreign "atg_constant_pad_nd" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_constant_pad_nd_out = - foreign - "atg_constant_pad_nd_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_contiguous = foreign "atg_contiguous" (ptr t @-> t @-> returning void) - - let stubs_conv1d = - foreign - "atg_conv1d" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_conv1d_padding = - foreign - "atg_conv1d_padding" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> string - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_conv2d = - foreign - "atg_conv2d" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_conv2d_padding = - foreign - "atg_conv2d_padding" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> string - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_conv3d = - foreign - "atg_conv3d" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_conv3d_padding = - foreign - "atg_conv3d_padding" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> string - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_conv_depthwise3d = - foreign - "atg_conv_depthwise3d" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_conv_depthwise3d_out = - foreign - "atg_conv_depthwise3d_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_conv_tbc = - foreign "atg_conv_tbc" (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_conv_tbc_backward = - foreign - "atg_conv_tbc_backward" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let 
stubs_conv_tbc_out = - foreign - "atg_conv_tbc_out" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_conv_transpose1d = - foreign - "atg_conv_transpose1d" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> ptr int64_t - @-> int - @-> returning void) - ;; -end - -module C8 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_conv_transpose2d = - foreign - "atg_conv_transpose2d" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_conv_transpose3d = - foreign - "atg_conv_transpose3d" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_convolution = - foreign - "atg_convolution" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_convolution_out = - foreign - "atg_convolution_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_convolution_overrideable = - foreign - "atg_convolution_overrideable" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_convolution_overrideable_out = - foreign - "atg_convolution_overrideable_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_copy = foreign "atg_copy" (ptr t @-> t @-> t @-> int @-> returning void) - - let stubs_copy_out = - foreign "atg_copy_out" (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_copy_sparse_to_sparse = - foreign "atg_copy_sparse_to_sparse" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_copy_sparse_to_sparse_ = - foreign "atg_copy_sparse_to_sparse_" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_copy_sparse_to_sparse_out = - foreign - "atg_copy_sparse_to_sparse_out" - (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_copysign = foreign "atg_copysign" (ptr t @-> t @-> t @-> returning void) - let stubs_copysign_ = foreign "atg_copysign_" (ptr t @-> t @-> t @-> returning void) - - let stubs_copysign_out = - foreign "atg_copysign_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_copysign_scalar = - foreign "atg_copysign_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_copysign_scalar_ = - foreign "atg_copysign_scalar_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_copysign_scalar_out = - foreign "atg_copysign_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_corrcoef = foreign "atg_corrcoef" (ptr t @-> t @-> returning void) - let stubs_cos = foreign "atg_cos" (ptr t @-> t @-> returning void) - let 
stubs_cos_ = foreign "atg_cos_" (ptr t @-> t @-> returning void) - let stubs_cos_out = foreign "atg_cos_out" (ptr t @-> t @-> t @-> returning void) - let stubs_cosh = foreign "atg_cosh" (ptr t @-> t @-> returning void) - let stubs_cosh_ = foreign "atg_cosh_" (ptr t @-> t @-> returning void) - let stubs_cosh_out = foreign "atg_cosh_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_cosine_embedding_loss = - foreign - "atg_cosine_embedding_loss" - (ptr t @-> t @-> t @-> t @-> double @-> int64_t @-> returning void) - ;; - - let stubs_cosine_similarity = - foreign - "atg_cosine_similarity" - (ptr t @-> t @-> t @-> int64_t @-> double @-> returning void) - ;; - - let stubs_count_nonzero = - foreign - "atg_count_nonzero" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_count_nonzero_out = - foreign - "atg_count_nonzero_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_cov = - foreign "atg_cov" (ptr t @-> t @-> int64_t @-> t @-> t @-> returning void) - ;; - - let stubs_cross = - foreign "atg_cross" (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_cross_entropy_loss = - foreign - "atg_cross_entropy_loss" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> double @-> returning void) - ;; - - let stubs_cross_out = - foreign - "atg_cross_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_crow_indices = foreign "atg_crow_indices" (ptr t @-> t @-> returning void) - - let stubs_crow_indices_copy = - foreign "atg_crow_indices_copy" (ptr t @-> t @-> returning void) - ;; - - let stubs_crow_indices_copy_out = - foreign "atg_crow_indices_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_ctc_loss = - foreign - "atg_ctc_loss" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs_ctc_loss_tensor = - foreign - "atg_ctc_loss_tensor" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_cudnn_affine_grid_generator = - foreign - "atg_cudnn_affine_grid_generator" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_cudnn_affine_grid_generator_backward = - foreign - "atg_cudnn_affine_grid_generator_backward" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_cudnn_affine_grid_generator_backward_out = - foreign - "atg_cudnn_affine_grid_generator_backward_out" - (ptr t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_cudnn_affine_grid_generator_out = - foreign - "atg_cudnn_affine_grid_generator_out" - (ptr t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_cudnn_batch_norm = - foreign - "atg_cudnn_batch_norm" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> double - @-> double - @-> returning void) - ;; - - let stubs_cudnn_batch_norm_backward = - foreign - "atg_cudnn_batch_norm_backward" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> t @-> double @-> t @-> returning void) - ;; - - let stubs_cudnn_batch_norm_backward_out = - foreign - "atg_cudnn_batch_norm_backward_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> t - @-> returning void) - ;; - - let stubs_cudnn_batch_norm_out = - foreign - 
"atg_cudnn_batch_norm_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> double - @-> double - @-> returning void) - ;; - - let stubs_cudnn_convolution = - foreign - "atg_cudnn_convolution" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_cudnn_convolution_add_relu = - foreign - "atg_cudnn_convolution_add_relu" - (ptr t - @-> t - @-> t - @-> t - @-> scalar - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_cudnn_convolution_add_relu_out = - foreign - "atg_cudnn_convolution_add_relu_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> scalar - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_cudnn_convolution_out = - foreign - "atg_cudnn_convolution_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_cudnn_convolution_relu = - foreign - "atg_cudnn_convolution_relu" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_cudnn_convolution_relu_out = - foreign - "atg_cudnn_convolution_relu_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_cudnn_convolution_transpose = - foreign - "atg_cudnn_convolution_transpose" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_cudnn_convolution_transpose_out = - foreign - "atg_cudnn_convolution_transpose_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_cudnn_grid_sampler = - foreign "atg_cudnn_grid_sampler" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_cudnn_grid_sampler_backward = - foreign "atg_cudnn_grid_sampler_backward" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_cudnn_grid_sampler_backward_out = - foreign - "atg_cudnn_grid_sampler_backward_out" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_cudnn_grid_sampler_out = - foreign "atg_cudnn_grid_sampler_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_cudnn_is_acceptable = foreign "atg_cudnn_is_acceptable" (t @-> returning bool) - let stubs_cummax = foreign "atg_cummax" (ptr t @-> t @-> int64_t @-> returning void) - - let stubs_cummax_out = - foreign "atg_cummax_out" (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_cummaxmin_backward = - foreign - "atg_cummaxmin_backward" - (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_cummin = foreign "atg_cummin" (ptr t @-> t @-> int64_t @-> returning void) - - let stubs_cummin_out = - foreign "atg_cummin_out" (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_cumprod = - foreign 
"atg_cumprod" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_cumprod_ = - foreign "atg_cumprod_" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_cumprod_backward = - foreign "atg_cumprod_backward" (ptr t @-> t @-> t @-> int64_t @-> t @-> returning void) - ;; - - let stubs_cumprod_out = - foreign "atg_cumprod_out" (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_cumsum = - foreign "atg_cumsum" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_cumsum_ = - foreign "atg_cumsum_" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_cumsum_out = - foreign "atg_cumsum_out" (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_cumulative_trapezoid = - foreign "atg_cumulative_trapezoid" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_cumulative_trapezoid_x = - foreign "atg_cumulative_trapezoid_x" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_data = foreign "atg_data" (ptr t @-> t @-> returning void) - let stubs_deg2rad = foreign "atg_deg2rad" (ptr t @-> t @-> returning void) - let stubs_deg2rad_ = foreign "atg_deg2rad_" (ptr t @-> t @-> returning void) - let stubs_deg2rad_out = foreign "atg_deg2rad_out" (ptr t @-> t @-> t @-> returning void) - let stubs_dense_dim = foreign "atg_dense_dim" (t @-> returning int64_t) - let stubs_dequantize = foreign "atg_dequantize" (ptr t @-> t @-> returning void) - - let stubs_dequantize_self_out = - foreign "atg_dequantize_self_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_dequantize_tensors = - foreign "atg_dequantize_tensors" (ptr t @-> int @-> returning (ptr t)) - ;; - - let stubs_dequantize_tensors_out = - foreign - "atg_dequantize_tensors_out" - (ptr t @-> int @-> ptr t @-> int @-> returning void) - ;; - - let stubs_det = foreign "atg_det" (ptr t @-> t @-> returning void) - let stubs_detach = foreign "atg_detach" (ptr t @-> t @-> returning void) - let stubs_detach_ = foreign "atg_detach_" (ptr t @-> t @-> returning void) - let stubs_detach_copy = foreign "atg_detach_copy" (ptr t @-> t @-> returning void) - - let stubs_detach_copy_out = - foreign "atg_detach_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_diag = foreign "atg_diag" (ptr t @-> t @-> int64_t @-> returning void) - - let stubs_diag_embed = - foreign - "atg_diag_embed" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_diag_embed_out = - foreign - "atg_diag_embed_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_diag_out = - foreign "atg_diag_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_diagflat = foreign "atg_diagflat" (ptr t @-> t @-> int64_t @-> returning void) - - let stubs_diagonal = - foreign - "atg_diagonal" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_diagonal_backward = - foreign - "atg_diagonal_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_diagonal_backward_out = - foreign - "atg_diagonal_backward_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_diagonal_copy = - foreign - "atg_diagonal_copy" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_diagonal_copy_out = - foreign - 
"atg_diagonal_copy_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_diagonal_scatter = - foreign - "atg_diagonal_scatter" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_diagonal_scatter_out = - foreign - "atg_diagonal_scatter_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_diff = - foreign "atg_diff" (ptr t @-> t @-> int64_t @-> int64_t @-> t @-> t @-> returning void) - ;; - - let stubs_diff_out = - foreign - "atg_diff_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> t @-> t @-> returning void) - ;; -end - -module C9 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - let stubs_digamma = foreign "atg_digamma" (ptr t @-> t @-> returning void) - let stubs_digamma_ = foreign "atg_digamma_" (ptr t @-> t @-> returning void) - let stubs_digamma_out = foreign "atg_digamma_out" (ptr t @-> t @-> t @-> returning void) - let stubs_dist = foreign "atg_dist" (ptr t @-> t @-> t @-> returning void) - let stubs_dist_out = foreign "atg_dist_out" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_div = foreign "atg_div" (ptr t @-> t @-> t @-> returning void) - let stubs_div_ = foreign "atg_div_" (ptr t @-> t @-> t @-> returning void) - let stubs_div_out = foreign "atg_div_out" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_div_out_mode = - foreign "atg_div_out_mode" (ptr t @-> t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_div_scalar = - foreign "atg_div_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_div_scalar_ = - foreign "atg_div_scalar_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_div_scalar_mode = - foreign "atg_div_scalar_mode" (ptr t @-> t @-> scalar @-> string @-> returning void) - ;; - - let stubs_div_scalar_mode_ = - foreign "atg_div_scalar_mode_" (ptr t @-> t @-> scalar @-> string @-> returning void) - ;; - - let stubs_div_scalar_mode_out = - foreign - "atg_div_scalar_mode_out" - (ptr t @-> t @-> t @-> scalar @-> string @-> returning void) - ;; - - let stubs_div_scalar_out = - foreign "atg_div_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_div_tensor_mode = - foreign "atg_div_tensor_mode" (ptr t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_div_tensor_mode_ = - foreign "atg_div_tensor_mode_" (ptr t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_divide = foreign "atg_divide" (ptr t @-> t @-> t @-> returning void) - let stubs_divide_ = foreign "atg_divide_" (ptr t @-> t @-> t @-> returning void) - - let stubs_divide_out = - foreign "atg_divide_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_divide_out_mode = - foreign "atg_divide_out_mode" (ptr t @-> t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_divide_scalar = - foreign "atg_divide_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_divide_scalar_ = - foreign "atg_divide_scalar_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_divide_scalar_mode = - foreign "atg_divide_scalar_mode" (ptr t @-> t @-> scalar @-> string @-> returning void) - ;; - - let stubs_divide_scalar_mode_ = - foreign - "atg_divide_scalar_mode_" - (ptr t @-> t @-> scalar @-> string @-> returning void) - ;; - - let stubs_divide_tensor_mode = - foreign "atg_divide_tensor_mode" (ptr t @-> t @-> t @-> 
string @-> returning void) - ;; - - let stubs_divide_tensor_mode_ = - foreign "atg_divide_tensor_mode_" (ptr t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_dot = foreign "atg_dot" (ptr t @-> t @-> t @-> returning void) - let stubs_dot_out = foreign "atg_dot_out" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_dropout = - foreign "atg_dropout" (ptr t @-> t @-> double @-> int @-> returning void) - ;; - - let stubs_dropout_ = - foreign "atg_dropout_" (ptr t @-> t @-> double @-> int @-> returning void) - ;; - - let stubs_dsplit = foreign "atg_dsplit" (t @-> int64_t @-> returning (ptr t)) - - let stubs_dsplit_array = - foreign "atg_dsplit_array" (t @-> ptr int64_t @-> int @-> returning (ptr t)) - ;; - - let stubs_dstack = foreign "atg_dstack" (ptr t @-> ptr t @-> int @-> returning void) - - let stubs_dstack_out = - foreign "atg_dstack_out" (ptr t @-> t @-> ptr t @-> int @-> returning void) - ;; - - let stubs_einsum = - foreign - "atg_einsum" - (ptr t @-> string @-> ptr t @-> int @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_elu = foreign "atg_elu" (ptr t @-> t @-> returning void) - let stubs_elu_ = foreign "atg_elu_" (ptr t @-> t @-> returning void) - - let stubs_elu_backward = - foreign - "atg_elu_backward" - (ptr t @-> t @-> scalar @-> scalar @-> scalar @-> int @-> t @-> returning void) - ;; - - let stubs_elu_backward_grad_input = - foreign - "atg_elu_backward_grad_input" - (ptr t @-> t @-> t @-> scalar @-> scalar @-> scalar @-> int @-> t @-> returning void) - ;; - - let stubs_elu_out = foreign "atg_elu_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_embedding = - foreign - "atg_embedding" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_embedding_backward = - foreign - "atg_embedding_backward" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_embedding_bag = - foreign - "atg_embedding_bag" - (ptr t - @-> t - @-> t - @-> t - @-> int - @-> int64_t - @-> int - @-> t - @-> int - @-> returning void) - ;; - - let stubs_embedding_bag_padding_idx = - foreign - "atg_embedding_bag_padding_idx" - (ptr t - @-> t - @-> t - @-> t - @-> int - @-> int64_t - @-> int - @-> t - @-> int - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs_embedding_dense_backward = - foreign - "atg_embedding_dense_backward" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_embedding_dense_backward_out = - foreign - "atg_embedding_dense_backward_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_embedding_out = - foreign - "atg_embedding_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_embedding_renorm = - foreign - "atg_embedding_renorm" - (ptr t @-> t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_embedding_renorm_ = - foreign - "atg_embedding_renorm_" - (ptr t @-> t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_embedding_renorm_out = - foreign - "atg_embedding_renorm_out" - (ptr t @-> t @-> t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_embedding_sparse_backward = - foreign - "atg_embedding_sparse_backward" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_empty = - foreign "atg_empty" (ptr t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_empty_like = foreign "atg_empty_like" (ptr t @-> t @-> 
returning void) - - let stubs_empty_like_out = - foreign "atg_empty_like_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_empty_out = - foreign "atg_empty_out" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_empty_permuted = - foreign - "atg_empty_permuted" - (ptr t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_empty_permuted_out = - foreign - "atg_empty_permuted_out" - (ptr t @-> t @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_empty_quantized = - foreign - "atg_empty_quantized" - (ptr t @-> ptr int64_t @-> int @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_empty_quantized_out = - foreign - "atg_empty_quantized_out" - (ptr t @-> t @-> ptr int64_t @-> int @-> t @-> returning void) - ;; - - let stubs_empty_strided = - foreign - "atg_empty_strided" - (ptr t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_empty_strided_out = - foreign - "atg_empty_strided_out" - (ptr t @-> t @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_eq = foreign "atg_eq" (ptr t @-> t @-> scalar @-> returning void) - let stubs_eq_ = foreign "atg_eq_" (ptr t @-> t @-> scalar @-> returning void) - - let stubs_eq_scalar_out = - foreign "atg_eq_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_eq_tensor = foreign "atg_eq_tensor" (ptr t @-> t @-> t @-> returning void) - let stubs_eq_tensor_ = foreign "atg_eq_tensor_" (ptr t @-> t @-> t @-> returning void) - - let stubs_eq_tensor_out = - foreign "atg_eq_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_equal = foreign "atg_equal" (t @-> t @-> returning bool) - let stubs_erf = foreign "atg_erf" (ptr t @-> t @-> returning void) - let stubs_erf_ = foreign "atg_erf_" (ptr t @-> t @-> returning void) - let stubs_erf_out = foreign "atg_erf_out" (ptr t @-> t @-> t @-> returning void) - let stubs_erfc = foreign "atg_erfc" (ptr t @-> t @-> returning void) - let stubs_erfc_ = foreign "atg_erfc_" (ptr t @-> t @-> returning void) - let stubs_erfc_out = foreign "atg_erfc_out" (ptr t @-> t @-> t @-> returning void) - let stubs_erfinv = foreign "atg_erfinv" (ptr t @-> t @-> returning void) - let stubs_erfinv_ = foreign "atg_erfinv_" (ptr t @-> t @-> returning void) - let stubs_erfinv_out = foreign "atg_erfinv_out" (ptr t @-> t @-> t @-> returning void) - let stubs_exp = foreign "atg_exp" (ptr t @-> t @-> returning void) - let stubs_exp2 = foreign "atg_exp2" (ptr t @-> t @-> returning void) - let stubs_exp2_ = foreign "atg_exp2_" (ptr t @-> t @-> returning void) - let stubs_exp2_out = foreign "atg_exp2_out" (ptr t @-> t @-> t @-> returning void) - let stubs_exp_ = foreign "atg_exp_" (ptr t @-> t @-> returning void) - let stubs_exp_out = foreign "atg_exp_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_expand = - foreign "atg_expand" (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_expand_as = foreign "atg_expand_as" (ptr t @-> t @-> t @-> returning void) - - let stubs_expand_copy = - foreign - "atg_expand_copy" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_expand_copy_out = - foreign - "atg_expand_copy_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_expm1 = foreign "atg_expm1" (ptr t @-> t @-> returning void) - let stubs_expm1_ = 
foreign "atg_expm1_" (ptr t @-> t @-> returning void) - let stubs_expm1_out = foreign "atg_expm1_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_exponential = - foreign "atg_exponential" (ptr t @-> t @-> double @-> returning void) - ;; - - let stubs_exponential_ = - foreign "atg_exponential_" (ptr t @-> t @-> double @-> returning void) - ;; - - let stubs_exponential_out = - foreign "atg_exponential_out" (ptr t @-> t @-> t @-> double @-> returning void) - ;; - - let stubs_eye = foreign "atg_eye" (ptr t @-> int64_t @-> int @-> int @-> returning void) - - let stubs_eye_m = - foreign "atg_eye_m" (ptr t @-> int64_t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_eye_m_out = - foreign "atg_eye_m_out" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_eye_out = foreign "atg_eye_out" (ptr t @-> t @-> int64_t @-> returning void) - - let stubs_fake_quantize_per_channel_affine = - foreign - "atg_fake_quantize_per_channel_affine" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_fake_quantize_per_channel_affine_cachemask = - foreign - "atg_fake_quantize_per_channel_affine_cachemask" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; -end - -module C10 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_fake_quantize_per_channel_affine_cachemask_backward = - foreign - "atg_fake_quantize_per_channel_affine_cachemask_backward" - (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_fake_quantize_per_channel_affine_cachemask_out = - foreign - "atg_fake_quantize_per_channel_affine_cachemask_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_fake_quantize_per_tensor_affine = - foreign - "atg_fake_quantize_per_tensor_affine" - (ptr t @-> t @-> double @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_fake_quantize_per_tensor_affine_cachemask = - foreign - "atg_fake_quantize_per_tensor_affine_cachemask" - (ptr t @-> t @-> double @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_fake_quantize_per_tensor_affine_cachemask_backward = - foreign - "atg_fake_quantize_per_tensor_affine_cachemask_backward" - (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_fake_quantize_per_tensor_affine_cachemask_out = - foreign - "atg_fake_quantize_per_tensor_affine_cachemask_out" - (ptr t - @-> t - @-> t - @-> t - @-> double - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_fake_quantize_per_tensor_affine_tensor_qparams = - foreign - "atg_fake_quantize_per_tensor_affine_tensor_qparams" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_fbgemm_linear_fp16_weight = - foreign "atg_fbgemm_linear_fp16_weight" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_fbgemm_linear_fp16_weight_fp32_activation = - foreign - "atg_fbgemm_linear_fp16_weight_fp32_activation" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_fbgemm_linear_int8_weight = - foreign - "atg_fbgemm_linear_int8_weight" - (ptr t @-> t @-> t @-> t @-> t @-> scalar @-> scalar @-> t @-> returning void) - ;; - - let stubs_fbgemm_linear_int8_weight_fp32_activation = - foreign - "atg_fbgemm_linear_int8_weight_fp32_activation" - (ptr t @-> t @-> t @-> t @-> t @-> scalar @-> scalar 
@-> t @-> returning void) - ;; - - let stubs_fbgemm_pack_gemm_matrix_fp16 = - foreign "atg_fbgemm_pack_gemm_matrix_fp16" (ptr t @-> t @-> returning void) - ;; - - let stubs_fbgemm_pack_quantized_matrix = - foreign "atg_fbgemm_pack_quantized_matrix" (ptr t @-> t @-> returning void) - ;; - - let stubs_fbgemm_pack_quantized_matrix_kn = - foreign - "atg_fbgemm_pack_quantized_matrix_kn" - (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_feature_alpha_dropout = - foreign "atg_feature_alpha_dropout" (ptr t @-> t @-> double @-> int @-> returning void) - ;; - - let stubs_feature_alpha_dropout_ = - foreign - "atg_feature_alpha_dropout_" - (ptr t @-> t @-> double @-> int @-> returning void) - ;; - - let stubs_feature_dropout = - foreign "atg_feature_dropout" (ptr t @-> t @-> double @-> int @-> returning void) - ;; - - let stubs_feature_dropout_ = - foreign "atg_feature_dropout_" (ptr t @-> t @-> double @-> int @-> returning void) - ;; - - let stubs_fft_fft = - foreign - "atg_fft_fft" - (ptr t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_fft2 = - foreign - "atg_fft_fft2" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_fft2_out = - foreign - "atg_fft_fft2_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_fft_out = - foreign - "atg_fft_fft_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_fftfreq = - foreign - "atg_fft_fftfreq" - (ptr t @-> int64_t @-> double @-> int @-> int @-> returning void) - ;; - - let stubs_fft_fftfreq_out = - foreign "atg_fft_fftfreq_out" (ptr t @-> t @-> int64_t @-> double @-> returning void) - ;; - - let stubs_fft_fftn = - foreign - "atg_fft_fftn" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_fftn_out = - foreign - "atg_fft_fftn_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_fftshift = - foreign "atg_fft_fftshift" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_fft_hfft = - foreign - "atg_fft_hfft" - (ptr t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_hfft2 = - foreign - "atg_fft_hfft2" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_hfft2_out = - foreign - "atg_fft_hfft2_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_hfft_out = - foreign - "atg_fft_hfft_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_hfftn = - foreign - "atg_fft_hfftn" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_hfftn_out = - foreign - "atg_fft_hfftn_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_ifft = - foreign - "atg_fft_ifft" - (ptr t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_ifft2 = - foreign - "atg_fft_ifft2" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> 
int - @-> string - @-> returning void) - ;; - - let stubs_fft_ifft2_out = - foreign - "atg_fft_ifft2_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_ifft_out = - foreign - "atg_fft_ifft_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_ifftn = - foreign - "atg_fft_ifftn" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_ifftn_out = - foreign - "atg_fft_ifftn_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_ifftshift = - foreign "atg_fft_ifftshift" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_fft_ihfft = - foreign - "atg_fft_ihfft" - (ptr t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_ihfft2 = - foreign - "atg_fft_ihfft2" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_ihfft2_out = - foreign - "atg_fft_ihfft2_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_ihfft_out = - foreign - "atg_fft_ihfft_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_ihfftn = - foreign - "atg_fft_ihfftn" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_ihfftn_out = - foreign - "atg_fft_ihfftn_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_irfft = - foreign - "atg_fft_irfft" - (ptr t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_irfft2 = - foreign - "atg_fft_irfft2" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_irfft2_out = - foreign - "atg_fft_irfft2_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_irfft_out = - foreign - "atg_fft_irfft_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_irfftn = - foreign - "atg_fft_irfftn" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_irfftn_out = - foreign - "atg_fft_irfftn_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_rfft = - foreign - "atg_fft_rfft" - (ptr t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_rfft2 = - foreign - "atg_fft_rfft2" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_rfft2_out = - foreign - "atg_fft_rfft2_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_rfft_out = - foreign - "atg_fft_rfft_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int64_t @-> string @-> returning void) - ;; - - let stubs_fft_rfftfreq = - foreign - "atg_fft_rfftfreq" - (ptr t @-> int64_t 
@-> double @-> int @-> int @-> returning void) - ;; - - let stubs_fft_rfftfreq_out = - foreign "atg_fft_rfftfreq_out" (ptr t @-> t @-> int64_t @-> double @-> returning void) - ;; - - let stubs_fft_rfftn = - foreign - "atg_fft_rfftn" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fft_rfftn_out = - foreign - "atg_fft_rfftn_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> string - @-> returning void) - ;; - - let stubs_fill = foreign "atg_fill" (ptr t @-> t @-> scalar @-> returning void) - let stubs_fill_ = foreign "atg_fill_" (ptr t @-> t @-> scalar @-> returning void) - - let stubs_fill_diagonal_ = - foreign "atg_fill_diagonal_" (ptr t @-> t @-> scalar @-> int @-> returning void) - ;; - - let stubs_fill_scalar_out = - foreign "atg_fill_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_fill_tensor = foreign "atg_fill_tensor" (ptr t @-> t @-> t @-> returning void) - - let stubs_fill_tensor_ = - foreign "atg_fill_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_fill_tensor_out = - foreign "atg_fill_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_fix = foreign "atg_fix" (ptr t @-> t @-> returning void) - let stubs_fix_ = foreign "atg_fix_" (ptr t @-> t @-> returning void) - let stubs_fix_out = foreign "atg_fix_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_flatten = - foreign "atg_flatten" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_flatten_dense_tensors = - foreign "atg_flatten_dense_tensors" (ptr t @-> ptr t @-> int @-> returning void) - ;; - - let stubs_flip = - foreign "atg_flip" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_flip_out = - foreign "atg_flip_out" (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_fliplr = foreign "atg_fliplr" (ptr t @-> t @-> returning void) - let stubs_flipud = foreign "atg_flipud" (ptr t @-> t @-> returning void) - let stubs_float_power = foreign "atg_float_power" (ptr t @-> t @-> t @-> returning void) - - let stubs_float_power_ = - foreign "atg_float_power_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_float_power_scalar = - foreign "atg_float_power_scalar" (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_float_power_scalar_out = - foreign "atg_float_power_scalar_out" (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_float_power_tensor_ = - foreign "atg_float_power_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_float_power_tensor_scalar = - foreign "atg_float_power_tensor_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_float_power_tensor_scalar_out = - foreign - "atg_float_power_tensor_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_float_power_tensor_tensor_out = - foreign - "atg_float_power_tensor_tensor_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_floor = foreign "atg_floor" (ptr t @-> t @-> returning void) - let stubs_floor_ = foreign "atg_floor_" (ptr t @-> t @-> returning void) - - let stubs_floor_divide = - foreign "atg_floor_divide" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_floor_divide_ = - foreign "atg_floor_divide_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_floor_divide_out = - foreign "atg_floor_divide_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let 
stubs_floor_divide_scalar = - foreign "atg_floor_divide_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_floor_divide_scalar_ = - foreign "atg_floor_divide_scalar_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_floor_out = foreign "atg_floor_out" (ptr t @-> t @-> t @-> returning void) - let stubs_fmax = foreign "atg_fmax" (ptr t @-> t @-> t @-> returning void) - let stubs_fmax_out = foreign "atg_fmax_out" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_fmin = foreign "atg_fmin" (ptr t @-> t @-> t @-> returning void) - let stubs_fmin_out = foreign "atg_fmin_out" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_fmod = foreign "atg_fmod" (ptr t @-> t @-> scalar @-> returning void) - let stubs_fmod_ = foreign "atg_fmod_" (ptr t @-> t @-> scalar @-> returning void) - - let stubs_fmod_scalar_out = - foreign "atg_fmod_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_fmod_tensor = foreign "atg_fmod_tensor" (ptr t @-> t @-> t @-> returning void) -end - -module C11 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_fmod_tensor_ = - foreign "atg_fmod_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_fmod_tensor_out = - foreign "atg_fmod_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_frac = foreign "atg_frac" (ptr t @-> t @-> returning void) - let stubs_frac_ = foreign "atg_frac_" (ptr t @-> t @-> returning void) - let stubs_frac_out = foreign "atg_frac_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_fractional_max_pool2d = - foreign - "atg_fractional_max_pool2d" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs_fractional_max_pool2d_backward = - foreign - "atg_fractional_max_pool2d_backward" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs_fractional_max_pool2d_backward_grad_input = - foreign - "atg_fractional_max_pool2d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs_fractional_max_pool2d_output = - foreign - "atg_fractional_max_pool2d_output" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs_fractional_max_pool3d = - foreign - "atg_fractional_max_pool3d" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs_fractional_max_pool3d_backward = - foreign - "atg_fractional_max_pool3d_backward" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs_fractional_max_pool3d_backward_grad_input = - foreign - "atg_fractional_max_pool3d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs_fractional_max_pool3d_output = - foreign - "atg_fractional_max_pool3d_output" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> t - @-> returning void) - ;; - - let stubs_frexp = foreign "atg_frexp" (ptr t @-> t @-> returning void) - - let stubs_frexp_tensor_out = - foreign "atg_frexp_tensor_out" (ptr t @-> t @-> t @-> t @-> returning 
void) - ;; - - let stubs_frobenius_norm = - foreign - "atg_frobenius_norm" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_frobenius_norm_out = - foreign - "atg_frobenius_norm_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_from_file = - foreign - "atg_from_file" - (ptr t @-> string @-> int @-> int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_from_file_out = - foreign - "atg_from_file_out" - (ptr t @-> t @-> string @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs_full = - foreign - "atg_full" - (ptr t @-> ptr int64_t @-> int @-> scalar @-> int @-> int @-> returning void) - ;; - - let stubs_full_like = foreign "atg_full_like" (ptr t @-> t @-> scalar @-> returning void) - - let stubs_full_like_out = - foreign "atg_full_like_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_full_out = - foreign - "atg_full_out" - (ptr t @-> t @-> ptr int64_t @-> int @-> scalar @-> returning void) - ;; - - let stubs_fused_moving_avg_obs_fake_quant = - foreign - "atg_fused_moving_avg_obs_fake_quant" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> int64_t - @-> int64_t - @-> int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs_gather = - foreign "atg_gather" (ptr t @-> t @-> int64_t @-> t @-> int @-> returning void) - ;; - - let stubs_gather_backward = - foreign - "atg_gather_backward" - (ptr t @-> t @-> t @-> int64_t @-> t @-> int @-> returning void) - ;; - - let stubs_gather_out = - foreign - "atg_gather_out" - (ptr t @-> t @-> t @-> int64_t @-> t @-> int @-> returning void) - ;; - - let stubs_gcd = foreign "atg_gcd" (ptr t @-> t @-> t @-> returning void) - let stubs_gcd_ = foreign "atg_gcd_" (ptr t @-> t @-> t @-> returning void) - let stubs_gcd_out = foreign "atg_gcd_out" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_ge = foreign "atg_ge" (ptr t @-> t @-> scalar @-> returning void) - let stubs_ge_ = foreign "atg_ge_" (ptr t @-> t @-> scalar @-> returning void) - - let stubs_ge_scalar_out = - foreign "atg_ge_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_ge_tensor = foreign "atg_ge_tensor" (ptr t @-> t @-> t @-> returning void) - let stubs_ge_tensor_ = foreign "atg_ge_tensor_" (ptr t @-> t @-> t @-> returning void) - - let stubs_ge_tensor_out = - foreign "atg_ge_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_gelu = foreign "atg_gelu" (ptr t @-> t @-> string @-> returning void) - let stubs_gelu_ = foreign "atg_gelu_" (ptr t @-> t @-> string @-> returning void) - - let stubs_gelu_backward = - foreign "atg_gelu_backward" (ptr t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_gelu_backward_grad_input = - foreign - "atg_gelu_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_gelu_out = - foreign "atg_gelu_out" (ptr t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_geometric = foreign "atg_geometric" (ptr t @-> t @-> double @-> returning void) - - let stubs_geometric_ = - foreign "atg_geometric_" (ptr t @-> t @-> double @-> returning void) - ;; - - let stubs_geometric_out = - foreign "atg_geometric_out" (ptr t @-> t @-> t @-> double @-> returning void) - ;; - - let stubs_geqrf = foreign "atg_geqrf" (ptr t @-> t @-> returning void) - let stubs_geqrf_a = foreign "atg_geqrf_a" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_ger = foreign "atg_ger" (ptr t @-> t @-> 
[Deleted generated code, continued: this stretch of the hunk removes the auto-generated ctypes stub modules, its original one-declaration-per-line layout collapsed in extraction. Within it the current module closes and the functors `C12 (F : Cstubs.FOREIGN)` through `C17 (F : Cstubs.FOREIGN)` begin; each opens `F`, re-declares the opaque handles (`type t = unit ptr` with `let t : t typ = ptr void`, and `scalar` likewise), and then lists one `foreign` declaration per C function, covering `atg_ger_out` through `atg_ne_tensor_` here. Nearly every declaration has the shape `let stubs_<name> = foreign "atg_<name>" (ptr t @-> ... @-> returning void)`, with `t`, `scalar`, `int`, `int64_t`, `double`, `string`, and `ptr int64_t` arguments mirroring the C signature; the query functions (e.g. `stubs_is_leaf`) instead return `bool`, and the list-producing ones (e.g. `stubs_hsplit`, `stubs_meshgrid`) return `ptr t`.]
"atg_ne_tensor_" (ptr t @-> t @-> t @-> returning void) - - let stubs_ne_tensor_out = - foreign "atg_ne_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_neg = foreign "atg_neg" (ptr t @-> t @-> returning void) - let stubs_neg_ = foreign "atg_neg_" (ptr t @-> t @-> returning void) - let stubs_neg_out = foreign "atg_neg_out" (ptr t @-> t @-> t @-> returning void) - let stubs_negative = foreign "atg_negative" (ptr t @-> t @-> returning void) - let stubs_negative_ = foreign "atg_negative_" (ptr t @-> t @-> returning void) - - let stubs_negative_out = - foreign "atg_negative_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_nested_to_padded_tensor = - foreign - "atg_nested_to_padded_tensor" - (ptr t @-> t @-> double @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_new_empty = - foreign - "atg_new_empty" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_new_empty_out = - foreign - "atg_new_empty_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_new_empty_strided = - foreign - "atg_new_empty_strided" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_new_empty_strided_out = - foreign - "atg_new_empty_strided_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_new_full = - foreign - "atg_new_full" - (ptr t @-> t @-> ptr int64_t @-> int @-> scalar @-> int @-> int @-> returning void) - ;; - - let stubs_new_full_out = - foreign - "atg_new_full_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> scalar @-> returning void) - ;; - - let stubs_new_ones = - foreign - "atg_new_ones" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_new_ones_out = - foreign - "atg_new_ones_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_new_zeros = - foreign - "atg_new_zeros" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_new_zeros_out = - foreign - "atg_new_zeros_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_nextafter = foreign "atg_nextafter" (ptr t @-> t @-> t @-> returning void) - let stubs_nextafter_ = foreign "atg_nextafter_" (ptr t @-> t @-> t @-> returning void) - - let stubs_nextafter_out = - foreign "atg_nextafter_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_nll_loss = - foreign - "atg_nll_loss" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_nll_loss2d = - foreign - "atg_nll_loss2d" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_nll_loss2d_backward = - foreign - "atg_nll_loss2d_backward" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> t @-> returning void) - ;; - - let stubs_nll_loss2d_backward_grad_input = - foreign - "atg_nll_loss2d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> t - @-> returning void) - ;; - - let stubs_nll_loss2d_out = - foreign - "atg_nll_loss2d_out" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_nll_loss_backward = - foreign - "atg_nll_loss_backward" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> t @-> returning void) - ;; - - let stubs_nll_loss_backward_grad_input = - foreign - 
"atg_nll_loss_backward_grad_input" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> t - @-> returning void) - ;; - - let stubs_nll_loss_nd = - foreign - "atg_nll_loss_nd" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_nll_loss_out = - foreign - "atg_nll_loss_out" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_nonzero = foreign "atg_nonzero" (ptr t @-> t @-> returning void) - let stubs_nonzero_numpy = foreign "atg_nonzero_numpy" (t @-> returning (ptr t)) - let stubs_nonzero_out = foreign "atg_nonzero_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_nonzero_static = - foreign "atg_nonzero_static" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_nonzero_static_out = - foreign - "atg_nonzero_static_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_norm = foreign "atg_norm" (ptr t @-> t @-> returning void) - - let stubs_norm_dtype_out = - foreign - "atg_norm_dtype_out" - (ptr t - @-> t - @-> t - @-> scalar - @-> ptr int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_norm_except_dim = - foreign "atg_norm_except_dim" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_norm_out = - foreign - "atg_norm_out" - (ptr t @-> t @-> t @-> scalar @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_norm_scalar_out = - foreign "atg_norm_scalar_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_norm_scalaropt_dim = - foreign - "atg_norm_scalaropt_dim" - (ptr t @-> t @-> scalar @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_norm_scalaropt_dim_dtype = - foreign - "atg_norm_scalaropt_dim_dtype" - (ptr t @-> t @-> scalar @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_norm_scalaropt_dtype = - foreign "atg_norm_scalaropt_dtype" (ptr t @-> t @-> scalar @-> int @-> returning void) - ;; - - let stubs_norm_scalaropt_dtype_out = - foreign - "atg_norm_scalaropt_dtype_out" - (ptr t @-> t @-> t @-> scalar @-> int @-> returning void) - ;; - - let stubs_normal_ = - foreign "atg_normal_" (ptr t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_normal_functional = - foreign "atg_normal_functional" (ptr t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_not_equal = foreign "atg_not_equal" (ptr t @-> t @-> scalar @-> returning void) - - let stubs_not_equal_ = - foreign "atg_not_equal_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_not_equal_scalar_out = - foreign "atg_not_equal_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_not_equal_tensor = - foreign "atg_not_equal_tensor" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_not_equal_tensor_ = - foreign "atg_not_equal_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_not_equal_tensor_out = - foreign "atg_not_equal_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_nuclear_norm = - foreign "atg_nuclear_norm" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs_nuclear_norm_dim = - foreign - "atg_nuclear_norm_dim" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_nuclear_norm_dim_out = - foreign - "atg_nuclear_norm_dim_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_nuclear_norm_out = - foreign "atg_nuclear_norm_out" (ptr t 
@-> t @-> t @-> int @-> returning void) - ;; - - let stubs_numpy_t = foreign "atg_numpy_t" (ptr t @-> t @-> returning void) - let stubs_one_hot = foreign "atg_one_hot" (ptr t @-> t @-> int64_t @-> returning void) - - let stubs_ones = - foreign "atg_ones" (ptr t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_ones_like = foreign "atg_ones_like" (ptr t @-> t @-> returning void) - - let stubs_ones_like_out = - foreign "atg_ones_like_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_ones_out = - foreign "atg_ones_out" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_orgqr = foreign "atg_orgqr" (ptr t @-> t @-> t @-> returning void) - - let stubs_orgqr_out = - foreign "atg_orgqr_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_ormqr = - foreign "atg_ormqr" (ptr t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_ormqr_out = - foreign - "atg_ormqr_out" - (ptr t @-> t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_outer = foreign "atg_outer" (ptr t @-> t @-> t @-> returning void) - - let stubs_outer_out = - foreign "atg_outer_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_output_nr = foreign "atg_output_nr" (t @-> returning int64_t) - - let stubs_pad = - foreign - "atg_pad" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> string - @-> double - @-> int - @-> returning void) - ;; - - let stubs_pad_sequence = - foreign - "atg_pad_sequence" - (ptr t @-> ptr t @-> int @-> int @-> double @-> returning void) - ;; - - let stubs_pairwise_distance = - foreign - "atg_pairwise_distance" - (ptr t @-> t @-> t @-> double @-> double @-> int @-> returning void) - ;; - - let stubs_pdist = foreign "atg_pdist" (ptr t @-> t @-> double @-> returning void) - - let stubs_permute = - foreign "atg_permute" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_permute_copy = - foreign "atg_permute_copy" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_permute_copy_out = - foreign - "atg_permute_copy_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_pin_memory = foreign "atg_pin_memory" (ptr t @-> t @-> int @-> returning void) - let stubs_pinverse = foreign "atg_pinverse" (ptr t @-> t @-> double @-> returning void) - - let stubs_pixel_shuffle = - foreign "atg_pixel_shuffle" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_pixel_shuffle_out = - foreign "atg_pixel_shuffle_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_pixel_unshuffle = - foreign "atg_pixel_unshuffle" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_pixel_unshuffle_out = - foreign "atg_pixel_unshuffle_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_poisson = foreign "atg_poisson" (ptr t @-> t @-> returning void) - - let stubs_poisson_nll_loss = - foreign - "atg_poisson_nll_loss" - (ptr t @-> t @-> t @-> int @-> int @-> double @-> int64_t @-> returning void) - ;; - - let stubs_poisson_out = foreign "atg_poisson_out" (ptr t @-> t @-> t @-> returning void) - let stubs_polar = foreign "atg_polar" (ptr t @-> t @-> t @-> returning void) - - let stubs_polar_out = - foreign "atg_polar_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_polygamma = - foreign "atg_polygamma" (ptr t @-> int64_t @-> t @-> returning void) - ;; - - let stubs_polygamma_ = - foreign "atg_polygamma_" (ptr t @-> t @-> int64_t @-> returning void) - 
;; -end - -module C18 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_polygamma_out = - foreign "atg_polygamma_out" (ptr t @-> t @-> int64_t @-> t @-> returning void) - ;; - - let stubs_positive = foreign "atg_positive" (ptr t @-> t @-> returning void) - let stubs_pow = foreign "atg_pow" (ptr t @-> t @-> t @-> returning void) - let stubs_pow_ = foreign "atg_pow_" (ptr t @-> t @-> scalar @-> returning void) - - let stubs_pow_scalar = - foreign "atg_pow_scalar" (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_pow_scalar_out = - foreign "atg_pow_scalar_out" (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_pow_tensor_ = foreign "atg_pow_tensor_" (ptr t @-> t @-> t @-> returning void) - - let stubs_pow_tensor_scalar = - foreign "atg_pow_tensor_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_pow_tensor_scalar_out = - foreign "atg_pow_tensor_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_pow_tensor_tensor_out = - foreign "atg_pow_tensor_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_prelu = foreign "atg_prelu" (ptr t @-> t @-> t @-> returning void) - let stubs_prod = foreign "atg_prod" (ptr t @-> t @-> int @-> returning void) - - let stubs_prod_dim_int = - foreign "atg_prod_dim_int" (ptr t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_prod_int_out = - foreign - "atg_prod_int_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_prod_out = - foreign "atg_prod_out" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_put = foreign "atg_put" (ptr t @-> t @-> t @-> t @-> int @-> returning void) - let stubs_put_ = foreign "atg_put_" (ptr t @-> t @-> t @-> t @-> int @-> returning void) - - let stubs_put_out = - foreign "atg_put_out" (ptr t @-> t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_q_per_channel_axis = foreign "atg_q_per_channel_axis" (t @-> returning int64_t) - - let stubs_q_per_channel_scales = - foreign "atg_q_per_channel_scales" (ptr t @-> t @-> returning void) - ;; - - let stubs_q_per_channel_scales_out = - foreign "atg_q_per_channel_scales_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_q_per_channel_zero_points = - foreign "atg_q_per_channel_zero_points" (ptr t @-> t @-> returning void) - ;; - - let stubs_q_per_channel_zero_points_out = - foreign "atg_q_per_channel_zero_points_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_q_scale = foreign "atg_q_scale" (t @-> returning double) - let stubs_q_zero_point = foreign "atg_q_zero_point" (t @-> returning int64_t) - let stubs_qr = foreign "atg_qr" (ptr t @-> t @-> int @-> returning void) - let stubs_qr_q = foreign "atg_qr_q" (ptr t @-> t @-> t @-> t @-> int @-> returning void) - - let stubs_quantile = - foreign - "atg_quantile" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int @-> string @-> returning void) - ;; - - let stubs_quantile_out = - foreign - "atg_quantile_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> int @-> string @-> returning void) - ;; - - let stubs_quantile_scalar = - foreign - "atg_quantile_scalar" - (ptr t @-> t @-> double @-> int64_t @-> int @-> int @-> string @-> returning void) - ;; - - let stubs_quantile_scalar_out = - foreign - "atg_quantile_scalar_out" - (ptr t - @-> t - @-> t - @-> double - @-> int64_t - @-> int - @-> int - @-> string - @-> 
returning void) - ;; - - let stubs_quantize_per_channel = - foreign - "atg_quantize_per_channel" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_quantize_per_channel_out = - foreign - "atg_quantize_per_channel_out" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_quantize_per_tensor = - foreign - "atg_quantize_per_tensor" - (ptr t @-> t @-> double @-> int64_t @-> int @-> returning void) - ;; - - let stubs_quantize_per_tensor_dynamic = - foreign - "atg_quantize_per_tensor_dynamic" - (ptr t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_quantize_per_tensor_dynamic_out = - foreign - "atg_quantize_per_tensor_dynamic_out" - (ptr t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_quantize_per_tensor_out = - foreign - "atg_quantize_per_tensor_out" - (ptr t @-> t @-> t @-> double @-> int64_t @-> int @-> returning void) - ;; - - let stubs_quantize_per_tensor_tensor_qparams = - foreign - "atg_quantize_per_tensor_tensor_qparams" - (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_quantize_per_tensor_tensor_qparams_out = - foreign - "atg_quantize_per_tensor_tensor_qparams_out" - (ptr t @-> t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_quantize_per_tensor_tensors = - foreign - "atg_quantize_per_tensor_tensors" - (ptr t @-> int @-> t @-> t @-> int @-> returning (ptr t)) - ;; - - let stubs_quantize_per_tensor_tensors_out = - foreign - "atg_quantize_per_tensor_tensors_out" - (ptr t @-> int @-> ptr t @-> int @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_quantized_batch_norm = - foreign - "atg_quantized_batch_norm" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> double - @-> int64_t - @-> returning void) - ;; - - let stubs_quantized_batch_norm_out = - foreign - "atg_quantized_batch_norm_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> double - @-> int64_t - @-> returning void) - ;; - - let stubs_quantized_gru_cell = - foreign - "atg_quantized_gru_cell" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> scalar - @-> scalar - @-> scalar - @-> scalar - @-> returning void) - ;; - - let stubs_quantized_lstm_cell = - foreign - "atg_quantized_lstm_cell" - (ptr t - @-> t - @-> ptr t - @-> int - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> scalar - @-> scalar - @-> scalar - @-> scalar - @-> returning void) - ;; - - let stubs_quantized_max_pool1d = - foreign - "atg_quantized_max_pool1d" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs_quantized_max_pool1d_out = - foreign - "atg_quantized_max_pool1d_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs_quantized_max_pool2d = - foreign - "atg_quantized_max_pool2d" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs_quantized_max_pool2d_out = - foreign - "atg_quantized_max_pool2d_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs_quantized_max_pool3d = - 
foreign - "atg_quantized_max_pool3d" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs_quantized_max_pool3d_out = - foreign - "atg_quantized_max_pool3d_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> returning void) - ;; - - let stubs_quantized_rnn_relu_cell = - foreign - "atg_quantized_rnn_relu_cell" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> scalar - @-> scalar - @-> scalar - @-> scalar - @-> returning void) - ;; - - let stubs_quantized_rnn_tanh_cell = - foreign - "atg_quantized_rnn_tanh_cell" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> t - @-> scalar - @-> scalar - @-> scalar - @-> scalar - @-> returning void) - ;; - - let stubs_rad2deg = foreign "atg_rad2deg" (ptr t @-> t @-> returning void) - let stubs_rad2deg_ = foreign "atg_rad2deg_" (ptr t @-> t @-> returning void) - let stubs_rad2deg_out = foreign "atg_rad2deg_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_rand = - foreign "atg_rand" (ptr t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_rand_like = foreign "atg_rand_like" (ptr t @-> t @-> returning void) - - let stubs_rand_like_out = - foreign "atg_rand_like_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_rand_out = - foreign "atg_rand_out" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_randint = - foreign - "atg_randint" - (ptr t @-> int64_t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_randint_like = - foreign "atg_randint_like" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_randint_like_low_dtype = - foreign - "atg_randint_like_low_dtype" - (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_randint_like_low_dtype_out = - foreign - "atg_randint_like_low_dtype_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_randint_like_out = - foreign "atg_randint_like_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_randint_low = - foreign - "atg_randint_low" - (ptr t - @-> int64_t - @-> int64_t - @-> ptr int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_randint_low_out = - foreign - "atg_randint_low_out" - (ptr t @-> t @-> int64_t @-> int64_t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_randint_out = - foreign - "atg_randint_out" - (ptr t @-> t @-> int64_t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_randn = - foreign "atg_randn" (ptr t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_randn_like = foreign "atg_randn_like" (ptr t @-> t @-> returning void) - - let stubs_randn_like_out = - foreign "atg_randn_like_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_randn_out = - foreign "atg_randn_out" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_random = foreign "atg_random" (ptr t @-> t @-> returning void) - let stubs_random_ = foreign "atg_random_" (ptr t @-> t @-> returning void) - - let stubs_random_from = - foreign - "atg_random_from" - (ptr t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_random_from_ = - foreign - "atg_random_from_" - (ptr t @-> t @-> int64_t @-> int64_t 
@-> int @-> returning void) - ;; - - let stubs_random_from_out = - foreign - "atg_random_from_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_random_out = foreign "atg_random_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_random_to = - foreign "atg_random_to" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_random_to_ = - foreign "atg_random_to_" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_random_to_out = - foreign "atg_random_to_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_randperm = - foreign "atg_randperm" (ptr t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_randperm_out = - foreign "atg_randperm_out" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_range = - foreign "atg_range" (ptr t @-> scalar @-> scalar @-> int @-> int @-> returning void) - ;; - - let stubs_range_out = - foreign "atg_range_out" (ptr t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_range_out_ = - foreign "atg_range_out_" (ptr t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_range_step = - foreign - "atg_range_step" - (ptr t @-> scalar @-> scalar @-> int @-> int @-> returning void) - ;; - - let stubs_ravel = foreign "atg_ravel" (ptr t @-> t @-> returning void) - let stubs_real = foreign "atg_real" (ptr t @-> t @-> returning void) - let stubs_reciprocal = foreign "atg_reciprocal" (ptr t @-> t @-> returning void) - let stubs_reciprocal_ = foreign "atg_reciprocal_" (ptr t @-> t @-> returning void) - - let stubs_reciprocal_out = - foreign "atg_reciprocal_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_reflection_pad1d = - foreign "atg_reflection_pad1d" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reflection_pad1d_backward = - foreign - "atg_reflection_pad1d_backward" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reflection_pad1d_backward_grad_input = - foreign - "atg_reflection_pad1d_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reflection_pad1d_out = - foreign - "atg_reflection_pad1d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reflection_pad2d = - foreign "atg_reflection_pad2d" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reflection_pad2d_backward = - foreign - "atg_reflection_pad2d_backward" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reflection_pad2d_backward_grad_input = - foreign - "atg_reflection_pad2d_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reflection_pad2d_out = - foreign - "atg_reflection_pad2d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; -end - -module C19 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_reflection_pad3d = - foreign "atg_reflection_pad3d" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reflection_pad3d_backward = - foreign - "atg_reflection_pad3d_backward" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reflection_pad3d_backward_grad_input = - foreign - "atg_reflection_pad3d_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> 
returning void) - ;; - - let stubs_reflection_pad3d_out = - foreign - "atg_reflection_pad3d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_relu = foreign "atg_relu" (ptr t @-> t @-> returning void) - let stubs_relu6 = foreign "atg_relu6" (ptr t @-> t @-> returning void) - let stubs_relu6_ = foreign "atg_relu6_" (ptr t @-> t @-> returning void) - let stubs_relu_ = foreign "atg_relu_" (ptr t @-> t @-> returning void) - let stubs_relu_out = foreign "atg_relu_out" (ptr t @-> t @-> t @-> returning void) - let stubs_remainder = foreign "atg_remainder" (ptr t @-> t @-> scalar @-> returning void) - - let stubs_remainder_ = - foreign "atg_remainder_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_remainder_scalar_out = - foreign "atg_remainder_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_remainder_scalar_tensor = - foreign "atg_remainder_scalar_tensor" (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_remainder_scalar_tensor_out = - foreign - "atg_remainder_scalar_tensor_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_remainder_tensor = - foreign "atg_remainder_tensor" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_remainder_tensor_ = - foreign "atg_remainder_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_remainder_tensor_out = - foreign "atg_remainder_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_renorm = - foreign "atg_renorm" (ptr t @-> t @-> scalar @-> int64_t @-> scalar @-> returning void) - ;; - - let stubs_renorm_ = - foreign - "atg_renorm_" - (ptr t @-> t @-> scalar @-> int64_t @-> scalar @-> returning void) - ;; - - let stubs_renorm_out = - foreign - "atg_renorm_out" - (ptr t @-> t @-> t @-> scalar @-> int64_t @-> scalar @-> returning void) - ;; - - let stubs_repeat = - foreign "atg_repeat" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_repeat_interleave = - foreign "atg_repeat_interleave" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_repeat_interleave_self_int = - foreign - "atg_repeat_interleave_self_int" - (ptr t @-> t @-> int64_t @-> int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs_repeat_interleave_self_tensor = - foreign - "atg_repeat_interleave_self_tensor" - (ptr t @-> t @-> t @-> int64_t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs_repeat_interleave_tensor_out = - foreign - "atg_repeat_interleave_tensor_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_repeat_out = - foreign "atg_repeat_out" (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad1d = - foreign - "atg_replication_pad1d" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad1d_backward = - foreign - "atg_replication_pad1d_backward" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad1d_backward_grad_input = - foreign - "atg_replication_pad1d_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad1d_out = - foreign - "atg_replication_pad1d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad2d = - foreign - "atg_replication_pad2d" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad2d_backward = - 
foreign - "atg_replication_pad2d_backward" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad2d_backward_grad_input = - foreign - "atg_replication_pad2d_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad2d_out = - foreign - "atg_replication_pad2d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad3d = - foreign - "atg_replication_pad3d" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad3d_backward = - foreign - "atg_replication_pad3d_backward" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad3d_backward_grad_input = - foreign - "atg_replication_pad3d_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_replication_pad3d_out = - foreign - "atg_replication_pad3d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_requires_grad_ = - foreign "atg_requires_grad_" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs_reshape = - foreign "atg_reshape" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_reshape_as = foreign "atg_reshape_as" (ptr t @-> t @-> t @-> returning void) - - let stubs_resize = - foreign "atg_resize" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_resize_ = - foreign "atg_resize_" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_resize_as = foreign "atg_resize_as" (ptr t @-> t @-> t @-> returning void) - let stubs_resize_as_ = foreign "atg_resize_as_" (ptr t @-> t @-> t @-> returning void) - - let stubs_resize_as_out = - foreign "atg_resize_as_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_resize_as_sparse = - foreign "atg_resize_as_sparse" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_resize_as_sparse_ = - foreign "atg_resize_as_sparse_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_resize_as_sparse_out = - foreign "atg_resize_as_sparse_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_resize_out = - foreign "atg_resize_out" (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_resolve_conj = foreign "atg_resolve_conj" (ptr t @-> t @-> returning void) - let stubs_resolve_neg = foreign "atg_resolve_neg" (ptr t @-> t @-> returning void) - let stubs_retains_grad = foreign "atg_retains_grad" (t @-> returning bool) - - let stubs_rnn_relu = - foreign - "atg_rnn_relu" - (ptr t - @-> t - @-> t - @-> ptr t - @-> int - @-> int - @-> int64_t - @-> double - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_rnn_relu_cell = - foreign - "atg_rnn_relu_cell" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_rnn_relu_data = - foreign - "atg_rnn_relu_data" - (ptr t - @-> t - @-> t - @-> t - @-> ptr t - @-> int - @-> int - @-> int64_t - @-> double - @-> int - @-> int - @-> returning void) - ;; - - let stubs_rnn_tanh = - foreign - "atg_rnn_tanh" - (ptr t - @-> t - @-> t - @-> ptr t - @-> int - @-> int - @-> int64_t - @-> double - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_rnn_tanh_cell = - foreign - "atg_rnn_tanh_cell" - (ptr t @-> t @-> t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_rnn_tanh_data = - foreign - "atg_rnn_tanh_data" - (ptr t - @-> t - @-> t - @-> t - 
@-> ptr t - @-> int - @-> int - @-> int64_t - @-> double - @-> int - @-> int - @-> returning void) - ;; - - let stubs_roll = - foreign - "atg_roll" - (ptr t @-> t @-> ptr int64_t @-> int @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_roll_out = - foreign - "atg_roll_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_rot90 = - foreign - "atg_rot90" - (ptr t @-> t @-> int64_t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_rot90_out = - foreign - "atg_rot90_out" - (ptr t @-> t @-> t @-> int64_t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_round = foreign "atg_round" (ptr t @-> t @-> returning void) - let stubs_round_ = foreign "atg_round_" (ptr t @-> t @-> returning void) - - let stubs_round_decimals = - foreign "atg_round_decimals" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_round_decimals_ = - foreign "atg_round_decimals_" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_round_decimals_out = - foreign "atg_round_decimals_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_round_out = foreign "atg_round_out" (ptr t @-> t @-> t @-> returning void) - let stubs_row_indices = foreign "atg_row_indices" (ptr t @-> t @-> returning void) - - let stubs_row_indices_copy = - foreign "atg_row_indices_copy" (ptr t @-> t @-> returning void) - ;; - - let stubs_row_indices_copy_out = - foreign "atg_row_indices_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_row_stack = - foreign "atg_row_stack" (ptr t @-> ptr t @-> int @-> returning void) - ;; - - let stubs_row_stack_out = - foreign "atg_row_stack_out" (ptr t @-> t @-> ptr t @-> int @-> returning void) - ;; - - let stubs_rrelu = foreign "atg_rrelu" (ptr t @-> t @-> int @-> returning void) - let stubs_rrelu_ = foreign "atg_rrelu_" (ptr t @-> t @-> int @-> returning void) - - let stubs_rrelu_with_noise = - foreign "atg_rrelu_with_noise" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_rrelu_with_noise_ = - foreign "atg_rrelu_with_noise_" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_rrelu_with_noise_backward = - foreign - "atg_rrelu_with_noise_backward" - (ptr t @-> t @-> t @-> t @-> scalar @-> scalar @-> int @-> int @-> returning void) - ;; - - let stubs_rrelu_with_noise_backward_out = - foreign - "atg_rrelu_with_noise_backward_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> scalar - @-> scalar - @-> int - @-> int - @-> returning void) - ;; - - let stubs_rrelu_with_noise_out = - foreign "atg_rrelu_with_noise_out" (ptr t @-> t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_rsqrt = foreign "atg_rsqrt" (ptr t @-> t @-> returning void) - let stubs_rsqrt_ = foreign "atg_rsqrt_" (ptr t @-> t @-> returning void) - let stubs_rsqrt_out = foreign "atg_rsqrt_out" (ptr t @-> t @-> t @-> returning void) - let stubs_rsub = foreign "atg_rsub" (ptr t @-> t @-> t @-> returning void) - - let stubs_rsub_scalar = - foreign "atg_rsub_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_rsub_scalar_out = - foreign "atg_rsub_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_rsub_tensor_out = - foreign "atg_rsub_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_scalar_tensor = - foreign "atg_scalar_tensor" (ptr t @-> scalar @-> int @-> int @-> returning void) - ;; - - let stubs_scalar_tensor_out = - foreign "atg_scalar_tensor_out" (ptr t @-> t @-> scalar 
@-> returning void) - ;; - - let stubs_scaled_dot_product_attention = - foreign - "atg_scaled_dot_product_attention" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_scatter = - foreign "atg_scatter" (ptr t @-> t @-> int64_t @-> t @-> t @-> returning void) - ;; - - let stubs_scatter_ = - foreign "atg_scatter_" (ptr t @-> t @-> int64_t @-> t @-> t @-> returning void) - ;; - - let stubs_scatter_add = - foreign "atg_scatter_add" (ptr t @-> t @-> int64_t @-> t @-> t @-> returning void) - ;; - - let stubs_scatter_add_ = - foreign "atg_scatter_add_" (ptr t @-> t @-> int64_t @-> t @-> t @-> returning void) - ;; - - let stubs_scatter_add_out = - foreign - "atg_scatter_add_out" - (ptr t @-> t @-> t @-> int64_t @-> t @-> t @-> returning void) - ;; - - let stubs_scatter_reduce = - foreign - "atg_scatter_reduce" - (ptr t @-> t @-> int64_t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_scatter_reduce_ = - foreign - "atg_scatter_reduce_" - (ptr t @-> t @-> int64_t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_scatter_reduce_out = - foreign - "atg_scatter_reduce_out" - (ptr t @-> t @-> t @-> int64_t @-> t @-> t @-> string @-> returning void) - ;; - - let stubs_scatter_src_out = - foreign - "atg_scatter_src_out" - (ptr t @-> t @-> t @-> int64_t @-> t @-> t @-> returning void) - ;; -end - -module C20 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_scatter_value = - foreign - "atg_scatter_value" - (ptr t @-> t @-> int64_t @-> t @-> scalar @-> returning void) - ;; - - let stubs_scatter_value_ = - foreign - "atg_scatter_value_" - (ptr t @-> t @-> int64_t @-> t @-> scalar @-> returning void) - ;; - - let stubs_scatter_value_out = - foreign - "atg_scatter_value_out" - (ptr t @-> t @-> t @-> int64_t @-> t @-> scalar @-> returning void) - ;; - - let stubs_scatter_value_reduce = - foreign - "atg_scatter_value_reduce" - (ptr t @-> t @-> int64_t @-> t @-> scalar @-> string @-> returning void) - ;; - - let stubs_scatter_value_reduce_ = - foreign - "atg_scatter_value_reduce_" - (ptr t @-> t @-> int64_t @-> t @-> scalar @-> string @-> returning void) - ;; - - let stubs_scatter_value_reduce_out = - foreign - "atg_scatter_value_reduce_out" - (ptr t @-> t @-> t @-> int64_t @-> t @-> scalar @-> string @-> returning void) - ;; - - let stubs_searchsorted = - foreign - "atg_searchsorted" - (ptr t @-> t @-> t @-> int @-> int @-> string @-> t @-> returning void) - ;; - - let stubs_searchsorted_scalar = - foreign - "atg_searchsorted_scalar" - (ptr t @-> t @-> scalar @-> int @-> int @-> string @-> t @-> returning void) - ;; - - let stubs_searchsorted_scalar_out = - foreign - "atg_searchsorted_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> int @-> int @-> string @-> t @-> returning void) - ;; - - let stubs_searchsorted_tensor_out = - foreign - "atg_searchsorted_tensor_out" - (ptr t @-> t @-> t @-> t @-> int @-> int @-> string @-> t @-> returning void) - ;; - - let stubs_segment_reduce = - foreign - "atg_segment_reduce" - (ptr t - @-> t - @-> string - @-> t - @-> t - @-> t - @-> int64_t - @-> int - @-> scalar - @-> returning void) - ;; - - let stubs_segment_reduce_out = - foreign - "atg_segment_reduce_out" - (ptr t - @-> t - @-> t - @-> string - @-> t - @-> t - @-> t - @-> int64_t - @-> int - @-> scalar - @-> returning void) - ;; - - let stubs_select = - foreign "atg_select" (ptr t @-> t @-> int64_t @-> 
int64_t @-> returning void) - ;; - - let stubs_select_backward = - foreign - "atg_select_backward" - (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_select_backward_out = - foreign - "atg_select_backward_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_select_copy = - foreign "atg_select_copy" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_select_copy_int_out = - foreign - "atg_select_copy_int_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_select_scatter = - foreign - "atg_select_scatter" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_select_scatter_out = - foreign - "atg_select_scatter_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_selu = foreign "atg_selu" (ptr t @-> t @-> returning void) - let stubs_selu_ = foreign "atg_selu_" (ptr t @-> t @-> returning void) - let stubs_set = foreign "atg_set" (ptr t @-> t @-> returning void) - let stubs_set_ = foreign "atg_set_" (ptr t @-> t @-> returning void) - let stubs_set_out = foreign "atg_set_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_set_requires_grad = - foreign "atg_set_requires_grad" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs_set_source_tensor = - foreign "atg_set_source_tensor" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_set_source_tensor_ = - foreign "atg_set_source_tensor_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_set_source_tensor_out = - foreign "atg_set_source_tensor_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_set_source_tensor_storage_offset_ = - foreign - "atg_set_source_tensor_storage_offset_" - (ptr t - @-> t - @-> t - @-> int64_t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_sgn = foreign "atg_sgn" (ptr t @-> t @-> returning void) - let stubs_sgn_ = foreign "atg_sgn_" (ptr t @-> t @-> returning void) - let stubs_sgn_out = foreign "atg_sgn_out" (ptr t @-> t @-> t @-> returning void) - let stubs_sigmoid = foreign "atg_sigmoid" (ptr t @-> t @-> returning void) - let stubs_sigmoid_ = foreign "atg_sigmoid_" (ptr t @-> t @-> returning void) - - let stubs_sigmoid_backward = - foreign "atg_sigmoid_backward" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_sigmoid_backward_grad_input = - foreign "atg_sigmoid_backward_grad_input" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_sigmoid_out = foreign "atg_sigmoid_out" (ptr t @-> t @-> t @-> returning void) - let stubs_sign = foreign "atg_sign" (ptr t @-> t @-> returning void) - let stubs_sign_ = foreign "atg_sign_" (ptr t @-> t @-> returning void) - let stubs_sign_out = foreign "atg_sign_out" (ptr t @-> t @-> t @-> returning void) - let stubs_signbit = foreign "atg_signbit" (ptr t @-> t @-> returning void) - let stubs_signbit_out = foreign "atg_signbit_out" (ptr t @-> t @-> t @-> returning void) - let stubs_silu = foreign "atg_silu" (ptr t @-> t @-> returning void) - let stubs_silu_ = foreign "atg_silu_" (ptr t @-> t @-> returning void) - - let stubs_silu_backward = - foreign "atg_silu_backward" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_silu_backward_grad_input = - foreign "atg_silu_backward_grad_input" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_silu_out = foreign "atg_silu_out" (ptr t @-> t @-> t @-> returning 
void) - let stubs_sin = foreign "atg_sin" (ptr t @-> t @-> returning void) - let stubs_sin_ = foreign "atg_sin_" (ptr t @-> t @-> returning void) - let stubs_sin_out = foreign "atg_sin_out" (ptr t @-> t @-> t @-> returning void) - let stubs_sinc = foreign "atg_sinc" (ptr t @-> t @-> returning void) - let stubs_sinc_ = foreign "atg_sinc_" (ptr t @-> t @-> returning void) - let stubs_sinc_out = foreign "atg_sinc_out" (ptr t @-> t @-> t @-> returning void) - let stubs_sinh = foreign "atg_sinh" (ptr t @-> t @-> returning void) - let stubs_sinh_ = foreign "atg_sinh_" (ptr t @-> t @-> returning void) - let stubs_sinh_out = foreign "atg_sinh_out" (ptr t @-> t @-> t @-> returning void) - let stubs_size = foreign "atg_size" (t @-> int64_t @-> returning int64_t) - - let stubs_slice = - foreign - "atg_slice" - (ptr t - @-> t - @-> int64_t - @-> int64_t - @-> int - @-> int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_slice_backward = - foreign - "atg_slice_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_slice_backward_out = - foreign - "atg_slice_backward_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_slice_copy = - foreign - "atg_slice_copy" - (ptr t - @-> t - @-> int64_t - @-> int64_t - @-> int - @-> int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_slice_copy_tensor_out = - foreign - "atg_slice_copy_tensor_out" - (ptr t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int - @-> int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_slice_scatter = - foreign - "atg_slice_scatter" - (ptr t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int - @-> int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_slice_scatter_out = - foreign - "atg_slice_scatter_out" - (ptr t - @-> t - @-> t - @-> t - @-> int64_t - @-> int64_t - @-> int - @-> int64_t - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_slogdet = foreign "atg_slogdet" (ptr t @-> t @-> returning void) - - let stubs_slogdet_out = - foreign "atg_slogdet_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_slow_conv3d = - foreign - "atg_slow_conv3d" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_slow_conv3d_out = - foreign - "atg_slow_conv3d_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_slow_conv_dilated2d = - foreign - "atg_slow_conv_dilated2d" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_slow_conv_dilated2d_out = - foreign - "atg_slow_conv_dilated2d_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_slow_conv_dilated3d = - foreign - "atg_slow_conv_dilated3d" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_slow_conv_dilated3d_out = - foreign - "atg_slow_conv_dilated3d_out" - (ptr 
t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_slow_conv_transpose2d = - foreign - "atg_slow_conv_transpose2d" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_slow_conv_transpose2d_out = - foreign - "atg_slow_conv_transpose2d_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_slow_conv_transpose3d = - foreign - "atg_slow_conv_transpose3d" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_slow_conv_transpose3d_out = - foreign - "atg_slow_conv_transpose3d_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_smm = foreign "atg_smm" (ptr t @-> t @-> t @-> returning void) - - let stubs_smooth_l1_loss = - foreign - "atg_smooth_l1_loss" - (ptr t @-> t @-> t @-> int64_t @-> double @-> returning void) - ;; - - let stubs_smooth_l1_loss_backward = - foreign - "atg_smooth_l1_loss_backward" - (ptr t @-> t @-> t @-> t @-> int64_t @-> double @-> returning void) - ;; - - let stubs_smooth_l1_loss_backward_grad_input = - foreign - "atg_smooth_l1_loss_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> double @-> returning void) - ;; - - let stubs_smooth_l1_loss_out = - foreign - "atg_smooth_l1_loss_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> double @-> returning void) - ;; - - let stubs_soft_margin_loss = - foreign "atg_soft_margin_loss" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_soft_margin_loss_backward = - foreign - "atg_soft_margin_loss_backward" - (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_soft_margin_loss_backward_grad_input = - foreign - "atg_soft_margin_loss_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_soft_margin_loss_out = - foreign - "atg_soft_margin_loss_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_softmax = - foreign "atg_softmax" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_softmax_int_out = - foreign - "atg_softmax_int_out" - (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_softplus = foreign "atg_softplus" (ptr t @-> t @-> returning void) - - let stubs_softplus_backward = - foreign - "atg_softplus_backward" - (ptr t @-> t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_softplus_backward_grad_input = - foreign - "atg_softplus_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_softplus_out = - foreign "atg_softplus_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_softshrink = foreign "atg_softshrink" (ptr t @-> t @-> returning void) - - let stubs_softshrink_backward = - foreign "atg_softshrink_backward" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let 
stubs_softshrink_backward_grad_input = - foreign - "atg_softshrink_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_softshrink_out = - foreign "atg_softshrink_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_sort = foreign "atg_sort" (ptr t @-> t @-> int64_t @-> int @-> returning void) - - let stubs_sort_stable = - foreign "atg_sort_stable" (ptr t @-> t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs_sort_values = - foreign - "atg_sort_values" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_sort_values_stable = - foreign - "atg_sort_values_stable" - (ptr t @-> t @-> t @-> t @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs_sparse_bsc_tensor = - foreign - "atg_sparse_bsc_tensor" - (ptr t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; -end - -module C21 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_sparse_bsc_tensor_ccol_row_value_size = - foreign - "atg_sparse_bsc_tensor_ccol_row_value_size" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_bsr_tensor = - foreign - "atg_sparse_bsr_tensor" - (ptr t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_bsr_tensor_crow_col_value_size = - foreign - "atg_sparse_bsr_tensor_crow_col_value_size" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_compressed_tensor = - foreign - "atg_sparse_compressed_tensor" - (ptr t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_compressed_tensor_comp_plain_value_size = - foreign - "atg_sparse_compressed_tensor_comp_plain_value_size" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_coo_tensor = - foreign - "atg_sparse_coo_tensor" - (ptr t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_coo_tensor_indices = - foreign - "atg_sparse_coo_tensor_indices" - (ptr t @-> t @-> t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_coo_tensor_indices_size = - foreign - "atg_sparse_coo_tensor_indices_size" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_sparse_coo_tensor_size_out = - foreign - "atg_sparse_coo_tensor_size_out" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_sparse_csc_tensor = - foreign - "atg_sparse_csc_tensor" - (ptr t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_csc_tensor_ccol_row_value_size = - foreign - "atg_sparse_csc_tensor_ccol_row_value_size" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_csr_tensor = - foreign - "atg_sparse_csr_tensor" - (ptr t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_csr_tensor_crow_col_value_size = - foreign - "atg_sparse_csr_tensor_crow_col_value_size" - (ptr t @-> t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_sparse_dim = foreign "atg_sparse_dim" (t @-> returning int64_t) - let stubs_sparse_mask = foreign "atg_sparse_mask" (ptr t @-> t @-> t @-> returning void) - - let stubs_sparse_mask_out = - foreign "atg_sparse_mask_out" 
(ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_sparse_resize = - foreign - "atg_sparse_resize" - (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_sparse_resize_ = - foreign - "atg_sparse_resize_" - (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_sparse_resize_and_clear = - foreign - "atg_sparse_resize_and_clear" - (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_sparse_resize_and_clear_ = - foreign - "atg_sparse_resize_and_clear_" - (ptr t @-> t @-> ptr int64_t @-> int @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_sparse_resize_and_clear_out = - foreign - "atg_sparse_resize_and_clear_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_sparse_resize_out = - foreign - "atg_sparse_resize_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_sparse_sampled_addmm = - foreign "atg_sparse_sampled_addmm" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_sparse_sampled_addmm_out = - foreign - "atg_sparse_sampled_addmm_out" - (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_airy_ai = - foreign "atg_special_airy_ai" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_airy_ai_out = - foreign "atg_special_airy_ai_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_bessel_j0 = - foreign "atg_special_bessel_j0" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_bessel_j0_out = - foreign "atg_special_bessel_j0_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_bessel_j1 = - foreign "atg_special_bessel_j1" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_bessel_j1_out = - foreign "atg_special_bessel_j1_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_bessel_y0 = - foreign "atg_special_bessel_y0" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_bessel_y0_out = - foreign "atg_special_bessel_y0_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_bessel_y1 = - foreign "atg_special_bessel_y1" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_bessel_y1_out = - foreign "atg_special_bessel_y1_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_t = - foreign "atg_special_chebyshev_polynomial_t" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_t_n_scalar = - foreign - "atg_special_chebyshev_polynomial_t_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_t_n_scalar_out = - foreign - "atg_special_chebyshev_polynomial_t_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_t_out = - foreign - "atg_special_chebyshev_polynomial_t_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_t_x_scalar = - foreign - "atg_special_chebyshev_polynomial_t_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_t_x_scalar_out = - foreign - "atg_special_chebyshev_polynomial_t_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_u = - foreign "atg_special_chebyshev_polynomial_u" (ptr t @-> t @-> t 
@-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_u_n_scalar = - foreign - "atg_special_chebyshev_polynomial_u_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_u_n_scalar_out = - foreign - "atg_special_chebyshev_polynomial_u_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_u_out = - foreign - "atg_special_chebyshev_polynomial_u_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_u_x_scalar = - foreign - "atg_special_chebyshev_polynomial_u_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_u_x_scalar_out = - foreign - "atg_special_chebyshev_polynomial_u_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_v = - foreign "atg_special_chebyshev_polynomial_v" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_v_n_scalar = - foreign - "atg_special_chebyshev_polynomial_v_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_v_n_scalar_out = - foreign - "atg_special_chebyshev_polynomial_v_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_v_out = - foreign - "atg_special_chebyshev_polynomial_v_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_v_x_scalar = - foreign - "atg_special_chebyshev_polynomial_v_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_v_x_scalar_out = - foreign - "atg_special_chebyshev_polynomial_v_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_w = - foreign "atg_special_chebyshev_polynomial_w" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_w_n_scalar = - foreign - "atg_special_chebyshev_polynomial_w_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_w_n_scalar_out = - foreign - "atg_special_chebyshev_polynomial_w_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_w_out = - foreign - "atg_special_chebyshev_polynomial_w_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_w_x_scalar = - foreign - "atg_special_chebyshev_polynomial_w_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_chebyshev_polynomial_w_x_scalar_out = - foreign - "atg_special_chebyshev_polynomial_w_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_digamma = - foreign "atg_special_digamma" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_digamma_out = - foreign "atg_special_digamma_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_entr = foreign "atg_special_entr" (ptr t @-> t @-> returning void) - - let stubs_special_entr_out = - foreign "atg_special_entr_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_erf = foreign "atg_special_erf" (ptr t @-> t @-> returning void) - - let stubs_special_erf_out = - foreign "atg_special_erf_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_erfc = foreign "atg_special_erfc" (ptr t @-> t @-> returning void) - - let 
stubs_special_erfc_out = - foreign "atg_special_erfc_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_erfcx = foreign "atg_special_erfcx" (ptr t @-> t @-> returning void) - - let stubs_special_erfcx_out = - foreign "atg_special_erfcx_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_erfinv = foreign "atg_special_erfinv" (ptr t @-> t @-> returning void) - - let stubs_special_erfinv_out = - foreign "atg_special_erfinv_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_exp2 = foreign "atg_special_exp2" (ptr t @-> t @-> returning void) - - let stubs_special_exp2_out = - foreign "atg_special_exp2_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_expit = foreign "atg_special_expit" (ptr t @-> t @-> returning void) - - let stubs_special_expit_out = - foreign "atg_special_expit_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_expm1 = foreign "atg_special_expm1" (ptr t @-> t @-> returning void) - - let stubs_special_expm1_out = - foreign "atg_special_expm1_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_gammainc = - foreign "atg_special_gammainc" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_gammainc_out = - foreign "atg_special_gammainc_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_gammaincc = - foreign "atg_special_gammaincc" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_gammaincc_out = - foreign "atg_special_gammaincc_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_gammaln = - foreign "atg_special_gammaln" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_gammaln_out = - foreign "atg_special_gammaln_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_hermite_polynomial_h = - foreign "atg_special_hermite_polynomial_h" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_hermite_polynomial_h_n_scalar = - foreign - "atg_special_hermite_polynomial_h_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_hermite_polynomial_h_n_scalar_out = - foreign - "atg_special_hermite_polynomial_h_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_hermite_polynomial_h_out = - foreign - "atg_special_hermite_polynomial_h_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_hermite_polynomial_h_x_scalar = - foreign - "atg_special_hermite_polynomial_h_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_hermite_polynomial_h_x_scalar_out = - foreign - "atg_special_hermite_polynomial_h_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_hermite_polynomial_he = - foreign "atg_special_hermite_polynomial_he" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_hermite_polynomial_he_n_scalar = - foreign - "atg_special_hermite_polynomial_he_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_hermite_polynomial_he_n_scalar_out = - foreign - "atg_special_hermite_polynomial_he_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_hermite_polynomial_he_out = - foreign - "atg_special_hermite_polynomial_he_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_hermite_polynomial_he_x_scalar = - foreign - "atg_special_hermite_polynomial_he_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - 
;; - - let stubs_special_hermite_polynomial_he_x_scalar_out = - foreign - "atg_special_hermite_polynomial_he_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_i0 = foreign "atg_special_i0" (ptr t @-> t @-> returning void) - - let stubs_special_i0_out = - foreign "atg_special_i0_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_i0e = foreign "atg_special_i0e" (ptr t @-> t @-> returning void) - - let stubs_special_i0e_out = - foreign "atg_special_i0e_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_i1 = foreign "atg_special_i1" (ptr t @-> t @-> returning void) - - let stubs_special_i1_out = - foreign "atg_special_i1_out" (ptr t @-> t @-> t @-> returning void) - ;; -end - -module C22 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - let stubs_special_i1e = foreign "atg_special_i1e" (ptr t @-> t @-> returning void) - - let stubs_special_i1e_out = - foreign "atg_special_i1e_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_laguerre_polynomial_l = - foreign "atg_special_laguerre_polynomial_l" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_laguerre_polynomial_l_n_scalar = - foreign - "atg_special_laguerre_polynomial_l_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_laguerre_polynomial_l_n_scalar_out = - foreign - "atg_special_laguerre_polynomial_l_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_laguerre_polynomial_l_out = - foreign - "atg_special_laguerre_polynomial_l_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_laguerre_polynomial_l_x_scalar = - foreign - "atg_special_laguerre_polynomial_l_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_laguerre_polynomial_l_x_scalar_out = - foreign - "atg_special_laguerre_polynomial_l_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_legendre_polynomial_p = - foreign "atg_special_legendre_polynomial_p" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_legendre_polynomial_p_n_scalar = - foreign - "atg_special_legendre_polynomial_p_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_legendre_polynomial_p_n_scalar_out = - foreign - "atg_special_legendre_polynomial_p_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_legendre_polynomial_p_out = - foreign - "atg_special_legendre_polynomial_p_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_legendre_polynomial_p_x_scalar = - foreign - "atg_special_legendre_polynomial_p_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_legendre_polynomial_p_x_scalar_out = - foreign - "atg_special_legendre_polynomial_p_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_log1p = foreign "atg_special_log1p" (ptr t @-> t @-> returning void) - - let stubs_special_log1p_out = - foreign "atg_special_log1p_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_log_ndtr = - foreign "atg_special_log_ndtr" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_log_ndtr_out = - foreign "atg_special_log_ndtr_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_log_softmax = - foreign 
"atg_special_log_softmax" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_special_logit = - foreign "atg_special_logit" (ptr t @-> t @-> double @-> int @-> returning void) - ;; - - let stubs_special_logit_out = - foreign - "atg_special_logit_out" - (ptr t @-> t @-> t @-> double @-> int @-> returning void) - ;; - - let stubs_special_logsumexp = - foreign - "atg_special_logsumexp" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_special_logsumexp_out = - foreign - "atg_special_logsumexp_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_special_modified_bessel_i0 = - foreign "atg_special_modified_bessel_i0" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_modified_bessel_i0_out = - foreign "atg_special_modified_bessel_i0_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_modified_bessel_i1 = - foreign "atg_special_modified_bessel_i1" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_modified_bessel_i1_out = - foreign "atg_special_modified_bessel_i1_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_modified_bessel_k0 = - foreign "atg_special_modified_bessel_k0" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_modified_bessel_k0_out = - foreign "atg_special_modified_bessel_k0_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_modified_bessel_k1 = - foreign "atg_special_modified_bessel_k1" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_modified_bessel_k1_out = - foreign "atg_special_modified_bessel_k1_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_multigammaln = - foreign "atg_special_multigammaln" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_special_multigammaln_out = - foreign - "atg_special_multigammaln_out" - (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_special_ndtr = foreign "atg_special_ndtr" (ptr t @-> t @-> returning void) - - let stubs_special_ndtr_out = - foreign "atg_special_ndtr_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_ndtri = foreign "atg_special_ndtri" (ptr t @-> t @-> returning void) - - let stubs_special_ndtri_out = - foreign "atg_special_ndtri_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_polygamma = - foreign "atg_special_polygamma" (ptr t @-> int64_t @-> t @-> returning void) - ;; - - let stubs_special_polygamma_out = - foreign "atg_special_polygamma_out" (ptr t @-> t @-> int64_t @-> t @-> returning void) - ;; - - let stubs_special_psi = foreign "atg_special_psi" (ptr t @-> t @-> returning void) - - let stubs_special_psi_out = - foreign "atg_special_psi_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_round = - foreign "atg_special_round" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_special_round_out = - foreign "atg_special_round_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_special_scaled_modified_bessel_k0 = - foreign "atg_special_scaled_modified_bessel_k0" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_scaled_modified_bessel_k0_out = - foreign - "atg_special_scaled_modified_bessel_k0_out" - (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_scaled_modified_bessel_k1 = - foreign "atg_special_scaled_modified_bessel_k1" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_scaled_modified_bessel_k1_out = - foreign - 
"atg_special_scaled_modified_bessel_k1_out" - (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_t = - foreign - "atg_special_shifted_chebyshev_polynomial_t" - (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_t_n_scalar = - foreign - "atg_special_shifted_chebyshev_polynomial_t_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_t_n_scalar_out = - foreign - "atg_special_shifted_chebyshev_polynomial_t_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_t_out = - foreign - "atg_special_shifted_chebyshev_polynomial_t_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_t_x_scalar = - foreign - "atg_special_shifted_chebyshev_polynomial_t_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_t_x_scalar_out = - foreign - "atg_special_shifted_chebyshev_polynomial_t_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_u = - foreign - "atg_special_shifted_chebyshev_polynomial_u" - (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_u_n_scalar = - foreign - "atg_special_shifted_chebyshev_polynomial_u_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_u_n_scalar_out = - foreign - "atg_special_shifted_chebyshev_polynomial_u_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_u_out = - foreign - "atg_special_shifted_chebyshev_polynomial_u_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_u_x_scalar = - foreign - "atg_special_shifted_chebyshev_polynomial_u_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_u_x_scalar_out = - foreign - "atg_special_shifted_chebyshev_polynomial_u_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_v = - foreign - "atg_special_shifted_chebyshev_polynomial_v" - (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_v_n_scalar = - foreign - "atg_special_shifted_chebyshev_polynomial_v_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_v_n_scalar_out = - foreign - "atg_special_shifted_chebyshev_polynomial_v_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_v_out = - foreign - "atg_special_shifted_chebyshev_polynomial_v_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_v_x_scalar = - foreign - "atg_special_shifted_chebyshev_polynomial_v_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_v_x_scalar_out = - foreign - "atg_special_shifted_chebyshev_polynomial_v_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_w = - foreign - "atg_special_shifted_chebyshev_polynomial_w" - (ptr t @-> t @-> t @-> returning void) - ;; - - let 
stubs_special_shifted_chebyshev_polynomial_w_n_scalar = - foreign - "atg_special_shifted_chebyshev_polynomial_w_n_scalar" - (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_w_n_scalar_out = - foreign - "atg_special_shifted_chebyshev_polynomial_w_n_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_w_out = - foreign - "atg_special_shifted_chebyshev_polynomial_w_out" - (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_w_x_scalar = - foreign - "atg_special_shifted_chebyshev_polynomial_w_x_scalar" - (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_shifted_chebyshev_polynomial_w_x_scalar_out = - foreign - "atg_special_shifted_chebyshev_polynomial_w_x_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_sinc = foreign "atg_special_sinc" (ptr t @-> t @-> returning void) - - let stubs_special_sinc_out = - foreign "atg_special_sinc_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_softmax = - foreign "atg_special_softmax" (ptr t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_special_spherical_bessel_j0 = - foreign "atg_special_spherical_bessel_j0" (ptr t @-> t @-> returning void) - ;; - - let stubs_special_spherical_bessel_j0_out = - foreign "atg_special_spherical_bessel_j0_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_xlog1py = - foreign "atg_special_xlog1py" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_xlog1py_other_scalar = - foreign "atg_special_xlog1py_other_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_xlog1py_other_scalar_out = - foreign - "atg_special_xlog1py_other_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_xlog1py_out = - foreign "atg_special_xlog1py_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_xlog1py_self_scalar = - foreign "atg_special_xlog1py_self_scalar" (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_xlog1py_self_scalar_out = - foreign - "atg_special_xlog1py_self_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_xlogy = - foreign "atg_special_xlogy" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_xlogy_other_scalar = - foreign "atg_special_xlogy_other_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_xlogy_other_scalar_out = - foreign - "atg_special_xlogy_other_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_xlogy_out = - foreign "atg_special_xlogy_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_xlogy_self_scalar = - foreign "atg_special_xlogy_self_scalar" (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_xlogy_self_scalar_out = - foreign - "atg_special_xlogy_self_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_zeta = - foreign "atg_special_zeta" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_special_zeta_other_scalar = - foreign "atg_special_zeta_other_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_zeta_other_scalar_out = - foreign - "atg_special_zeta_other_scalar_out" - (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_special_zeta_out = - foreign 
"atg_special_zeta_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_special_zeta_self_scalar = - foreign "atg_special_zeta_self_scalar" (ptr t @-> scalar @-> t @-> returning void) - ;; - - let stubs_special_zeta_self_scalar_out = - foreign - "atg_special_zeta_self_scalar_out" - (ptr t @-> t @-> scalar @-> t @-> returning void) - ;; - - let stubs_split = foreign "atg_split" (t @-> int64_t @-> int64_t @-> returning (ptr t)) - - let stubs_split_copy = - foreign "atg_split_copy" (t @-> int64_t @-> int64_t @-> returning (ptr t)) - ;; - - let stubs_split_copy_tensor_out = - foreign - "atg_split_copy_tensor_out" - (ptr t @-> int @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_split_sizes = - foreign "atg_split_sizes" (t @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr t)) - ;; - - let stubs_split_with_sizes = - foreign - "atg_split_with_sizes" - (t @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr t)) - ;; - - let stubs_split_with_sizes_copy = - foreign - "atg_split_with_sizes_copy" - (t @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr t)) - ;; -end - -module C23 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_split_with_sizes_copy_out = - foreign - "atg_split_with_sizes_copy_out" - (ptr t @-> int @-> t @-> ptr int64_t @-> int @-> int64_t @-> returning void) - ;; - - let stubs_sqrt = foreign "atg_sqrt" (ptr t @-> t @-> returning void) - let stubs_sqrt_ = foreign "atg_sqrt_" (ptr t @-> t @-> returning void) - let stubs_sqrt_out = foreign "atg_sqrt_out" (ptr t @-> t @-> t @-> returning void) - let stubs_square = foreign "atg_square" (ptr t @-> t @-> returning void) - let stubs_square_ = foreign "atg_square_" (ptr t @-> t @-> returning void) - let stubs_square_out = foreign "atg_square_out" (ptr t @-> t @-> t @-> returning void) - let stubs_squeeze = foreign "atg_squeeze" (ptr t @-> t @-> returning void) - let stubs_squeeze_ = foreign "atg_squeeze_" (ptr t @-> t @-> returning void) - let stubs_squeeze_copy = foreign "atg_squeeze_copy" (ptr t @-> t @-> returning void) - - let stubs_squeeze_copy_dim = - foreign "atg_squeeze_copy_dim" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_squeeze_copy_dim_out = - foreign "atg_squeeze_copy_dim_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_squeeze_copy_dims = - foreign - "atg_squeeze_copy_dims" - (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_squeeze_copy_dims_out = - foreign - "atg_squeeze_copy_dims_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_squeeze_copy_out = - foreign "atg_squeeze_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_squeeze_dim = - foreign "atg_squeeze_dim" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_squeeze_dim_ = - foreign "atg_squeeze_dim_" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_squeeze_dims = - foreign "atg_squeeze_dims" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_squeeze_dims_ = - foreign "atg_squeeze_dims_" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_sspaddmm = foreign "atg_sspaddmm" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_sspaddmm_out = - foreign "atg_sspaddmm_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_stack = - foreign "atg_stack" (ptr t @-> ptr t @-> int @-> int64_t 
@-> returning void) - ;; - - let stubs_stack_out = - foreign "atg_stack_out" (ptr t @-> t @-> ptr t @-> int @-> int64_t @-> returning void) - ;; - - let stubs_std = foreign "atg_std" (ptr t @-> t @-> int @-> returning void) - - let stubs_std_correction = - foreign - "atg_std_correction" - (ptr t @-> t @-> ptr int64_t @-> int @-> scalar @-> int @-> returning void) - ;; - - let stubs_std_correction_out = - foreign - "atg_std_correction_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> scalar @-> int @-> returning void) - ;; - - let stubs_std_dim = - foreign - "atg_std_dim" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_std_mean = foreign "atg_std_mean" (ptr t @-> t @-> int @-> returning void) - - let stubs_std_mean_correction = - foreign - "atg_std_mean_correction" - (ptr t @-> t @-> ptr int64_t @-> int @-> scalar @-> int @-> returning void) - ;; - - let stubs_std_mean_correction_out = - foreign - "atg_std_mean_correction_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> scalar - @-> int - @-> returning void) - ;; - - let stubs_std_mean_dim = - foreign - "atg_std_mean_dim" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_std_out = - foreign - "atg_std_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_stft = - foreign - "atg_stft" - (ptr t - @-> t - @-> int64_t - @-> int64_t - @-> int - @-> int64_t - @-> int - @-> t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_stft_center = - foreign - "atg_stft_center" - (ptr t - @-> t - @-> int64_t - @-> int64_t - @-> int - @-> int64_t - @-> int - @-> t - @-> int - @-> string - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_stride = foreign "atg_stride" (t @-> int64_t @-> returning int64_t) - let stubs_sub = foreign "atg_sub" (ptr t @-> t @-> t @-> returning void) - let stubs_sub_ = foreign "atg_sub_" (ptr t @-> t @-> t @-> returning void) - let stubs_sub_out = foreign "atg_sub_out" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_sub_scalar = - foreign "atg_sub_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_sub_scalar_ = - foreign "atg_sub_scalar_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_sub_scalar_out = - foreign "atg_sub_scalar_out" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_subtract = foreign "atg_subtract" (ptr t @-> t @-> t @-> returning void) - let stubs_subtract_ = foreign "atg_subtract_" (ptr t @-> t @-> t @-> returning void) - - let stubs_subtract_out = - foreign "atg_subtract_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_subtract_scalar = - foreign "atg_subtract_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_subtract_scalar_ = - foreign "atg_subtract_scalar_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_sum = foreign "atg_sum" (ptr t @-> t @-> int @-> returning void) - - let stubs_sum_dim_intlist = - foreign - "atg_sum_dim_intlist" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_sum_intlist_out = - foreign - "atg_sum_intlist_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_sum_out = foreign "atg_sum_out" (ptr t @-> t @-> t @-> int @-> returning void) - - let stubs_sum_to_size = - foreign "atg_sum_to_size" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_svd 
= foreign "atg_svd" (ptr t @-> t @-> int @-> int @-> returning void) - - let stubs_svd_u = - foreign "atg_svd_u" (ptr t @-> t @-> t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_swapaxes = - foreign "atg_swapaxes" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_swapaxes_ = - foreign "atg_swapaxes_" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_swapdims = - foreign "atg_swapdims" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_swapdims_ = - foreign "atg_swapdims_" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_tr = foreign "atg_t" (ptr t @-> t @-> returning void) - let stubs_t_ = foreign "atg_t_" (ptr t @-> t @-> returning void) - let stubs_t_copy = foreign "atg_t_copy" (ptr t @-> t @-> returning void) - let stubs_t_copy_out = foreign "atg_t_copy_out" (ptr t @-> t @-> t @-> returning void) - let stubs_take = foreign "atg_take" (ptr t @-> t @-> t @-> returning void) - - let stubs_take_along_dim = - foreign "atg_take_along_dim" (ptr t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_take_along_dim_out = - foreign - "atg_take_along_dim_out" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int @-> returning void) - ;; - - let stubs_take_out = foreign "atg_take_out" (ptr t @-> t @-> t @-> t @-> returning void) - let stubs_tan = foreign "atg_tan" (ptr t @-> t @-> returning void) - let stubs_tan_ = foreign "atg_tan_" (ptr t @-> t @-> returning void) - let stubs_tan_out = foreign "atg_tan_out" (ptr t @-> t @-> t @-> returning void) - let stubs_tanh = foreign "atg_tanh" (ptr t @-> t @-> returning void) - let stubs_tanh_ = foreign "atg_tanh_" (ptr t @-> t @-> returning void) - - let stubs_tanh_backward = - foreign "atg_tanh_backward" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_tanh_backward_grad_input = - foreign "atg_tanh_backward_grad_input" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_tanh_out = foreign "atg_tanh_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_tensor_split = - foreign "atg_tensor_split" (t @-> int64_t @-> int64_t @-> returning (ptr t)) - ;; - - let stubs_tensor_split_indices = - foreign - "atg_tensor_split_indices" - (t @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr t)) - ;; - - let stubs_tensor_split_tensor_indices_or_sections = - foreign - "atg_tensor_split_tensor_indices_or_sections" - (t @-> t @-> int64_t @-> returning (ptr t)) - ;; - - let stubs_tensordot = - foreign - "atg_tensordot" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_tensordot_out = - foreign - "atg_tensordot_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> returning void) - ;; - - let stubs_threshold = - foreign "atg_threshold" (ptr t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_threshold_ = - foreign "atg_threshold_" (ptr t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_threshold_backward = - foreign "atg_threshold_backward" (ptr t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_threshold_backward_grad_input = - foreign - "atg_threshold_backward_grad_input" - (ptr t @-> t @-> t @-> t @-> scalar @-> returning void) - ;; - - let stubs_threshold_out = - foreign - "atg_threshold_out" - (ptr t @-> t @-> t @-> scalar @-> scalar @-> returning void) - ;; - - let stubs_tile = - foreign "atg_tile" (ptr t @-> t @-> ptr int64_t @-> int @-> 
returning void) - ;; - - let stubs_to_ = foreign "atg_to" (ptr t @-> t @-> int @-> returning void) - - let stubs_to_dense = - foreign "atg_to_dense" (ptr t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_to_dense_backward = - foreign "atg_to_dense_backward" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_to_device = - foreign - "atg_to_device" - (ptr t @-> t @-> int @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_to_dtype = - foreign "atg_to_dtype" (ptr t @-> t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_to_dtype_layout = - foreign - "atg_to_dtype_layout" - (ptr t @-> t @-> int @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_to_mkldnn = foreign "atg_to_mkldnn" (ptr t @-> t @-> int @-> returning void) - - let stubs_to_mkldnn_backward = - foreign "atg_to_mkldnn_backward" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_to_mkldnn_out = - foreign "atg_to_mkldnn_out" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_to_other = - foreign "atg_to_other" (ptr t @-> t @-> t @-> int @-> int @-> returning void) - ;; - - let stubs_to_padded_tensor = - foreign - "atg_to_padded_tensor" - (ptr t @-> t @-> double @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_to_padded_tensor_out = - foreign - "atg_to_padded_tensor_out" - (ptr t @-> t @-> t @-> double @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_topk = - foreign - "atg_topk" - (ptr t @-> t @-> int64_t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_topk_values = - foreign - "atg_topk_values" - (ptr t @-> t @-> t @-> t @-> int64_t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_totype = foreign "atg_totype" (ptr t @-> t @-> int @-> returning void) - let stubs_trace = foreign "atg_trace" (ptr t @-> t @-> returning void) -end - -module C24 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - - let stubs_trace_backward = - foreign "atg_trace_backward" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_trace_out = foreign "atg_trace_out" (ptr t @-> t @-> t @-> returning void) - - let stubs_transpose = - foreign "atg_transpose" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_transpose_ = - foreign "atg_transpose_" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_transpose_copy = - foreign "atg_transpose_copy" (ptr t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_transpose_copy_int_out = - foreign - "atg_transpose_copy_int_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_trapezoid = - foreign "atg_trapezoid" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_trapezoid_x = - foreign "atg_trapezoid_x" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_trapz = foreign "atg_trapz" (ptr t @-> t @-> t @-> int64_t @-> returning void) - - let stubs_trapz_dx = - foreign "atg_trapz_dx" (ptr t @-> t @-> double @-> int64_t @-> returning void) - ;; - - let stubs_triangular_solve = - foreign - "atg_triangular_solve" - (ptr t @-> t @-> t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_triangular_solve_x = - foreign - "atg_triangular_solve_x" - (ptr t @-> t @-> t @-> t @-> t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_tril = foreign "atg_tril" (ptr t @-> t @-> int64_t @-> returning void) - let 
stubs_tril_ = foreign "atg_tril_" (ptr t @-> t @-> int64_t @-> returning void) - - let stubs_tril_indices = - foreign - "atg_tril_indices" - (ptr t @-> int64_t @-> int64_t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_tril_indices_out = - foreign - "atg_tril_indices_out" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_tril_out = - foreign "atg_tril_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_triplet_margin_loss = - foreign - "atg_triplet_margin_loss" - (ptr t - @-> t - @-> t - @-> t - @-> double - @-> double - @-> double - @-> int - @-> int64_t - @-> returning void) - ;; - - let stubs_triu = foreign "atg_triu" (ptr t @-> t @-> int64_t @-> returning void) - let stubs_triu_ = foreign "atg_triu_" (ptr t @-> t @-> int64_t @-> returning void) - - let stubs_triu_indices = - foreign - "atg_triu_indices" - (ptr t @-> int64_t @-> int64_t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_triu_indices_out = - foreign - "atg_triu_indices_out" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_triu_out = - foreign "atg_triu_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_true_divide = foreign "atg_true_divide" (ptr t @-> t @-> t @-> returning void) - - let stubs_true_divide_ = - foreign "atg_true_divide_" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_true_divide_out = - foreign "atg_true_divide_out" (ptr t @-> t @-> t @-> t @-> returning void) - ;; - - let stubs_true_divide_scalar = - foreign "atg_true_divide_scalar" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_true_divide_scalar_ = - foreign "atg_true_divide_scalar_" (ptr t @-> t @-> scalar @-> returning void) - ;; - - let stubs_trunc = foreign "atg_trunc" (ptr t @-> t @-> returning void) - let stubs_trunc_ = foreign "atg_trunc_" (ptr t @-> t @-> returning void) - let stubs_trunc_out = foreign "atg_trunc_out" (ptr t @-> t @-> t @-> returning void) - let stubs_type_as = foreign "atg_type_as" (ptr t @-> t @-> t @-> returning void) - let stubs_unbind = foreign "atg_unbind" (t @-> int64_t @-> returning (ptr t)) - let stubs_unbind_copy = foreign "atg_unbind_copy" (t @-> int64_t @-> returning (ptr t)) - - let stubs_unbind_copy_int_out = - foreign "atg_unbind_copy_int_out" (ptr t @-> int @-> t @-> int64_t @-> returning void) - ;; - - let stubs_unflatten = - foreign - "atg_unflatten" - (ptr t @-> t @-> int64_t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_unflatten_dense_tensors = - foreign "atg_unflatten_dense_tensors" (t @-> ptr t @-> int @-> returning (ptr t)) - ;; - - let stubs_unfold = - foreign - "atg_unfold" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_unfold_backward = - foreign - "atg_unfold_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_unfold_backward_out = - foreign - "atg_unfold_backward_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int64_t - @-> int64_t - @-> int64_t - @-> returning void) - ;; - - let stubs_unfold_copy = - foreign - "atg_unfold_copy" - (ptr t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_unfold_copy_out = - foreign - "atg_unfold_copy_out" - (ptr t @-> t @-> t @-> int64_t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_uniform = - foreign "atg_uniform" (ptr t @-> t @-> double @-> double @-> returning void) - 
;; - - let stubs_uniform_ = - foreign "atg_uniform_" (ptr t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_uniform_out = - foreign "atg_uniform_out" (ptr t @-> t @-> t @-> double @-> double @-> returning void) - ;; - - let stubs_unique_consecutive = - foreign - "atg_unique_consecutive" - (ptr t @-> t @-> int @-> int @-> int64_t @-> int @-> returning void) - ;; - - let stubs_unique_consecutive_out = - foreign - "atg_unique_consecutive_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> int - @-> int - @-> int64_t - @-> int - @-> returning void) - ;; - - let stubs_unique_dim = - foreign - "atg_unique_dim" - (ptr t @-> t @-> int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_unique_dim_consecutive = - foreign - "atg_unique_dim_consecutive" - (ptr t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_unique_dim_consecutive_out = - foreign - "atg_unique_dim_consecutive_out" - (ptr t @-> t @-> t @-> t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_unique_dim_out = - foreign - "atg_unique_dim_out" - (ptr t - @-> t - @-> t - @-> t - @-> t - @-> int64_t - @-> int - @-> int - @-> int - @-> returning void) - ;; - - let stubs_unsafe_chunk = - foreign "atg_unsafe_chunk" (t @-> int64_t @-> int64_t @-> returning (ptr t)) - ;; - - let stubs_unsafe_split = - foreign "atg_unsafe_split" (t @-> int64_t @-> int64_t @-> returning (ptr t)) - ;; - - let stubs_unsafe_split_tensor_out = - foreign - "atg_unsafe_split_tensor_out" - (ptr t @-> int @-> t @-> int64_t @-> int64_t @-> returning void) - ;; - - let stubs_unsafe_split_with_sizes = - foreign - "atg_unsafe_split_with_sizes" - (t @-> ptr int64_t @-> int @-> int64_t @-> returning (ptr t)) - ;; - - let stubs_unsafe_split_with_sizes_out = - foreign - "atg_unsafe_split_with_sizes_out" - (ptr t @-> int @-> t @-> ptr int64_t @-> int @-> int64_t @-> returning void) - ;; - - let stubs_unsqueeze = - foreign "atg_unsqueeze" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_unsqueeze_ = - foreign "atg_unsqueeze_" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_unsqueeze_copy = - foreign "atg_unsqueeze_copy" (ptr t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_unsqueeze_copy_out = - foreign "atg_unsqueeze_copy_out" (ptr t @-> t @-> t @-> int64_t @-> returning void) - ;; - - let stubs_upsample_bicubic2d = - foreign - "atg_upsample_bicubic2d" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_bicubic2d_backward = - foreign - "atg_upsample_bicubic2d_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_bicubic2d_backward_grad_input = - foreign - "atg_upsample_bicubic2d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_bicubic2d_out = - foreign - "atg_upsample_bicubic2d_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_bicubic2d_vec = - foreign - "atg_upsample_bicubic2d_vec" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> ptr double - @-> int - @-> returning void) - ;; - - let stubs_upsample_bilinear2d = - foreign - "atg_upsample_bilinear2d" 
- (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_bilinear2d_backward = - foreign - "atg_upsample_bilinear2d_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_bilinear2d_backward_grad_input = - foreign - "atg_upsample_bilinear2d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_bilinear2d_out = - foreign - "atg_upsample_bilinear2d_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_bilinear2d_vec = - foreign - "atg_upsample_bilinear2d_vec" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> ptr double - @-> int - @-> returning void) - ;; - - let stubs_upsample_linear1d = - foreign - "atg_upsample_linear1d" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> double @-> int @-> returning void) - ;; - - let stubs_upsample_linear1d_backward = - foreign - "atg_upsample_linear1d_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_linear1d_backward_grad_input = - foreign - "atg_upsample_linear1d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_linear1d_out = - foreign - "atg_upsample_linear1d_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_linear1d_vec = - foreign - "atg_upsample_linear1d_vec" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> ptr double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest1d = - foreign - "atg_upsample_nearest1d" - (ptr t @-> t @-> ptr int64_t @-> int @-> double @-> int @-> returning void) - ;; - - let stubs_upsample_nearest1d_backward = - foreign - "atg_upsample_nearest1d_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest1d_backward_grad_input = - foreign - "atg_upsample_nearest1d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest1d_out = - foreign - "atg_upsample_nearest1d_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> double @-> int @-> returning void) - ;; - - let stubs_upsample_nearest1d_vec = - foreign - "atg_upsample_nearest1d_vec" - (ptr t @-> t @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning void) - ;; - - let stubs_upsample_nearest2d = - foreign - "atg_upsample_nearest2d" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest2d_backward = - foreign - "atg_upsample_nearest2d_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest2d_backward_grad_input = - foreign - 
"atg_upsample_nearest2d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest2d_out = - foreign - "atg_upsample_nearest2d_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest2d_vec = - foreign - "atg_upsample_nearest2d_vec" - (ptr t @-> t @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning void) - ;; - - let stubs_upsample_nearest3d = - foreign - "atg_upsample_nearest3d" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest3d_backward = - foreign - "atg_upsample_nearest3d_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest3d_backward_grad_input = - foreign - "atg_upsample_nearest3d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest3d_out = - foreign - "atg_upsample_nearest3d_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> double - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_nearest3d_vec = - foreign - "atg_upsample_nearest3d_vec" - (ptr t @-> t @-> ptr int64_t @-> int @-> ptr double @-> int @-> returning void) - ;; - - let stubs_upsample_trilinear3d = - foreign - "atg_upsample_trilinear3d" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_trilinear3d_backward = - foreign - "atg_upsample_trilinear3d_backward" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_trilinear3d_backward_grad_input = - foreign - "atg_upsample_trilinear3d_backward_grad_input" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_trilinear3d_out = - foreign - "atg_upsample_trilinear3d_out" - (ptr t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> double - @-> int - @-> double - @-> int - @-> double - @-> int - @-> returning void) - ;; - - let stubs_upsample_trilinear3d_vec = - foreign - "atg_upsample_trilinear3d_vec" - (ptr t - @-> t - @-> ptr int64_t - @-> int - @-> int - @-> ptr double - @-> int - @-> returning void) - ;; - - let stubs_value_selecting_reduction_backward = - foreign - "atg_value_selecting_reduction_backward" - (ptr t @-> t @-> int64_t @-> t @-> ptr int64_t @-> int @-> int @-> returning void) - ;; - - let stubs_values = foreign "atg_values" (ptr t @-> t @-> returning void) - let stubs_values_copy = foreign "atg_values_copy" (ptr t @-> t @-> returning void) - - let stubs_values_copy_out = - foreign "atg_values_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_vander = - foreign "atg_vander" (ptr t @-> t @-> int64_t @-> int @-> int @-> returning void) - ;; -end - 
-module C25 (F : Cstubs.FOREIGN) = struct - open F - - type t = unit ptr - - let t : t typ = ptr void - - type scalar = unit ptr - - let scalar : scalar typ = ptr void - let stubs_var = foreign "atg_var" (ptr t @-> t @-> int @-> returning void) - - let stubs_var_correction = - foreign - "atg_var_correction" - (ptr t @-> t @-> ptr int64_t @-> int @-> scalar @-> int @-> returning void) - ;; - - let stubs_var_correction_out = - foreign - "atg_var_correction_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> scalar @-> int @-> returning void) - ;; - - let stubs_var_dim = - foreign - "atg_var_dim" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_var_mean = foreign "atg_var_mean" (ptr t @-> t @-> int @-> returning void) - - let stubs_var_mean_correction = - foreign - "atg_var_mean_correction" - (ptr t @-> t @-> ptr int64_t @-> int @-> scalar @-> int @-> returning void) - ;; - - let stubs_var_mean_correction_out = - foreign - "atg_var_mean_correction_out" - (ptr t - @-> t - @-> t - @-> t - @-> ptr int64_t - @-> int - @-> scalar - @-> int - @-> returning void) - ;; - - let stubs_var_mean_dim = - foreign - "atg_var_mean_dim" - (ptr t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_var_out = - foreign - "atg_var_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> int @-> int @-> returning void) - ;; - - let stubs_vdot = foreign "atg_vdot" (ptr t @-> t @-> t @-> returning void) - let stubs_vdot_out = foreign "atg_vdot_out" (ptr t @-> t @-> t @-> t @-> returning void) - - let stubs_view = - foreign "atg_view" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_view_as = foreign "atg_view_as" (ptr t @-> t @-> t @-> returning void) - - let stubs_view_as_complex = - foreign "atg_view_as_complex" (ptr t @-> t @-> returning void) - ;; - - let stubs_view_as_complex_copy = - foreign "atg_view_as_complex_copy" (ptr t @-> t @-> returning void) - ;; - - let stubs_view_as_complex_copy_out = - foreign "atg_view_as_complex_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_view_as_real = foreign "atg_view_as_real" (ptr t @-> t @-> returning void) - - let stubs_view_as_real_copy = - foreign "atg_view_as_real_copy" (ptr t @-> t @-> returning void) - ;; - - let stubs_view_as_real_copy_out = - foreign "atg_view_as_real_copy_out" (ptr t @-> t @-> t @-> returning void) - ;; - - let stubs_view_copy = - foreign "atg_view_copy" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_view_copy_dtype = - foreign "atg_view_copy_dtype" (ptr t @-> t @-> int @-> returning void) - ;; - - let stubs_view_copy_dtype_out = - foreign "atg_view_copy_dtype_out" (ptr t @-> t @-> t @-> int @-> returning void) - ;; - - let stubs_view_copy_out = - foreign - "atg_view_copy_out" - (ptr t @-> t @-> t @-> ptr int64_t @-> int @-> returning void) - ;; - - let stubs_view_dtype = foreign "atg_view_dtype" (ptr t @-> t @-> int @-> returning void) - let stubs_vsplit = foreign "atg_vsplit" (t @-> int64_t @-> returning (ptr t)) - - let stubs_vsplit_array = - foreign "atg_vsplit_array" (t @-> ptr int64_t @-> int @-> returning (ptr t)) - ;; - - let stubs_vstack = foreign "atg_vstack" (ptr t @-> ptr t @-> int @-> returning void) - - let stubs_vstack_out = - foreign "atg_vstack_out" (ptr t @-> t @-> ptr t @-> int @-> returning void) - ;; - - let stubs_where = foreign "atg_where" (t @-> returning (ptr t)) - - let stubs_where_scalar = - foreign "atg_where_scalar" (ptr t @-> t @-> scalar @-> scalar @-> returning 
void)
-  ;;
-
-  let stubs_where_scalarother =
-    foreign "atg_where_scalarother" (ptr t @-> t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_where_scalarself =
-    foreign "atg_where_scalarself" (ptr t @-> t @-> scalar @-> t @-> returning void)
-  ;;
-
-  let stubs_where_self =
-    foreign "atg_where_self" (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_where_self_out =
-    foreign "atg_where_self_out" (ptr t @-> t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_xlogy = foreign "atg_xlogy" (ptr t @-> t @-> t @-> returning void)
-  let stubs_xlogy_ = foreign "atg_xlogy_" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs_xlogy_outscalar_other =
-    foreign "atg_xlogy_outscalar_other" (ptr t @-> t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_xlogy_outscalar_self =
-    foreign "atg_xlogy_outscalar_self" (ptr t @-> t @-> scalar @-> t @-> returning void)
-  ;;
-
-  let stubs_xlogy_outtensor =
-    foreign "atg_xlogy_outtensor" (ptr t @-> t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_xlogy_scalar_other =
-    foreign "atg_xlogy_scalar_other" (ptr t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_xlogy_scalar_other_ =
-    foreign "atg_xlogy_scalar_other_" (ptr t @-> t @-> scalar @-> returning void)
-  ;;
-
-  let stubs_xlogy_scalar_self =
-    foreign "atg_xlogy_scalar_self" (ptr t @-> scalar @-> t @-> returning void)
-  ;;
-
-  let stubs_zero = foreign "atg_zero" (ptr t @-> t @-> returning void)
-  let stubs_zero_ = foreign "atg_zero_" (ptr t @-> t @-> returning void)
-  let stubs_zero_out = foreign "atg_zero_out" (ptr t @-> t @-> t @-> returning void)
-
-  let stubs_zeros =
-    foreign "atg_zeros" (ptr t @-> ptr int64_t @-> int @-> int @-> int @-> returning void)
-  ;;
-
-  let stubs_zeros_like = foreign "atg_zeros_like" (ptr t @-> t @-> returning void)
-
-  let stubs_zeros_like_out =
-    foreign "atg_zeros_like_out" (ptr t @-> t @-> t @-> returning void)
-  ;;
-
-  let stubs_zeros_out =
-    foreign "atg_zeros_out" (ptr t @-> t @-> ptr int64_t @-> int @-> returning void)
-  ;;
-end
-
-module C (F : Cstubs.FOREIGN) = struct
-  include C0 (F)
-  include C1 (F)
-  include C2 (F)
-  include C3 (F)
-  include C4 (F)
-  include C5 (F)
-  include C6 (F)
-  include C7 (F)
-  include C8 (F)
-  include C9 (F)
-  include C10 (F)
-  include C11 (F)
-  include C12 (F)
-  include C13 (F)
-  include C14 (F)
-  include C15 (F)
-  include C16 (F)
-  include C17 (F)
-  include C18 (F)
-  include C19 (F)
-  include C20 (F)
-  include C21 (F)
-  include C22 (F)
-  include C23 (F)
-  include C24 (F)
-  include C25 (F)
-end
diff --git a/src/tests/jit_tests.ml b/src/tests/jit_tests.ml
index d410c96..64ce78b 100644
--- a/src/tests/jit_tests.ml
+++ b/src/tests/jit_tests.ml
@@ -37,6 +37,50 @@
   |}]
 ;;
 
+(*
+   test module generated with this Python code:
+
+   import torch
+   @torch.jit.script
+   def would_raise(x):
+       raise RuntimeError("Raising expcetion on purpose")
+       return x
+   torch.jit.save(would_raise, '/tmp/raise.pt')
+*)
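+(* TorchScript runtime errors cross the C++/OCaml boundary as [Failure]
+   exceptions whose payload carries the interpreter traceback; the test below
+   catches one and prints it. The misspelled "expcetion" above is deliberate:
+   the expected output has to match the message serialized into raise.pt. *)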
] in + ignore output + with + | Failure failure -> + Stdio.print_s [%message "Exception raised and caught" (failure : string)]; + (); + [%expect + {| + ("Exception raised and caught" + (failure + "The following operation failed in the TorchScript interpreter.\ + \nTraceback of TorchScript, serialized code (most recent call last):\ + \n File \"code/__torch__.py\", line 8, in forward\ + \n x: Tensor) -> NoneType:\ + \n _0 = uninitialized(NoneType)\ + \n ops.prim.RaiseException(\"Raising expcetion on purpose\", \"builtins.RuntimeError\")\ + \n ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\ + \n return _0\ + \n\ + \nTraceback of TorchScript, original code (most recent call last):\ + \n File \"/tmp/ipykernel_741402/1182469162.py\", line 5, in forward\ + \n@torch.jit.script\ + \ndef would_raise(x):\ + \n raise RuntimeError(\"Raising expcetion on purpose\")\ + \n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE\ + \n return x\ + \nbuiltins.RuntimeError: Raising expcetion on purpose\ + \n")) + |}] +;; + let%expect_test _ = (* test that we can list all the buffers in a module, modify them, and get different results *) (* This model just adds all the buffers and parameters together. *) diff --git a/src/tests/raise.pt b/src/tests/raise.pt new file mode 100644 index 0000000..9bba39d Binary files /dev/null and b/src/tests/raise.pt differ diff --git a/src/wrapper/dune b/src/wrapper/dune index 9782060..7e9eea2 100644 --- a/src/wrapper/dune +++ b/src/wrapper/dune @@ -8,14 +8,14 @@ (:include cxx_flags.sexp))) (foreign_stubs (language c) - (names torch_stubs)) + (names torch_stubs torch_stubs_generated)) (name torch_core) (public_name torch.core) (c_library_flags :standard -lstdc++ (:include c_library_flags.sexp)) - (libraries ctypes.foreign ctypes.stubs ctypes) + (libraries ctypes.foreign torch_bindings) (preprocess (pps ppx_jane))) @@ -26,19 +26,7 @@ (bash %{deps}))) (rule - (targets torch_bindings.ml) - (deps ../stubs/torch_bindings.ml) - (action - (bash "cp ../stubs/torch_bindings.ml torch_bindings.ml"))) - -(rule - (targets torch_bindings_generated.ml) - (deps ../stubs/torch_bindings_generated.ml) - (action - (bash "cp ../stubs/torch_bindings_generated.ml torch_bindings_generated.ml"))) - -(rule - (targets torch_stubs.c torch_generated.ml) - (deps ../stubs/torch_gen.exe) + (targets torch_stubs_generated.c torch_stubs_generated.ml) + (deps ../gen_stubs/gen_stubs.exe) (action (bash ./%{deps}))) diff --git a/src/wrapper/torch_api.cpp b/src/wrapper/torch_api.cpp index 1bffba4..cd8a585 100644 --- a/src/wrapper/torch_api.cpp +++ b/src/wrapper/torch_api.cpp @@ -5,27 +5,67 @@ #include #include #include -#include +#include #undef invalid_argument #include "torch_api.h" +#include "ctypes_cstubs_internals.h" using namespace std; +torch::Tensor *tensor_ptr_from_ocaml(gc_tensor t) { return (torch::Tensor *)t; } + +torch::Tensor tensor_from_ocaml(gc_tensor t) { + return *tensor_ptr_from_ocaml(t); +} + +CAMLprim void finalize_tensor(value block) { + gc_tensor t = *(void **)Data_custom_val(block); + at_free(t); +} + +static struct custom_operations ops = {"torch-tensor", + finalize_tensor, + custom_compare_default, + custom_hash_default, + custom_serialize_default, + custom_deserialize_default, + custom_compare_ext_default, + custom_fixed_length_default}; + +CAMLprim value with_tensor_gc_internal(raw_tensor t) { + // See https://v2.ocaml.org/manual/intfc.html#s%3Ac-custom + unsigned long int off_heap_cpu_memory_bytes = 0; + torch::Tensor *tensor = (torch::Tensor *)t; + if (tensor->defined() && tensor->device() == 
at::kCPU) {
+    off_heap_cpu_memory_bytes = tensor->numel() * tensor->element_size();
+  }
+  value new_block = caml_alloc_custom_mem(&ops, sizeof(torch::Tensor *),
+                                          off_heap_cpu_memory_bytes);
+  *(void **)Data_custom_val(new_block) = t;
+  return new_block;
+}
+
+raw_tensor tensor_to_ocaml(torch::Tensor cpp_tensor) {
+  torch::Tensor *res = new torch::Tensor(cpp_tensor);
+  return (raw_tensor)res;
+}
+
 void at_manual_seed(int64_t seed) { torch::manual_seed(seed); }
 
-vector<torch::Tensor> of_carray_tensor(torch::Tensor **vs, int len) {
+vector<torch::Tensor> of_carray_tensor(gc_tensor *vs, int len) {
   vector<torch::Tensor> result;
   for (int i = 0; i < len; ++i)
-    result.push_back(*(vs[i]));
+    result.push_back(tensor_from_ocaml(vs[i]));
   return result;
 }
 
-c10::List<c10::optional<torch::Tensor>> of_carray_tensor_opt(torch::Tensor **vs,
+c10::List<c10::optional<torch::Tensor>> of_carray_tensor_opt(gc_tensor *vs,
                                                              int len) {
   vector<c10::optional<torch::Tensor>> result;
   for (int i = 0; i < len; ++i) {
-    result.push_back(vs[i] != nullptr ? c10::optional<torch::Tensor>(*(vs[i]))
-                                      : c10::nullopt);
+    result.push_back(
+        vs[i] ? c10::optional<torch::Tensor>(tensor_from_ocaml(vs[i]))
+              : c10::nullopt);
   }
   return c10::List<c10::optional<torch::Tensor>>(result);
 }
 
@@ -43,13 +83,13 @@ c10::optional<at::Device> optional_device_of_int(int d) {
 at::Device device_of_int(int d) { return optional_device_of_int(d).value(); }
 
-tensor at_new_tensor() {
-  PROTECT(return new torch::Tensor();)
+raw_tensor at_new_tensor() {
+  PROTECT(return tensor_to_ocaml(torch::Tensor());)
   return nullptr;
 }
 
-tensor at_tensor_of_data(void *vs, int64_t *dims, int ndims,
-                         int element_size_in_bytes, int type) {
+raw_tensor at_tensor_of_data(void *vs, int64_t *dims, int ndims,
+                             int element_size_in_bytes, int type) {
   PROTECT(
       torch::Tensor tensor = torch::zeros(torch::IntArrayRef(dims, ndims),
                                           torch::ScalarType(type));
@@ -57,13 +97,13 @@ tensor at_tensor_of_data(void *vs, int64_t *dims, int ndims,
          invalid_argument("incoherent element sizes in bytes");
       void *tensor_data = tensor.data_ptr();
       memcpy(tensor_data, vs, tensor.numel() * element_size_in_bytes);
-      return new torch::Tensor(tensor);)
+      return tensor_to_ocaml(tensor);)
   return nullptr;
 }
 
-void at_copy_data(tensor tensor, void *vs, int64_t numel,
-                  int elt_size_in_bytes) {
+void at_copy_data(gc_tensor t, void *vs, int64_t numel, int elt_size_in_bytes) {
   PROTECT(
+      torch::Tensor *tensor = tensor_ptr_from_ocaml(t);
       if ((int64_t)elt_size_in_bytes != tensor->element_size()) throw std::
          invalid_argument("incoherent element sizes in bytes");
       if ((int64_t)numel > tensor->numel()) throw std::invalid_argument(
@@ -79,45 +119,48 @@ void at_copy_data(tensor tensor, void *vs, int64_t numel,
       })
 }
 
-tensor at_float_vec(double *vs, int len, int type) {
+raw_tensor at_float_vec(double *vs, int len, int type) {
   PROTECT(torch::Tensor tensor = torch::empty({len}, torch::ScalarType(type));
           for (int i = 0; i < len; ++i) tensor[i] = vs[i];
-          return new torch::Tensor(tensor);)
+          return tensor_to_ocaml(tensor);)
   return nullptr;
 }
 
-tensor at_int_vec(int64_t *vs, int len, int type) {
+raw_tensor at_int_vec(int64_t *vs, int len, int type) {
   PROTECT(torch::Tensor tensor = torch::empty({len}, torch::ScalarType(type));
           for (int i = 0; i < len; ++i) tensor[i] = vs[i];
-          return new torch::Tensor(tensor);)
+          return tensor_to_ocaml(tensor);)
   return nullptr;
 }
 
-int at_defined(tensor t) {
-  PROTECT(return t->defined();)
+int at_defined(gc_tensor t) {
+  PROTECT(return tensor_ptr_from_ocaml(t)->defined();)
   return -1;
 }
 
-int at_is_sparse(tensor t) {
-  PROTECT(return t->is_sparse();)
+int at_is_sparse(gc_tensor t) {
+  PROTECT(return tensor_ptr_from_ocaml(t)->is_sparse();)
   return -1;
 }
 
-int at_dim(tensor t) {
-  PROTECT(return t->dim();)
+int at_dim(gc_tensor t) {
+  PROTECT(return tensor_ptr_from_ocaml(t)->dim();)
   return -1;
 }
 
-void at_shape(tensor t, int *dims) {
-  PROTECT(int i = 0; for (int dim : t->sizes()) dims[i++] = dim;)
+void at_shape(gc_tensor t, int *dims) {
+  PROTECT(int i = 0; for (int dim
+                          : tensor_ptr_from_ocaml(t)->sizes()) dims[i++] = dim;)
 }
 
-void at_stride(tensor t, int64_t *dims) {
-  PROTECT(int i = 0; for (int64_t dim : t->strides()) dims[i++] = dim;)
+void at_stride(gc_tensor t, int64_t *dims) {
+  PROTECT(int i = 0;
+          for (int64_t dim
+               : tensor_ptr_from_ocaml(t)->strides()) dims[i++] = dim;)
 }
 
-int at_scalar_type(tensor t) {
-  PROTECT(return static_cast<int>(t->scalar_type());)
+int at_scalar_type(gc_tensor t) {
+  PROTECT(return static_cast<int>(tensor_ptr_from_ocaml(t)->scalar_type());)
 }
 
 void at_autocast_clear_cache() { at::autocast::clear_cache(); }
@@ -143,17 +186,23 @@ int at_autocast_set_enabled(int b) {
   return -1;
 }
 
-int at_device(tensor tensor) {
-  PROTECT(auto device = tensor->device(); if (device.is_cpu()) return -1;
-          return device.index();)
+int at_device(gc_tensor t) {
+  PROTECT(auto device = tensor_ptr_from_ocaml(t)->device();
+          if (device.is_cpu()) return -1; return device.index();)
 }
 
-void at_backward(tensor t, int keep_graph, int create_graph) {
-  PROTECT(t->backward({}, keep_graph, create_graph);)
+void at_backward(gc_tensor t, int keep_graph, int create_graph) {
+  PROTECT(
+      caml_release_runtime_system(); try {
+        tensor_ptr_from_ocaml(t)->backward({}, keep_graph, create_graph);
+      } catch (const exception &) {
+        caml_acquire_runtime_system();
+        throw;
+      } caml_acquire_runtime_system();)
 }
 
-int at_requires_grad(tensor t) {
-  PROTECT(return t->requires_grad();)
+int at_requires_grad(gc_tensor t) {
+  PROTECT(return tensor_ptr_from_ocaml(t)->requires_grad();)
   return -1;
 }
 
@@ -163,112 +212,125 @@ int at_grad_set_enabled(int b) {
   return -1;
 }
 
-tensor at_get(tensor t, int index) {
-  PROTECT(return new torch::Tensor((*t)[index]);)
+raw_tensor at_get(gc_tensor t, int index) {
+  PROTECT(return tensor_to_ocaml(tensor_from_ocaml(t)[index]);)
   return nullptr;
 }
 
 template <typename T>
-T at_value_at_indexes(tensor t, int *indexes, int indexes_len) {
-  PROTECT(torch::Tensor tensor = *t; for (int i = 0; i < indexes_len; ++i) {
-    tensor = tensor[indexes[i]];
-  } return tensor.item<T>();)
+T at_value_at_indexes(gc_tensor t, int *indexes, int indexes_len) {
+  PROTECT(torch::Tensor tensor = tensor_from_ocaml(t);
+          for (int i = 0; i < indexes_len;
+               ++i) { tensor = tensor[indexes[i]]; } return tensor.item<T>();)
   return T();
 }
 
-double at_double_value_at_indexes(tensor t, int *indexes, int indexes_len) {
+double at_double_value_at_indexes(gc_tensor t, int *indexes, int indexes_len) {
   return at_value_at_indexes<double>(t, indexes, indexes_len);
 }
 
-int64_t at_int64_value_at_indexes(tensor t, int *indexes, int indexes_len) {
+int64_t at_int64_value_at_indexes(gc_tensor t, int *indexes, int indexes_len) {
   return at_value_at_indexes<int64_t>(t, indexes, indexes_len);
 }
 
 template <typename T>
-void at_set_value_at_indexes(tensor t, int *indexes, int indexes_len, T v) {
-  PROTECT(torch::Tensor tensor = *t; for (int i = 0; i < indexes_len; ++i) {
-    tensor = tensor[indexes[i]];
-  } tensor.fill_(v);)
+void at_set_value_at_indexes(gc_tensor t, int *indexes, int indexes_len, T v) {
  PROTECT(torch::Tensor tensor = tensor_from_ocaml(t);
+          for (int i = 0; i < indexes_len;
+               ++i) { tensor = tensor[indexes[i]]; } tensor.fill_(v);)
 }
 
-void at_set_double_value_at_indexes(tensor t, int *indexes, int indexes_len,
+void at_set_double_value_at_indexes(gc_tensor t, int *indexes, int indexes_len,
                                     double v) {
   at_set_value_at_indexes(t, indexes, indexes_len, v);
 }
 
-void at_set_int64_value_at_indexes(tensor t, int *indexes, int indexes_len,
+void at_set_int64_value_at_indexes(gc_tensor t, int *indexes, int indexes_len,
                                    int64_t v) {
   at_set_value_at_indexes(t, indexes, indexes_len, v);
 }
 
-void at_fill_double(tensor t, double v) { PROTECT(t->fill_(v);) }
+void at_fill_double(gc_tensor t, double v) {
+  PROTECT(tensor_ptr_from_ocaml(t)->fill_(v);)
+}
 
-void at_fill_int64(tensor t, int64_t v) { PROTECT(t->fill_(v);) }
+void at_fill_int64(gc_tensor t, int64_t v) {
+  PROTECT(tensor_ptr_from_ocaml(t)->fill_(v);)
+}
 
-void at_print(tensor t) {
+void at_print(gc_tensor t) {
   PROTECT(torch::Tensor *tensor = (torch::Tensor *)t; cout << *tensor << endl;)
 }
 
-char *at_to_string(tensor t, int line_size) {
-  PROTECT(std::ostringstream oss; torch::print(oss, *t, line_size);
+char *at_to_string(gc_tensor t, int line_size) {
+  PROTECT(std::ostringstream oss;
+          torch::print(oss, tensor_from_ocaml(t), line_size);
           return strdup(oss.str().c_str());)
   return nullptr;
 }
 
-void at_copy_(tensor dst, tensor src) { PROTECT(dst->copy_(*src);) }
-void at_set_data(tensor dst, tensor src) { PROTECT(dst->set_data(*src);) }
+void at_copy_(gc_tensor dst, gc_tensor src) {
+  PROTECT(tensor_ptr_from_ocaml(dst)->copy_(tensor_from_ocaml(src));)
+}
+void at_set_data(gc_tensor dst, gc_tensor src) {
+  PROTECT(tensor_ptr_from_ocaml(dst)->set_data(tensor_from_ocaml(src));)
+}
 
-void at_save(tensor t, char *filename) { PROTECT(torch::save(*t, filename);) }
+void at_save(gc_tensor t, char *filename) {
+  PROTECT(torch::save(tensor_from_ocaml(t), filename);)
+}
 
-void at_save_multi(tensor *tensors, char **tensor_names, int ntensors,
+void at_save_multi(gc_tensor *tensors, char **tensor_names, int ntensors,
                    char *filename) {
   PROTECT(torch::serialize::OutputArchive archive;
-          for (int i = 0; i < ntensors; ++i) archive.write(
-              std::string(tensor_names[i]), *(tensors[i]), /* buffer=*/false);
+          for (int i = 0; i < ntensors; ++i)
+            archive.write(std::string(tensor_names[i]),
+                          tensor_from_ocaml(tensors[i]), /* buffer=*/false);
          archive.save_to(filename);)
 }
 
-void at_load_multi(tensor *tensors, char **tensor_names, int ntensors,
+void at_load_multi(raw_tensor *outputs, char **tensor_names, int ntensors,
                    char *filename) {
   PROTECT(torch::serialize::InputArchive archive;
           archive.load_from(std::string(filename));
           vector<torch::Tensor> ts(ntensors);
           for (int i = 0; i < ntensors; ++i)
             archive.read(std::string(tensor_names[i]), ts[i]);
-          // Only allocate the new tensor now so that if there is an exception
+          // Only allocate the new tensors now so that if there is an exception
           // raised during [read], no memory has to be freed.
-          for (int i = 0; i < ntensors; ++i) tensors[i] =
-              new torch::Tensor(ts[i]);)
+          for (int i = 0; i < ntensors; ++i) outputs[i] =
+              tensor_to_ocaml(ts[i]);)
 }
 
-void at_load_callback(char *filename, void (*f)(char *, tensor)) {
+void at_load_callback(char *filename, void (*f)(char *, raw_tensor)) {
   PROTECT(auto module = torch::jit::load(filename);
           for (const auto &p : module.named_parameters()) {
             auto v = p.value;
-            f((char *)p.name.c_str(), new torch::Tensor(v));
+            f((char *)p.name.c_str(), tensor_to_ocaml(v));
           })
 }
 
-void at_load_multi_(tensor *tensors, char **tensor_names, int ntensors,
+void at_load_multi_(gc_tensor *tensors, char **tensor_names, int ntensors,
                     char *filename) {
   PROTECT(torch::NoGradGuard no_grad;
           torch::serialize::InputArchive archive;
           archive.load_from(std::string(filename));
           for (int i = 0; i < ntensors; ++i) {
-            if (tensors[i]->device().type() == at::kCPU)
-              archive.read(std::string(tensor_names[i]), *(tensors[i]));
+            torch::Tensor *tensor_ptr = tensor_ptr_from_ocaml(tensors[i]);
+            if (tensor_ptr->device().type() == at::kCPU)
+              archive.read(std::string(tensor_names[i]), *tensor_ptr);
             else {
               torch::Tensor tmp_tensor =
-                  torch::empty_like(*(tensors[i]), at::device(at::kCPU));
+                  torch::empty_like(*tensor_ptr, at::device(at::kCPU));
               archive.read(std::string(tensor_names[i]), tmp_tensor);
-              tensors[i]->copy_(tmp_tensor);
+              tensor_ptr->copy_(tmp_tensor);
             }
           })
 }
 
-tensor at_load(char *filename) {
+raw_tensor at_load(char *filename) {
   PROTECT(torch::Tensor tensor; torch::load(tensor, filename);
-          return new torch::Tensor(tensor);)
+          return tensor_to_ocaml(tensor);)
   return nullptr;
 }
 
@@ -290,32 +352,38 @@ void at_set_num_threads(int n_threads) {
   PROTECT(at::set_num_threads(n_threads);)
 }
 
-void at_free(tensor t) { delete (t); }
+void at_free(gc_tensor t) { delete (tensor_ptr_from_ocaml(t)); }
 
-void at_run_backward(tensor *tensors, int ntensors, tensor *inputs, int ninputs,
-                     tensor *outputs, int keep_graph, int create_graph) {
+void at_run_backward(gc_tensor *tensors, int ntensors, gc_tensor *inputs,
+                     int ninputs, raw_tensor *outputs, int keep_graph,
+                     int create_graph) {
   PROTECT(
      vector<torch::autograd::Edge> roots;
-      for (int i = 0; i < ntensors; ++i)
-        roots.push_back(torch::autograd::impl::gradient_edge(*tensors[i]));
+      for (int i = 0; i < ntensors; ++i) roots.push_back(
+          torch::autograd::impl::gradient_edge(tensor_from_ocaml(tensors[i])));
      vector<torch::autograd::Edge> inputs_;
      for (int i = 0; i < ninputs; ++i) {
-        if (!inputs[i]->requires_grad())
+        torch::Tensor *input_ = tensor_ptr_from_ocaml(inputs[i]);
+        if (!input_->requires_grad())
          throw std::invalid_argument(
              "one of the input tensor does not use set_requires_grad");
-        inputs_.push_back(torch::autograd::impl::gradient_edge(*inputs[i]));
+        inputs_.push_back(torch::autograd::impl::gradient_edge(*input_));
      }
      vector<torch::Tensor> grads;
      for (int i = 0; i < ntensors; ++i)
-        grads.push_back(torch::ones_like(*tensors[i]));
+        grads.push_back(torch::ones_like(tensor_from_ocaml(tensors[i])));
 
-      auto vl = torch::autograd::Engine::get_default_engine().execute(
-          roots, grads, keep_graph, create_graph, false, inputs_);
-      for (int i = 0; i < ninputs; ++i) {
-        outputs[i] = static_cast<tensor>(new torch::autograd::Variable(vl[i]));
-      })
+      caml_release_runtime_system(); torch::autograd::variable_list vl; try {
+        vl = torch::autograd::Engine::get_default_engine().execute(
+            roots, grads, keep_graph, create_graph, false, inputs_);
+      } catch (const exception &) {
+        caml_acquire_runtime_system();
+        throw;
+      } caml_acquire_runtime_system();
+      for (int i = 0; i < ninputs;
+           ++i) { outputs[i] = tensor_to_ocaml(vl[i]); })
 }
 
 optimizer ato_adam(double learning_rate, double beta1, double beta2,
@@ -351,10 +419,10 @@ optimizer ato_sgd(double learning_rate, double momentum, double dampening,
   return nullptr;
 }
 
-void ato_add_parameters(optimizer t, tensor *tensors, int ntensors) {
+void ato_add_parameters(optimizer t, gc_tensor *tensors, int ntensors) {
   PROTECT(for (int i = 0; i < ntensors; ++i) t->param_groups()[0]
               .params()
-              .push_back(*(tensors[i]));)
+              .push_back(tensor_from_ocaml(tensors[i]));)
 }
 
 template <typename T> void set_lr(optimizer t, double learning_rate) {
@@ -564,27 +632,46 @@ module atm_load_str(char *data, size_t sz, int device) {
   return nullptr;
 }
 
-tensor atm_forward(module m, tensor *tensors, int ntensors) {
-  PROTECT(std::vector<torch::jit::IValue> inputs;
-          for (int i = 0; i < ntensors; ++i) inputs.push_back(*(tensors[i]));
-          caml_release_runtime_system();
-          torch::jit::IValue output = m->forward(std::move(inputs));
-          caml_acquire_runtime_system();
-          if (!output.isTensor()) throw std::invalid_argument(
-              "forward did not return a tensor");
-          return new torch::Tensor(output.toTensor());)
+raw_tensor atm_forward(module m, gc_tensor *tensors, int ntensors) {
+  PROTECT(
+      std::vector<torch::jit::IValue> inputs;
+      for (int i = 0; i < ntensors; ++i)
+        inputs.push_back(tensor_from_ocaml(tensors[i]));
+      caml_release_runtime_system(); torch::jit::IValue output;
+
+      // In case of exception, we need to re-acquire the runtime
+      // lock before re-raising, since PROTECT re-enters ocaml.
+      try {
+        output = m->forward(std::move(inputs));
+      } catch (const exception &) {
+        caml_acquire_runtime_system();
+        throw;
+      }
+
+      caml_acquire_runtime_system();
+      if (!output.isTensor()) throw std::invalid_argument(
+          "forward did not return a tensor");
+      return tensor_to_ocaml(output.toTensor());)
   return nullptr;
 }
 
 ivalue atm_forward_(module m, ivalue *ivalues, int nivalues) {
-  PROTECT(std::vector<torch::jit::IValue> inputs;
-          for (int i = 0; i < nivalues; ++i) inputs.push_back(*(ivalues[i]));
-          caml_release_runtime_system();
-          torch::jit::IValue output = m->forward(inputs);
-          caml_acquire_runtime_system(); return new torch::jit::IValue(output);)
+  PROTECT(
+      std::vector<torch::jit::IValue> inputs;
+      for (int i = 0; i < nivalues; ++i) inputs.push_back(*(ivalues[i]));
+      caml_release_runtime_system(); torch::jit::IValue output;
+
+      // In case of exception, we need to re-acquire the runtime
+      // lock before re-raising, since PROTECT re-enters ocaml.
+ try { output = m->forward(inputs); } catch (const exception &) { + caml_acquire_runtime_system(); + throw; + } + + caml_acquire_runtime_system(); + return new torch::jit::IValue(output);) return nullptr; } - // To return this OrderedDict, we pass it a tuple // IValue containing // * list of strings IValue (names) @@ -613,8 +700,8 @@ void atm_to(module m, int device, int dtype, bool non_blocking) { PROTECT(m->to(device_of_int(device), at::ScalarType(dtype), non_blocking);) } -ivalue ati_tensor(tensor t) { - PROTECT(return new torch::jit::IValue(*t);) +ivalue ati_tensor(gc_tensor t) { + PROTECT(return new torch::jit::IValue(tensor_from_ocaml(t));) return nullptr; } @@ -694,9 +781,10 @@ ivalue ati_string_list(char **is, int nvalues) { return nullptr; } -ivalue ati_tensor_list(tensor *is, int nvalues) { +ivalue ati_tensor_list(gc_tensor *is, int nvalues) { PROTECT(c10::List vec; - for (int i = 0; i < nvalues; ++i) vec.push_back(*(is[i])); + for (int i = 0; i < nvalues; ++i) + vec.push_back(tensor_from_ocaml(is[i])); return new torch::jit::IValue(vec);) return nullptr; } @@ -735,8 +823,8 @@ char *ati_to_string(ivalue i) { return nullptr; } -tensor ati_to_tensor(ivalue i) { - PROTECT(return new torch::Tensor(i->toTensor());) +raw_tensor ati_to_tensor(ivalue i) { + PROTECT(return tensor_to_ocaml(i->toTensor());) return nullptr; } @@ -808,10 +896,10 @@ void ati_to_bool_list(ivalue i, char *outputs, int noutputs) { } for (int i = 0; i < noutputs; ++i) outputs[i] = vec[i];) } -void ati_to_tensor_list(ivalue i, tensor *outputs, int noutputs) { +void ati_to_tensor_list(ivalue i, raw_tensor *outputs, int noutputs) { PROTECT(auto vec = i->toTensorList(); if (vec.size() != noutputs) { throw std::invalid_argument("unexpected list size"); - } for (int i = 0; i < noutputs; ++i) outputs[i] = new torch::Tensor(vec[i]);) + } for (int i = 0; i < noutputs; ++i) outputs[i] = tensor_to_ocaml(vec[i]);) } ivalue ati_object_method_(ivalue i, char *method_name, ivalue *ivalues, @@ -842,4 +930,4 @@ void at_set_graph_executor_optimize(bool o) { torch::jit::setGraphExecutorOptimize(o); } -#include "torch_api_generated.cpp.h" +#include "torch_api_generated.cpp" diff --git a/src/wrapper/torch_api.h b/src/wrapper/torch_api.h index 68df5a6..027e721 100644 --- a/src/wrapper/torch_api.h +++ b/src/wrapper/torch_api.h @@ -2,10 +2,10 @@ #define __TORCH_API_H__ #include #include +#include #ifdef __cplusplus extern "C" { -typedef torch::Tensor *tensor; typedef torch::Scalar *scalar; typedef torch::optim::Optimizer *optimizer; typedef torch::jit::script::Module *module; @@ -17,29 +17,33 @@ typedef torch::jit::IValue *ivalue; caml_failwith(strdup(e.what())); \ } #else -typedef void *tensor; typedef void *optimizer; typedef void *scalar; typedef void *module; typedef void *ivalue; #endif +typedef void *raw_tensor; +typedef void *gc_tensor; + +value with_tensor_gc_internal(raw_tensor t); + void at_manual_seed(int64_t); -tensor at_new_tensor(); -tensor at_tensor_of_data(void *vs, int64_t *dims, int ndims, - int element_size_in_bytes, int type); -void at_copy_data(tensor tensor, void *vs, int64_t numel, +raw_tensor at_new_tensor(); +raw_tensor at_tensor_of_data(void *vs, int64_t *dims, int ndims, + int element_size_in_bytes, int type); +void at_copy_data(gc_tensor t, void *vs, int64_t numel, int element_size_in_bytes); -tensor at_float_vec(double *values, int value_len, int type); -tensor at_int_vec(int64_t *values, int value_len, int type); +raw_tensor at_float_vec(double *values, int value_len, int type); +raw_tensor 
at_int_vec(int64_t *values, int value_len, int type); -int at_defined(tensor); -int at_is_sparse(tensor); -int at_device(tensor); -int at_dim(tensor); -void at_shape(tensor, int *); -void at_stride(tensor, int *); -int at_scalar_type(tensor); +int at_defined(gc_tensor); +int at_is_sparse(gc_tensor); +int at_device(gc_tensor); +int at_dim(gc_tensor); +void at_shape(gc_tensor, int *); +void at_stride(gc_tensor, int *); +int at_scalar_type(gc_tensor); void at_autocast_clear_cache(); int at_autocast_decrement_nesting(); @@ -47,47 +51,48 @@ int at_autocast_increment_nesting(); int at_autocast_is_enabled(); int at_autocast_set_enabled(int b); -void at_backward(tensor, int, int); -int at_requires_grad(tensor); +void at_backward(gc_tensor, int, int); +int at_requires_grad(gc_tensor); int at_grad_set_enabled(int); -tensor at_get(tensor, int index); -void at_fill_double(tensor, double); -void at_fill_int64(tensor, int64_t); +raw_tensor at_get(gc_tensor, int index); +void at_fill_double(gc_tensor, double); +void at_fill_int64(gc_tensor, int64_t); -double at_double_value_at_indexes(tensor, int *indexes, int indexes_len); -int64_t at_int64_value_at_indexes(tensor, int *indexes, int indexes_len); -void at_set_double_value_at_indexes(tensor, int *indexes, int indexes_len, +double at_double_value_at_indexes(gc_tensor, int *indexes, int indexes_len); +int64_t at_int64_value_at_indexes(gc_tensor, int *indexes, int indexes_len); +void at_set_double_value_at_indexes(gc_tensor, int *indexes, int indexes_len, double v); -void at_set_int64_value_at_indexes(tensor, int *indexes, int indexes_len, +void at_set_int64_value_at_indexes(gc_tensor, int *indexes, int indexes_len, int64_t v); -void at_copy_(tensor dst, tensor src); -void at_set_data(tensor dst, tensor src); +void at_copy_(gc_tensor dst, gc_tensor src); +void at_set_data(gc_tensor dst, gc_tensor src); -void at_print(tensor); -char *at_to_string(tensor, int line_size); -void at_save(tensor, char *filename); -tensor at_load(char *filename); +void at_print(gc_tensor); +char *at_to_string(gc_tensor, int line_size); +void at_save(gc_tensor, char *filename); +raw_tensor at_load(char *filename); int at_get_num_threads(); void at_set_num_threads(int n_threads); -void at_save_multi(tensor *tensors, char **tensor_names, int ntensors, +void at_save_multi(gc_tensor *tensors, char **tensor_names, int ntensors, char *filename); /* [at_load_multi] takes as input an array of nullptr for [tensors]. */ -void at_load_multi(tensor *tensors, char **tensor_names, int ntensors, +void at_load_multi(raw_tensor *outputs, char **tensor_names, int ntensors, char *filename); /* [at_load_multi_] takes as input an array of allocation [tensors]. 
*/ -void at_load_multi_(tensor *tensors, char **tensor_names, int ntensors, +void at_load_multi_(gc_tensor *tensors, char **tensor_names, int ntensors, char *filename); -void at_load_callback(char *filename, void (*f)(char *, tensor)); +void at_load_callback(char *filename, void (*f)(char *, raw_tensor)); -void at_free(tensor); +void at_free(gc_tensor); -void at_run_backward(tensor *tensors, int ntensors, tensor *inputs, int ninputs, - tensor *outputs, int keep_graph, int create_graph); +void at_run_backward(gc_tensor *tensors, int ntensors, gc_tensor *inputs, + int ninputs, raw_tensor *outputs, int keep_graph, + int create_graph); optimizer ato_adam(double learning_rate, double beta1, double beta2, double weight_decay, double eps); @@ -95,7 +100,7 @@ optimizer ato_rmsprop(double learning_rate, double alpha, double eps, double weight_decay, double momentum, int centered); optimizer ato_sgd(double learning_rate, double momentum, double dampening, double weight_decay, int nesterov); -void ato_add_parameters(optimizer, tensor *, int ntensors); +void ato_add_parameters(optimizer, gc_tensor *, int ntensors); void ato_set_learning_rate(optimizer, double learning_rate); void ato_set_momentum(optimizer, double momentum); void ato_zero_grad(optimizer); @@ -113,28 +118,27 @@ void atc_set_benchmark_cudnn(int b); module atm_load(char *, int); module atm_load_str(char *, size_t, int); -tensor atm_forward(module, tensor *tensors, int ntensors); +raw_tensor atm_forward(module, gc_tensor *tensors, int ntensors); ivalue atm_forward_(module, ivalue *ivalues, int nivalues); ivalue atm_named_buffers(module); void atm_free(module); ivalue ati_none(); -ivalue ati_tensor(tensor); +ivalue ati_tensor(gc_tensor); ivalue ati_bool(int); ivalue ati_int(int64_t); ivalue ati_double(double); ivalue ati_tuple(ivalue *, int); ivalue ati_string(char *); -ivalue ati_tuple(ivalue *, int); ivalue ati_generic_list(ivalue *, int); ivalue ati_generic_dict(ivalue *, int); ivalue ati_int_list(int64_t *, int); ivalue ati_double_list(double *, int); ivalue ati_bool_list(char *, int); ivalue ati_string_list(char **, int); -ivalue ati_tensor_list(tensor *, int); +ivalue ati_tensor_list(gc_tensor *, int); -tensor ati_to_tensor(ivalue); +raw_tensor ati_to_tensor(ivalue); int64_t ati_to_int(ivalue); double ati_to_double(ivalue); char *ati_to_string(ivalue); @@ -148,7 +152,7 @@ void ati_to_generic_dict(ivalue, ivalue *, int); void ati_to_int_list(ivalue, int64_t *, int); void ati_to_double_list(ivalue, double *, int); void ati_to_bool_list(ivalue, char *, int); -void ati_to_tensor_list(ivalue, tensor *, int); +void ati_to_tensor_list(ivalue, raw_tensor *, int); int ati_tag(ivalue); diff --git a/src/wrapper/torch_api_generated.cpp b/src/wrapper/torch_api_generated.cpp new file mode 100644 index 0000000..efb3208 --- /dev/null +++ b/src/wrapper/torch_api_generated.cpp @@ -0,0 +1,18377 @@ +// THIS FILE IS AUTOMATICALLY GENERATED, DO NOT EDIT BY HAND! 
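+//
+// Every wrapper in this file follows the same shape: unwrap each gc_tensor
+// argument with tensor_ptr_from_ocaml / tensor_from_ocaml, call the
+// corresponding torch:: function inside PROTECT (which converts C++
+// exceptions into OCaml failures), and box each resulting tensor with
+// tensor_to_ocaml as a raw_tensor for the OCaml side. As an illustration, a
+// hypothetical wrapper for a unary function torch::example would be emitted
+// as:
+//
+//   raw_tensor atg_example(gc_tensor self) {
+//     PROTECT(
+//       torch::Tensor results__ = torch::example(*tensor_ptr_from_ocaml(self));
+//       return tensor_to_ocaml(results__);
+//     )
+//   }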
+ +raw_tensor atg___and__(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::__and__(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___and__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::__and__(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___iand__(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__iand__(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___iand__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__iand__(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___ilshift__(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__ilshift__(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___ilshift__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__ilshift__(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___ior__(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__ior__(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___ior__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__ior__(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___irshift__(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__irshift__(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___irshift__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__irshift__(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___ixor__(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__ixor__(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___ixor__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->__ixor__(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___lshift__(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::__lshift__(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___lshift__scalar_out_(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::__lshift___out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___lshift__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::__lshift__(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___lshift__tensor_out_(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::__lshift___out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___or__(gc_tensor self, scalar other) { + PROTECT( + 
torch::Tensor results__ = torch::__or__(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___or__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::__or__(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___rshift__(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::__rshift__(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___rshift__scalar_out_(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::__rshift___out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___rshift__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::__rshift__(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___rshift__tensor_out_(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::__rshift___out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___xor__(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::__xor__(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg___xor__tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::__xor__(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__adaptive_avg_pool2d(gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::_adaptive_avg_pool2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__adaptive_avg_pool2d_backward(gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_adaptive_avg_pool2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__adaptive_avg_pool2d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_adaptive_avg_pool2d_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__adaptive_avg_pool2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::_adaptive_avg_pool2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__adaptive_avg_pool3d(gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::_adaptive_avg_pool3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__adaptive_avg_pool3d_backward(gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_adaptive_avg_pool3d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + 
return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__adaptive_avg_pool3d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_adaptive_avg_pool3d_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__adaptive_avg_pool3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::_adaptive_avg_pool3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__add_batch_dim(gc_tensor self, int64_t batch_dim, int64_t level) { + PROTECT( + torch::Tensor results__ = torch::_add_batch_dim(*tensor_ptr_from_ocaml(self), batch_dim, level); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__add_relu(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::_add_relu(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__add_relu_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::_add_relu_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__add_relu_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::_add_relu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__add_relu_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::_add_relu(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__add_relu_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::_add_relu_(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__add_relu_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::_add_relu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__addmm_activation(gc_tensor self, gc_tensor mat1, gc_tensor mat2, int use_gelu) { + PROTECT( + torch::Tensor results__ = torch::_addmm_activation(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2), (bool)use_gelu); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__addmm_activation_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2, int use_gelu) { + PROTECT( + torch::Tensor results__ = torch::_addmm_activation_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2), (bool)use_gelu); + return tensor_to_ocaml(results__); + ) +} + +void atg__aminmax(raw_tensor *out__, gc_tensor self) { + PROTECT( + auto results__ = torch::_aminmax(*tensor_ptr_from_ocaml(self)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__aminmax_dim(raw_tensor *out__, gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + auto results__ = torch::_aminmax(*tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = 
tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__aminmax_dim_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + auto results__ = torch::_aminmax_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__aminmax_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self) { + PROTECT( + auto results__ = torch::_aminmax_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__amp_update_scale(raw_tensor *out__, gc_tensor self, gc_tensor growth_tracker, gc_tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval) { + PROTECT( + auto results__ = torch::_amp_update_scale(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(growth_tracker), *tensor_ptr_from_ocaml(found_inf), scale_growth_factor, scale_backoff_factor, growth_interval); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__amp_update_scale_(gc_tensor self, gc_tensor growth_tracker, gc_tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval) { + PROTECT( + torch::Tensor results__ = torch::_amp_update_scale_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(growth_tracker), *tensor_ptr_from_ocaml(found_inf), scale_growth_factor, scale_backoff_factor, growth_interval); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__amp_update_scale_out(gc_tensor out, gc_tensor self, gc_tensor growth_tracker, gc_tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval) { + PROTECT( + torch::Tensor results__ = torch::_amp_update_scale_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(growth_tracker), *tensor_ptr_from_ocaml(found_inf), scale_growth_factor, scale_backoff_factor, growth_interval); + return tensor_to_ocaml(results__); + ) +} + +void atg__assert_tensor_metadata(gc_tensor a, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int dtype) { + PROTECT( + torch::_assert_tensor_metadata(*tensor_ptr_from_ocaml(a), size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(size_data, size_len)), stride_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(stride_data, stride_len)), torch::ScalarType(dtype)); + ) +} + +raw_tensor atg__autocast_to_full_precision(gc_tensor self, int cuda_enabled, int cpu_enabled) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_autocast_to_full_precision((bool)cuda_enabled, (bool)cpu_enabled); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__autocast_to_reduced_precision(gc_tensor self, int cuda_enabled, int cpu_enabled, int cuda_dtype, int cpu_dtype) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_autocast_to_reduced_precision((bool)cuda_enabled, (bool)cpu_enabled, torch::ScalarType(cuda_dtype), torch::ScalarType(cpu_dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cast_byte(gc_tensor self, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_cast_Byte(*tensor_ptr_from_ocaml(self), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cast_char(gc_tensor self, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_cast_Char(*tensor_ptr_from_ocaml(self), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cast_double(gc_tensor self, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_cast_Double(*tensor_ptr_from_ocaml(self), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cast_float(gc_tensor self, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_cast_Float(*tensor_ptr_from_ocaml(self), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cast_half(gc_tensor self, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_cast_Half(*tensor_ptr_from_ocaml(self), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cast_int(gc_tensor self, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_cast_Int(*tensor_ptr_from_ocaml(self), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cast_long(gc_tensor self, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_cast_Long(*tensor_ptr_from_ocaml(self), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cast_short(gc_tensor self, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_cast_Short(*tensor_ptr_from_ocaml(self), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cdist_backward(gc_tensor grad, gc_tensor x1, gc_tensor x2, double p, gc_tensor cdist) { + PROTECT( + torch::Tensor results__ = torch::_cdist_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(x1), *tensor_ptr_from_ocaml(x2), p, *tensor_ptr_from_ocaml(cdist)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cdist_backward_out(gc_tensor out, gc_tensor grad, gc_tensor x1, gc_tensor x2, double p, gc_tensor cdist) { + PROTECT( + torch::Tensor results__ = torch::_cdist_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(x1), *tensor_ptr_from_ocaml(x2), p, *tensor_ptr_from_ocaml(cdist)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cholesky_solve_helper(gc_tensor self, gc_tensor A, int upper) { + PROTECT( + torch::Tensor results__ = torch::_cholesky_solve_helper(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(A), (bool)upper); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cholesky_solve_helper_out(gc_tensor out, 
gc_tensor self, gc_tensor A, int upper) { + PROTECT( + torch::Tensor results__ = torch::_cholesky_solve_helper_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(A), (bool)upper); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__coalesce(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_coalesce(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__coalesce_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_coalesce_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__coalesced(gc_tensor self, int coalesced) { + PROTECT( + torch::Tensor results__ = torch::_coalesced(*tensor_ptr_from_ocaml(self), (bool)coalesced); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__coalesced_(gc_tensor self, int coalesced) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_coalesced_((bool)coalesced); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__coalesced_out(gc_tensor out, gc_tensor self, int coalesced) { + PROTECT( + torch::Tensor results__ = torch::_coalesced_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (bool)coalesced); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__compute_linear_combination(gc_tensor input, gc_tensor coefficients) { + PROTECT( + torch::Tensor results__ = torch::_compute_linear_combination(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(coefficients)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__compute_linear_combination_out(gc_tensor out, gc_tensor input, gc_tensor coefficients) { + PROTECT( + torch::Tensor results__ = torch::_compute_linear_combination_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(coefficients)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__conj(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_conj(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__conj_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_conj_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__conj_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_conj_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__conj_physical(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_conj_physical(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__conj_physical_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_conj_physical_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__conv_depthwise2d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::_conv_depthwise2d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__conv_depthwise2d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::_conv_depthwise2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__convert_indices_from_coo_to_csr(gc_tensor self, int64_t size, int out_int32) { + PROTECT( + torch::Tensor results__ = torch::_convert_indices_from_coo_to_csr(*tensor_ptr_from_ocaml(self), size, (bool)out_int32); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__convert_indices_from_coo_to_csr_out(gc_tensor out, gc_tensor self, int64_t size, int out_int32) { + PROTECT( + torch::Tensor results__ = torch::_convert_indices_from_coo_to_csr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), size, (bool)out_int32); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__convert_indices_from_csr_to_coo(gc_tensor crow_indices, gc_tensor col_indices, int out_int32, int transpose) { + PROTECT( + torch::Tensor results__ = torch::_convert_indices_from_csr_to_coo(*tensor_ptr_from_ocaml(crow_indices), *tensor_ptr_from_ocaml(col_indices), (bool)out_int32, (bool)transpose); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__convert_indices_from_csr_to_coo_out(gc_tensor out, gc_tensor crow_indices, gc_tensor col_indices, int out_int32, int transpose) { + PROTECT( + torch::Tensor results__ = torch::_convert_indices_from_csr_to_coo_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(crow_indices), *tensor_ptr_from_ocaml(col_indices), (bool)out_int32, (bool)transpose); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__convolution(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled, int allow_tf32) { + PROTECT( + torch::Tensor results__ = torch::_convolution(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups, (bool)benchmark, (bool)deterministic, (bool)cudnn_enabled, (bool)allow_tf32); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__convolution_deprecated(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled) { + PROTECT( + torch::Tensor results__ = torch::_convolution(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups, (bool)benchmark, (bool)deterministic, (bool)cudnn_enabled); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__convolution_mode(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::_convolution_mode(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), std::string(padding), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__convolution_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled, int allow_tf32) { + PROTECT( + torch::Tensor results__ = torch::_convolution_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups, (bool)benchmark, (bool)deterministic, (bool)cudnn_enabled, (bool)allow_tf32); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__copy_from(gc_tensor self, gc_tensor dst, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_copy_from(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(dst), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__copy_from_and_resize(gc_tensor self, gc_tensor dst) { + PROTECT( + torch::Tensor results__ = torch::_copy_from_and_resize(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(dst)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__copy_from_and_resize_out(gc_tensor out, gc_tensor self, gc_tensor dst) { + PROTECT( + torch::Tensor results__ = torch::_copy_from_and_resize_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(dst)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__copy_from_out(gc_tensor out, gc_tensor self, gc_tensor dst, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_copy_from_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(dst), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cslt_compress(gc_tensor input) { + PROTECT( + torch::Tensor results__ = torch::_cslt_compress(*tensor_ptr_from_ocaml(input)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cslt_sparse_mm(gc_tensor compressed_A, gc_tensor dense_B, gc_tensor bias, int transpose_result) { + PROTECT( + torch::Tensor results__ = torch::_cslt_sparse_mm(*tensor_ptr_from_ocaml(compressed_A), *tensor_ptr_from_ocaml(dense_B), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), (bool)transpose_result); + return tensor_to_ocaml(results__); + ) +} + +void atg__ctc_loss(raw_tensor *out__, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int zero_infinity) { + PROTECT( + auto results__ = torch::_ctc_loss(*tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, (bool)zero_infinity); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__ctc_loss_backward(gc_tensor grad, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, gc_tensor neg_log_likelihood, gc_tensor log_alpha, int64_t blank, int zero_infinity) { + PROTECT( + torch::Tensor results__ = torch::_ctc_loss_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), *tensor_ptr_from_ocaml(neg_log_likelihood), *tensor_ptr_from_ocaml(log_alpha), blank, (bool)zero_infinity); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__ctc_loss_backward_out(gc_tensor out, gc_tensor grad, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, gc_tensor neg_log_likelihood, gc_tensor log_alpha, int64_t blank, int zero_infinity) { + PROTECT( + torch::Tensor results__ = torch::_ctc_loss_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), *tensor_ptr_from_ocaml(neg_log_likelihood), *tensor_ptr_from_ocaml(log_alpha), blank, (bool)zero_infinity); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__ctc_loss_backward_tensor(gc_tensor grad, gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, gc_tensor neg_log_likelihood, gc_tensor log_alpha, int64_t blank, int zero_infinity) { + PROTECT( + torch::Tensor results__ = torch::_ctc_loss_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), *tensor_ptr_from_ocaml(input_lengths), *tensor_ptr_from_ocaml(target_lengths), *tensor_ptr_from_ocaml(neg_log_likelihood), *tensor_ptr_from_ocaml(log_alpha), blank, (bool)zero_infinity); + return tensor_to_ocaml(results__); + ) +} + +void atg__ctc_loss_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int zero_infinity) { + PROTECT( + auto results__ = torch::_ctc_loss_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, (bool)zero_infinity); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void 
atg__ctc_loss_tensor(raw_tensor *out__, gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank, int zero_infinity) { + PROTECT( + auto results__ = torch::_ctc_loss(*tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), *tensor_ptr_from_ocaml(input_lengths), *tensor_ptr_from_ocaml(target_lengths), blank, (bool)zero_infinity); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__ctc_loss_tensor_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank, int zero_infinity) { + PROTECT( + auto results__ = torch::_ctc_loss_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), *tensor_ptr_from_ocaml(input_lengths), *tensor_ptr_from_ocaml(target_lengths), blank, (bool)zero_infinity); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__cudnn_ctc_loss(raw_tensor *out__, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int deterministic, int zero_infinity) { + PROTECT( + auto results__ = torch::_cudnn_ctc_loss(*tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, (bool)deterministic, (bool)zero_infinity); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__cudnn_ctc_loss_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int deterministic, int zero_infinity) { + PROTECT( + auto results__ = torch::_cudnn_ctc_loss_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, (bool)deterministic, (bool)zero_infinity); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__cudnn_ctc_loss_tensor(raw_tensor *out__, gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank, int deterministic, int zero_infinity) { + PROTECT( + auto results__ = torch::_cudnn_ctc_loss(*tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), *tensor_ptr_from_ocaml(input_lengths), *tensor_ptr_from_ocaml(target_lengths), blank, (bool)deterministic, (bool)zero_infinity); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__cudnn_init_dropout_state(double dropout, int train, int64_t dropout_seed, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::_cudnn_init_dropout_state(dropout, (bool)train, dropout_seed, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cudnn_init_dropout_state_out(gc_tensor out, double dropout, int train, int64_t 
dropout_seed) { + PROTECT( + torch::Tensor results__ = torch::_cudnn_init_dropout_state_out(*tensor_ptr_from_ocaml(out), dropout, (bool)train, dropout_seed); + return tensor_to_ocaml(results__); + ) +} + +void atg__cudnn_rnn(raw_tensor *out__, gc_tensor input, gc_tensor *weight_data, int weight_len, int64_t weight_stride0, gc_tensor weight_buf, gc_tensor hx, gc_tensor cx, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, gc_tensor dropout_state) { + PROTECT( + auto results__ = torch::_cudnn_rnn(*tensor_ptr_from_ocaml(input), of_carray_tensor(weight_data, weight_len), weight_stride0, (weight_buf ? tensor_from_ocaml(weight_buf) : torch::Tensor()), *tensor_ptr_from_ocaml(hx), (cx ? tensor_from_ocaml(cx) : torch::Tensor()), mode, hidden_size, proj_size, num_layers, (bool)batch_first, dropout, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (dropout_state ? tensor_from_ocaml(dropout_state) : torch::Tensor())); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + ) +} + +raw_tensor atg__cudnn_rnn_flatten_weight(gc_tensor *weight_arr_data, int weight_arr_len, int64_t weight_stride0, int64_t input_size, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, int bidirectional) { + PROTECT( + torch::Tensor results__ = torch::_cudnn_rnn_flatten_weight(of_carray_tensor(weight_arr_data, weight_arr_len), weight_stride0, input_size, mode, hidden_size, proj_size, num_layers, (bool)batch_first, (bool)bidirectional); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__cudnn_rnn_flatten_weight_out(gc_tensor out, gc_tensor *weight_arr_data, int weight_arr_len, int64_t weight_stride0, int64_t input_size, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, int bidirectional) { + PROTECT( + torch::Tensor results__ = torch::_cudnn_rnn_flatten_weight_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(weight_arr_data, weight_arr_len), weight_stride0, input_size, mode, hidden_size, proj_size, num_layers, (bool)batch_first, (bool)bidirectional); + return tensor_to_ocaml(results__); + ) +} + +void atg__cudnn_rnn_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor input, gc_tensor *weight_data, int weight_len, int64_t weight_stride0, gc_tensor weight_buf, gc_tensor hx, gc_tensor cx, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, gc_tensor dropout_state) { + PROTECT( + auto results__ = torch::_cudnn_rnn_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(out4), *tensor_ptr_from_ocaml(input), of_carray_tensor(weight_data, weight_len), weight_stride0, (weight_buf ? tensor_from_ocaml(weight_buf) : torch::Tensor()), *tensor_ptr_from_ocaml(hx), (cx ? tensor_from_ocaml(cx) : torch::Tensor()), mode, hidden_size, proj_size, num_layers, (bool)batch_first, dropout, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (dropout_state ? 
tensor_from_ocaml(dropout_state) : torch::Tensor()));
+ out__[0] = tensor_to_ocaml(std::get<0>(results__));
+ out__[1] = tensor_to_ocaml(std::get<1>(results__));
+ out__[2] = tensor_to_ocaml(std::get<2>(results__));
+ out__[3] = tensor_to_ocaml(std::get<3>(results__));
+ out__[4] = tensor_to_ocaml(std::get<4>(results__));
+ )
+}
+
+int64_t atg__debug_has_internal_overlap(gc_tensor self) {
+ PROTECT(
+ return torch::_debug_has_internal_overlap(*tensor_ptr_from_ocaml(self));
+ )
+ return 0;
+}
+
+raw_tensor atg__dim_arange(gc_tensor like, int64_t dim) {
+ PROTECT(
+ torch::Tensor results__ = torch::_dim_arange(*tensor_ptr_from_ocaml(like), dim);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+int64_t atg__dimi(gc_tensor self) {
+ PROTECT(
+ return tensor_ptr_from_ocaml(self)->_dimI();
+ )
+ return 0;
+}
+
+int64_t atg__dimv(gc_tensor self) {
+ PROTECT(
+ return tensor_ptr_from_ocaml(self)->_dimV();
+ )
+ return 0;
+}
+
+raw_tensor atg__dirichlet_grad(gc_tensor x, gc_tensor alpha, gc_tensor total) {
+ PROTECT(
+ torch::Tensor results__ = torch::_dirichlet_grad(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(alpha), *tensor_ptr_from_ocaml(total));
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__dirichlet_grad_out(gc_tensor out, gc_tensor x, gc_tensor alpha, gc_tensor total) {
+ PROTECT(
+ torch::Tensor results__ = torch::_dirichlet_grad_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(alpha), *tensor_ptr_from_ocaml(total));
+ return tensor_to_ocaml(results__);
+ )
+}
+
+void atg__efficient_attention_backward(raw_tensor *out__, gc_tensor grad_out_, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor bias, gc_tensor out, gc_tensor cu_seqlens_q, gc_tensor cu_seqlens_k, int64_t max_seqlen_k, int64_t max_seqlen_q, gc_tensor logsumexp, double dropout_p, gc_tensor philox_seed, gc_tensor philox_offset, int64_t custom_mask_type, int bias_requires_grad, double scale_v, int scale_null, int64_t num_splits_key_v, int num_splits_key_null) {
+ PROTECT(
+ auto results__ = torch::_efficient_attention_backward(*tensor_ptr_from_ocaml(grad_out_), *tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(out), (cu_seqlens_q ? tensor_from_ocaml(cu_seqlens_q) : torch::Tensor()), (cu_seqlens_k ? tensor_from_ocaml(cu_seqlens_k) : torch::Tensor()), max_seqlen_k, max_seqlen_q, *tensor_ptr_from_ocaml(logsumexp), dropout_p, *tensor_ptr_from_ocaml(philox_seed), *tensor_ptr_from_ocaml(philox_offset), custom_mask_type, (bool)bias_requires_grad, scale_null ? c10::nullopt : c10::optional<double>(scale_v), num_splits_key_null ? c10::nullopt : c10::optional<int64_t>(num_splits_key_v));
+ out__[0] = tensor_to_ocaml(std::get<0>(results__));
+ out__[1] = tensor_to_ocaml(std::get<1>(results__));
+ out__[2] = tensor_to_ocaml(std::get<2>(results__));
+ out__[3] = tensor_to_ocaml(std::get<3>(results__));
+ )
+}
+
+raw_tensor atg__efficientzerotensor(int64_t *size_data, int size_len, int options_kind, int options_device) {
+ PROTECT(
+ torch::Tensor results__ = torch::_efficientzerotensor(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__efficientzerotensor_out(gc_tensor out, int64_t *size_data, int size_len) {
+ PROTECT(
+ torch::Tensor results__ = torch::_efficientzerotensor_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len));
+ return tensor_to_ocaml(results__);
+ )
+}
+
+void atg__embedding_bag(raw_tensor *out__, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx) {
+ PROTECT(
+ auto results__ = torch::_embedding_bag(*tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? tensor_from_ocaml(per_sample_weights) : torch::Tensor()), (bool)include_last_offset, padding_idx);
+ out__[0] = tensor_to_ocaml(std::get<0>(results__));
+ out__[1] = tensor_to_ocaml(std::get<1>(results__));
+ out__[2] = tensor_to_ocaml(std::get<2>(results__));
+ out__[3] = tensor_to_ocaml(std::get<3>(results__));
+ )
+}
+
+raw_tensor atg__embedding_bag_backward(gc_tensor grad, gc_tensor indices, gc_tensor offsets, gc_tensor offset2bag, gc_tensor bag_size, gc_tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int64_t padding_idx) {
+ PROTECT(
+ torch::Tensor results__ = torch::_embedding_bag_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), *tensor_ptr_from_ocaml(offset2bag), *tensor_ptr_from_ocaml(bag_size), *tensor_ptr_from_ocaml(maximum_indices), num_weights, (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? tensor_from_ocaml(per_sample_weights) : torch::Tensor()), padding_idx);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__embedding_bag_dense_backward(gc_tensor grad, gc_tensor indices, gc_tensor offset2bag, gc_tensor bag_size, gc_tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, gc_tensor per_sample_weights, int64_t padding_idx) {
+ PROTECT(
+ torch::Tensor results__ = torch::_embedding_bag_dense_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offset2bag), *tensor_ptr_from_ocaml(bag_size), *tensor_ptr_from_ocaml(maximum_indices), num_weights, (bool)scale_grad_by_freq, mode, (per_sample_weights ?
tensor_from_ocaml(per_sample_weights) : torch::Tensor()), padding_idx); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__embedding_bag_dense_backward_out(gc_tensor out, gc_tensor grad, gc_tensor indices, gc_tensor offset2bag, gc_tensor bag_size, gc_tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, gc_tensor per_sample_weights, int64_t padding_idx) { + PROTECT( + torch::Tensor results__ = torch::_embedding_bag_dense_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offset2bag), *tensor_ptr_from_ocaml(bag_size), *tensor_ptr_from_ocaml(maximum_indices), num_weights, (bool)scale_grad_by_freq, mode, (per_sample_weights ? tensor_from_ocaml(per_sample_weights) : torch::Tensor()), padding_idx); + return tensor_to_ocaml(results__); + ) +} + +void atg__embedding_bag_forward_only(raw_tensor *out__, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx) { + PROTECT( + auto results__ = torch::_embedding_bag_forward_only(*tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? tensor_from_ocaml(per_sample_weights) : torch::Tensor()), (bool)include_last_offset, padding_idx); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg__embedding_bag_forward_only_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx) { + PROTECT( + auto results__ = torch::_embedding_bag_forward_only_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? tensor_from_ocaml(per_sample_weights) : torch::Tensor()), (bool)include_last_offset, padding_idx); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg__embedding_bag_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx) { + PROTECT( + auto results__ = torch::_embedding_bag_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? 
tensor_from_ocaml(per_sample_weights) : torch::Tensor()), (bool)include_last_offset, padding_idx); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +raw_tensor atg__embedding_bag_per_sample_weights_backward(gc_tensor grad, gc_tensor weight, gc_tensor indices, gc_tensor offsets, gc_tensor offset2bag, int64_t mode, int64_t padding_idx) { + PROTECT( + torch::Tensor results__ = torch::_embedding_bag_per_sample_weights_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), *tensor_ptr_from_ocaml(offset2bag), mode, padding_idx); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__embedding_bag_per_sample_weights_backward_out(gc_tensor out, gc_tensor grad, gc_tensor weight, gc_tensor indices, gc_tensor offsets, gc_tensor offset2bag, int64_t mode, int64_t padding_idx) { + PROTECT( + torch::Tensor results__ = torch::_embedding_bag_per_sample_weights_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), *tensor_ptr_from_ocaml(offset2bag), mode, padding_idx); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__embedding_bag_sparse_backward(gc_tensor grad, gc_tensor indices, gc_tensor offsets, gc_tensor offset2bag, gc_tensor bag_size, int64_t num_weights, int scale_grad_by_freq, int64_t mode, gc_tensor per_sample_weights, int64_t padding_idx) { + PROTECT( + torch::Tensor results__ = torch::_embedding_bag_sparse_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), *tensor_ptr_from_ocaml(offset2bag), *tensor_ptr_from_ocaml(bag_size), num_weights, (bool)scale_grad_by_freq, mode, (per_sample_weights ? 
tensor_from_ocaml(per_sample_weights) : torch::Tensor()), padding_idx); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__empty_affine_quantized(int64_t *size_data, int size_len, int options_kind, int options_device, double scale, int64_t zero_point) { + PROTECT( + torch::Tensor results__ = torch::_empty_affine_quantized(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), scale, zero_point); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__empty_affine_quantized_out(gc_tensor out, int64_t *size_data, int size_len, double scale, int64_t zero_point) { + PROTECT( + torch::Tensor results__ = torch::_empty_affine_quantized_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len), scale, zero_point); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__empty_per_channel_affine_quantized(int64_t *size_data, int size_len, gc_tensor scales, gc_tensor zero_points, int64_t axis, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::_empty_per_channel_affine_quantized(torch::IntArrayRef(size_data, size_len), *tensor_ptr_from_ocaml(scales), *tensor_ptr_from_ocaml(zero_points), axis, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__empty_per_channel_affine_quantized_out(gc_tensor out, int64_t *size_data, int size_len, gc_tensor scales, gc_tensor zero_points, int64_t axis) { + PROTECT( + torch::Tensor results__ = torch::_empty_per_channel_affine_quantized_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len), *tensor_ptr_from_ocaml(scales), *tensor_ptr_from_ocaml(zero_points), axis); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__euclidean_dist(gc_tensor x1, gc_tensor x2) { + PROTECT( + torch::Tensor results__ = torch::_euclidean_dist(*tensor_ptr_from_ocaml(x1), *tensor_ptr_from_ocaml(x2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__euclidean_dist_out(gc_tensor out, gc_tensor x1, gc_tensor x2) { + PROTECT( + torch::Tensor results__ = torch::_euclidean_dist_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x1), *tensor_ptr_from_ocaml(x2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__fake_quantize_learnable_per_channel_affine(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor) { + PROTECT( + torch::Tensor results__ = torch::_fake_quantize_learnable_per_channel_affine(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), axis, quant_min, quant_max, grad_factor); + return tensor_to_ocaml(results__); + ) +} + +void atg__fake_quantize_learnable_per_channel_affine_backward(raw_tensor *out__, gc_tensor grad, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor) { + PROTECT( + auto results__ = torch::_fake_quantize_learnable_per_channel_affine_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), axis, quant_min, quant_max, grad_factor); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor atg__fake_quantize_learnable_per_channel_affine_out(gc_tensor out, gc_tensor self, gc_tensor scale, gc_tensor 
zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor) { + PROTECT( + torch::Tensor results__ = torch::_fake_quantize_learnable_per_channel_affine_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), axis, quant_min, quant_max, grad_factor); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__fake_quantize_learnable_per_tensor_affine(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor) { + PROTECT( + torch::Tensor results__ = torch::_fake_quantize_learnable_per_tensor_affine(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), quant_min, quant_max, grad_factor); + return tensor_to_ocaml(results__); + ) +} + +void atg__fake_quantize_learnable_per_tensor_affine_backward(raw_tensor *out__, gc_tensor grad, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor) { + PROTECT( + auto results__ = torch::_fake_quantize_learnable_per_tensor_affine_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), quant_min, quant_max, grad_factor); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor atg__fake_quantize_learnable_per_tensor_affine_out(gc_tensor out, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor) { + PROTECT( + torch::Tensor results__ = torch::_fake_quantize_learnable_per_tensor_affine_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), quant_min, quant_max, grad_factor); + return tensor_to_ocaml(results__); + ) +} + +void atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams(raw_tensor *out__, gc_tensor self, gc_tensor scale, gc_tensor zero_point, gc_tensor fake_quant_enabled, int64_t quant_min, int64_t quant_max) { + PROTECT( + auto results__ = torch::_fake_quantize_per_tensor_affine_cachemask_tensor_qparams(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), *tensor_ptr_from_ocaml(fake_quant_enabled), quant_min, quant_max); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, gc_tensor scale, gc_tensor zero_point, gc_tensor fake_quant_enabled, int64_t quant_min, int64_t quant_max) { + PROTECT( + auto results__ = torch::_fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), *tensor_ptr_from_ocaml(fake_quant_enabled), quant_min, quant_max); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__fft_c2c(gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int forward) { + PROTECT( + torch::Tensor results__ = torch::_fft_c2c(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), normalization, (bool)forward); + return tensor_to_ocaml(results__); + ) +} + 
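+
+// (Editor's note, not part of the generated output.) Every stub in this file
+// follows the same machine-generated shape; a hedged sketch of the pattern,
+// using a hypothetical unary operator `_frob` taking one optional integer:
+//
+//   raw_tensor atg__frob(gc_tensor self, int64_t k_v, int k_null) {
+//     PROTECT(  // presumably catches C++ exceptions before they cross the FFI
+//       torch::Tensor results__ = torch::_frob(
+//           *tensor_ptr_from_ocaml(self),                        // required tensor
+//           k_null ? c10::nullopt : c10::optional<int64_t>(k_v)  // optional scalar
+//       );
+//       return tensor_to_ocaml(results__);  // box the result as a raw_tensor
+//     )
+//   }
+//
+// Other conventions visible throughout: C `int` stands in for C++ `bool`;
+// optional tensors arrive as a gc_tensor that may be NULL and expand to
+// `(t ? tensor_from_ocaml(t) : torch::Tensor())`; and variable-length integer
+// arguments travel as a (data pointer, length) pair wrapped in torch::IntArrayRef.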
+raw_tensor atg__fft_c2c_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int forward) { + PROTECT( + torch::Tensor results__ = torch::_fft_c2c_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), normalization, (bool)forward); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__fft_c2r(gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int64_t last_dim_size) { + PROTECT( + torch::Tensor results__ = torch::_fft_c2r(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), normalization, last_dim_size); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__fft_c2r_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int64_t last_dim_size) { + PROTECT( + torch::Tensor results__ = torch::_fft_c2r_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), normalization, last_dim_size); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__fft_r2c(gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int onesided) { + PROTECT( + torch::Tensor results__ = torch::_fft_r2c(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), normalization, (bool)onesided); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__fft_r2c_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int onesided) { + PROTECT( + torch::Tensor results__ = torch::_fft_r2c_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), normalization, (bool)onesided); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__fill_mem_eff_dropout_mask_(gc_tensor self, double dropout_p, int64_t seed, int64_t offset) { + PROTECT( + torch::Tensor results__ = torch::_fill_mem_eff_dropout_mask_(*tensor_ptr_from_ocaml(self), dropout_p, seed, offset); + return tensor_to_ocaml(results__); + ) +} + +void atg__flash_attention_backward(raw_tensor *out__, gc_tensor grad_out, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor out, gc_tensor logsumexp, gc_tensor cum_seq_q, gc_tensor cum_seq_k, int64_t max_q, int64_t max_k, double dropout_p, int is_causal, gc_tensor philox_seed, gc_tensor philox_offset, double scale_v, int scale_null) { + PROTECT( + auto results__ = torch::_flash_attention_backward(*tensor_ptr_from_ocaml(grad_out), *tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), *tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(logsumexp), *tensor_ptr_from_ocaml(cum_seq_q), *tensor_ptr_from_ocaml(cum_seq_k), max_q, max_k, dropout_p, (bool)is_causal, *tensor_ptr_from_ocaml(philox_seed), *tensor_ptr_from_ocaml(philox_offset), scale_null ? 
c10::nullopt : c10::optional<double>(scale_v));
+ out__[0] = tensor_to_ocaml(std::get<0>(results__));
+ out__[1] = tensor_to_ocaml(std::get<1>(results__));
+ out__[2] = tensor_to_ocaml(std::get<2>(results__));
+ )
+}
+
+raw_tensor atg__foobar(gc_tensor self, int arg1, int arg2, int arg3) {
+ PROTECT(
+ torch::Tensor results__ = torch::_foobar(*tensor_ptr_from_ocaml(self), (bool)arg1, (bool)arg2, (bool)arg3);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__foobar_out(gc_tensor out, gc_tensor self, int arg1, int arg2, int arg3) {
+ PROTECT(
+ torch::Tensor results__ = torch::_foobar_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (bool)arg1, (bool)arg2, (bool)arg3);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__functional_assert_async(gc_tensor self, char * assert_msg, gc_tensor dep_token) {
+ PROTECT(
+ torch::Tensor results__ = torch::_functional_assert_async(*tensor_ptr_from_ocaml(self), std::string(assert_msg), *tensor_ptr_from_ocaml(dep_token));
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__functional_sym_constrain_range(scalar size, int64_t min_v, int min_null, int64_t max_v, int max_null, gc_tensor dep_token) {
+ PROTECT(
+ torch::Tensor results__ = torch::_functional_sym_constrain_range(*size, min_null ? c10::nullopt : c10::optional<int64_t>(min_v), max_null ? c10::nullopt : c10::optional<int64_t>(max_v), *tensor_ptr_from_ocaml(dep_token));
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__functional_sym_constrain_range_for_size(scalar size, int64_t min_v, int min_null, int64_t max_v, int max_null, gc_tensor dep_token) {
+ PROTECT(
+ torch::Tensor results__ = torch::_functional_sym_constrain_range_for_size(*size, min_null ? c10::nullopt : c10::optional<int64_t>(min_v), max_null ? c10::nullopt : c10::optional<int64_t>(max_v), *tensor_ptr_from_ocaml(dep_token));
+ return tensor_to_ocaml(results__);
+ )
+}
+
+void atg__fused_adam(gc_tensor *out_data, int out_len, gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf) {
+ PROTECT(
+ torch::_fused_adam_out(of_carray_tensor(out_data, out_len), of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? tensor_from_ocaml(grad_scale) : torch::Tensor()), (found_inf ?
tensor_from_ocaml(found_inf) : torch::Tensor())); + ) +} + +void atg__fused_adam_(gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf) { + PROTECT( + torch::_fused_adam_(of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? tensor_from_ocaml(grad_scale) : torch::Tensor()), (found_inf ? tensor_from_ocaml(found_inf) : torch::Tensor())); + ) +} + +void atg__fused_adam_tensor_lr_(gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, gc_tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf) { + PROTECT( + torch::_fused_adam_(of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), *tensor_ptr_from_ocaml(lr), beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? tensor_from_ocaml(grad_scale) : torch::Tensor()), (found_inf ? tensor_from_ocaml(found_inf) : torch::Tensor())); + ) +} + +void atg__fused_adam_tensor_lr_out(gc_tensor *out_data, int out_len, gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, gc_tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf) { + PROTECT( + torch::_fused_adam_out(of_carray_tensor(out_data, out_len), of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), *tensor_ptr_from_ocaml(lr), beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? tensor_from_ocaml(grad_scale) : torch::Tensor()), (found_inf ? 
tensor_from_ocaml(found_inf) : torch::Tensor())); + ) +} + +void atg__fused_adamw(gc_tensor *out_data, int out_len, gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf) { + PROTECT( + torch::_fused_adamw_out(of_carray_tensor(out_data, out_len), of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? tensor_from_ocaml(grad_scale) : torch::Tensor()), (found_inf ? tensor_from_ocaml(found_inf) : torch::Tensor())); + ) +} + +void atg__fused_adamw_(gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf) { + PROTECT( + torch::_fused_adamw_(of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? tensor_from_ocaml(grad_scale) : torch::Tensor()), (found_inf ? tensor_from_ocaml(found_inf) : torch::Tensor())); + ) +} + +void atg__fused_adamw_tensor_lr_(gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, gc_tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf) { + PROTECT( + torch::_fused_adamw_(of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), *tensor_ptr_from_ocaml(lr), beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? tensor_from_ocaml(grad_scale) : torch::Tensor()), (found_inf ? 
tensor_from_ocaml(found_inf) : torch::Tensor())); + ) +} + +void atg__fused_adamw_tensor_lr_out(gc_tensor *out_data, int out_len, gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, gc_tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf) { + PROTECT( + torch::_fused_adamw_out(of_carray_tensor(out_data, out_len), of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), *tensor_ptr_from_ocaml(lr), beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? tensor_from_ocaml(grad_scale) : torch::Tensor()), (found_inf ? tensor_from_ocaml(found_inf) : torch::Tensor())); + ) +} + +void atg__fused_dropout(raw_tensor *out__, gc_tensor self, double p) { + PROTECT( + auto results__ = torch::_fused_dropout(*tensor_ptr_from_ocaml(self), p); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__fused_dropout_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, double p) { + PROTECT( + auto results__ = torch::_fused_dropout_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), p); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__fused_moving_avg_obs_fq_helper(raw_tensor *out__, gc_tensor self, gc_tensor observer_on, gc_tensor fake_quant_on, gc_tensor running_min, gc_tensor running_max, gc_tensor scale, gc_tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant) { + PROTECT( + auto results__ = torch::_fused_moving_avg_obs_fq_helper(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(observer_on), *tensor_ptr_from_ocaml(fake_quant_on), *tensor_ptr_from_ocaml(running_min), *tensor_ptr_from_ocaml(running_max), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), averaging_const, quant_min, quant_max, ch_axis, (bool)per_row_fake_quant, (bool)symmetric_quant); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__fused_moving_avg_obs_fq_helper_functional(raw_tensor *out__, gc_tensor self, gc_tensor observer_on, gc_tensor fake_quant_on, gc_tensor running_min, gc_tensor running_max, gc_tensor scale, gc_tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant) { + PROTECT( + auto results__ = torch::_fused_moving_avg_obs_fq_helper_functional(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(observer_on), *tensor_ptr_from_ocaml(fake_quant_on), *tensor_ptr_from_ocaml(running_min), *tensor_ptr_from_ocaml(running_max), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), averaging_const, quant_min, quant_max, ch_axis, (bool)per_row_fake_quant, (bool)symmetric_quant); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + 
out__[2] = tensor_to_ocaml(std::get<2>(results__));
+ out__[3] = tensor_to_ocaml(std::get<3>(results__));
+ out__[4] = tensor_to_ocaml(std::get<4>(results__));
+ out__[5] = tensor_to_ocaml(std::get<5>(results__));
+ )
+}
+
+void atg__fused_moving_avg_obs_fq_helper_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, gc_tensor observer_on, gc_tensor fake_quant_on, gc_tensor running_min, gc_tensor running_max, gc_tensor scale, gc_tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant) {
+ PROTECT(
+ auto results__ = torch::_fused_moving_avg_obs_fq_helper_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(observer_on), *tensor_ptr_from_ocaml(fake_quant_on), *tensor_ptr_from_ocaml(running_min), *tensor_ptr_from_ocaml(running_max), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), averaging_const, quant_min, quant_max, ch_axis, (bool)per_row_fake_quant, (bool)symmetric_quant);
+ out__[0] = tensor_to_ocaml(std::get<0>(results__));
+ out__[1] = tensor_to_ocaml(std::get<1>(results__));
+ )
+}
+
+int64_t atg__fused_sdp_choice(gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor attn_mask, double dropout_p, int is_causal, double scale_v, int scale_null) {
+ PROTECT(
+ return torch::_fused_sdp_choice(*tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), (attn_mask ? tensor_from_ocaml(attn_mask) : torch::Tensor()), dropout_p, (bool)is_causal, scale_null ? c10::nullopt : c10::optional<double>(scale_v));
+ )
+ return 0;
+}
+
+raw_tensor atg__fw_primal(gc_tensor self, int64_t level) {
+ PROTECT(
+ torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_fw_primal(level);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__fw_primal_copy(gc_tensor self, int64_t level) {
+ PROTECT(
+ torch::Tensor results__ = torch::_fw_primal_copy(*tensor_ptr_from_ocaml(self), level);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__fw_primal_copy_out(gc_tensor out, gc_tensor self, int64_t level) {
+ PROTECT(
+ torch::Tensor results__ = torch::_fw_primal_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), level);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__gather_sparse_backward(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor grad) {
+ PROTECT(
+ torch::Tensor results__ = torch::_gather_sparse_backward(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(grad));
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__grid_sampler_2d_cpu_fallback(gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
+ PROTECT(
+ torch::Tensor results__ = torch::_grid_sampler_2d_cpu_fallback(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grid), interpolation_mode, padding_mode, (bool)align_corners);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+void atg__grid_sampler_2d_cpu_fallback_backward(raw_tensor *out__, gc_tensor grad_output, gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
+ PROTECT(
+ auto results__ = torch::_grid_sampler_2d_cpu_fallback_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grid), interpolation_mode, padding_mode, (bool)align_corners);
+ out__[0] = tensor_to_ocaml(std::get<0>(results__));
+ out__[1] =
tensor_to_ocaml(std::get<1>(results__));
+ )
+}
+
+raw_tensor atg__grid_sampler_2d_cpu_fallback_out(gc_tensor out, gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
+ PROTECT(
+ torch::Tensor results__ = torch::_grid_sampler_2d_cpu_fallback_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grid), interpolation_mode, padding_mode, (bool)align_corners);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+int atg__has_compatible_shallow_copy_type(gc_tensor self, gc_tensor from) {
+ PROTECT(
+ return torch::_has_compatible_shallow_copy_type(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(from));
+ )
+ return 0;
+}
+
+int atg__has_same_storage_numel(gc_tensor self, gc_tensor other) {
+ PROTECT(
+ return torch::_has_same_storage_numel(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+ )
+ return 0;
+}
+
+raw_tensor *atg__histogramdd_bin_edges(gc_tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, gc_tensor weight, int density) {
+ PROTECT(
+ auto results__ = torch::_histogramdd_bin_edges(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(bins_data, bins_len), at::ArrayRef<double>(range_data, range_len), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bool)density);
+ int sz = results__.size();
+ raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+ for (int i = 0; i < sz; ++i)
+ out__[i] = tensor_to_ocaml(results__[i]);
+ out__[sz] = nullptr;
+ return out__;
+ )
+}
+
+void atg__histogramdd_bin_edges_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, gc_tensor weight, int density) {
+ PROTECT(
+ torch::_histogramdd_bin_edges_out(of_carray_tensor(out_data, out_len), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(bins_data, bins_len), at::ArrayRef<double>(range_data, range_len), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bool)density);
+ )
+}
+
+raw_tensor atg__histogramdd_from_bin_cts(gc_tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, gc_tensor weight, int density) {
+ PROTECT(
+ torch::Tensor results__ = torch::_histogramdd_from_bin_cts(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(bins_data, bins_len), at::ArrayRef<double>(range_data, range_len), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bool)density);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__histogramdd_from_bin_cts_out(gc_tensor out, gc_tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, gc_tensor weight, int density) {
+ PROTECT(
+ torch::Tensor results__ = torch::_histogramdd_from_bin_cts_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(bins_data, bins_len), at::ArrayRef<double>(range_data, range_len), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bool)density);
+ return tensor_to_ocaml(results__);
+ )
+}
+
+raw_tensor atg__histogramdd_from_bin_tensors(gc_tensor self, gc_tensor *bins_data, int bins_len, gc_tensor weight, int density) {
+ PROTECT(
+ torch::Tensor results__ = torch::_histogramdd_from_bin_tensors(*tensor_ptr_from_ocaml(self), of_carray_tensor(bins_data, bins_len), (weight ?
tensor_from_ocaml(weight) : torch::Tensor()), (bool)density); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__histogramdd_from_bin_tensors_out(gc_tensor out, gc_tensor self, gc_tensor *bins_data, int bins_len, gc_tensor weight, int density) { + PROTECT( + torch::Tensor results__ = torch::_histogramdd_from_bin_tensors_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), of_carray_tensor(bins_data, bins_len), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bool)density); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__index_put_impl(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate, int unsafe) { + PROTECT( + torch::Tensor results__ = torch::_index_put_impl(*tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len), *tensor_ptr_from_ocaml(values), (bool)accumulate, (bool)unsafe); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__index_put_impl_(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate, int unsafe) { + PROTECT( + torch::Tensor results__ = torch::_index_put_impl_(*tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len), *tensor_ptr_from_ocaml(values), (bool)accumulate, (bool)unsafe); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__index_put_impl_out(gc_tensor out, gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate, int unsafe) { + PROTECT( + torch::Tensor results__ = torch::_index_put_impl_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len), *tensor_ptr_from_ocaml(values), (bool)accumulate, (bool)unsafe); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__indices(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_indices(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__indices_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_indices_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__indices_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_indices_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__int_mm(gc_tensor self, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::_int_mm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__int_mm_out(gc_tensor out, gc_tensor self, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::_int_mm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__is_all_true(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_is_all_true(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__is_any_true(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_is_any_true(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +int atg__is_zerotensor(gc_tensor self) { + PROTECT( + return torch::_is_zerotensor(*tensor_ptr_from_ocaml(self)); + ) + return 0; +} + +void atg__linalg_check_errors(gc_tensor info, char * api_name, int is_matrix) { + PROTECT( + torch::_linalg_check_errors(*tensor_ptr_from_ocaml(info), std::string(api_name), (bool)is_matrix); 
+ ) +} + +void atg__linalg_det(raw_tensor *out__, gc_tensor A) { + PROTECT( + auto results__ = torch::_linalg_det(*tensor_ptr_from_ocaml(A)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__linalg_det_result(raw_tensor *out__, gc_tensor result, gc_tensor LU, gc_tensor pivots, gc_tensor A) { + PROTECT( + auto results__ = torch::_linalg_det_out(*tensor_ptr_from_ocaml(result), *tensor_ptr_from_ocaml(LU), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(A)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__linalg_eigh(raw_tensor *out__, gc_tensor A, char * UPLO, int compute_v) { + PROTECT( + auto results__ = torch::_linalg_eigh(*tensor_ptr_from_ocaml(A), std::string(UPLO), (bool)compute_v); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__linalg_eigh_eigenvalues(raw_tensor *out__, gc_tensor eigenvalues, gc_tensor eigenvectors, gc_tensor A, char * UPLO, int compute_v) { + PROTECT( + auto results__ = torch::_linalg_eigh_out(*tensor_ptr_from_ocaml(eigenvalues), *tensor_ptr_from_ocaml(eigenvectors), *tensor_ptr_from_ocaml(A), std::string(UPLO), (bool)compute_v); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__linalg_slogdet(raw_tensor *out__, gc_tensor A) { + PROTECT( + auto results__ = torch::_linalg_slogdet(*tensor_ptr_from_ocaml(A)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg__linalg_slogdet_sign(raw_tensor *out__, gc_tensor sign, gc_tensor logabsdet, gc_tensor LU, gc_tensor pivots, gc_tensor A) { + PROTECT( + auto results__ = torch::_linalg_slogdet_out(*tensor_ptr_from_ocaml(sign), *tensor_ptr_from_ocaml(logabsdet), *tensor_ptr_from_ocaml(LU), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(A)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg__linalg_solve_ex(raw_tensor *out__, gc_tensor A, gc_tensor B, int left, int check_errors) { + PROTECT( + auto results__ = torch::_linalg_solve_ex(*tensor_ptr_from_ocaml(A), *tensor_ptr_from_ocaml(B), (bool)left, (bool)check_errors); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg__linalg_solve_ex_result(raw_tensor *out__, gc_tensor result, gc_tensor LU, gc_tensor pivots, gc_tensor info, gc_tensor A, gc_tensor B, int left, int check_errors) { + PROTECT( + auto results__ = torch::_linalg_solve_ex_out(*tensor_ptr_from_ocaml(result), *tensor_ptr_from_ocaml(LU), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(info), *tensor_ptr_from_ocaml(A), *tensor_ptr_from_ocaml(B), (bool)left, (bool)check_errors); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + 
out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg__linalg_svd(raw_tensor *out__, gc_tensor A, int full_matrices, int compute_uv, char * driver) { + PROTECT( + auto results__ = torch::_linalg_svd(*tensor_ptr_from_ocaml(A), (bool)full_matrices, (bool)compute_uv, std::string(driver)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__linalg_svd_u(raw_tensor *out__, gc_tensor U, gc_tensor S, gc_tensor Vh, gc_tensor A, int full_matrices, int compute_uv, char * driver) { + PROTECT( + auto results__ = torch::_linalg_svd_out(*tensor_ptr_from_ocaml(U), *tensor_ptr_from_ocaml(S), *tensor_ptr_from_ocaml(Vh), *tensor_ptr_from_ocaml(A), (bool)full_matrices, (bool)compute_uv, std::string(driver)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor atg__log_softmax(gc_tensor self, int64_t dim, int half_to_float) { + PROTECT( + torch::Tensor results__ = torch::_log_softmax(*tensor_ptr_from_ocaml(self), dim, (bool)half_to_float); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__log_softmax_backward_data(gc_tensor grad_output, gc_tensor output, int64_t dim, int input_dtype) { + PROTECT( + torch::Tensor results__ = torch::_log_softmax_backward_data(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), dim, torch::ScalarType(input_dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__log_softmax_backward_data_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, int64_t dim, int input_dtype) { + PROTECT( + torch::Tensor results__ = torch::_log_softmax_backward_data_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), dim, torch::ScalarType(input_dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__log_softmax_out(gc_tensor out, gc_tensor self, int64_t dim, int half_to_float) { + PROTECT( + torch::Tensor results__ = torch::_log_softmax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, (bool)half_to_float); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__logcumsumexp(gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::_logcumsumexp(*tensor_ptr_from_ocaml(self), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__logcumsumexp_out(gc_tensor out, gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::_logcumsumexp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim); + return tensor_to_ocaml(results__); + ) +} + +void atg__lstm_mps(raw_tensor *out__, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) { + PROTECT( + auto results__ = torch::_lstm_mps(*tensor_ptr_from_ocaml(input), of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + out__[5] = tensor_to_ocaml(std::get<5>(results__)); + ) +} 
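+
+// (Editor's note, not part of the generated output.) Operators that return
+// tuples, such as `_lstm_mps` above, follow a different convention from the
+// single-result stubs: instead of returning a raw_tensor, they take a
+// caller-allocated `raw_tensor *out__` buffer whose length matches the tuple
+// arity (six here) and box each component with tensor_to_ocaml. Lists of
+// tensors go the other way through a (gc_tensor *data, int len) pair rebuilt
+// with of_carray_tensor before the libtorch call, as in the hx_data and
+// params_data arguments above and below.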
+ +void atg__lstm_mps_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor out5, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) { + PROTECT( + auto results__ = torch::_lstm_mps_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(out4), *tensor_ptr_from_ocaml(out5), *tensor_ptr_from_ocaml(input), of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + out__[5] = tensor_to_ocaml(std::get<5>(results__)); + ) +} + +void atg__lu_with_info(raw_tensor *out__, gc_tensor self, int pivot, int check_errors) { + PROTECT( + auto results__ = torch::_lu_with_info(*tensor_ptr_from_ocaml(self), (bool)pivot, (bool)check_errors); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor atg__make_dep_token(int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::_make_dep_token(at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__make_dual(gc_tensor primal, gc_tensor tangent, int64_t level) { + PROTECT( + torch::Tensor results__ = torch::_make_dual(*tensor_ptr_from_ocaml(primal), *tensor_ptr_from_ocaml(tangent), level); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__make_dual_copy(gc_tensor primal, gc_tensor tangent, int64_t level) { + PROTECT( + torch::Tensor results__ = torch::_make_dual_copy(*tensor_ptr_from_ocaml(primal), *tensor_ptr_from_ocaml(tangent), level); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__make_dual_copy_out(gc_tensor out, gc_tensor primal, gc_tensor tangent, int64_t level) { + PROTECT( + torch::Tensor results__ = torch::_make_dual_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(primal), *tensor_ptr_from_ocaml(tangent), level); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__make_per_channel_quantized_tensor(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis) { + PROTECT( + torch::Tensor results__ = torch::_make_per_channel_quantized_tensor(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), axis); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__make_per_channel_quantized_tensor_out(gc_tensor out, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis) { + PROTECT( + torch::Tensor results__ = torch::_make_per_channel_quantized_tensor_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), axis); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__make_per_tensor_quantized_tensor(gc_tensor self, double scale, int64_t zero_point) { + PROTECT( + torch::Tensor results__ = torch::_make_per_tensor_quantized_tensor(*tensor_ptr_from_ocaml(self), scale, 
zero_point); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__make_per_tensor_quantized_tensor_out(gc_tensor out, gc_tensor self, double scale, int64_t zero_point) { + PROTECT( + torch::Tensor results__ = torch::_make_per_tensor_quantized_tensor_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), scale, zero_point); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__masked_scale(gc_tensor self, gc_tensor mask, double scale) { + PROTECT( + torch::Tensor results__ = torch::_masked_scale(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), scale); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__masked_scale_out(gc_tensor out, gc_tensor self, gc_tensor mask, double scale) { + PROTECT( + torch::Tensor results__ = torch::_masked_scale_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), scale); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__masked_softmax(gc_tensor self, gc_tensor mask, int64_t dim_v, int dim_null, int64_t mask_type_v, int mask_type_null) { + PROTECT( + torch::Tensor results__ = torch::_masked_softmax(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), dim_null ? c10::nullopt : c10::optional(dim_v), mask_type_null ? c10::nullopt : c10::optional(mask_type_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__masked_softmax_backward(gc_tensor grad_output, gc_tensor output, gc_tensor mask, int64_t dim_v, int dim_null) { + PROTECT( + torch::Tensor results__ = torch::_masked_softmax_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(mask), dim_null ? c10::nullopt : c10::optional(dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__masked_softmax_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, gc_tensor mask, int64_t dim_v, int dim_null) { + PROTECT( + torch::Tensor results__ = torch::_masked_softmax_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(mask), dim_null ? c10::nullopt : c10::optional(dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__masked_softmax_out(gc_tensor out, gc_tensor self, gc_tensor mask, int64_t dim_v, int dim_null, int64_t mask_type_v, int mask_type_null) { + PROTECT( + torch::Tensor results__ = torch::_masked_softmax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), dim_null ? c10::nullopt : c10::optional(dim_v), mask_type_null ? 
c10::nullopt : c10::optional(mask_type_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__mkldnn_reshape(gc_tensor self, int64_t *shape_data, int shape_len) { + PROTECT( + torch::Tensor results__ = torch::_mkldnn_reshape(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(shape_data, shape_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__mkldnn_reshape_out(gc_tensor out, gc_tensor self, int64_t *shape_data, int shape_len) { + PROTECT( + torch::Tensor results__ = torch::_mkldnn_reshape_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(shape_data, shape_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__mkldnn_transpose(gc_tensor self, int64_t dim0, int64_t dim1) { + PROTECT( + torch::Tensor results__ = torch::_mkldnn_transpose(*tensor_ptr_from_ocaml(self), dim0, dim1); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__mkldnn_transpose_(gc_tensor self, int64_t dim0, int64_t dim1) { + PROTECT( + torch::Tensor results__ = torch::_mkldnn_transpose_(*tensor_ptr_from_ocaml(self), dim0, dim1); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__mkldnn_transpose_out(gc_tensor out, gc_tensor self, int64_t dim0, int64_t dim1) { + PROTECT( + torch::Tensor results__ = torch::_mkldnn_transpose_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim0, dim1); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__mps_convolution(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::_mps_convolution(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__mps_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::_mps_convolution_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__mps_convolution_transpose(gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::_mps_convolution_transpose(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__mps_convolution_transpose_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::_mps_convolution_transpose_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +void atg__native_batch_norm_legit(raw_tensor *out__, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps) { + PROTECT( + auto results__ = torch::_native_batch_norm_legit(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(running_mean), *tensor_ptr_from_ocaml(running_var), (bool)training, momentum, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__native_batch_norm_legit_functional(raw_tensor *out__, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps) { + PROTECT( + auto results__ = torch::_native_batch_norm_legit_functional(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(running_mean), *tensor_ptr_from_ocaml(running_var), (bool)training, momentum, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + ) +} + +void atg__native_batch_norm_legit_no_stats(raw_tensor *out__, gc_tensor input, gc_tensor weight, gc_tensor bias, int training, double momentum, double eps) { + PROTECT( + auto results__ = torch::_native_batch_norm_legit(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), (bool)training, momentum, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__native_batch_norm_legit_no_stats_out(raw_tensor *out__, gc_tensor out, gc_tensor save_mean, gc_tensor save_invstd, gc_tensor input, gc_tensor weight, gc_tensor bias, int training, double momentum, double eps) { + PROTECT( + auto results__ = torch::_native_batch_norm_legit_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(save_mean), *tensor_ptr_from_ocaml(save_invstd), *tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), (bool)training, momentum, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__native_batch_norm_legit_no_training(raw_tensor *out__, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps) { + PROTECT( + auto results__ = torch::_native_batch_norm_legit_no_training(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(running_mean), *tensor_ptr_from_ocaml(running_var), momentum, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__native_batch_norm_legit_no_training_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps) { + PROTECT( + auto results__ = torch::_native_batch_norm_legit_no_training_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(running_mean), *tensor_ptr_from_ocaml(running_var), momentum, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__native_batch_norm_legit_out(raw_tensor *out__, gc_tensor out, gc_tensor save_mean, gc_tensor save_invstd, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps) { + PROTECT( + auto results__ = torch::_native_batch_norm_legit_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(save_mean), *tensor_ptr_from_ocaml(save_invstd), *tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(running_mean), *tensor_ptr_from_ocaml(running_var), (bool)training, momentum, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__native_multi_head_attention(raw_tensor *out__, gc_tensor query, gc_tensor key, gc_tensor value, int64_t embed_dim, int64_t num_head, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, gc_tensor mask, int need_weights, int average_attn_weights, int64_t mask_type_v, int mask_type_null) { + PROTECT( + auto results__ = torch::_native_multi_head_attention(*tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), embed_dim, num_head, *tensor_ptr_from_ocaml(qkv_weight), *tensor_ptr_from_ocaml(qkv_bias), *tensor_ptr_from_ocaml(proj_weight), *tensor_ptr_from_ocaml(proj_bias), (mask ? tensor_from_ocaml(mask) : torch::Tensor()), (bool)need_weights, (bool)average_attn_weights, mask_type_null ? c10::nullopt : c10::optional(mask_type_v)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__native_multi_head_attention_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor query, gc_tensor key, gc_tensor value, int64_t embed_dim, int64_t num_head, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, gc_tensor mask, int need_weights, int average_attn_weights, int64_t mask_type_v, int mask_type_null) { + PROTECT( + auto results__ = torch::_native_multi_head_attention_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), embed_dim, num_head, *tensor_ptr_from_ocaml(qkv_weight), *tensor_ptr_from_ocaml(qkv_bias), *tensor_ptr_from_ocaml(proj_weight), *tensor_ptr_from_ocaml(proj_bias), (mask ? tensor_from_ocaml(mask) : torch::Tensor()), (bool)need_weights, (bool)average_attn_weights, mask_type_null ? 
c10::nullopt : c10::optional(mask_type_v)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__neg_view(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_neg_view(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__neg_view_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_neg_view_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__neg_view_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_neg_view_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nested_from_padded(gc_tensor padded, gc_tensor cpu_nested_shape_example, int fuse_transform_0213) { + PROTECT( + torch::Tensor results__ = torch::_nested_from_padded(*tensor_ptr_from_ocaml(padded), *tensor_ptr_from_ocaml(cpu_nested_shape_example), (bool)fuse_transform_0213); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nested_from_padded_and_nested_example(gc_tensor padded, gc_tensor nt_example) { + PROTECT( + torch::Tensor results__ = torch::_nested_from_padded_and_nested_example(*tensor_ptr_from_ocaml(padded), *tensor_ptr_from_ocaml(nt_example)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nested_from_padded_and_nested_example_out(gc_tensor out, gc_tensor padded, gc_tensor nt_example) { + PROTECT( + torch::Tensor results__ = torch::_nested_from_padded_and_nested_example_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(padded), *tensor_ptr_from_ocaml(nt_example)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nested_from_padded_out(gc_tensor out, gc_tensor padded, gc_tensor cpu_nested_shape_example, int fuse_transform_0213) { + PROTECT( + torch::Tensor results__ = torch::_nested_from_padded_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(padded), *tensor_ptr_from_ocaml(cpu_nested_shape_example), (bool)fuse_transform_0213); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nested_select_backward(gc_tensor grad_output, gc_tensor self, int64_t dim, int64_t index) { + PROTECT( + torch::Tensor results__ = torch::_nested_select_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), dim, index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nested_sum_backward(gc_tensor grad, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::_nested_sum_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nested_view_from_buffer(gc_tensor self, gc_tensor nested_size, gc_tensor nested_strides, gc_tensor offsets) { + PROTECT( + torch::Tensor results__ = torch::_nested_view_from_buffer(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(nested_size), *tensor_ptr_from_ocaml(nested_strides), *tensor_ptr_from_ocaml(offsets)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nested_view_from_buffer_copy(gc_tensor self, gc_tensor nested_size, gc_tensor nested_strides, gc_tensor offsets) { + PROTECT( + torch::Tensor results__ = torch::_nested_view_from_buffer_copy(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(nested_size), *tensor_ptr_from_ocaml(nested_strides), *tensor_ptr_from_ocaml(offsets)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nested_view_from_buffer_copy_out(gc_tensor out, gc_tensor self, gc_tensor nested_size, gc_tensor nested_strides, gc_tensor offsets) { + PROTECT( + torch::Tensor results__ = torch::_nested_view_from_buffer_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(nested_size), *tensor_ptr_from_ocaml(nested_strides), *tensor_ptr_from_ocaml(offsets)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__new_zeros_with_same_feature_meta(gc_tensor self, gc_tensor other, int64_t self_num_batch_dims) { + PROTECT( + torch::Tensor results__ = torch::_new_zeros_with_same_feature_meta(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), self_num_batch_dims); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__new_zeros_with_same_feature_meta_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t self_num_batch_dims) { + PROTECT( + torch::Tensor results__ = torch::_new_zeros_with_same_feature_meta_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), self_num_batch_dims); + return tensor_to_ocaml(results__); + ) +} + +int atg__nnpack_available() { + PROTECT( + return torch::_nnpack_available(); + ) + return 0; +} + +raw_tensor atg__nnpack_spatial_convolution(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::_nnpack_spatial_convolution(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__nnpack_spatial_convolution_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::_nnpack_spatial_convolution_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +int64_t atg__nnz(gc_tensor self) { + PROTECT( + return tensor_ptr_from_ocaml(self)->_nnz(); + ) + return 0; +} + +void atg__pack_padded_sequence(raw_tensor *out__, gc_tensor input, gc_tensor lengths, int batch_first) { + PROTECT( + auto results__ = torch::_pack_padded_sequence(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(lengths), (bool)batch_first); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__pack_padded_sequence_backward(gc_tensor grad, int64_t *input_size_data, int input_size_len, gc_tensor batch_sizes, int batch_first) { + PROTECT( + torch::Tensor results__ = torch::_pack_padded_sequence_backward(*tensor_ptr_from_ocaml(grad), torch::IntArrayRef(input_size_data, input_size_len), *tensor_ptr_from_ocaml(batch_sizes), (bool)batch_first); + return tensor_to_ocaml(results__); + ) +} + +void atg__pack_padded_sequence_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor input, gc_tensor lengths, int batch_first) { + PROTECT( + auto results__ = torch::_pack_padded_sequence_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(lengths), (bool)batch_first); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__pad_circular(gc_tensor self, int64_t *pad_data, int pad_len) { + PROTECT( + torch::Tensor results__ = torch::_pad_circular(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(pad_data, pad_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__pad_enum(gc_tensor self, int64_t *pad_data, int pad_len, int64_t mode, double value_v, int value_null) { + PROTECT( + torch::Tensor results__ = torch::_pad_enum(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(pad_data, pad_len), mode, value_null ? 
c10::nullopt : c10::optional(value_v)); + return tensor_to_ocaml(results__); + ) +} + +void atg__pad_packed_sequence(raw_tensor *out__, gc_tensor data, gc_tensor batch_sizes, int batch_first, scalar padding_value, int64_t total_length) { + PROTECT( + auto results__ = torch::_pad_packed_sequence(*tensor_ptr_from_ocaml(data), *tensor_ptr_from_ocaml(batch_sizes), (bool)batch_first, *padding_value, total_length); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__pdist_backward(gc_tensor grad, gc_tensor self, double p, gc_tensor pdist) { + PROTECT( + torch::Tensor results__ = torch::_pdist_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(self), p, *tensor_ptr_from_ocaml(pdist)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__pdist_backward_out(gc_tensor out, gc_tensor grad, gc_tensor self, double p, gc_tensor pdist) { + PROTECT( + torch::Tensor results__ = torch::_pdist_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(self), p, *tensor_ptr_from_ocaml(pdist)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__pin_memory(gc_tensor self, int device) { + PROTECT( + torch::Tensor results__ = torch::_pin_memory(*tensor_ptr_from_ocaml(self), device_of_int(device)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__pin_memory_out(gc_tensor out, gc_tensor self, int device) { + PROTECT( + torch::Tensor results__ = torch::_pin_memory_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), device_of_int(device)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__prelu_kernel(gc_tensor self, gc_tensor weight) { + PROTECT( + torch::Tensor results__ = torch::_prelu_kernel(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight)); + return tensor_to_ocaml(results__); + ) +} + +void atg__prelu_kernel_backward(raw_tensor *out__, gc_tensor grad_output, gc_tensor self, gc_tensor weight) { + PROTECT( + auto results__ = torch::_prelu_kernel_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__propagate_xla_data(gc_tensor input, gc_tensor output) { + PROTECT( + torch::_propagate_xla_data(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(output)); + ) +} + +raw_tensor atg__remove_batch_dim(gc_tensor self, int64_t level, int64_t batch_size, int64_t out_dim) { + PROTECT( + torch::Tensor results__ = torch::_remove_batch_dim(*tensor_ptr_from_ocaml(self), level, batch_size, out_dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__reshape_alias(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::_reshape_alias(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__reshape_alias_copy(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::_reshape_alias_copy(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__reshape_alias_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t 
*stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::_reshape_alias_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__reshape_copy(gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::_reshape_copy(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__reshape_from_tensor(gc_tensor self, gc_tensor shape) { + PROTECT( + torch::Tensor results__ = torch::_reshape_from_tensor(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(shape)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__resize_output(gc_tensor self, int64_t *size_data, int size_len, int device) { + PROTECT( + torch::Tensor results__ = torch::_resize_output(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), device_of_int(device)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__resize_output_(gc_tensor self, int64_t *size_data, int size_len, int device) { + PROTECT( + torch::Tensor results__ = torch::_resize_output_(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), device_of_int(device)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__resize_output_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int device) { + PROTECT( + torch::Tensor results__ = torch::_resize_output_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), device_of_int(device)); + return tensor_to_ocaml(results__); + ) +} + +void atg__rowwise_prune(raw_tensor *out__, gc_tensor weight, gc_tensor mask, int compressed_indices_dtype) { + PROTECT( + auto results__ = torch::_rowwise_prune(*tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(mask), torch::ScalarType(compressed_indices_dtype)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__sample_dirichlet(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_sample_dirichlet(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sample_dirichlet_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_sample_dirichlet_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__saturate_weight_to_fp16(gc_tensor weight) { + PROTECT( + torch::Tensor results__ = torch::_saturate_weight_to_fp16(*tensor_ptr_from_ocaml(weight)); + return tensor_to_ocaml(results__); + ) +} + +void atg__scaled_dot_product_attention_math(raw_tensor *out__, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor attn_mask, double dropout_p, int is_causal, gc_tensor dropout_mask, double scale_v, int scale_null) { + PROTECT( + auto results__ = torch::_scaled_dot_product_attention_math(*tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), (attn_mask ? tensor_from_ocaml(attn_mask) : torch::Tensor()), dropout_p, (bool)is_causal, (dropout_mask ? tensor_from_ocaml(dropout_mask) : torch::Tensor()), scale_null ? 
c10::nullopt : c10::optional(scale_v)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__scaled_dot_product_efficient_attention(raw_tensor *out__, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor attn_bias, int compute_log_sumexp, double dropout_p, int is_causal, double scale_v, int scale_null) { + PROTECT( + auto results__ = torch::_scaled_dot_product_efficient_attention(*tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), (attn_bias ? tensor_from_ocaml(attn_bias) : torch::Tensor()), (bool)compute_log_sumexp, dropout_p, (bool)is_causal, scale_null ? c10::nullopt : c10::optional(scale_v)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg__scaled_dot_product_flash_attention_backward(raw_tensor *out__, gc_tensor grad_out, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor out, gc_tensor logsumexp, gc_tensor cum_seq_q, gc_tensor cum_seq_k, int64_t max_q, int64_t max_k, double dropout_p, int is_causal, gc_tensor philox_seed, gc_tensor philox_offset, double scale_v, int scale_null) { + PROTECT( + auto results__ = torch::_scaled_dot_product_flash_attention_backward(*tensor_ptr_from_ocaml(grad_out), *tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), *tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(logsumexp), *tensor_ptr_from_ocaml(cum_seq_q), *tensor_ptr_from_ocaml(cum_seq_k), max_q, max_k, dropout_p, (bool)is_causal, *tensor_ptr_from_ocaml(philox_seed), *tensor_ptr_from_ocaml(philox_offset), scale_null ? c10::nullopt : c10::optional(scale_v)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__scaled_mm(raw_tensor *out__, gc_tensor self, gc_tensor mat2, gc_tensor bias, int out_dtype, gc_tensor scale_a, gc_tensor scale_b, gc_tensor scale_result) { + PROTECT( + auto results__ = torch::_scaled_mm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat2), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::ScalarType(out_dtype), (scale_a ? tensor_from_ocaml(scale_a) : torch::Tensor()), (scale_b ? tensor_from_ocaml(scale_b) : torch::Tensor()), (scale_result ? tensor_from_ocaml(scale_result) : torch::Tensor())); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__scaled_mm_out(raw_tensor *out__, gc_tensor out, gc_tensor out_amax, gc_tensor self, gc_tensor mat2, gc_tensor bias, int out_dtype, gc_tensor scale_a, gc_tensor scale_b, gc_tensor scale_result) { + PROTECT( + auto results__ = torch::_scaled_mm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(out_amax), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat2), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::ScalarType(out_dtype), (scale_a ? tensor_from_ocaml(scale_a) : torch::Tensor()), (scale_b ? tensor_from_ocaml(scale_b) : torch::Tensor()), (scale_result ? 
tensor_from_ocaml(scale_result) : torch::Tensor())); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__scatter_reduce(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce, int include_self) { + PROTECT( + torch::Tensor results__ = torch::scatter_reduce(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src), std::string(reduce), (bool)include_self); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__scatter_reduce_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce, int include_self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->scatter_reduce_(dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src), std::string(reduce), (bool)include_self); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__scatter_reduce_two_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce, int include_self) { + PROTECT( + torch::Tensor results__ = torch::scatter_reduce_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src), std::string(reduce), (bool)include_self); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__segment_reduce_backward(gc_tensor grad, gc_tensor output, gc_tensor data, char * reduce, gc_tensor lengths, gc_tensor offsets, int64_t axis, scalar initial) { + PROTECT( + torch::Tensor results__ = torch::_segment_reduce_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(data), std::string(reduce), (lengths ? tensor_from_ocaml(lengths) : torch::Tensor()), (offsets ? tensor_from_ocaml(offsets) : torch::Tensor()), axis, *initial); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__segment_reduce_backward_out(gc_tensor out, gc_tensor grad, gc_tensor output, gc_tensor data, char * reduce, gc_tensor lengths, gc_tensor offsets, int64_t axis, scalar initial) { + PROTECT( + torch::Tensor results__ = torch::_segment_reduce_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(data), std::string(reduce), (lengths ? tensor_from_ocaml(lengths) : torch::Tensor()), (offsets ? 
tensor_from_ocaml(offsets) : torch::Tensor()), axis, *initial); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__shape_as_tensor(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_shape_as_tensor(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +void atg__slow_conv2d_backward(raw_tensor *out__, gc_tensor grad_input, gc_tensor grad_weight, gc_tensor grad_bias, gc_tensor grad_output, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) { + PROTECT( + auto results__ = torch::_slow_conv2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_weight), *tensor_ptr_from_ocaml(grad_bias), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__sobol_engine_draw(raw_tensor *out__, gc_tensor quasi, int64_t n, gc_tensor sobolstate, int64_t dimension, int64_t num_generated, int dtype) { + PROTECT( + auto results__ = torch::_sobol_engine_draw(*tensor_ptr_from_ocaml(quasi), n, *tensor_ptr_from_ocaml(sobolstate), dimension, num_generated, torch::ScalarType(dtype)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__sobol_engine_ff_(gc_tensor self, int64_t n, gc_tensor sobolstate, int64_t dimension, int64_t num_generated) { + PROTECT( + torch::Tensor results__ = torch::_sobol_engine_ff_(*tensor_ptr_from_ocaml(self), n, *tensor_ptr_from_ocaml(sobolstate), dimension, num_generated); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sobol_engine_initialize_state_(gc_tensor self, int64_t dimension) { + PROTECT( + torch::Tensor results__ = torch::_sobol_engine_initialize_state_(*tensor_ptr_from_ocaml(self), dimension); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sobol_engine_scramble_(gc_tensor self, gc_tensor ltm, int64_t dimension) { + PROTECT( + torch::Tensor results__ = torch::_sobol_engine_scramble_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(ltm), dimension); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__softmax(gc_tensor self, int64_t dim, int half_to_float) { + PROTECT( + torch::Tensor results__ = torch::_softmax(*tensor_ptr_from_ocaml(self), dim, (bool)half_to_float); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__softmax_backward_data(gc_tensor grad_output, gc_tensor output, int64_t dim, int input_dtype) { + PROTECT( + torch::Tensor results__ = torch::_softmax_backward_data(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), dim, torch::ScalarType(input_dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__softmax_backward_data_out(gc_tensor grad_input, gc_tensor grad_output, gc_tensor output, int64_t dim, int input_dtype) { + PROTECT( + torch::Tensor results__ = torch::_softmax_backward_data_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), dim, torch::ScalarType(input_dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__softmax_out(gc_tensor out, gc_tensor self, int64_t dim, int 
half_to_float) { + PROTECT( + torch::Tensor results__ = torch::_softmax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, (bool)half_to_float); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_addmm(gc_tensor self, gc_tensor mat1, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::_sparse_addmm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_addmm_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::_sparse_addmm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_broadcast_to(gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::_sparse_broadcast_to(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_broadcast_to_copy(gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::_sparse_broadcast_to_copy(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_broadcast_to_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::_sparse_broadcast_to_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_bsc_tensor_unsafe(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::_sparse_bsc_tensor_unsafe(*tensor_ptr_from_ocaml(ccol_indices), *tensor_ptr_from_ocaml(row_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_bsr_tensor_unsafe(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::_sparse_bsr_tensor_unsafe(*tensor_ptr_from_ocaml(crow_indices), *tensor_ptr_from_ocaml(col_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_compressed_tensor_unsafe(gc_tensor compressed_indices, gc_tensor plain_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::_sparse_compressed_tensor_unsafe(*tensor_ptr_from_ocaml(compressed_indices), *tensor_ptr_from_ocaml(plain_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_coo_tensor_unsafe(gc_tensor indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device, int is_coalesced) { + PROTECT( + torch::Tensor results__ = 
torch::_sparse_coo_tensor_unsafe(*tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)is_coalesced); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_coo_tensor_with_dims(int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::_sparse_coo_tensor_with_dims(sparse_dim, dense_dim, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_coo_tensor_with_dims_and_tensors(int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, gc_tensor indices, gc_tensor values, int options_kind, int options_device, int is_coalesced) { + PROTECT( + torch::Tensor results__ = torch::_sparse_coo_tensor_with_dims_and_tensors(sparse_dim, dense_dim, torch::IntArrayRef(size_data, size_len), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(values), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)is_coalesced); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_coo_tensor_with_dims_and_tensors_out(gc_tensor out, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, gc_tensor indices, gc_tensor values, int is_coalesced) { + PROTECT( + torch::Tensor results__ = torch::_sparse_coo_tensor_with_dims_and_tensors_out(*tensor_ptr_from_ocaml(out), sparse_dim, dense_dim, torch::IntArrayRef(size_data, size_len), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(values), (bool)is_coalesced); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_coo_tensor_with_dims_out(gc_tensor out, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::_sparse_coo_tensor_with_dims_out(*tensor_ptr_from_ocaml(out), sparse_dim, dense_dim, torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_csc_tensor_unsafe(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::_sparse_csc_tensor_unsafe(*tensor_ptr_from_ocaml(ccol_indices), *tensor_ptr_from_ocaml(row_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_csr_prod(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::_sparse_csr_prod(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_csr_prod_dim_dtype_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::_sparse_csr_prod_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_csr_sum(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor 
results__ = torch::_sparse_csr_sum(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_csr_sum_dim_dtype_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::_sparse_csr_sum_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_csr_tensor_unsafe(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::_sparse_csr_tensor_unsafe(*tensor_ptr_from_ocaml(crow_indices), *tensor_ptr_from_ocaml(col_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_log_softmax(gc_tensor self, int64_t dim, int half_to_float) { + PROTECT( + torch::Tensor results__ = torch::_sparse_log_softmax(*tensor_ptr_from_ocaml(self), dim, (bool)half_to_float); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_log_softmax_backward_data(gc_tensor grad_output, gc_tensor output, int64_t dim, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_sparse_log_softmax_backward_data(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), dim, *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_log_softmax_backward_data_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, int64_t dim, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_sparse_log_softmax_backward_data_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), dim, *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_log_softmax_int(gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::_sparse_log_softmax(*tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_log_softmax_out(gc_tensor out, gc_tensor self, int64_t dim, int half_to_float) { + PROTECT( + torch::Tensor results__ = torch::_sparse_log_softmax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, (bool)half_to_float); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_mask_projection(gc_tensor self, gc_tensor mask, int accumulate_matches) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_sparse_mask_projection(*tensor_ptr_from_ocaml(mask), (bool)accumulate_matches); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_mask_projection_out(gc_tensor out, gc_tensor self, gc_tensor mask, int accumulate_matches) { + PROTECT( + torch::Tensor results__ = torch::_sparse_mask_projection_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), (bool)accumulate_matches); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_mm(gc_tensor sparse, gc_tensor dense) { + PROTECT( + torch::Tensor results__ = torch::_sparse_mm(*tensor_ptr_from_ocaml(sparse), *tensor_ptr_from_ocaml(dense)); + return tensor_to_ocaml(results__); + ) 
+} + +raw_tensor atg__sparse_mm_reduce(gc_tensor sparse, gc_tensor dense, char * reduce) { + PROTECT( + torch::Tensor results__ = torch::_sparse_mm(*tensor_ptr_from_ocaml(sparse), *tensor_ptr_from_ocaml(dense), std::string(reduce)); + return tensor_to_ocaml(results__); + ) +} + +void atg__sparse_mm_reduce_impl(raw_tensor *out__, gc_tensor self, gc_tensor other, char * reduce) { + PROTECT( + auto results__ = torch::_sparse_mm_reduce_impl(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), std::string(reduce)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__sparse_semi_structured_linear(gc_tensor input, gc_tensor weight, gc_tensor meta, gc_tensor bias, char * activation) { + PROTECT( + torch::Tensor results__ = torch::_sparse_semi_structured_linear(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(meta), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), std::string(activation)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_softmax(gc_tensor self, int64_t dim, int half_to_float) { + PROTECT( + torch::Tensor results__ = torch::_sparse_softmax(*tensor_ptr_from_ocaml(self), dim, (bool)half_to_float); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_softmax_backward_data(gc_tensor grad_output, gc_tensor output, int64_t dim, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_sparse_softmax_backward_data(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), dim, *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_softmax_backward_data_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, int64_t dim, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_sparse_softmax_backward_data_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), dim, *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_softmax_int(gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::_sparse_softmax(*tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_softmax_out(gc_tensor out, gc_tensor self, int64_t dim, int half_to_float) { + PROTECT( + torch::Tensor results__ = torch::_sparse_softmax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, (bool)half_to_float); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_sparse_matmul(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::_sparse_sparse_matmul(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_sparse_matmul_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::_sparse_sparse_matmul_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_sum(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_sparse_sum(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_sum_backward(gc_tensor grad, gc_tensor self, int64_t *dim_data, int dim_len) { + PROTECT( + torch::Tensor results__ = torch::_sparse_sum_backward(*tensor_ptr_from_ocaml(grad), 
*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_sum_backward_out(gc_tensor out, gc_tensor grad, gc_tensor self, int64_t *dim_data, int dim_len) { + PROTECT( + torch::Tensor results__ = torch::_sparse_sum_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_sum_dim(gc_tensor self, int64_t *dim_data, int dim_len) { + PROTECT( + torch::Tensor results__ = torch::_sparse_sum(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_sum_dim_dtype(gc_tensor self, int64_t *dim_data, int dim_len, int dtype) { + PROTECT( + torch::Tensor results__ = torch::_sparse_sum(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_sum_dim_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len) { + PROTECT( + torch::Tensor results__ = torch::_sparse_sum_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__sparse_sum_dtype(gc_tensor self, int dtype) { + PROTECT( + torch::Tensor results__ = torch::_sparse_sum(*tensor_ptr_from_ocaml(self), torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__spdiags(gc_tensor diagonals, gc_tensor offsets, int64_t *shape_data, int shape_len) { + PROTECT( + torch::Tensor results__ = torch::_spdiags(*tensor_ptr_from_ocaml(diagonals), *tensor_ptr_from_ocaml(offsets), torch::IntArrayRef(shape_data, shape_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__spdiags_out(gc_tensor out, gc_tensor diagonals, gc_tensor offsets, int64_t *shape_data, int shape_len) { + PROTECT( + torch::Tensor results__ = torch::_spdiags_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(diagonals), *tensor_ptr_from_ocaml(offsets), torch::IntArrayRef(shape_data, shape_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__stack(gc_tensor *tensors_data, int tensors_len, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::_stack(of_carray_tensor(tensors_data, tensors_len), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__stack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::_stack_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__standard_gamma(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_standard_gamma(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__standard_gamma_grad(gc_tensor self, gc_tensor output) { + PROTECT( + torch::Tensor results__ = torch::_standard_gamma_grad(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(output)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__standard_gamma_grad_out(gc_tensor out, gc_tensor self, gc_tensor output) { + PROTECT( + torch::Tensor results__ = torch::_standard_gamma_grad_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(output)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__standard_gamma_out(gc_tensor 
out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_standard_gamma_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_ambiguous_defaults(gc_tensor dummy, int64_t a, int64_t b) { + PROTECT( + torch::Tensor results__ = torch::_test_ambiguous_defaults(*tensor_ptr_from_ocaml(dummy), a, b); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_ambiguous_defaults_b(gc_tensor dummy, int64_t a, char * b) { + PROTECT( + torch::Tensor results__ = torch::_test_ambiguous_defaults(*tensor_ptr_from_ocaml(dummy), a, std::string(b)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_autograd_multiple_dispatch(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_test_autograd_multiple_dispatch(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_autograd_multiple_dispatch_fullcoverage_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_test_autograd_multiple_dispatch_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_autograd_multiple_dispatch_ntonly(gc_tensor self, int b) { + PROTECT( + torch::Tensor results__ = torch::_test_autograd_multiple_dispatch(*tensor_ptr_from_ocaml(self), (bool)b); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_autograd_multiple_dispatch_view(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_test_autograd_multiple_dispatch_view(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_autograd_multiple_dispatch_view_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_test_autograd_multiple_dispatch_view_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_autograd_multiple_dispatch_view_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_test_autograd_multiple_dispatch_view_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_check_tensor(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_test_check_tensor(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_functorch_fallback(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::_test_functorch_fallback(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_functorch_fallback_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::_test_functorch_fallback_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_optional_filled_intlist(gc_tensor values, int64_t *addends_data, int addends_len) { + PROTECT( + torch::Tensor results__ = torch::_test_optional_filled_intlist(*tensor_ptr_from_ocaml(values), addends_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(addends_data, addends_len))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_optional_filled_intlist_out(gc_tensor out, gc_tensor values, int64_t *addends_data, int addends_len) { + PROTECT( + torch::Tensor results__ = torch::_test_optional_filled_intlist_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(values), addends_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(addends_data, addends_len))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_optional_floatlist(gc_tensor values, double *addends_data, int addends_len) { + PROTECT( + torch::Tensor results__ = torch::_test_optional_floatlist(*tensor_ptr_from_ocaml(values), at::ArrayRef(addends_data, addends_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_optional_floatlist_out(gc_tensor out, gc_tensor values, double *addends_data, int addends_len) { + PROTECT( + torch::Tensor results__ = torch::_test_optional_floatlist_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(values), at::ArrayRef(addends_data, addends_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_optional_intlist(gc_tensor values, int64_t *addends_data, int addends_len) { + PROTECT( + torch::Tensor results__ = torch::_test_optional_intlist(*tensor_ptr_from_ocaml(values), addends_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(addends_data, addends_len))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_optional_intlist_out(gc_tensor out, gc_tensor values, int64_t *addends_data, int addends_len) { + PROTECT( + torch::Tensor results__ = torch::_test_optional_intlist_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(values), addends_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(addends_data, addends_len))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_serialization_subcmul(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::_test_serialization_subcmul(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_string_default(gc_tensor dummy, char * a, char * b) { + PROTECT( + torch::Tensor results__ = torch::_test_string_default(*tensor_ptr_from_ocaml(dummy), std::string(a), std::string(b)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_warn_in_autograd(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_test_warn_in_autograd(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__test_warn_in_autograd_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_test_warn_in_autograd_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +void atg__thnn_differentiable_gru_cell_backward(raw_tensor *out__, gc_tensor grad_hy, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor hx, gc_tensor input_bias, gc_tensor hidden_bias) { + PROTECT( + auto results__ = torch::_thnn_differentiable_gru_cell_backward(*tensor_ptr_from_ocaml(grad_hy), *tensor_ptr_from_ocaml(input_gates), *tensor_ptr_from_ocaml(hidden_gates), *tensor_ptr_from_ocaml(hx), (input_bias ? tensor_from_ocaml(input_bias) : torch::Tensor()), (hidden_bias ? 
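+      // Optional tensors: a null gc_tensor handle is mapped to a
+      // default-constructed torch::Tensor(), which ATen treats as absent.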
tensor_from_ocaml(hidden_bias) : torch::Tensor())); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + ) +} + +void atg__thnn_differentiable_lstm_cell_backward(raw_tensor *out__, gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor input_bias, gc_tensor hidden_bias, gc_tensor cx, gc_tensor cy) { + PROTECT( + auto results__ = torch::_thnn_differentiable_lstm_cell_backward((grad_hy ? tensor_from_ocaml(grad_hy) : torch::Tensor()), (grad_cy ? tensor_from_ocaml(grad_cy) : torch::Tensor()), *tensor_ptr_from_ocaml(input_gates), *tensor_ptr_from_ocaml(hidden_gates), (input_bias ? tensor_from_ocaml(input_bias) : torch::Tensor()), (hidden_bias ? tensor_from_ocaml(hidden_bias) : torch::Tensor()), *tensor_ptr_from_ocaml(cx), *tensor_ptr_from_ocaml(cy)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + ) +} + +void atg__thnn_fused_gru_cell(raw_tensor *out__, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor hx, gc_tensor input_bias, gc_tensor hidden_bias) { + PROTECT( + auto results__ = torch::_thnn_fused_gru_cell(*tensor_ptr_from_ocaml(input_gates), *tensor_ptr_from_ocaml(hidden_gates), *tensor_ptr_from_ocaml(hx), (input_bias ? tensor_from_ocaml(input_bias) : torch::Tensor()), (hidden_bias ? tensor_from_ocaml(hidden_bias) : torch::Tensor())); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__thnn_fused_gru_cell_backward(raw_tensor *out__, gc_tensor grad_hy, gc_tensor workspace, int has_bias) { + PROTECT( + auto results__ = torch::_thnn_fused_gru_cell_backward(*tensor_ptr_from_ocaml(grad_hy), *tensor_ptr_from_ocaml(workspace), (bool)has_bias); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + ) +} + +void atg__thnn_fused_gru_cell_backward_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor grad_hy, gc_tensor workspace, int has_bias) { + PROTECT( + auto results__ = torch::_thnn_fused_gru_cell_backward_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(out4), *tensor_ptr_from_ocaml(grad_hy), *tensor_ptr_from_ocaml(workspace), (bool)has_bias); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + ) +} + +void atg__thnn_fused_gru_cell_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor hx, gc_tensor input_bias, gc_tensor hidden_bias) { + PROTECT( + auto results__ = torch::_thnn_fused_gru_cell_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(input_gates), 
*tensor_ptr_from_ocaml(hidden_gates), *tensor_ptr_from_ocaml(hx), (input_bias ? tensor_from_ocaml(input_bias) : torch::Tensor()), (hidden_bias ? tensor_from_ocaml(hidden_bias) : torch::Tensor())); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__thnn_fused_lstm_cell(raw_tensor *out__, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor cx, gc_tensor input_bias, gc_tensor hidden_bias) { + PROTECT( + auto results__ = torch::_thnn_fused_lstm_cell(*tensor_ptr_from_ocaml(input_gates), *tensor_ptr_from_ocaml(hidden_gates), *tensor_ptr_from_ocaml(cx), (input_bias ? tensor_from_ocaml(input_bias) : torch::Tensor()), (hidden_bias ? tensor_from_ocaml(hidden_bias) : torch::Tensor())); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__thnn_fused_lstm_cell_backward(raw_tensor *out__, gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor cx, gc_tensor cy, gc_tensor workspace, int has_bias) { + PROTECT( + auto results__ = torch::_thnn_fused_lstm_cell_backward((grad_hy ? tensor_from_ocaml(grad_hy) : torch::Tensor()), (grad_cy ? tensor_from_ocaml(grad_cy) : torch::Tensor()), *tensor_ptr_from_ocaml(cx), *tensor_ptr_from_ocaml(cy), *tensor_ptr_from_ocaml(workspace), (bool)has_bias); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + ) +} + +void atg__thnn_fused_lstm_cell_backward_impl(raw_tensor *out__, gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor cx, gc_tensor cy, gc_tensor workspace, int has_bias) { + PROTECT( + auto results__ = torch::_thnn_fused_lstm_cell_backward_impl((grad_hy ? tensor_from_ocaml(grad_hy) : torch::Tensor()), (grad_cy ? tensor_from_ocaml(grad_cy) : torch::Tensor()), *tensor_ptr_from_ocaml(cx), *tensor_ptr_from_ocaml(cy), *tensor_ptr_from_ocaml(workspace), (bool)has_bias); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__thnn_fused_lstm_cell_backward_impl_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor cx, gc_tensor cy, gc_tensor workspace, int has_bias) { + PROTECT( + auto results__ = torch::_thnn_fused_lstm_cell_backward_impl_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), (grad_hy ? tensor_from_ocaml(grad_hy) : torch::Tensor()), (grad_cy ? 
tensor_from_ocaml(grad_cy) : torch::Tensor()), *tensor_ptr_from_ocaml(cx), *tensor_ptr_from_ocaml(cy), *tensor_ptr_from_ocaml(workspace), (bool)has_bias); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__thnn_fused_lstm_cell_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor cx, gc_tensor input_bias, gc_tensor hidden_bias) { + PROTECT( + auto results__ = torch::_thnn_fused_lstm_cell_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(input_gates), *tensor_ptr_from_ocaml(hidden_gates), *tensor_ptr_from_ocaml(cx), (input_bias ? tensor_from_ocaml(input_bias) : torch::Tensor()), (hidden_bias ? tensor_from_ocaml(hidden_bias) : torch::Tensor())); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor atg__to_copy(gc_tensor self, int options_kind, int options_device, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_to_copy(*tensor_ptr_from_ocaml(self), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_copy_out(gc_tensor out, gc_tensor self, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::_to_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg__to_cpu(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + auto results__ = torch::_to_cpu(of_carray_tensor(tensors_data, tensors_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg__to_dense(gc_tensor self, int dtype, int masked_grad) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_to_dense(torch::ScalarType(dtype), (bool)masked_grad); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_dense_out(gc_tensor out, gc_tensor self, int dtype, int masked_grad) { + PROTECT( + torch::Tensor results__ = torch::_to_dense_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::ScalarType(dtype), (bool)masked_grad); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_sparse_bsc(gc_tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_to_sparse_bsc(torch::IntArrayRef(blocksize_data, blocksize_len), dense_dim_null ? c10::nullopt : c10::optional(dense_dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_sparse_bsc_out(gc_tensor out, gc_tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null) { + PROTECT( + torch::Tensor results__ = torch::_to_sparse_bsc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(blocksize_data, blocksize_len), dense_dim_null ? 
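+      // Optional scalars travel as a (value, is_null) argument pair; the
+      // *_null flag selects c10::nullopt over the wrapped value.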
c10::nullopt : c10::optional(dense_dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_sparse_bsr(gc_tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_to_sparse_bsr(torch::IntArrayRef(blocksize_data, blocksize_len), dense_dim_null ? c10::nullopt : c10::optional(dense_dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_sparse_bsr_out(gc_tensor out, gc_tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null) { + PROTECT( + torch::Tensor results__ = torch::_to_sparse_bsr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(blocksize_data, blocksize_len), dense_dim_null ? c10::nullopt : c10::optional(dense_dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_sparse_csc(gc_tensor self, int64_t dense_dim_v, int dense_dim_null) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_to_sparse_csc(dense_dim_null ? c10::nullopt : c10::optional(dense_dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_sparse_csc_out(gc_tensor out, gc_tensor self, int64_t dense_dim_v, int dense_dim_null) { + PROTECT( + torch::Tensor results__ = torch::_to_sparse_csc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dense_dim_null ? c10::nullopt : c10::optional(dense_dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_sparse_csr(gc_tensor self, int64_t dense_dim_v, int dense_dim_null) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_to_sparse_csr(dense_dim_null ? c10::nullopt : c10::optional(dense_dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__to_sparse_csr_out(gc_tensor out, gc_tensor self, int64_t dense_dim_v, int dense_dim_null) { + PROTECT( + torch::Tensor results__ = torch::_to_sparse_csr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dense_dim_null ? 
c10::nullopt : c10::optional(dense_dim_v)); + return tensor_to_ocaml(results__); + ) +} + +void atg__to_sparse_semi_structured(raw_tensor *out__, gc_tensor dense) { + PROTECT( + auto results__ = torch::_to_sparse_semi_structured(*tensor_ptr_from_ocaml(dense)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__transform_bias_rescale_qkv(raw_tensor *out__, gc_tensor qkv, gc_tensor qkv_bias, int64_t num_heads) { + PROTECT( + auto results__ = torch::_transform_bias_rescale_qkv(*tensor_ptr_from_ocaml(qkv), *tensor_ptr_from_ocaml(qkv_bias), num_heads); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__transform_bias_rescale_qkv_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor qkv, gc_tensor qkv_bias, int64_t num_heads) { + PROTECT( + auto results__ = torch::_transform_bias_rescale_qkv_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(qkv), *tensor_ptr_from_ocaml(qkv_bias), num_heads); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor atg__transformer_encoder_layer_fwd(gc_tensor src, int64_t embed_dim, int64_t num_heads, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, int use_gelu, int norm_first, double eps, gc_tensor norm_weight_1, gc_tensor norm_bias_1, gc_tensor norm_weight_2, gc_tensor norm_bias_2, gc_tensor ffn_weight_1, gc_tensor ffn_bias_1, gc_tensor ffn_weight_2, gc_tensor ffn_bias_2, gc_tensor mask, int64_t mask_type_v, int mask_type_null) { + PROTECT( + torch::Tensor results__ = torch::_transformer_encoder_layer_fwd(*tensor_ptr_from_ocaml(src), embed_dim, num_heads, *tensor_ptr_from_ocaml(qkv_weight), *tensor_ptr_from_ocaml(qkv_bias), *tensor_ptr_from_ocaml(proj_weight), *tensor_ptr_from_ocaml(proj_bias), (bool)use_gelu, (bool)norm_first, eps, *tensor_ptr_from_ocaml(norm_weight_1), *tensor_ptr_from_ocaml(norm_bias_1), *tensor_ptr_from_ocaml(norm_weight_2), *tensor_ptr_from_ocaml(norm_bias_2), *tensor_ptr_from_ocaml(ffn_weight_1), *tensor_ptr_from_ocaml(ffn_bias_1), *tensor_ptr_from_ocaml(ffn_weight_2), *tensor_ptr_from_ocaml(ffn_bias_2), (mask ? tensor_from_ocaml(mask) : torch::Tensor()), mask_type_null ? 
c10::nullopt : c10::optional(mask_type_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__transformer_encoder_layer_fwd_out(gc_tensor out, gc_tensor src, int64_t embed_dim, int64_t num_heads, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, int use_gelu, int norm_first, double eps, gc_tensor norm_weight_1, gc_tensor norm_bias_1, gc_tensor norm_weight_2, gc_tensor norm_bias_2, gc_tensor ffn_weight_1, gc_tensor ffn_bias_1, gc_tensor ffn_weight_2, gc_tensor ffn_bias_2, gc_tensor mask, int64_t mask_type_v, int mask_type_null) { + PROTECT( + torch::Tensor results__ = torch::_transformer_encoder_layer_fwd_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(src), embed_dim, num_heads, *tensor_ptr_from_ocaml(qkv_weight), *tensor_ptr_from_ocaml(qkv_bias), *tensor_ptr_from_ocaml(proj_weight), *tensor_ptr_from_ocaml(proj_bias), (bool)use_gelu, (bool)norm_first, eps, *tensor_ptr_from_ocaml(norm_weight_1), *tensor_ptr_from_ocaml(norm_bias_1), *tensor_ptr_from_ocaml(norm_weight_2), *tensor_ptr_from_ocaml(norm_bias_2), *tensor_ptr_from_ocaml(ffn_weight_1), *tensor_ptr_from_ocaml(ffn_bias_1), *tensor_ptr_from_ocaml(ffn_weight_2), *tensor_ptr_from_ocaml(ffn_bias_2), (mask ? tensor_from_ocaml(mask) : torch::Tensor()), mask_type_null ? c10::nullopt : c10::optional(mask_type_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__trilinear(gc_tensor i1, gc_tensor i2, gc_tensor i3, int64_t *expand1_data, int expand1_len, int64_t *expand2_data, int expand2_len, int64_t *expand3_data, int expand3_len, int64_t *sumdim_data, int sumdim_len, int64_t unroll_dim) { + PROTECT( + torch::Tensor results__ = torch::_trilinear(*tensor_ptr_from_ocaml(i1), *tensor_ptr_from_ocaml(i2), *tensor_ptr_from_ocaml(i3), torch::IntArrayRef(expand1_data, expand1_len), torch::IntArrayRef(expand2_data, expand2_len), torch::IntArrayRef(expand3_data, expand3_len), torch::IntArrayRef(sumdim_data, sumdim_len), unroll_dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__trilinear_out(gc_tensor out, gc_tensor i1, gc_tensor i2, gc_tensor i3, int64_t *expand1_data, int expand1_len, int64_t *expand2_data, int expand2_len, int64_t *expand3_data, int expand3_len, int64_t *sumdim_data, int sumdim_len, int64_t unroll_dim) { + PROTECT( + torch::Tensor results__ = torch::_trilinear_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(i1), *tensor_ptr_from_ocaml(i2), *tensor_ptr_from_ocaml(i3), torch::IntArrayRef(expand1_data, expand1_len), torch::IntArrayRef(expand2_data, expand2_len), torch::IntArrayRef(expand3_data, expand3_len), torch::IntArrayRef(sumdim_data, sumdim_len), unroll_dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__triton_multi_head_attention(gc_tensor query, gc_tensor key, gc_tensor value, int64_t embed_dim, int64_t num_head, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, gc_tensor mask) { + PROTECT( + torch::Tensor results__ = torch::_triton_multi_head_attention(*tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), embed_dim, num_head, *tensor_ptr_from_ocaml(qkv_weight), *tensor_ptr_from_ocaml(qkv_bias), *tensor_ptr_from_ocaml(proj_weight), *tensor_ptr_from_ocaml(proj_bias), (mask ? 
tensor_from_ocaml(mask) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__triton_multi_head_attention_out(gc_tensor out, gc_tensor query, gc_tensor key, gc_tensor value, int64_t embed_dim, int64_t num_head, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, gc_tensor mask) { + PROTECT( + torch::Tensor results__ = torch::_triton_multi_head_attention_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), embed_dim, num_head, *tensor_ptr_from_ocaml(qkv_weight), *tensor_ptr_from_ocaml(qkv_bias), *tensor_ptr_from_ocaml(proj_weight), *tensor_ptr_from_ocaml(proj_bias), (mask ? tensor_from_ocaml(mask) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__triton_scaled_dot_attention(gc_tensor q, gc_tensor k, gc_tensor v, double dropout_p) { + PROTECT( + torch::Tensor results__ = torch::_triton_scaled_dot_attention(*tensor_ptr_from_ocaml(q), *tensor_ptr_from_ocaml(k), *tensor_ptr_from_ocaml(v), dropout_p); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__triton_scaled_dot_attention_out(gc_tensor out, gc_tensor q, gc_tensor k, gc_tensor v, double dropout_p) { + PROTECT( + torch::Tensor results__ = torch::_triton_scaled_dot_attention_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(q), *tensor_ptr_from_ocaml(k), *tensor_ptr_from_ocaml(v), dropout_p); + return tensor_to_ocaml(results__); + ) +} + +void atg__unique(raw_tensor *out__, gc_tensor self, int sorted, int return_inverse) { + PROTECT( + auto results__ = torch::_unique(*tensor_ptr_from_ocaml(self), (bool)sorted, (bool)return_inverse); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__unique2(raw_tensor *out__, gc_tensor self, int sorted, int return_inverse, int return_counts) { + PROTECT( + auto results__ = torch::_unique2(*tensor_ptr_from_ocaml(self), (bool)sorted, (bool)return_inverse, (bool)return_counts); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__unique2_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor self, int sorted, int return_inverse, int return_counts) { + PROTECT( + auto results__ = torch::_unique2_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(self), (bool)sorted, (bool)return_inverse, (bool)return_counts); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg__unique_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, int sorted, int return_inverse) { + PROTECT( + auto results__ = torch::_unique_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), (bool)sorted, (bool)return_inverse); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__unpack_dual(raw_tensor *out__, gc_tensor dual, int64_t level) { + PROTECT( + auto results__ = torch::_unpack_dual(*tensor_ptr_from_ocaml(dual), level); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg__unsafe_index(gc_tensor self, gc_tensor *indices_data, int 
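+// Tensor lists arrive as parallel (data, length) C arrays;
+// of_carray_tensor_opt rebuilds a list of optional tensors from them.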
indices_len) { + PROTECT( + torch::Tensor results__ = torch::_unsafe_index(*tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__unsafe_index_put(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate) { + PROTECT( + torch::Tensor results__ = torch::_unsafe_index_put(*tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len), *tensor_ptr_from_ocaml(values), (bool)accumulate); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__unsafe_view(gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::_unsafe_view(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__unsafe_view_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::_unsafe_view_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bicubic2d_aa(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bicubic2d_aa(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bicubic2d_aa_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bicubic2d_aa_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bicubic2d_aa_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bicubic2d_aa_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bicubic2d_aa_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bicubic2d_aa_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? 
c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bicubic2d_aa_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bicubic2d_aa(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bilinear2d_aa(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bilinear2d_aa(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bilinear2d_aa_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bilinear2d_aa_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bilinear2d_aa_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bilinear2d_aa_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bilinear2d_aa_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bilinear2d_aa_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_bilinear2d_aa_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::_upsample_bilinear2d_aa(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? 
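+      // Optional int lists reuse the null-pointer convention: a null data
+      // pointer encodes an absent torch::IntArrayRef.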
c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact1d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact1d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact1d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact1d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact1d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact1d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact1d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact1d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact2d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? 
c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact2d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact2d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact2d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact2d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact3d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? 
c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact3d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact3d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact3d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__upsample_nearest_exact3d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::_upsample_nearest_exact3d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +int atg__use_cudnn_ctc_loss(gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank) { + PROTECT( + return torch::_use_cudnn_ctc_loss(*tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank); + ) + return 0; +} + +int atg__use_cudnn_ctc_loss_tensor(gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank) { + PROTECT( + return torch::_use_cudnn_ctc_loss(*tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), *tensor_ptr_from_ocaml(input_lengths), *tensor_ptr_from_ocaml(target_lengths), blank); + ) + return 0; +} + +int atg__use_cudnn_rnn_flatten_weight() { + PROTECT( + return torch::_use_cudnn_rnn_flatten_weight(); + ) + return 0; +} + +void atg__validate_compressed_sparse_indices(int is_crow, gc_tensor compressed_idx, gc_tensor plain_idx, int64_t cdim, int64_t dim, int64_t nnz) { + PROTECT( + torch::_validate_compressed_sparse_indices((bool)is_crow, *tensor_ptr_from_ocaml(compressed_idx), *tensor_ptr_from_ocaml(plain_idx), cdim, dim, nnz); + ) +} + +void atg__validate_sparse_bsc_tensor_args(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len) { + PROTECT( + torch::_validate_sparse_bsc_tensor_args(*tensor_ptr_from_ocaml(ccol_indices), *tensor_ptr_from_ocaml(row_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len)); + ) +} + +void atg__validate_sparse_bsr_tensor_args(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len) { + PROTECT( + torch::_validate_sparse_bsr_tensor_args(*tensor_ptr_from_ocaml(crow_indices), *tensor_ptr_from_ocaml(col_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len)); + ) +} + +void atg__validate_sparse_csc_tensor_args(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len) { + PROTECT( + torch::_validate_sparse_csc_tensor_args(*tensor_ptr_from_ocaml(ccol_indices), *tensor_ptr_from_ocaml(row_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len)); + ) +} + +raw_tensor atg__values(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->_values(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__values_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_values_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg__values_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::_values_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +int64_t atg__version(gc_tensor self) { + PROTECT( + return tensor_ptr_from_ocaml(self)->_version(); + ) + return 0; +} + +raw_tensor atg__weight_norm(gc_tensor v, gc_tensor g, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::_weight_norm(*tensor_ptr_from_ocaml(v), *tensor_ptr_from_ocaml(g), dim); + return tensor_to_ocaml(results__); + ) +} + +void atg__weight_norm_differentiable_backward(raw_tensor *out__, gc_tensor grad_w, gc_tensor 
saved_v, gc_tensor saved_g, gc_tensor saved_norms, int64_t dim) { + PROTECT( + auto results__ = torch::_weight_norm_differentiable_backward(*tensor_ptr_from_ocaml(grad_w), *tensor_ptr_from_ocaml(saved_v), *tensor_ptr_from_ocaml(saved_g), *tensor_ptr_from_ocaml(saved_norms), dim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__weight_norm_interface(raw_tensor *out__, gc_tensor v, gc_tensor g, int64_t dim) { + PROTECT( + auto results__ = torch::_weight_norm_interface(*tensor_ptr_from_ocaml(v), *tensor_ptr_from_ocaml(g), dim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__weight_norm_interface_backward(raw_tensor *out__, gc_tensor grad_w, gc_tensor saved_v, gc_tensor saved_g, gc_tensor saved_norms, int64_t dim) { + PROTECT( + auto results__ = torch::_weight_norm_interface_backward(*tensor_ptr_from_ocaml(grad_w), *tensor_ptr_from_ocaml(saved_v), *tensor_ptr_from_ocaml(saved_g), *tensor_ptr_from_ocaml(saved_norms), dim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__weight_norm_interface_backward_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor grad_w, gc_tensor saved_v, gc_tensor saved_g, gc_tensor saved_norms, int64_t dim) { + PROTECT( + auto results__ = torch::_weight_norm_interface_backward_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(grad_w), *tensor_ptr_from_ocaml(saved_v), *tensor_ptr_from_ocaml(saved_g), *tensor_ptr_from_ocaml(saved_norms), dim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg__weight_norm_interface_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor v, gc_tensor g, int64_t dim) { + PROTECT( + auto results__ = torch::_weight_norm_interface_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(v), *tensor_ptr_from_ocaml(g), dim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_abs(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::abs(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_abs_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::abs_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_abs_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::abs_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_absolute(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::absolute(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_absolute_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->absolute_(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_absolute_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::absolute_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_acos(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::acos(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_acos_(gc_tensor self) { + 
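+  // PROTECT (a macro from the manual wrapper code) guards every libtorch
+  // call so C++ exceptions are reported back to OCaml as errors instead of
+  // unwinding across the FFI boundary.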
PROTECT( + torch::Tensor results__ = torch::acos_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_acos_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::acos_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_acosh(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::acosh(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_acosh_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::acosh_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_acosh_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::acosh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_adaptive_avg_pool1d(gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::adaptive_avg_pool1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_adaptive_avg_pool2d(gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::adaptive_avg_pool2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_adaptive_avg_pool2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::adaptive_avg_pool2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_adaptive_avg_pool3d(gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::adaptive_avg_pool3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_adaptive_avg_pool3d_backward(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::adaptive_avg_pool3d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_adaptive_avg_pool3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::adaptive_avg_pool3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + return tensor_to_ocaml(results__); + ) +} + +void atg_adaptive_max_pool1d(raw_tensor *out__, gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + auto results__ = torch::adaptive_max_pool1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_adaptive_max_pool2d(raw_tensor *out__, gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + auto results__ = torch::adaptive_max_pool2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = 
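+    // Multi-result ops return nothing; each tuple component is marshalled
+    // into the caller-allocated out__ array instead.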
tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_adaptive_max_pool2d_backward(gc_tensor grad_output, gc_tensor self, gc_tensor indices) { + PROTECT( + torch::Tensor results__ = torch::adaptive_max_pool2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_adaptive_max_pool2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor indices) { + PROTECT( + torch::Tensor results__ = torch::adaptive_max_pool2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices)); + return tensor_to_ocaml(results__); + ) +} + +void atg_adaptive_max_pool2d_out(raw_tensor *out__, gc_tensor out, gc_tensor indices, gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + auto results__ = torch::adaptive_max_pool2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_adaptive_max_pool3d(raw_tensor *out__, gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + auto results__ = torch::adaptive_max_pool3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_adaptive_max_pool3d_backward(gc_tensor grad_output, gc_tensor self, gc_tensor indices) { + PROTECT( + torch::Tensor results__ = torch::adaptive_max_pool3d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_adaptive_max_pool3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor indices) { + PROTECT( + torch::Tensor results__ = torch::adaptive_max_pool3d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices)); + return tensor_to_ocaml(results__); + ) +} + +void atg_adaptive_max_pool3d_out(raw_tensor *out__, gc_tensor out, gc_tensor indices, gc_tensor self, int64_t *output_size_data, int output_size_len) { + PROTECT( + auto results__ = torch::adaptive_max_pool3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_add(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::add(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_add_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->add_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_add_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::add_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor 
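+// Scalar overloads get a separate *_scalar entry point; a `scalar` argument
+// is dereferenced to a torch::Scalar, mirroring the gc_tensor handling above.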
atg_add_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::add(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_add_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->add_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_add_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::add_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addbmm(gc_tensor self, gc_tensor batch1, gc_tensor batch2) { + PROTECT( + torch::Tensor results__ = torch::addbmm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(batch1), *tensor_ptr_from_ocaml(batch2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addbmm_(gc_tensor self, gc_tensor batch1, gc_tensor batch2) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->addbmm_(*tensor_ptr_from_ocaml(batch1), *tensor_ptr_from_ocaml(batch2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addbmm_out(gc_tensor out, gc_tensor self, gc_tensor batch1, gc_tensor batch2) { + PROTECT( + torch::Tensor results__ = torch::addbmm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(batch1), *tensor_ptr_from_ocaml(batch2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addcdiv(gc_tensor self, gc_tensor tensor1, gc_tensor tensor2) { + PROTECT( + torch::Tensor results__ = torch::addcdiv(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(tensor1), *tensor_ptr_from_ocaml(tensor2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addcdiv_(gc_tensor self, gc_tensor tensor1, gc_tensor tensor2) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->addcdiv_(*tensor_ptr_from_ocaml(tensor1), *tensor_ptr_from_ocaml(tensor2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addcdiv_out(gc_tensor out, gc_tensor self, gc_tensor tensor1, gc_tensor tensor2) { + PROTECT( + torch::Tensor results__ = torch::addcdiv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(tensor1), *tensor_ptr_from_ocaml(tensor2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addcmul(gc_tensor self, gc_tensor tensor1, gc_tensor tensor2) { + PROTECT( + torch::Tensor results__ = torch::addcmul(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(tensor1), *tensor_ptr_from_ocaml(tensor2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addcmul_(gc_tensor self, gc_tensor tensor1, gc_tensor tensor2) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->addcmul_(*tensor_ptr_from_ocaml(tensor1), *tensor_ptr_from_ocaml(tensor2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addcmul_out(gc_tensor out, gc_tensor self, gc_tensor tensor1, gc_tensor tensor2) { + PROTECT( + torch::Tensor results__ = torch::addcmul_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(tensor1), *tensor_ptr_from_ocaml(tensor2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addmm(gc_tensor self, gc_tensor mat1, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::addmm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addmm_(gc_tensor self, gc_tensor mat1, gc_tensor 
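/* In-place variants keep Torch's trailing-underscore naming; depending on the declaration they call either the method on self or the matching torch:: free function. */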
mat2) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->addmm_(*tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addmm_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::addmm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addmv(gc_tensor self, gc_tensor mat, gc_tensor vec) { + PROTECT( + torch::Tensor results__ = torch::addmv(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat), *tensor_ptr_from_ocaml(vec)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addmv_(gc_tensor self, gc_tensor mat, gc_tensor vec) { + PROTECT( + torch::Tensor results__ = torch::addmv_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat), *tensor_ptr_from_ocaml(vec)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addmv_out(gc_tensor out, gc_tensor self, gc_tensor mat, gc_tensor vec) { + PROTECT( + torch::Tensor results__ = torch::addmv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat), *tensor_ptr_from_ocaml(vec)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addr(gc_tensor self, gc_tensor vec1, gc_tensor vec2) { + PROTECT( + torch::Tensor results__ = torch::addr(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(vec1), *tensor_ptr_from_ocaml(vec2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addr_(gc_tensor self, gc_tensor vec1, gc_tensor vec2) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->addr_(*tensor_ptr_from_ocaml(vec1), *tensor_ptr_from_ocaml(vec2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_addr_out(gc_tensor out, gc_tensor self, gc_tensor vec1, gc_tensor vec2) { + PROTECT( + torch::Tensor results__ = torch::addr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(vec1), *tensor_ptr_from_ocaml(vec2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_adjoint(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::adjoint(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_affine_grid_generator(gc_tensor theta, int64_t *size_data, int size_len, int align_corners) { + PROTECT( + torch::Tensor results__ = torch::affine_grid_generator(*tensor_ptr_from_ocaml(theta), torch::IntArrayRef(size_data, size_len), (bool)align_corners); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_affine_grid_generator_backward(gc_tensor grad, int64_t *size_data, int size_len, int align_corners) { + PROTECT( + torch::Tensor results__ = torch::affine_grid_generator_backward(*tensor_ptr_from_ocaml(grad), torch::IntArrayRef(size_data, size_len), (bool)align_corners); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_affine_grid_generator_out(gc_tensor out, gc_tensor theta, int64_t *size_data, int size_len, int align_corners) { + PROTECT( + torch::Tensor results__ = torch::affine_grid_generator_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(theta), torch::IntArrayRef(size_data, size_len), (bool)align_corners); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_alias(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::alias(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_alias_copy(gc_tensor 
self) { + PROTECT( + torch::Tensor results__ = torch::alias_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_alias_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::alias_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_align_as(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->align_as(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_align_tensors(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + auto results__ = torch::align_tensors(of_carray_tensor(tensors_data, tensors_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_all(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::all(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_all_all_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::all_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_all_dim(gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::all(*tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_all_out(gc_tensor out, gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::all_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +int atg_allclose(gc_tensor self, gc_tensor other, double rtol, double atol, int equal_nan) { + PROTECT( + return torch::allclose(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), rtol, atol, (bool)equal_nan); + ) + return 0; +} + +raw_tensor atg_alpha_dropout(gc_tensor input, double p, int train) { + PROTECT( + torch::Tensor results__ = torch::alpha_dropout(*tensor_ptr_from_ocaml(input), p, (bool)train); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_alpha_dropout_(gc_tensor self, double p, int train) { + PROTECT( + torch::Tensor results__ = torch::alpha_dropout_(*tensor_ptr_from_ocaml(self), p, (bool)train); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_amax(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::amax(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_amax_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::amax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_amin(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::amin(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_amin_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = 
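/* _out bindings forward to Torch's out= overloads: the result is computed into the caller-provided out tensor, and that same tensor is converted and returned. */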
torch::amin_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +void atg_aminmax(raw_tensor *out__, gc_tensor self, int64_t dim_v, int dim_null, int keepdim) { + PROTECT( + auto results__ = torch::aminmax(*tensor_ptr_from_ocaml(self), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_aminmax_out(raw_tensor *out__, gc_tensor min, gc_tensor max, gc_tensor self, int64_t dim_v, int dim_null, int keepdim) { + PROTECT( + auto results__ = torch::aminmax_out(*tensor_ptr_from_ocaml(min), *tensor_ptr_from_ocaml(max), *tensor_ptr_from_ocaml(self), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_angle(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::angle(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_angle_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::angle_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_any(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::any(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_any_all_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::any_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_any_dim(gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::any(*tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_any_out(gc_tensor out, gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::any_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arange(scalar end, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::arange(*end, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arange_start(scalar start, scalar end, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::arange(*start, *end, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arange_start_step(scalar start, scalar end, scalar step, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::arange(*start, *end, *step, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arccos(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arccos(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arccos_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arccos_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arccos_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = 
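/* Every body runs inside PROTECT(...), which is assumed to catch C++ exceptions at the FFI boundary and report them to the caller rather than letting them unwind into OCaml. */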
torch::arccos_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arccosh(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arccosh(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arccosh_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arccosh_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arccosh_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arccosh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arcsin(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arcsin(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arcsin_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arcsin_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arcsin_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arcsin_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arcsinh(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arcsinh(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arcsinh_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arcsinh_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arcsinh_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arcsinh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arctan(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arctan(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arctan2(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::arctan2(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arctan2_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->arctan2_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arctan2_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::arctan2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arctan_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arctan_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arctan_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arctan_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arctanh(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arctanh(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arctanh_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::arctanh_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_arctanh_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor 
results__ = torch::arctanh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_argmax(gc_tensor self, int64_t dim_v, int dim_null, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::argmax(*tensor_ptr_from_ocaml(self), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_argmax_out(gc_tensor out, gc_tensor self, int64_t dim_v, int dim_null, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::argmax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_argmin(gc_tensor self, int64_t dim_v, int dim_null, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::argmin(*tensor_ptr_from_ocaml(self), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_argmin_out(gc_tensor out, gc_tensor self, int64_t dim_v, int dim_null, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::argmin_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_argsort(gc_tensor self, int64_t dim, int descending) { + PROTECT( + torch::Tensor results__ = torch::argsort(*tensor_ptr_from_ocaml(self), dim, (bool)descending); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_argsort_stable(gc_tensor self, int stable, int64_t dim, int descending) { + PROTECT( + torch::Tensor results__ = torch::argsort(*tensor_ptr_from_ocaml(self), (bool)stable, dim, (bool)descending); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_argsort_stable_out(gc_tensor out, gc_tensor self, int stable, int64_t dim, int descending) { + PROTECT( + torch::Tensor results__ = torch::argsort_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (bool)stable, dim, (bool)descending); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_argwhere(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::argwhere(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_as_strided(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null) { + PROTECT( + torch::Tensor results__ = torch::as_strided(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), storage_offset_null ? c10::nullopt : c10::optional(storage_offset_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_as_strided_(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null) { + PROTECT( + torch::Tensor results__ = torch::as_strided_(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), storage_offset_null ? 
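/* Nullable integer arguments are encoded as a _v/_null pair: a nonzero _null selects c10::nullopt, otherwise _v is wrapped in a c10::optional. */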
c10::nullopt : c10::optional(storage_offset_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_as_strided_copy(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null) { + PROTECT( + torch::Tensor results__ = torch::as_strided_copy(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), storage_offset_null ? c10::nullopt : c10::optional(storage_offset_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_as_strided_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null) { + PROTECT( + torch::Tensor results__ = torch::as_strided_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), storage_offset_null ? c10::nullopt : c10::optional(storage_offset_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_as_strided_scatter(gc_tensor self, gc_tensor src, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null) { + PROTECT( + torch::Tensor results__ = torch::as_strided_scatter(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), storage_offset_null ? c10::nullopt : c10::optional(storage_offset_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_as_strided_scatter_out(gc_tensor out, gc_tensor self, gc_tensor src, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null) { + PROTECT( + torch::Tensor results__ = torch::as_strided_scatter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), storage_offset_null ? 
c10::nullopt : c10::optional(storage_offset_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_asin(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::asin(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_asin_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::asin_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_asin_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::asin_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_asinh(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::asinh(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_asinh_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::asinh_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_asinh_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::asinh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atan(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::atan(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atan2(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::atan2(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atan2_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->atan2_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atan2_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::atan2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atan_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::atan_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atan_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::atan_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atanh(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::atanh(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atanh_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::atanh_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atanh_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::atanh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_atleast_1d(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::atleast_1d(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_atleast_1d_sequence(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + auto results__ = torch::atleast_1d(of_carray_tensor(tensors_data, tensors_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = 
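/* List-returning ops allocate a null-terminated raw_tensor array with malloc; the OCaml side is expected to walk it up to the nullptr sentinel and free it. */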
tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_atleast_2d(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::atleast_2d(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_atleast_2d_sequence(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + auto results__ = torch::atleast_2d(of_carray_tensor(tensors_data, tensors_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_atleast_3d(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::atleast_3d(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_atleast_3d_sequence(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + auto results__ = torch::atleast_3d(of_carray_tensor(tensors_data, tensors_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_avg_pool1d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad) { + PROTECT( + torch::Tensor results__ = torch::avg_pool1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), (bool)ceil_mode, (bool)count_include_pad); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_avg_pool2d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null) { + PROTECT( + torch::Tensor results__ = torch::avg_pool2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), (bool)ceil_mode, (bool)count_include_pad, divisor_override_null ? c10::nullopt : c10::optional(divisor_override_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_avg_pool2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null) { + PROTECT( + torch::Tensor results__ = torch::avg_pool2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), (bool)ceil_mode, (bool)count_include_pad, divisor_override_null ? 
c10::nullopt : c10::optional(divisor_override_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_avg_pool2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null) { + PROTECT( + torch::Tensor results__ = torch::avg_pool2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), (bool)ceil_mode, (bool)count_include_pad, divisor_override_null ? c10::nullopt : c10::optional(divisor_override_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_avg_pool2d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null) { + PROTECT( + torch::Tensor results__ = torch::avg_pool2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), (bool)ceil_mode, (bool)count_include_pad, divisor_override_null ? c10::nullopt : c10::optional(divisor_override_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_avg_pool3d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null) { + PROTECT( + torch::Tensor results__ = torch::avg_pool3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), (bool)ceil_mode, (bool)count_include_pad, divisor_override_null ? c10::nullopt : c10::optional(divisor_override_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_avg_pool3d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null) { + PROTECT( + torch::Tensor results__ = torch::avg_pool3d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), (bool)ceil_mode, (bool)count_include_pad, divisor_override_null ? 
c10::nullopt : c10::optional(divisor_override_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_avg_pool3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null) { + PROTECT( + torch::Tensor results__ = torch::avg_pool3d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), (bool)ceil_mode, (bool)count_include_pad, divisor_override_null ? c10::nullopt : c10::optional(divisor_override_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_avg_pool3d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null) { + PROTECT( + torch::Tensor results__ = torch::avg_pool3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), (bool)ceil_mode, (bool)count_include_pad, divisor_override_null ? c10::nullopt : c10::optional(divisor_override_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_baddbmm(gc_tensor self, gc_tensor batch1, gc_tensor batch2) { + PROTECT( + torch::Tensor results__ = torch::baddbmm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(batch1), *tensor_ptr_from_ocaml(batch2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_baddbmm_(gc_tensor self, gc_tensor batch1, gc_tensor batch2) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->baddbmm_(*tensor_ptr_from_ocaml(batch1), *tensor_ptr_from_ocaml(batch2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_baddbmm_out(gc_tensor out, gc_tensor self, gc_tensor batch1, gc_tensor batch2) { + PROTECT( + torch::Tensor results__ = torch::baddbmm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(batch1), *tensor_ptr_from_ocaml(batch2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bartlett_window(int64_t window_length, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::bartlett_window(window_length, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bartlett_window_out(gc_tensor out, int64_t window_length) { + PROTECT( + torch::Tensor results__ = torch::bartlett_window_out(*tensor_ptr_from_ocaml(out), window_length); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bartlett_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::bartlett_window(window_length, (bool)periodic, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bartlett_window_periodic_out(gc_tensor out, int64_t window_length, int periodic) { + PROTECT( + torch::Tensor results__ = torch::bartlett_window_out(*tensor_ptr_from_ocaml(out), 
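/* Factory functions map the options_kind/options_device integers through at::ScalarType and device_of_int to build tensor options; their _out variants omit them because dtype and device come from the out tensor. */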
window_length, (bool)periodic); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_batch_norm(gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps, int cudnn_enabled) { + PROTECT( + torch::Tensor results__ = torch::batch_norm(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), (bool)training, momentum, eps, (bool)cudnn_enabled); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_batch_norm_backward_elemt(gc_tensor grad_out, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor weight, gc_tensor sum_dy, gc_tensor sum_dy_xmu, gc_tensor count) { + PROTECT( + torch::Tensor results__ = torch::batch_norm_backward_elemt(*tensor_ptr_from_ocaml(grad_out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), *tensor_ptr_from_ocaml(sum_dy), *tensor_ptr_from_ocaml(sum_dy_xmu), *tensor_ptr_from_ocaml(count)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_batch_norm_backward_elemt_out(gc_tensor out, gc_tensor grad_out, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor weight, gc_tensor sum_dy, gc_tensor sum_dy_xmu, gc_tensor count) { + PROTECT( + torch::Tensor results__ = torch::batch_norm_backward_elemt_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), *tensor_ptr_from_ocaml(sum_dy), *tensor_ptr_from_ocaml(sum_dy_xmu), *tensor_ptr_from_ocaml(count)); + return tensor_to_ocaml(results__); + ) +} + +void atg_batch_norm_backward_reduce(raw_tensor *out__, gc_tensor grad_out, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor weight, int input_g, int weight_g, int bias_g) { + PROTECT( + auto results__ = torch::batch_norm_backward_reduce(*tensor_ptr_from_ocaml(grad_out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bool)input_g, (bool)weight_g, (bool)bias_g); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg_batch_norm_backward_reduce_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor grad_out, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor weight, int input_g, int weight_g, int bias_g) { + PROTECT( + auto results__ = torch::batch_norm_backward_reduce_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(grad_out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), (weight ? 
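/* Optional tensor arguments treat a null gc_tensor as None, substituting a default-constructed (undefined) torch::Tensor(). */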
tensor_from_ocaml(weight) : torch::Tensor()), (bool)input_g, (bool)weight_g, (bool)bias_g); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +raw_tensor atg_batch_norm_elemt(gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor mean, gc_tensor invstd, double eps) { + PROTECT( + torch::Tensor results__ = torch::batch_norm_elemt(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), eps); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_batch_norm_elemt_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor mean, gc_tensor invstd, double eps) { + PROTECT( + torch::Tensor results__ = torch::batch_norm_elemt_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), eps); + return tensor_to_ocaml(results__); + ) +} + +void atg_batch_norm_gather_stats(raw_tensor *out__, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps, int64_t count) { + PROTECT( + auto results__ = torch::batch_norm_gather_stats(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), momentum, eps, count); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_batch_norm_gather_stats_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps, int64_t count) { + PROTECT( + auto results__ = torch::batch_norm_gather_stats_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), momentum, eps, count); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_batch_norm_gather_stats_with_counts(raw_tensor *out__, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps, gc_tensor counts) { + PROTECT( + auto results__ = torch::batch_norm_gather_stats_with_counts(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? 
tensor_from_ocaml(running_var) : torch::Tensor()), momentum, eps, *tensor_ptr_from_ocaml(counts)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_batch_norm_gather_stats_with_counts_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps, gc_tensor counts) { + PROTECT( + auto results__ = torch::batch_norm_gather_stats_with_counts_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(invstd), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), momentum, eps, *tensor_ptr_from_ocaml(counts)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_batch_norm_stats(raw_tensor *out__, gc_tensor input, double eps) { + PROTECT( + auto results__ = torch::batch_norm_stats(*tensor_ptr_from_ocaml(input), eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_batch_norm_stats_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor input, double eps) { + PROTECT( + auto results__ = torch::batch_norm_stats_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(input), eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_batch_norm_update_stats(raw_tensor *out__, gc_tensor input, gc_tensor running_mean, gc_tensor running_var, double momentum) { + PROTECT( + auto results__ = torch::batch_norm_update_stats(*tensor_ptr_from_ocaml(input), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), momentum); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_batch_norm_update_stats_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor input, gc_tensor running_mean, gc_tensor running_var, double momentum) { + PROTECT( + auto results__ = torch::batch_norm_update_stats_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(input), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? 
tensor_from_ocaml(running_var) : torch::Tensor()), momentum); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_bernoulli(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::bernoulli(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bernoulli_(gc_tensor self, gc_tensor p) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bernoulli_(*tensor_ptr_from_ocaml(p)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bernoulli_float_(gc_tensor self, double p) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bernoulli_(p); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bernoulli_p(gc_tensor self, double p) { + PROTECT( + torch::Tensor results__ = torch::bernoulli(*tensor_ptr_from_ocaml(self), p); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bernoulli_tensor(gc_tensor self, gc_tensor p) { + PROTECT( + torch::Tensor results__ = torch::bernoulli(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(p)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bilinear(gc_tensor input1, gc_tensor input2, gc_tensor weight, gc_tensor bias) { + PROTECT( + torch::Tensor results__ = torch::bilinear(*tensor_ptr_from_ocaml(input1), *tensor_ptr_from_ocaml(input2), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_binary_cross_entropy(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::binary_cross_entropy(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_binary_cross_entropy_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::binary_cross_entropy_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_binary_cross_entropy_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::binary_cross_entropy_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_binary_cross_entropy_out(gc_tensor out, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::binary_cross_entropy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_binary_cross_entropy_with_logits(gc_tensor self, gc_tensor target, gc_tensor weight, gc_tensor pos_weight, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::binary_cross_entropy_with_logits(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? 
tensor_from_ocaml(weight) : torch::Tensor()), (pos_weight ? tensor_from_ocaml(pos_weight) : torch::Tensor()), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_binary_cross_entropy_with_logits_out(gc_tensor out, gc_tensor self, gc_tensor target, gc_tensor weight, gc_tensor pos_weight, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::binary_cross_entropy_with_logits_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (pos_weight ? tensor_from_ocaml(pos_weight) : torch::Tensor()), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bincount(gc_tensor self, gc_tensor weights, int64_t minlength) { + PROTECT( + torch::Tensor results__ = torch::bincount(*tensor_ptr_from_ocaml(self), (weights ? tensor_from_ocaml(weights) : torch::Tensor()), minlength); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bincount_out(gc_tensor out, gc_tensor self, gc_tensor weights, int64_t minlength) { + PROTECT( + torch::Tensor results__ = torch::bincount_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (weights ? tensor_from_ocaml(weights) : torch::Tensor()), minlength); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_binomial(gc_tensor count, gc_tensor prob) { + PROTECT( + torch::Tensor results__ = torch::binomial(*tensor_ptr_from_ocaml(count), *tensor_ptr_from_ocaml(prob)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_binomial_out(gc_tensor out, gc_tensor count, gc_tensor prob) { + PROTECT( + torch::Tensor results__ = torch::binomial_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(count), *tensor_ptr_from_ocaml(prob)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_and(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_and(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_and_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_and_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_and_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_and_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_and_scalar_tensor(scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_and(*self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_and_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_and_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_and_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_and(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_and_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_and_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_and_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = 
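/* Overloaded bitwise ops are disambiguated by suffix, e.g. _tensor for (Tensor, Tensor), _tensor_scalar for (Tensor, Scalar) and _scalar_tensor for (Scalar, Tensor); the unsuffixed name covers the remaining default overload. */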
torch::bitwise_and_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_left_shift(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_left_shift(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_left_shift_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_left_shift_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_left_shift_scalar_tensor(scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_left_shift(*self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_left_shift_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_left_shift_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_left_shift_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_left_shift_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_left_shift_tensor_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_left_shift(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_left_shift_tensor_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_left_shift_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_left_shift_tensor_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_left_shift_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_not(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::bitwise_not(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_not_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_not_(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_not_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::bitwise_not_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_or(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_or(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_or_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_or_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_or_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_or_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_or_scalar_tensor(scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = 
torch::bitwise_or(*self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_or_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_or_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_or_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_or(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_or_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_or_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_or_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_or_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_right_shift(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_right_shift(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_right_shift_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_right_shift_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_right_shift_scalar_tensor(scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_right_shift(*self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_right_shift_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_right_shift_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_right_shift_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_right_shift_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_right_shift_tensor_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_right_shift(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_right_shift_tensor_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_right_shift_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_right_shift_tensor_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_right_shift_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_xor(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_xor(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_xor_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_xor_(*other); + return 
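/* C int parameters stand in for C++ bool flags and are cast with (bool) at each call site. */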
tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_xor_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_xor_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_xor_scalar_tensor(scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_xor(*self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_xor_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_xor_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_xor_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_xor(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_xor_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->bitwise_xor_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bitwise_xor_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::bitwise_xor_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_blackman_window(int64_t window_length, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::blackman_window(window_length, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_blackman_window_out(gc_tensor out, int64_t window_length) { + PROTECT( + torch::Tensor results__ = torch::blackman_window_out(*tensor_ptr_from_ocaml(out), window_length); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_blackman_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::blackman_window(window_length, (bool)periodic, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_blackman_window_periodic_out(gc_tensor out, int64_t window_length, int periodic) { + PROTECT( + torch::Tensor results__ = torch::blackman_window_out(*tensor_ptr_from_ocaml(out), window_length, (bool)periodic); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_block_diag(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::block_diag(of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_block_diag_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::block_diag_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bmm(gc_tensor self, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::bmm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bmm_out(gc_tensor out, gc_tensor self, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::bmm_out(*tensor_ptr_from_ocaml(out), 
*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_broadcast_tensors(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + auto results__ = torch::broadcast_tensors(of_carray_tensor(tensors_data, tensors_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_broadcast_to(gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::broadcast_to(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bucketize(gc_tensor self, gc_tensor boundaries, int out_int32, int right) { + PROTECT( + torch::Tensor results__ = torch::bucketize(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(boundaries), (bool)out_int32, (bool)right); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bucketize_scalar(scalar self, gc_tensor boundaries, int out_int32, int right) { + PROTECT( + torch::Tensor results__ = torch::bucketize(*self, *tensor_ptr_from_ocaml(boundaries), (bool)out_int32, (bool)right); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bucketize_scalar_out(gc_tensor out, scalar self, gc_tensor boundaries, int out_int32, int right) { + PROTECT( + torch::Tensor results__ = torch::bucketize_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(boundaries), (bool)out_int32, (bool)right); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_bucketize_tensor_out(gc_tensor out, gc_tensor self, gc_tensor boundaries, int out_int32, int right) { + PROTECT( + torch::Tensor results__ = torch::bucketize_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(boundaries), (bool)out_int32, (bool)right); + return tensor_to_ocaml(results__); + ) +} + +int atg_can_cast(int from, int to) { + PROTECT( + return torch::can_cast(torch::ScalarType(from), torch::ScalarType(to)); + ) + return 0; +} + +raw_tensor atg_cartesian_prod(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::cartesian_prod(of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cat(gc_tensor *tensors_data, int tensors_len, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::cat(of_carray_tensor(tensors_data, tensors_len), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cat_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::cat_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cauchy(gc_tensor self, double median, double sigma) { + PROTECT( + torch::Tensor results__ = torch::cauchy(*tensor_ptr_from_ocaml(self), median, sigma); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cauchy_(gc_tensor self, double median, double sigma) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->cauchy_(median, sigma); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cauchy_out(gc_tensor out, gc_tensor self, double median, double sigma) { + PROTECT( + torch::Tensor results__ = torch::cauchy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), median, sigma); + return 
tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ccol_indices(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->ccol_indices(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ccol_indices_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::ccol_indices_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ccol_indices_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::ccol_indices_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cdist(gc_tensor x1, gc_tensor x2, double p, int64_t compute_mode_v, int compute_mode_null) { + PROTECT( + torch::Tensor results__ = torch::cdist(*tensor_ptr_from_ocaml(x1), *tensor_ptr_from_ocaml(x2), p, compute_mode_null ? c10::nullopt : c10::optional(compute_mode_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ceil(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::ceil(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ceil_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::ceil_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ceil_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::ceil_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_celu(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::celu(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_celu_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::celu_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_celu_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::celu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_chain_matmul(gc_tensor *matrices_data, int matrices_len) { + PROTECT( + torch::Tensor results__ = torch::chain_matmul(of_carray_tensor(matrices_data, matrices_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_chain_matmul_out(gc_tensor out, gc_tensor *matrices_data, int matrices_len) { + PROTECT( + torch::Tensor results__ = torch::chain_matmul_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(matrices_data, matrices_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_chalf(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->chalf(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_channel_shuffle(gc_tensor self, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::channel_shuffle(*tensor_ptr_from_ocaml(self), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_channel_shuffle_out(gc_tensor out, gc_tensor self, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::channel_shuffle_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cholesky(gc_tensor self, int upper) { + PROTECT( + torch::Tensor results__ = torch::cholesky(*tensor_ptr_from_ocaml(self), (bool)upper); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cholesky_inverse(gc_tensor self, int upper) { + PROTECT( + torch::Tensor results__ = 
torch::cholesky_inverse(*tensor_ptr_from_ocaml(self), (bool)upper); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cholesky_inverse_out(gc_tensor out, gc_tensor self, int upper) { + PROTECT( + torch::Tensor results__ = torch::cholesky_inverse_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (bool)upper); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cholesky_out(gc_tensor out, gc_tensor self, int upper) { + PROTECT( + torch::Tensor results__ = torch::cholesky_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (bool)upper); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cholesky_solve(gc_tensor self, gc_tensor input2, int upper) { + PROTECT( + torch::Tensor results__ = torch::cholesky_solve(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(input2), (bool)upper); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cholesky_solve_out(gc_tensor out, gc_tensor self, gc_tensor input2, int upper) { + PROTECT( + torch::Tensor results__ = torch::cholesky_solve_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(input2), (bool)upper); + return tensor_to_ocaml(results__); + ) +} + +void atg_choose_qparams_optimized(raw_tensor *out__, gc_tensor input, int64_t numel, int64_t n_bins, double ratio, int64_t bit_width) { + PROTECT( + auto results__ = torch::choose_qparams_optimized(*tensor_ptr_from_ocaml(input), numel, n_bins, ratio, bit_width); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor *atg_chunk(gc_tensor self, int64_t chunks, int64_t dim) { + PROTECT( + auto results__ = torch::chunk(*tensor_ptr_from_ocaml(self), chunks, dim); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_clamp(gc_tensor self, scalar min, scalar max) { + PROTECT( + torch::Tensor results__ = torch::clamp(*tensor_ptr_from_ocaml(self), *min, *max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_(gc_tensor self, scalar min, scalar max) { + PROTECT( + torch::Tensor results__ = torch::clamp_(*tensor_ptr_from_ocaml(self), *min, *max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_max(gc_tensor self, scalar max) { + PROTECT( + torch::Tensor results__ = torch::clamp_max(*tensor_ptr_from_ocaml(self), *max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_max_(gc_tensor self, scalar max) { + PROTECT( + torch::Tensor results__ = torch::clamp_max_(*tensor_ptr_from_ocaml(self), *max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_max_out(gc_tensor out, gc_tensor self, scalar max) { + PROTECT( + torch::Tensor results__ = torch::clamp_max_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_max_tensor(gc_tensor self, gc_tensor max) { + PROTECT( + torch::Tensor results__ = torch::clamp_max(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(max)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_max_tensor_(gc_tensor self, gc_tensor max) { + PROTECT( + torch::Tensor results__ = torch::clamp_max_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(max)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_max_tensor_out(gc_tensor out, gc_tensor self, 
gc_tensor max) { + PROTECT( + torch::Tensor results__ = torch::clamp_max_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(max)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_min(gc_tensor self, scalar min) { + PROTECT( + torch::Tensor results__ = torch::clamp_min(*tensor_ptr_from_ocaml(self), *min); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_min_(gc_tensor self, scalar min) { + PROTECT( + torch::Tensor results__ = torch::clamp_min_(*tensor_ptr_from_ocaml(self), *min); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_min_out(gc_tensor out, gc_tensor self, scalar min) { + PROTECT( + torch::Tensor results__ = torch::clamp_min_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *min); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_min_tensor(gc_tensor self, gc_tensor min) { + PROTECT( + torch::Tensor results__ = torch::clamp_min(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(min)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_min_tensor_(gc_tensor self, gc_tensor min) { + PROTECT( + torch::Tensor results__ = torch::clamp_min_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(min)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_min_tensor_out(gc_tensor out, gc_tensor self, gc_tensor min) { + PROTECT( + torch::Tensor results__ = torch::clamp_min_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(min)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_out(gc_tensor out, gc_tensor self, scalar min, scalar max) { + PROTECT( + torch::Tensor results__ = torch::clamp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *min, *max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_tensor(gc_tensor self, gc_tensor min, gc_tensor max) { + PROTECT( + torch::Tensor results__ = torch::clamp(*tensor_ptr_from_ocaml(self), (min ? tensor_from_ocaml(min) : torch::Tensor()), (max ? tensor_from_ocaml(max) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_tensor_(gc_tensor self, gc_tensor min, gc_tensor max) { + PROTECT( + torch::Tensor results__ = torch::clamp_(*tensor_ptr_from_ocaml(self), (min ? tensor_from_ocaml(min) : torch::Tensor()), (max ? tensor_from_ocaml(max) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clamp_tensor_out(gc_tensor out, gc_tensor self, gc_tensor min, gc_tensor max) { + PROTECT( + torch::Tensor results__ = torch::clamp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (min ? tensor_from_ocaml(min) : torch::Tensor()), (max ? 
tensor_from_ocaml(max) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clip(gc_tensor self, scalar min, scalar max) { + PROTECT( + torch::Tensor results__ = torch::clip(*tensor_ptr_from_ocaml(self), *min, *max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clip_(gc_tensor self, scalar min, scalar max) { + PROTECT( + torch::Tensor results__ = torch::clip_(*tensor_ptr_from_ocaml(self), *min, *max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clip_out(gc_tensor out, gc_tensor self, scalar min, scalar max) { + PROTECT( + torch::Tensor results__ = torch::clip_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *min, *max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clip_tensor(gc_tensor self, gc_tensor min, gc_tensor max) { + PROTECT( + torch::Tensor results__ = torch::clip(*tensor_ptr_from_ocaml(self), (min ? tensor_from_ocaml(min) : torch::Tensor()), (max ? tensor_from_ocaml(max) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clip_tensor_(gc_tensor self, gc_tensor min, gc_tensor max) { + PROTECT( + torch::Tensor results__ = torch::clip_(*tensor_ptr_from_ocaml(self), (min ? tensor_from_ocaml(min) : torch::Tensor()), (max ? tensor_from_ocaml(max) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clip_tensor_out(gc_tensor out, gc_tensor self, gc_tensor min, gc_tensor max) { + PROTECT( + torch::Tensor results__ = torch::clip_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (min ? tensor_from_ocaml(min) : torch::Tensor()), (max ? tensor_from_ocaml(max) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clone(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::clone(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_clone_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::clone_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_coalesce(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->coalesce(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_col2im(gc_tensor self, int64_t *output_size_data, int output_size_len, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::col2im(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(dilation_data, dilation_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_col2im_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::col2im_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(dilation_data, dilation_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, 
stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_col_indices(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->col_indices(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_col_indices_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::col_indices_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_col_indices_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::col_indices_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_column_stack(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::column_stack(of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_column_stack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::column_stack_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_combinations(gc_tensor self, int64_t r, int with_replacement) { + PROTECT( + torch::Tensor results__ = torch::combinations(*tensor_ptr_from_ocaml(self), r, (bool)with_replacement); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_complex(gc_tensor real, gc_tensor imag) { + PROTECT( + torch::Tensor results__ = torch::complex(*tensor_ptr_from_ocaml(real), *tensor_ptr_from_ocaml(imag)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_complex_out(gc_tensor out, gc_tensor real, gc_tensor imag) { + PROTECT( + torch::Tensor results__ = torch::complex_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(real), *tensor_ptr_from_ocaml(imag)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_concat(gc_tensor *tensors_data, int tensors_len, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::concat(of_carray_tensor(tensors_data, tensors_len), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_concat_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::concat_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_concatenate(gc_tensor *tensors_data, int tensors_len, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::concatenate(of_carray_tensor(tensors_data, tensors_len), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_concatenate_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::concatenate_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conj(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::conj(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conj_physical(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::conj_physical(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conj_physical_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::conj_physical_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conj_physical_out(gc_tensor out, gc_tensor self) { + 
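// Note: as with every _out binding in this file, the caller passes a destination tensor that the matching torch::*_out overload writes in place; the binding still returns a fresh raw_tensor handle for the OCaml wrapper to adopt. +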
PROTECT( + torch::Tensor results__ = torch::conj_physical_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_constant_pad_nd(gc_tensor self, int64_t *pad_data, int pad_len) { + PROTECT( + torch::Tensor results__ = torch::constant_pad_nd(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(pad_data, pad_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_constant_pad_nd_out(gc_tensor out, gc_tensor self, int64_t *pad_data, int pad_len) { + PROTECT( + torch::Tensor results__ = torch::constant_pad_nd_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(pad_data, pad_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_contiguous(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->contiguous(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv1d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::conv1d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv1d_padding(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::conv1d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), std::string(padding), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv2d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::conv2d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv2d_padding(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::conv2d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), std::string(padding), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv3d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::conv3d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv3d_padding(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::conv3d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), std::string(padding), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv_depthwise3d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::conv_depthwise3d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv_depthwise3d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::conv_depthwise3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv_tbc(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t pad) { + PROTECT( + torch::Tensor results__ = torch::conv_tbc(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(bias), pad); + return tensor_to_ocaml(results__); + ) +} + +void atg_conv_tbc_backward(raw_tensor *out__, gc_tensor self, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t pad) { + PROTECT( + auto results__ = torch::conv_tbc_backward(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(bias), pad); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor atg_conv_tbc_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t pad) { + PROTECT( + torch::Tensor results__ = torch::conv_tbc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(bias), pad); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv_transpose1d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::conv_transpose1d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), groups, torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv_transpose2d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::conv_transpose2d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), groups, torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_conv_transpose3d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::conv_transpose3d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), groups, torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_convolution(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::convolution(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_convolution_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::convolution_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_convolution_overrideable(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::convolution_overrideable(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_convolution_overrideable_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::convolution_overrideable_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copy(gc_tensor self, gc_tensor src, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::copy(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copy_out(gc_tensor out, gc_tensor self, gc_tensor src, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copy_sparse_to_sparse(gc_tensor self, gc_tensor src, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::copy_sparse_to_sparse(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copy_sparse_to_sparse_(gc_tensor self, gc_tensor src, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::copy_sparse_to_sparse_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copy_sparse_to_sparse_out(gc_tensor out, gc_tensor self, gc_tensor src, int non_blocking) { + PROTECT( + torch::Tensor results__ = torch::copy_sparse_to_sparse_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), (bool)non_blocking); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copysign(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::copysign(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copysign_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->copysign_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copysign_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::copysign_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copysign_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::copysign(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copysign_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->copysign_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_copysign_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::copysign_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_corrcoef(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::corrcoef(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cos(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::cos(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cos_(gc_tensor self) { + PROTECT( + torch::Tensor 
results__ = torch::cos_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cos_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::cos_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cosh(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::cosh(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cosh_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::cosh_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cosh_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::cosh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cosine_embedding_loss(gc_tensor input1, gc_tensor input2, gc_tensor target, double margin, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::cosine_embedding_loss(*tensor_ptr_from_ocaml(input1), *tensor_ptr_from_ocaml(input2), *tensor_ptr_from_ocaml(target), margin, reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cosine_similarity(gc_tensor x1, gc_tensor x2, int64_t dim, double eps) { + PROTECT( + torch::Tensor results__ = torch::cosine_similarity(*tensor_ptr_from_ocaml(x1), *tensor_ptr_from_ocaml(x2), dim, eps); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_count_nonzero_dim_intlist_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len) { + PROTECT( + torch::Tensor results__ = torch::count_nonzero_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_count_nonzero_out(gc_tensor out, gc_tensor self, int64_t dim_v, int dim_null) { + PROTECT( + torch::Tensor results__ = torch::count_nonzero_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_null ? c10::nullopt : c10::optional(dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cov(gc_tensor self, int64_t correction, gc_tensor fweights, gc_tensor aweights) { + PROTECT( + torch::Tensor results__ = torch::cov(*tensor_ptr_from_ocaml(self), correction, (fweights ? tensor_from_ocaml(fweights) : torch::Tensor()), (aweights ? tensor_from_ocaml(aweights) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cross(gc_tensor self, gc_tensor other, int64_t dim_v, int dim_null) { + PROTECT( + torch::Tensor results__ = torch::cross(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), dim_null ? c10::nullopt : c10::optional(dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cross_entropy_loss(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, double label_smoothing) { + PROTECT( + torch::Tensor results__ = torch::cross_entropy_loss(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index, label_smoothing); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cross_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t dim_v, int dim_null) { + PROTECT( + torch::Tensor results__ = torch::cross_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), dim_null ? 
c10::nullopt : c10::optional(dim_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_crow_indices(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->crow_indices(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_crow_indices_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::crow_indices_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_crow_indices_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::crow_indices_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ctc_loss(gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int64_t reduction, int zero_infinity) { + PROTECT( + torch::Tensor results__ = torch::ctc_loss(*tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, reduction, (bool)zero_infinity); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ctc_loss_tensor(gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank, int64_t reduction, int zero_infinity) { + PROTECT( + torch::Tensor results__ = torch::ctc_loss(*tensor_ptr_from_ocaml(log_probs), *tensor_ptr_from_ocaml(targets), *tensor_ptr_from_ocaml(input_lengths), *tensor_ptr_from_ocaml(target_lengths), blank, reduction, (bool)zero_infinity); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_affine_grid_generator(gc_tensor theta, int64_t n, int64_t C, int64_t H, int64_t W) { + PROTECT( + torch::Tensor results__ = torch::cudnn_affine_grid_generator(*tensor_ptr_from_ocaml(theta), n, C, H, W); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_affine_grid_generator_backward(gc_tensor grad, int64_t n, int64_t C, int64_t H, int64_t W) { + PROTECT( + torch::Tensor results__ = torch::cudnn_affine_grid_generator_backward(*tensor_ptr_from_ocaml(grad), n, C, H, W); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_affine_grid_generator_backward_out(gc_tensor out, gc_tensor grad, int64_t n, int64_t C, int64_t H, int64_t W) { + PROTECT( + torch::Tensor results__ = torch::cudnn_affine_grid_generator_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad), n, C, H, W); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_affine_grid_generator_out(gc_tensor out, gc_tensor theta, int64_t n, int64_t C, int64_t H, int64_t W) { + PROTECT( + torch::Tensor results__ = torch::cudnn_affine_grid_generator_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(theta), n, C, H, W); + return tensor_to_ocaml(results__); + ) +} + +void atg_cudnn_batch_norm(raw_tensor *out__, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double exponential_average_factor, double epsilon) { + PROTECT( + auto results__ = torch::cudnn_batch_norm(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? 
tensor_from_ocaml(running_var) : torch::Tensor()), (bool)training, exponential_average_factor, epsilon); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg_cudnn_batch_norm_backward(raw_tensor *out__, gc_tensor input, gc_tensor grad_output, gc_tensor weight, gc_tensor running_mean, gc_tensor running_var, gc_tensor save_mean, gc_tensor save_var, double epsilon, gc_tensor reserveSpace) { + PROTECT( + auto results__ = torch::cudnn_batch_norm_backward(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(weight), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), (save_mean ? tensor_from_ocaml(save_mean) : torch::Tensor()), (save_var ? tensor_from_ocaml(save_var) : torch::Tensor()), epsilon, *tensor_ptr_from_ocaml(reserveSpace)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_cudnn_batch_norm_backward_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor grad_output, gc_tensor weight, gc_tensor running_mean, gc_tensor running_var, gc_tensor save_mean, gc_tensor save_var, double epsilon, gc_tensor reserveSpace) { + PROTECT( + auto results__ = torch::cudnn_batch_norm_backward_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(weight), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), (save_mean ? tensor_from_ocaml(save_mean) : torch::Tensor()), (save_var ? tensor_from_ocaml(save_var) : torch::Tensor()), epsilon, *tensor_ptr_from_ocaml(reserveSpace)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_cudnn_batch_norm_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double exponential_average_factor, double epsilon) { + PROTECT( + auto results__ = torch::cudnn_batch_norm_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? 
tensor_from_ocaml(running_var) : torch::Tensor()), (bool)training, exponential_average_factor, epsilon); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +raw_tensor atg_cudnn_convolution(gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32) { + PROTECT( + torch::Tensor results__ = torch::cudnn_convolution(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic, (bool)allow_tf32); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_convolution_add_relu(gc_tensor self, gc_tensor weight, gc_tensor z, scalar alpha, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::cudnn_convolution_add_relu(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(z), *alpha, (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_convolution_add_relu_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor z, scalar alpha, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::cudnn_convolution_add_relu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(z), *alpha, (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32) { + PROTECT( + torch::Tensor results__ = torch::cudnn_convolution_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic, (bool)allow_tf32); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_convolution_relu(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::cudnn_convolution_relu(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_convolution_relu_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::cudnn_convolution_relu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_convolution_transpose(gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32) { + PROTECT( + torch::Tensor results__ = torch::cudnn_convolution_transpose(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic, (bool)allow_tf32); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_convolution_transpose_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32) { + PROTECT( + torch::Tensor results__ = torch::cudnn_convolution_transpose_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic, (bool)allow_tf32); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cudnn_grid_sampler(gc_tensor self, gc_tensor grid) { + PROTECT( + torch::Tensor results__ = torch::cudnn_grid_sampler(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(grid)); + return tensor_to_ocaml(results__); + ) +} + +void atg_cudnn_grid_sampler_backward(raw_tensor *out__, gc_tensor self, gc_tensor grid, gc_tensor grad_output) { + PROTECT( + auto results__ = torch::cudnn_grid_sampler_backward(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(grid), *tensor_ptr_from_ocaml(grad_output)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_cudnn_grid_sampler_backward_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, gc_tensor grid, gc_tensor grad_output) { + PROTECT( + auto results__ = torch::cudnn_grid_sampler_backward_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(grid), *tensor_ptr_from_ocaml(grad_output)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); 
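+ // Tuple results: bindings that return multiple tensors are declared void and instead fill the caller-allocated out__ array with one converted raw_tensor per tuple element.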
+ out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_cudnn_grid_sampler_out(gc_tensor out, gc_tensor self, gc_tensor grid) { + PROTECT( + torch::Tensor results__ = torch::cudnn_grid_sampler_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(grid)); + return tensor_to_ocaml(results__); + ) +} + +int atg_cudnn_is_acceptable(gc_tensor self) { + PROTECT( + return torch::cudnn_is_acceptable(*tensor_ptr_from_ocaml(self)); + ) + return 0; +} + +void atg_cummax(raw_tensor *out__, gc_tensor self, int64_t dim) { + PROTECT( + auto results__ = torch::cummax(*tensor_ptr_from_ocaml(self), dim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_cummax_out(raw_tensor *out__, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim) { + PROTECT( + auto results__ = torch::cummax_out(*tensor_ptr_from_ocaml(values), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), dim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_cummaxmin_backward(gc_tensor grad, gc_tensor input, gc_tensor indices, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::cummaxmin_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(indices), dim); + return tensor_to_ocaml(results__); + ) +} + +void atg_cummin(raw_tensor *out__, gc_tensor self, int64_t dim) { + PROTECT( + auto results__ = torch::cummin(*tensor_ptr_from_ocaml(self), dim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_cummin_out(raw_tensor *out__, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim) { + PROTECT( + auto results__ = torch::cummin_out(*tensor_ptr_from_ocaml(values), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), dim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_cumprod(gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::cumprod(*tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cumprod_(gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->cumprod_(dim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cumprod_backward(gc_tensor grad, gc_tensor input, int64_t dim, gc_tensor output) { + PROTECT( + torch::Tensor results__ = torch::cumprod_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(input), dim, *tensor_ptr_from_ocaml(output)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cumprod_out(gc_tensor out, gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::cumprod_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cumsum(gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::cumsum(*tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cumsum_(gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->cumsum_(dim, torch::ScalarType(dtype)); + return 
tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cumsum_out(gc_tensor out, gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::cumsum_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cumulative_trapezoid(gc_tensor y, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::cumulative_trapezoid(*tensor_ptr_from_ocaml(y), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_cumulative_trapezoid_x(gc_tensor y, gc_tensor x, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::cumulative_trapezoid(*tensor_ptr_from_ocaml(y), *tensor_ptr_from_ocaml(x), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_data(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->data(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_deg2rad(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::deg2rad(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_deg2rad_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::deg2rad_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_deg2rad_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::deg2rad_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +int64_t atg_dense_dim(gc_tensor self) { + PROTECT( + return tensor_ptr_from_ocaml(self)->dense_dim(); + ) + return 0; +} + +raw_tensor atg_dequantize(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::dequantize(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_dequantize_self_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::dequantize_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_dequantize_tensors(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + auto results__ = torch::dequantize(of_carray_tensor(tensors_data, tensors_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +void atg_dequantize_tensors_out(gc_tensor *out_data, int out_len, gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::dequantize_out(of_carray_tensor(out_data, out_len), of_carray_tensor(tensors_data, tensors_len)); + ) +} + +raw_tensor atg_det(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::det(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_detach(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::detach(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_detach_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::detach_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_detach_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::detach_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_detach_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::detach_copy_out(*tensor_ptr_from_ocaml(out), 
*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diag(gc_tensor self, int64_t diagonal) { + PROTECT( + torch::Tensor results__ = torch::diag(*tensor_ptr_from_ocaml(self), diagonal); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diag_embed(gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2) { + PROTECT( + torch::Tensor results__ = torch::diag_embed(*tensor_ptr_from_ocaml(self), offset, dim1, dim2); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diag_embed_out(gc_tensor out, gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2) { + PROTECT( + torch::Tensor results__ = torch::diag_embed_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), offset, dim1, dim2); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diag_out(gc_tensor out, gc_tensor self, int64_t diagonal) { + PROTECT( + torch::Tensor results__ = torch::diag_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), diagonal); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diagflat(gc_tensor self, int64_t offset) { + PROTECT( + torch::Tensor results__ = torch::diagflat(*tensor_ptr_from_ocaml(self), offset); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diagonal(gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2) { + PROTECT( + torch::Tensor results__ = torch::diagonal(*tensor_ptr_from_ocaml(self), offset, dim1, dim2); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diagonal_backward(gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t offset, int64_t dim1, int64_t dim2) { + PROTECT( + torch::Tensor results__ = torch::diagonal_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(input_sizes_data, input_sizes_len), offset, dim1, dim2); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diagonal_backward_out(gc_tensor out, gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t offset, int64_t dim1, int64_t dim2) { + PROTECT( + torch::Tensor results__ = torch::diagonal_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(input_sizes_data, input_sizes_len), offset, dim1, dim2); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diagonal_copy(gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2) { + PROTECT( + torch::Tensor results__ = torch::diagonal_copy(*tensor_ptr_from_ocaml(self), offset, dim1, dim2); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diagonal_copy_out(gc_tensor out, gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2) { + PROTECT( + torch::Tensor results__ = torch::diagonal_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), offset, dim1, dim2); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diagonal_scatter(gc_tensor self, gc_tensor src, int64_t offset, int64_t dim1, int64_t dim2) { + PROTECT( + torch::Tensor results__ = torch::diagonal_scatter(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), offset, dim1, dim2); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diagonal_scatter_out(gc_tensor out, gc_tensor self, gc_tensor src, int64_t offset, int64_t dim1, int64_t dim2) { + PROTECT( + torch::Tensor results__ = torch::diagonal_scatter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), offset, dim1, dim2); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diff(gc_tensor self, int64_t n, int64_t 
dim, gc_tensor prepend, gc_tensor append) { + PROTECT( + torch::Tensor results__ = torch::diff(*tensor_ptr_from_ocaml(self), n, dim, (prepend ? tensor_from_ocaml(prepend) : torch::Tensor()), (append ? tensor_from_ocaml(append) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_diff_out(gc_tensor out, gc_tensor self, int64_t n, int64_t dim, gc_tensor prepend, gc_tensor append) { + PROTECT( + torch::Tensor results__ = torch::diff_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), n, dim, (prepend ? tensor_from_ocaml(prepend) : torch::Tensor()), (append ? tensor_from_ocaml(append) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_digamma(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::digamma(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_digamma_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->digamma_(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_digamma_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::digamma_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_dist(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::dist(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_dist_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::dist_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::div(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->div_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::div_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_out_mode(gc_tensor out, gc_tensor self, gc_tensor other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = torch::div_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::div(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->div_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_scalar_mode(gc_tensor self, scalar other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = torch::div(*tensor_ptr_from_ocaml(self), *other, std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_scalar_mode_(gc_tensor self, scalar other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->div_(*other, 
std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_scalar_mode_out(gc_tensor out, gc_tensor self, scalar other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = torch::div_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other, std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::div_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_tensor_mode(gc_tensor self, gc_tensor other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = torch::div(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_div_tensor_mode_(gc_tensor self, gc_tensor other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->div_(*tensor_ptr_from_ocaml(other), std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::divide(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->divide_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::divide_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide_out_mode(gc_tensor out, gc_tensor self, gc_tensor other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = torch::divide_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::divide(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->divide_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide_scalar_mode(gc_tensor self, scalar other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = torch::divide(*tensor_ptr_from_ocaml(self), *other, std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide_scalar_mode_(gc_tensor self, scalar other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->divide_(*other, std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide_tensor_mode(gc_tensor self, gc_tensor other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = torch::divide(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_divide_tensor_mode_(gc_tensor self, gc_tensor other, char * rounding_mode) { + PROTECT( + torch::Tensor results__ = 
tensor_ptr_from_ocaml(self)->divide_(*tensor_ptr_from_ocaml(other), std::string(rounding_mode)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_dot(gc_tensor self, gc_tensor tensor) { + PROTECT( + torch::Tensor results__ = torch::dot(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(tensor)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_dot_out(gc_tensor out, gc_tensor self, gc_tensor tensor) { + PROTECT( + torch::Tensor results__ = torch::dot_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(tensor)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_dropout(gc_tensor input, double p, int train) { + PROTECT( + torch::Tensor results__ = torch::dropout(*tensor_ptr_from_ocaml(input), p, (bool)train); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_dropout_(gc_tensor self, double p, int train) { + PROTECT( + torch::Tensor results__ = torch::dropout_(*tensor_ptr_from_ocaml(self), p, (bool)train); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_dsplit(gc_tensor self, int64_t sections) { + PROTECT( + auto results__ = torch::dsplit(*tensor_ptr_from_ocaml(self), sections); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor *atg_dsplit_array(gc_tensor self, int64_t *indices_data, int indices_len) { + PROTECT( + auto results__ = torch::dsplit(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(indices_data, indices_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_dstack(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::dstack(of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_dstack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::dstack_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_einsum(char * equation, gc_tensor *tensors_data, int tensors_len, int64_t *path_data, int path_len) { + PROTECT( + torch::Tensor results__ = torch::einsum(std::string(equation), of_carray_tensor(tensors_data, tensors_len), path_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(path_data, path_len))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_elu(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::elu(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_elu_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::elu_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_elu_backward(gc_tensor grad_output, scalar alpha, scalar scale, scalar input_scale, int is_result, gc_tensor self_or_result) { + PROTECT( + torch::Tensor results__ = torch::elu_backward(*tensor_ptr_from_ocaml(grad_output), *alpha, *scale, *input_scale, (bool)is_result, *tensor_ptr_from_ocaml(self_or_result)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_elu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, scalar alpha, scalar scale, scalar input_scale, int is_result, gc_tensor self_or_result) { + PROTECT( + torch::Tensor results__ = torch::elu_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *alpha, *scale, *input_scale, (bool)is_result, *tensor_ptr_from_ocaml(self_or_result)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_elu_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::elu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_embedding(gc_tensor weight, gc_tensor indices, int64_t padding_idx, int scale_grad_by_freq, int sparse) { + PROTECT( + torch::Tensor results__ = torch::embedding(*tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), padding_idx, (bool)scale_grad_by_freq, (bool)sparse); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_embedding_backward(gc_tensor grad, gc_tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq, int sparse) { + PROTECT( + torch::Tensor results__ = torch::embedding_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(indices), num_weights, padding_idx, (bool)scale_grad_by_freq, (bool)sparse); + return tensor_to_ocaml(results__); + ) +} + +void atg_embedding_bag(raw_tensor *out__, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset) { + PROTECT( + auto results__ = torch::embedding_bag(*tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? tensor_from_ocaml(per_sample_weights) : torch::Tensor()), (bool)include_last_offset); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg_embedding_bag_padding_idx(raw_tensor *out__, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx_v, int padding_idx_null) { + PROTECT( + auto results__ = torch::embedding_bag(*tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(offsets), (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? tensor_from_ocaml(per_sample_weights) : torch::Tensor()), (bool)include_last_offset, padding_idx_null ? 
c10::nullopt : c10::optional(padding_idx_v)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +raw_tensor atg_embedding_dense_backward(gc_tensor grad_output, gc_tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq) { + PROTECT( + torch::Tensor results__ = torch::embedding_dense_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(indices), num_weights, padding_idx, (bool)scale_grad_by_freq); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_embedding_dense_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq) { + PROTECT( + torch::Tensor results__ = torch::embedding_dense_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(indices), num_weights, padding_idx, (bool)scale_grad_by_freq); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_embedding_out(gc_tensor out, gc_tensor weight, gc_tensor indices, int64_t padding_idx, int scale_grad_by_freq, int sparse) { + PROTECT( + torch::Tensor results__ = torch::embedding_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(indices), padding_idx, (bool)scale_grad_by_freq, (bool)sparse); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_embedding_renorm(gc_tensor self, gc_tensor indices, double max_norm, double norm_type) { + PROTECT( + torch::Tensor results__ = torch::embedding_renorm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices), max_norm, norm_type); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_embedding_renorm_(gc_tensor self, gc_tensor indices, double max_norm, double norm_type) { + PROTECT( + torch::Tensor results__ = torch::embedding_renorm_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices), max_norm, norm_type); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_embedding_renorm_out(gc_tensor out, gc_tensor self, gc_tensor indices, double max_norm, double norm_type) { + PROTECT( + torch::Tensor results__ = torch::embedding_renorm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices), max_norm, norm_type); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_embedding_sparse_backward(gc_tensor grad, gc_tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq) { + PROTECT( + torch::Tensor results__ = torch::embedding_sparse_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(indices), num_weights, padding_idx, (bool)scale_grad_by_freq); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_empty(int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::empty(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_empty_like(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::empty_like(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_empty_like_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::empty_like_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + 
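+// Optional arguments in these stubs follow three conventions, all visible
+// above:
+//  - an optional tensor is a nullable gc_tensor, forwarded as
+//    `(t ? tensor_from_ocaml(t) : torch::Tensor())` so that a null OCaml-side
+//    value becomes an undefined torch::Tensor;
+//  - an optional scalar is split into a `<name>_v` value plus a `<name>_null`
+//    flag, mapped to `null ? c10::nullopt : c10::optional(v)` (see
+//    atg_embedding_bag_padding_idx);
+//  - an optional int array is a data pointer plus a length, where a nullptr
+//    data pointer stands for c10::nullopt (see the path argument of
+//    atg_einsum).
+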
+raw_tensor atg_empty_out(gc_tensor out, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::empty_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_empty_permuted(int64_t *size_data, int size_len, int64_t *physical_layout_data, int physical_layout_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::empty_permuted(torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(physical_layout_data, physical_layout_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_empty_permuted_out(gc_tensor out, int64_t *size_data, int size_len, int64_t *physical_layout_data, int physical_layout_len) { + PROTECT( + torch::Tensor results__ = torch::empty_permuted_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(physical_layout_data, physical_layout_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_empty_quantized(int64_t *size_data, int size_len, gc_tensor qtensor, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::empty_quantized(torch::IntArrayRef(size_data, size_len), *tensor_ptr_from_ocaml(qtensor), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_empty_quantized_out(gc_tensor out, int64_t *size_data, int size_len, gc_tensor qtensor) { + PROTECT( + torch::Tensor results__ = torch::empty_quantized_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len), *tensor_ptr_from_ocaml(qtensor)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_empty_strided(int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::empty_strided(torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_empty_strided_out(gc_tensor out, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::empty_strided_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eq(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::eq(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eq_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->eq_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eq_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::eq_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eq_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::eq(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eq_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->eq_(*tensor_ptr_from_ocaml(other)); + 
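+    // Method dispatch (tensor_ptr_from_ocaml(self)->eq_(...)) appears to be
+    // generated when ATen exposes an op only as a Tensor method; ops that
+    // also have a namespace function use the torch:: form instead (compare
+    // torch::erf_ just below).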
return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eq_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::eq_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +int atg_equal(gc_tensor self, gc_tensor other) { + PROTECT( + return torch::equal(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + ) + return 0; +} + +raw_tensor atg_erf(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::erf(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_erf_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::erf_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_erf_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::erf_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_erfc(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::erfc(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_erfc_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::erfc_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_erfc_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::erfc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_erfinv(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::erfinv(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_erfinv_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->erfinv_(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_erfinv_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::erfinv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_exp(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::exp(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_exp2(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::exp2(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_exp2_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::exp2_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_exp2_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::exp2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_exp_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::exp_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_exp_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::exp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_expand(gc_tensor self, int64_t *size_data, int size_len, int implicit) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->expand(torch::IntArrayRef(size_data, size_len), (bool)implicit); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_expand_as(gc_tensor 
self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->expand_as(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_expand_copy(gc_tensor self, int64_t *size_data, int size_len, int implicit) { + PROTECT( + torch::Tensor results__ = torch::expand_copy(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), (bool)implicit); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_expand_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int implicit) { + PROTECT( + torch::Tensor results__ = torch::expand_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), (bool)implicit); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_expm1(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::expm1(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_expm1_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::expm1_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_expm1_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::expm1_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_exponential(gc_tensor self, double lambd) { + PROTECT( + torch::Tensor results__ = torch::exponential(*tensor_ptr_from_ocaml(self), lambd); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_exponential_(gc_tensor self, double lambd) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->exponential_(lambd); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_exponential_out(gc_tensor out, gc_tensor self, double lambd) { + PROTECT( + torch::Tensor results__ = torch::exponential_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), lambd); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eye(int64_t n, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::eye(n, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eye_m(int64_t n, int64_t m, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::eye(n, m, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eye_m_out(gc_tensor out, int64_t n, int64_t m) { + PROTECT( + torch::Tensor results__ = torch::eye_out(*tensor_ptr_from_ocaml(out), n, m); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_eye_out(gc_tensor out, int64_t n) { + PROTECT( + torch::Tensor results__ = torch::eye_out(*tensor_ptr_from_ocaml(out), n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fake_quantize_per_channel_affine(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max) { + PROTECT( + torch::Tensor results__ = torch::fake_quantize_per_channel_affine(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), axis, quant_min, quant_max); + return tensor_to_ocaml(results__); + ) +} + +void atg_fake_quantize_per_channel_affine_cachemask(raw_tensor *out__, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max) { + PROTECT( + auto results__ = 
torch::fake_quantize_per_channel_affine_cachemask(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), axis, quant_min, quant_max); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_fake_quantize_per_channel_affine_cachemask_backward(gc_tensor grad, gc_tensor mask) { + PROTECT( + torch::Tensor results__ = torch::fake_quantize_per_channel_affine_cachemask_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(mask)); + return tensor_to_ocaml(results__); + ) +} + +void atg_fake_quantize_per_channel_affine_cachemask_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max) { + PROTECT( + auto results__ = torch::fake_quantize_per_channel_affine_cachemask_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), axis, quant_min, quant_max); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_fake_quantize_per_tensor_affine(gc_tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max) { + PROTECT( + torch::Tensor results__ = torch::fake_quantize_per_tensor_affine(*tensor_ptr_from_ocaml(self), scale, zero_point, quant_min, quant_max); + return tensor_to_ocaml(results__); + ) +} + +void atg_fake_quantize_per_tensor_affine_cachemask(raw_tensor *out__, gc_tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max) { + PROTECT( + auto results__ = torch::fake_quantize_per_tensor_affine_cachemask(*tensor_ptr_from_ocaml(self), scale, zero_point, quant_min, quant_max); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_fake_quantize_per_tensor_affine_cachemask_backward(gc_tensor grad, gc_tensor mask) { + PROTECT( + torch::Tensor results__ = torch::fake_quantize_per_tensor_affine_cachemask_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(mask)); + return tensor_to_ocaml(results__); + ) +} + +void atg_fake_quantize_per_tensor_affine_cachemask_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max) { + PROTECT( + auto results__ = torch::fake_quantize_per_tensor_affine_cachemask_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), scale, zero_point, quant_min, quant_max); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_fake_quantize_per_tensor_affine_tensor_qparams(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t quant_min, int64_t quant_max) { + PROTECT( + torch::Tensor results__ = torch::fake_quantize_per_tensor_affine(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), quant_min, quant_max); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fbgemm_linear_fp16_weight(gc_tensor input, gc_tensor packed_weight, gc_tensor bias) { + PROTECT( + torch::Tensor results__ = torch::fbgemm_linear_fp16_weight(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(packed_weight), *tensor_ptr_from_ocaml(bias)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor 
atg_fbgemm_linear_fp16_weight_fp32_activation(gc_tensor input, gc_tensor packed_weight, gc_tensor bias) { + PROTECT( + torch::Tensor results__ = torch::fbgemm_linear_fp16_weight_fp32_activation(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(packed_weight), *tensor_ptr_from_ocaml(bias)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fbgemm_linear_int8_weight(gc_tensor input, gc_tensor weight, gc_tensor packed, gc_tensor col_offsets, scalar weight_scale, scalar weight_zero_point, gc_tensor bias) { + PROTECT( + torch::Tensor results__ = torch::fbgemm_linear_int8_weight(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(packed), *tensor_ptr_from_ocaml(col_offsets), *weight_scale, *weight_zero_point, *tensor_ptr_from_ocaml(bias)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fbgemm_linear_int8_weight_fp32_activation(gc_tensor input, gc_tensor weight, gc_tensor packed, gc_tensor col_offsets, scalar weight_scale, scalar weight_zero_point, gc_tensor bias) { + PROTECT( + torch::Tensor results__ = torch::fbgemm_linear_int8_weight_fp32_activation(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(packed), *tensor_ptr_from_ocaml(col_offsets), *weight_scale, *weight_zero_point, *tensor_ptr_from_ocaml(bias)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fbgemm_pack_gemm_matrix_fp16(gc_tensor input) { + PROTECT( + torch::Tensor results__ = torch::fbgemm_pack_gemm_matrix_fp16(*tensor_ptr_from_ocaml(input)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fbgemm_pack_quantized_matrix(gc_tensor input) { + PROTECT( + torch::Tensor results__ = torch::fbgemm_pack_quantized_matrix(*tensor_ptr_from_ocaml(input)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fbgemm_pack_quantized_matrix_kn(gc_tensor input, int64_t K, int64_t n) { + PROTECT( + torch::Tensor results__ = torch::fbgemm_pack_quantized_matrix(*tensor_ptr_from_ocaml(input), K, n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_feature_alpha_dropout(gc_tensor input, double p, int train) { + PROTECT( + torch::Tensor results__ = torch::feature_alpha_dropout(*tensor_ptr_from_ocaml(input), p, (bool)train); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_feature_alpha_dropout_(gc_tensor self, double p, int train) { + PROTECT( + torch::Tensor results__ = torch::feature_alpha_dropout_(*tensor_ptr_from_ocaml(self), p, (bool)train); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_feature_dropout(gc_tensor input, double p, int train) { + PROTECT( + torch::Tensor results__ = torch::feature_dropout(*tensor_ptr_from_ocaml(input), p, (bool)train); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_feature_dropout_(gc_tensor self, double p, int train) { + PROTECT( + torch::Tensor results__ = torch::feature_dropout_(*tensor_ptr_from_ocaml(self), p, (bool)train); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_fft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_fft(*tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_fft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_fft2(*tensor_ptr_from_ocaml(self), s_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_fft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_fft2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_fft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_fft_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_fftfreq(int64_t n, double d, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::fft_fftfreq(n, d, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_fftfreq_out(gc_tensor out, int64_t n, double d) { + PROTECT( + torch::Tensor results__ = torch::fft_fftfreq_out(*tensor_ptr_from_ocaml(out), n, d); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_fftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_fftn(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_fftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_fftn_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_fftshift(gc_tensor self, int64_t *dim_data, int dim_len) { + PROTECT( + torch::Tensor results__ = torch::fft_fftshift(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_hfft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_hfft(*tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_hfft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_hfft2(*tensor_ptr_from_ocaml(self), s_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_hfft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_hfft2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_hfft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_hfft_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_hfftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_hfftn(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_hfftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_hfftn_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ifft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ifft(*tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ifft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ifft2(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ifft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ifft2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ifft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ifft_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), n_null ? 
c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ifftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ifftn(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ifftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ifftn_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ifftshift(gc_tensor self, int64_t *dim_data, int dim_len) { + PROTECT( + torch::Tensor results__ = torch::fft_ifftshift(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ihfft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ihfft(*tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ihfft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ihfft2(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ihfft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ihfft2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ihfft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ihfft_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ihfftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ihfftn(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_ihfftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_ihfftn_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_irfft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_irfft(*tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_irfft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_irfft2(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_irfft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_irfft2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_irfft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_irfft_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_irfftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_irfftn(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_irfftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_irfftn_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_rfft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_rfft(*tensor_ptr_from_ocaml(self), n_null ? 
c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_rfft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_rfft2(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_rfft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_rfft2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_rfft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_rfft_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_rfftfreq(int64_t n, double d, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::fft_rfftfreq(n, d, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_rfftfreq_out(gc_tensor out, int64_t n, double d) { + PROTECT( + torch::Tensor results__ = torch::fft_rfftfreq_out(*tensor_ptr_from_ocaml(out), n, d); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_rfftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_rfftn(*tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fft_rfftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) { + PROTECT( + torch::Tensor results__ = torch::fft_rfftn_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fill(gc_tensor self, scalar value) { + PROTECT( + torch::Tensor results__ = torch::fill(*tensor_ptr_from_ocaml(self), *value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fill_(gc_tensor self, scalar value) { + PROTECT( + torch::Tensor results__ = torch::fill_(*tensor_ptr_from_ocaml(self), *value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fill_diagonal_(gc_tensor self, scalar fill_value, int wrap) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->fill_diagonal_(*fill_value, (bool)wrap); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fill_scalar_out(gc_tensor out, gc_tensor self, scalar value) { + PROTECT( + torch::Tensor results__ = torch::fill_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fill_tensor(gc_tensor self, gc_tensor value) { + PROTECT( + torch::Tensor results__ = torch::fill(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(value)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fill_tensor_(gc_tensor self, gc_tensor value) { + PROTECT( + torch::Tensor results__ = torch::fill_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(value)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fill_tensor_out(gc_tensor out, gc_tensor self, gc_tensor value) { + PROTECT( + torch::Tensor results__ = torch::fill_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(value)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fix(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::fix(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fix_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::fix_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fix_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::fix_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_flatten(gc_tensor self, int64_t start_dim, int64_t end_dim) { + PROTECT( + torch::Tensor results__ = torch::flatten(*tensor_ptr_from_ocaml(self), start_dim, end_dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_flatten_dense_tensors(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::flatten_dense_tensors(of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_flip(gc_tensor self, int64_t *dims_data, int dims_len) { + PROTECT( + torch::Tensor results__ = torch::flip(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dims_data, dims_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_flip_out(gc_tensor out, gc_tensor self, int64_t *dims_data, int dims_len) { + PROTECT( + torch::Tensor results__ = torch::flip_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dims_data, dims_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fliplr(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::fliplr(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_flipud(gc_tensor self) { + PROTECT( + torch::Tensor results__ = 
torch::flipud(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_float_power(gc_tensor self, gc_tensor exponent) { + PROTECT( + torch::Tensor results__ = torch::float_power(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(exponent)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_float_power_(gc_tensor self, scalar exponent) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->float_power_(*exponent); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_float_power_scalar(scalar self, gc_tensor exponent) { + PROTECT( + torch::Tensor results__ = torch::float_power(*self, *tensor_ptr_from_ocaml(exponent)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_float_power_scalar_out(gc_tensor out, scalar self, gc_tensor exponent) { + PROTECT( + torch::Tensor results__ = torch::float_power_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(exponent)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_float_power_tensor_(gc_tensor self, gc_tensor exponent) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->float_power_(*tensor_ptr_from_ocaml(exponent)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_float_power_tensor_scalar(gc_tensor self, scalar exponent) { + PROTECT( + torch::Tensor results__ = torch::float_power(*tensor_ptr_from_ocaml(self), *exponent); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_float_power_tensor_scalar_out(gc_tensor out, gc_tensor self, scalar exponent) { + PROTECT( + torch::Tensor results__ = torch::float_power_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *exponent); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_float_power_tensor_tensor_out(gc_tensor out, gc_tensor self, gc_tensor exponent) { + PROTECT( + torch::Tensor results__ = torch::float_power_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(exponent)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_floor(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::floor(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_floor_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::floor_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_floor_divide(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::floor_divide(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_floor_divide_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->floor_divide_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_floor_divide_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::floor_divide_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_floor_divide_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::floor_divide(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_floor_divide_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->floor_divide_(*other); + return tensor_to_ocaml(results__); + ) +} + 
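+// --- Editor annotation; these comments are not generator output. ---
+// The stubs in this file follow a handful of mechanical conventions:
+// * Each body is wrapped in PROTECT(...), a macro from the hand-written
+//   C/C++ API that presumably traps C++ exceptions at the FFI boundary
+//   instead of letting them unwind into OCaml.
+// * Tensor inputs arrive as gc_tensor and are dereferenced with
+//   tensor_ptr_from_ocaml; results are handed back as raw_tensor via
+//   tensor_to_ocaml.
+// * Optional scalars are a value/flag pair (e.g. n_v, with n_null set
+//   when absent); optional int arrays pass a nullptr *_data pointer;
+//   optional tensors pass a null gc_tensor, mapped to an empty
+//   torch::Tensor().
+// * Int arrays travel as (*_data, *_len) pairs rebuilt with
+//   torch::IntArrayRef; C ints are cast to (bool); char* strings are
+//   wrapped in std::string.
+// * Functions with several results fill a caller-provided raw_tensor
+//   *out__ array; list-valued results (e.g. atg_hsplit below) return a
+//   malloc'd, nullptr-terminated raw_tensor array, presumably freed by
+//   the caller.
+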
+raw_tensor atg_floor_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::floor_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmax(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::fmax(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmax_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::fmax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmin(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::fmin(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmin_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::fmin_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmod(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::fmod(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmod_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->fmod_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmod_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::fmod_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmod_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::fmod(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmod_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->fmod_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fmod_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::fmod_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_frac(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::frac(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_frac_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::frac_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_frac_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::frac_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +void atg_fractional_max_pool2d(raw_tensor *out__, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor random_samples) { + PROTECT( + auto results__ = torch::fractional_max_pool2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *tensor_ptr_from_ocaml(random_samples)); + out__[0] = 
tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_fractional_max_pool2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor indices) { + PROTECT( + torch::Tensor results__ = torch::fractional_max_pool2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *tensor_ptr_from_ocaml(indices)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fractional_max_pool2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor indices) { + PROTECT( + torch::Tensor results__ = torch::fractional_max_pool2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *tensor_ptr_from_ocaml(indices)); + return tensor_to_ocaml(results__); + ) +} + +void atg_fractional_max_pool2d_output(raw_tensor *out__, gc_tensor output, gc_tensor indices, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor random_samples) { + PROTECT( + auto results__ = torch::fractional_max_pool2d_out(*tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *tensor_ptr_from_ocaml(random_samples)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_fractional_max_pool3d(raw_tensor *out__, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor random_samples) { + PROTECT( + auto results__ = torch::fractional_max_pool3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *tensor_ptr_from_ocaml(random_samples)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_fractional_max_pool3d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor indices) { + PROTECT( + torch::Tensor results__ = torch::fractional_max_pool3d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *tensor_ptr_from_ocaml(indices)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fractional_max_pool3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor indices) { + PROTECT( + torch::Tensor results__ = torch::fractional_max_pool3d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), 
*tensor_ptr_from_ocaml(indices)); + return tensor_to_ocaml(results__); + ) +} + +void atg_fractional_max_pool3d_output(raw_tensor *out__, gc_tensor output, gc_tensor indices, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor random_samples) { + PROTECT( + auto results__ = torch::fractional_max_pool3d_out(*tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *tensor_ptr_from_ocaml(random_samples)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_frexp(raw_tensor *out__, gc_tensor self) { + PROTECT( + auto results__ = torch::frexp(*tensor_ptr_from_ocaml(self)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_frexp_tensor_out(raw_tensor *out__, gc_tensor mantissa, gc_tensor exponent, gc_tensor self) { + PROTECT( + auto results__ = torch::frexp_out(*tensor_ptr_from_ocaml(mantissa), *tensor_ptr_from_ocaml(exponent), *tensor_ptr_from_ocaml(self)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_frobenius_norm(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::frobenius_norm(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_frobenius_norm_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::frobenius_norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_from_file(char * filename, int shared, int64_t size_v, int size_null, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::from_file(std::string(filename), (bool)shared, size_null ? c10::nullopt : c10::optional(size_v), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_from_file_out(gc_tensor out, char * filename, int shared, int64_t size_v, int size_null) { + PROTECT( + torch::Tensor results__ = torch::from_file_out(*tensor_ptr_from_ocaml(out), std::string(filename), (bool)shared, size_null ? 
c10::nullopt : c10::optional(size_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_full(int64_t *size_data, int size_len, scalar fill_value, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::full(torch::IntArrayRef(size_data, size_len), *fill_value, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_full_like(gc_tensor self, scalar fill_value) { + PROTECT( + torch::Tensor results__ = torch::full_like(*tensor_ptr_from_ocaml(self), *fill_value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_full_like_out(gc_tensor out, gc_tensor self, scalar fill_value) { + PROTECT( + torch::Tensor results__ = torch::full_like_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *fill_value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_full_out(gc_tensor out, int64_t *size_data, int size_len, scalar fill_value) { + PROTECT( + torch::Tensor results__ = torch::full_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len), *fill_value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_fused_moving_avg_obs_fake_quant(gc_tensor self, gc_tensor observer_on, gc_tensor fake_quant_on, gc_tensor running_min, gc_tensor running_max, gc_tensor scale, gc_tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant) { + PROTECT( + torch::Tensor results__ = torch::fused_moving_avg_obs_fake_quant(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(observer_on), *tensor_ptr_from_ocaml(fake_quant_on), *tensor_ptr_from_ocaml(running_min), *tensor_ptr_from_ocaml(running_max), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), averaging_const, quant_min, quant_max, ch_axis, (bool)per_row_fake_quant, (bool)symmetric_quant); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gather(gc_tensor self, int64_t dim, gc_tensor index, int sparse_grad) { + PROTECT( + torch::Tensor results__ = torch::gather(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), (bool)sparse_grad); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gather_backward(gc_tensor grad, gc_tensor self, int64_t dim, gc_tensor index, int sparse_grad) { + PROTECT( + torch::Tensor results__ = torch::gather_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), (bool)sparse_grad); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gather_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, int sparse_grad) { + PROTECT( + torch::Tensor results__ = torch::gather_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), (bool)sparse_grad); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gcd(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::gcd(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gcd_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::gcd_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gcd_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::gcd_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); 
+ return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ge(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::ge(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ge_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->ge_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ge_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::ge_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ge_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::ge(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ge_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->ge_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ge_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::ge_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gelu(gc_tensor self, char * approximate) { + PROTECT( + torch::Tensor results__ = torch::gelu(*tensor_ptr_from_ocaml(self), std::string(approximate)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gelu_(gc_tensor self, char * approximate) { + PROTECT( + torch::Tensor results__ = torch::gelu_(*tensor_ptr_from_ocaml(self), std::string(approximate)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gelu_backward(gc_tensor grad_output, gc_tensor self, char * approximate) { + PROTECT( + torch::Tensor results__ = torch::gelu_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), std::string(approximate)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gelu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, char * approximate) { + PROTECT( + torch::Tensor results__ = torch::gelu_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), std::string(approximate)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gelu_out(gc_tensor out, gc_tensor self, char * approximate) { + PROTECT( + torch::Tensor results__ = torch::gelu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), std::string(approximate)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_geometric(gc_tensor self, double p) { + PROTECT( + torch::Tensor results__ = torch::geometric(*tensor_ptr_from_ocaml(self), p); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_geometric_(gc_tensor self, double p) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->geometric_(p); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_geometric_out(gc_tensor out, gc_tensor self, double p) { + PROTECT( + torch::Tensor results__ = torch::geometric_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), p); + return tensor_to_ocaml(results__); + ) +} + +void atg_geqrf(raw_tensor *out__, gc_tensor self) { + PROTECT( + auto results__ = torch::geqrf(*tensor_ptr_from_ocaml(self)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + 
) +} + +void atg_geqrf_a(raw_tensor *out__, gc_tensor a, gc_tensor tau, gc_tensor self) { + PROTECT( + auto results__ = torch::geqrf_out(*tensor_ptr_from_ocaml(a), *tensor_ptr_from_ocaml(tau), *tensor_ptr_from_ocaml(self)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_ger(gc_tensor self, gc_tensor vec2) { + PROTECT( + torch::Tensor results__ = torch::ger(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(vec2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ger_out(gc_tensor out, gc_tensor self, gc_tensor vec2) { + PROTECT( + torch::Tensor results__ = torch::ger_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(vec2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_glu(gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::glu(*tensor_ptr_from_ocaml(self), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_glu_backward(gc_tensor grad_output, gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::glu_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_glu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::glu_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_glu_backward_jvp(gc_tensor grad_x, gc_tensor grad_glu, gc_tensor x, gc_tensor dgrad_glu, gc_tensor dx, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::glu_backward_jvp(*tensor_ptr_from_ocaml(grad_x), *tensor_ptr_from_ocaml(grad_glu), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(dgrad_glu), *tensor_ptr_from_ocaml(dx), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_glu_backward_jvp_out(gc_tensor out, gc_tensor grad_x, gc_tensor grad_glu, gc_tensor x, gc_tensor dgrad_glu, gc_tensor dx, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::glu_backward_jvp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_x), *tensor_ptr_from_ocaml(grad_glu), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(dgrad_glu), *tensor_ptr_from_ocaml(dx), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_glu_jvp(gc_tensor glu, gc_tensor x, gc_tensor dx, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::glu_jvp(*tensor_ptr_from_ocaml(glu), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(dx), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_glu_jvp_out(gc_tensor out, gc_tensor glu, gc_tensor x, gc_tensor dx, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::glu_jvp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(glu), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(dx), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_glu_out(gc_tensor out, gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::glu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_grad(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->grad(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = 
torch::greater(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->greater_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_equal(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::greater_equal(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_equal_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->greater_equal_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_equal_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::greater_equal_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_equal_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::greater_equal(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_equal_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->greater_equal_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_equal_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::greater_equal_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::greater_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::greater(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->greater_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_greater_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::greater_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_grid_sampler(gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) { + PROTECT( + torch::Tensor results__ = torch::grid_sampler(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grid), interpolation_mode, padding_mode, (bool)align_corners); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_grid_sampler_2d(gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) { + PROTECT( + torch::Tensor results__ = torch::grid_sampler_2d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grid), interpolation_mode, padding_mode, (bool)align_corners); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_grid_sampler_2d_out(gc_tensor out, gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t 
padding_mode, int align_corners) { + PROTECT( + torch::Tensor results__ = torch::grid_sampler_2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grid), interpolation_mode, padding_mode, (bool)align_corners); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_grid_sampler_3d(gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) { + PROTECT( + torch::Tensor results__ = torch::grid_sampler_3d(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grid), interpolation_mode, padding_mode, (bool)align_corners); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_grid_sampler_3d_out(gc_tensor out, gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) { + PROTECT( + torch::Tensor results__ = torch::grid_sampler_3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grid), interpolation_mode, padding_mode, (bool)align_corners); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_group_norm(gc_tensor input, int64_t num_groups, gc_tensor weight, gc_tensor bias, double eps, int cudnn_enabled) { + PROTECT( + torch::Tensor results__ = torch::group_norm(*tensor_ptr_from_ocaml(input), num_groups, (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), eps, (bool)cudnn_enabled); + return tensor_to_ocaml(results__); + ) +} + +void atg_gru(raw_tensor *out__, gc_tensor input, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) { + PROTECT( + auto results__ = torch::gru(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(hx), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_gru_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh) { + PROTECT( + torch::Tensor results__ = torch::gru_cell(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(hx), *tensor_ptr_from_ocaml(w_ih), *tensor_ptr_from_ocaml(w_hh), (b_ih ? tensor_from_ocaml(b_ih) : torch::Tensor()), (b_hh ? 
tensor_from_ocaml(b_hh) : torch::Tensor())); + return tensor_to_ocaml(results__); + ) +} + +void atg_gru_data(raw_tensor *out__, gc_tensor data, gc_tensor batch_sizes, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional) { + PROTECT( + auto results__ = torch::gru(*tensor_ptr_from_ocaml(data), *tensor_ptr_from_ocaml(batch_sizes), *tensor_ptr_from_ocaml(hx), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_gt(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::gt(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gt_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->gt_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gt_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::gt_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gt_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::gt(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gt_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->gt_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_gt_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::gt_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hamming_window(int64_t window_length, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::hamming_window(window_length, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hamming_window_out(gc_tensor out, int64_t window_length) { + PROTECT( + torch::Tensor results__ = torch::hamming_window_out(*tensor_ptr_from_ocaml(out), window_length); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hamming_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::hamming_window(window_length, (bool)periodic, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hamming_window_periodic_alpha(int64_t window_length, int periodic, double alpha, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::hamming_window(window_length, (bool)periodic, alpha, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hamming_window_periodic_alpha_beta(int64_t window_length, int periodic, double alpha, double beta, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::hamming_window(window_length, (bool)periodic, alpha, beta, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + 
return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hamming_window_periodic_alpha_beta_out(gc_tensor out, int64_t window_length, int periodic, double alpha, double beta) { + PROTECT( + torch::Tensor results__ = torch::hamming_window_out(*tensor_ptr_from_ocaml(out), window_length, (bool)periodic, alpha, beta); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hamming_window_periodic_alpha_out(gc_tensor out, int64_t window_length, int periodic, double alpha) { + PROTECT( + torch::Tensor results__ = torch::hamming_window_out(*tensor_ptr_from_ocaml(out), window_length, (bool)periodic, alpha); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hamming_window_periodic_out(gc_tensor out, int64_t window_length, int periodic) { + PROTECT( + torch::Tensor results__ = torch::hamming_window_out(*tensor_ptr_from_ocaml(out), window_length, (bool)periodic); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hann_window(int64_t window_length, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::hann_window(window_length, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hann_window_out(gc_tensor out, int64_t window_length) { + PROTECT( + torch::Tensor results__ = torch::hann_window_out(*tensor_ptr_from_ocaml(out), window_length); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hann_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::hann_window(window_length, (bool)periodic, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hann_window_periodic_out(gc_tensor out, int64_t window_length, int periodic) { + PROTECT( + torch::Tensor results__ = torch::hann_window_out(*tensor_ptr_from_ocaml(out), window_length, (bool)periodic); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardshrink(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardshrink(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardshrink_backward(gc_tensor grad_out, gc_tensor self, scalar lambd) { + PROTECT( + torch::Tensor results__ = torch::hardshrink_backward(*tensor_ptr_from_ocaml(grad_out), *tensor_ptr_from_ocaml(self), *lambd); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardshrink_backward_grad_input(gc_tensor grad_input, gc_tensor grad_out, gc_tensor self, scalar lambd) { + PROTECT( + torch::Tensor results__ = torch::hardshrink_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_out), *tensor_ptr_from_ocaml(self), *lambd); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardshrink_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardshrink_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardsigmoid(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardsigmoid(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardsigmoid_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardsigmoid_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardsigmoid_backward(gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = 
torch::hardsigmoid_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardsigmoid_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardsigmoid_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardsigmoid_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardsigmoid_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardswish(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardswish(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardswish_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardswish_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardswish_backward(gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardswish_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardswish_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardswish_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardswish_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardswish_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardtanh(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardtanh(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardtanh_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardtanh_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardtanh_backward(gc_tensor grad_output, gc_tensor self, scalar min_val, scalar max_val) { + PROTECT( + torch::Tensor results__ = torch::hardtanh_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *min_val, *max_val); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardtanh_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, scalar min_val, scalar max_val) { + PROTECT( + torch::Tensor results__ = torch::hardtanh_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *min_val, *max_val); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hardtanh_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::hardtanh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_heaviside(gc_tensor self, gc_tensor values) { + PROTECT( + torch::Tensor results__ = torch::heaviside(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(values)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_heaviside_(gc_tensor self, gc_tensor values) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->heaviside_(*tensor_ptr_from_ocaml(values)); + return tensor_to_ocaml(results__); 
+ ) +} + +raw_tensor atg_heaviside_out(gc_tensor out, gc_tensor self, gc_tensor values) { + PROTECT( + torch::Tensor results__ = torch::heaviside_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(values)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hinge_embedding_loss(gc_tensor self, gc_tensor target, double margin, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::hinge_embedding_loss(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), margin, reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_histc(gc_tensor self, int64_t bins) { + PROTECT( + torch::Tensor results__ = torch::histc(*tensor_ptr_from_ocaml(self), bins); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_histc_out(gc_tensor out, gc_tensor self, int64_t bins) { + PROTECT( + torch::Tensor results__ = torch::histc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), bins); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_hsplit(gc_tensor self, int64_t sections) { + PROTECT( + auto results__ = torch::hsplit(*tensor_ptr_from_ocaml(self), sections); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor *atg_hsplit_array(gc_tensor self, int64_t *indices_data, int indices_len) { + PROTECT( + auto results__ = torch::hsplit(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(indices_data, indices_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_hspmm(gc_tensor mat1, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::hspmm(*tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hspmm_out(gc_tensor out, gc_tensor mat1, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::hspmm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hstack(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::hstack(of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hstack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::hstack_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_huber_loss(gc_tensor self, gc_tensor target, int64_t reduction, double delta) { + PROTECT( + torch::Tensor results__ = torch::huber_loss(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, delta); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_huber_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, double delta) { + PROTECT( + torch::Tensor results__ = torch::huber_loss_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, delta); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_huber_loss_backward_out(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t 
reduction, double delta) { + PROTECT( + torch::Tensor results__ = torch::huber_loss_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, delta); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_huber_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction, double delta) { + PROTECT( + torch::Tensor results__ = torch::huber_loss_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, delta); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hypot(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::hypot(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hypot_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->hypot_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_hypot_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::hypot_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_i0(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::i0(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_i0_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::i0_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_i0_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::i0_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_igamma(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::igamma(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_igamma_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->igamma_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_igamma_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::igamma_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_igammac(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::igammac(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_igammac_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->igammac_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_igammac_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::igammac_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_im2col(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = 
torch::im2col(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(dilation_data, dilation_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_im2col_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::im2col_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(dilation_data, dilation_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_imag(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::imag(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index(gc_tensor self, gc_tensor *indices_data, int indices_len) { + PROTECT( + torch::Tensor results__ = torch::index(*tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_add(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source) { + PROTECT( + torch::Tensor results__ = torch::index_add(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_add_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->index_add_(dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_add_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source) { + PROTECT( + torch::Tensor results__ = torch::index_add_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_copy(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source) { + PROTECT( + torch::Tensor results__ = torch::index_copy(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_copy_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->index_copy_(dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_copy_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source) { + PROTECT( + torch::Tensor results__ = torch::index_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_fill(gc_tensor self, int64_t dim, gc_tensor index, scalar value) { + PROTECT( + torch::Tensor results__ = torch::index_fill(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_fill_(gc_tensor self, int64_t dim, gc_tensor index, scalar value) { + PROTECT( + torch::Tensor results__ = 
tensor_ptr_from_ocaml(self)->index_fill_(dim, *tensor_ptr_from_ocaml(index), *value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_fill_int_scalar_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, scalar value) { + PROTECT( + torch::Tensor results__ = torch::index_fill_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_fill_int_tensor(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor value) { + PROTECT( + torch::Tensor results__ = torch::index_fill(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(value)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_fill_int_tensor_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor value) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->index_fill_(dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(value)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_fill_int_tensor_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor value) { + PROTECT( + torch::Tensor results__ = torch::index_fill_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(value)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_put(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate) { + PROTECT( + torch::Tensor results__ = torch::index_put(*tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len), *tensor_ptr_from_ocaml(values), (bool)accumulate); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_put_(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate) { + PROTECT( + torch::Tensor results__ = torch::index_put_(*tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len), *tensor_ptr_from_ocaml(values), (bool)accumulate); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_put_out(gc_tensor out, gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate) { + PROTECT( + torch::Tensor results__ = torch::index_put_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len), *tensor_ptr_from_ocaml(values), (bool)accumulate); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_reduce(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source, char * reduce, int include_self) { + PROTECT( + torch::Tensor results__ = torch::index_reduce(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source), std::string(reduce), (bool)include_self); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_reduce_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source, char * reduce, int include_self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->index_reduce_(dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source), std::string(reduce), (bool)include_self); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_index_reduce_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source, char * reduce, int include_self) { + PROTECT( + torch::Tensor results__ = torch::index_reduce_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, 
+raw_tensor atg_index_reduce_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source, char * reduce, int include_self) {
+  PROTECT(
+    torch::Tensor results__ = torch::index_reduce_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source), std::string(reduce), (bool)include_self);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_index_select(gc_tensor self, int64_t dim, gc_tensor index) {
+  PROTECT(
+    torch::Tensor results__ = torch::index_select(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_index_select_backward(gc_tensor grad, int64_t *self_sizes_data, int self_sizes_len, int64_t dim, gc_tensor index) {
+  PROTECT(
+    torch::Tensor results__ = torch::index_select_backward(*tensor_ptr_from_ocaml(grad), torch::IntArrayRef(self_sizes_data, self_sizes_len), dim, *tensor_ptr_from_ocaml(index));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_index_select_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index) {
+  PROTECT(
+    torch::Tensor results__ = torch::index_select_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_index_tensor_out(gc_tensor out, gc_tensor self, gc_tensor *indices_data, int indices_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::index_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), of_carray_tensor_opt(indices_data, indices_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_indices(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->indices();
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_indices_copy(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::indices_copy(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_indices_copy_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::indices_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_infinitely_differentiable_gelu_backward(gc_tensor grad, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::infinitely_differentiable_gelu_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_inner(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::inner(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_inner_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::inner_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
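+// Array-valued arguments cross the FFI as a (pointer, length) pair and are
+// rebuilt as torch::IntArrayRef(data, len) at the call site; lists of
+// possibly-null tensors go through of_carray_tensor_opt, which presumably
+// rebuilds the optional-tensor list that the indexing overloads expect.
+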
+raw_tensor atg_instance_norm(gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int use_input_stats, double momentum, double eps, int cudnn_enabled) {
+  PROTECT(
+    torch::Tensor results__ = torch::instance_norm(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), (bool)use_input_stats, momentum, eps, (bool)cudnn_enabled);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_int_repr(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::int_repr(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_int_repr_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::int_repr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_inverse(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::inverse(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_inverse_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::inverse_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+int atg_is_coalesced(gc_tensor self) {
+  PROTECT(
+    return tensor_ptr_from_ocaml(self)->is_coalesced();
+  )
+  return 0;
+}
+
+int atg_is_complex(gc_tensor self) {
+  PROTECT(
+    return torch::is_complex(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+int atg_is_conj(gc_tensor self) {
+  PROTECT(
+    return torch::is_conj(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+int atg_is_distributed(gc_tensor self) {
+  PROTECT(
+    return torch::is_distributed(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+int atg_is_floating_point(gc_tensor self) {
+  PROTECT(
+    return torch::is_floating_point(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+int atg_is_inference(gc_tensor self) {
+  PROTECT(
+    return torch::is_inference(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+int atg_is_leaf(gc_tensor self) {
+  PROTECT(
+    return tensor_ptr_from_ocaml(self)->is_leaf();
+  )
+  return 0;
+}
+
+int atg_is_neg(gc_tensor self) {
+  PROTECT(
+    return torch::is_neg(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+int atg_is_nonzero(gc_tensor self) {
+  PROTECT(
+    return torch::is_nonzero(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+int atg_is_pinned(gc_tensor self, int device) {
+  PROTECT(
+    return tensor_ptr_from_ocaml(self)->is_pinned(device_of_int(device));
+  )
+  return 0;
+}
+
+int atg_is_same_size(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    return torch::is_same_size(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+  )
+  return 0;
+}
+
+int atg_is_set_to(gc_tensor self, gc_tensor tensor) {
+  PROTECT(
+    return tensor_ptr_from_ocaml(self)->is_set_to(*tensor_ptr_from_ocaml(tensor));
+  )
+  return 0;
+}
+
+int atg_is_signed(gc_tensor self) {
+  PROTECT(
+    return torch::is_signed(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+int atg_is_vulkan_available() {
+  PROTECT(
+    return torch::is_vulkan_available();
+  )
+  return 0;
+}
+
+raw_tensor atg_isclose(gc_tensor self, gc_tensor other, double rtol, double atol, int equal_nan) {
+  PROTECT(
+    torch::Tensor results__ = torch::isclose(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), rtol, atol, (bool)equal_nan);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isfinite(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isfinite(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
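+// Predicates such as the atg_is_* family return a plain int instead of a
+// raw_tensor. The trailing `return 0;` after PROTECT keeps every control path
+// well-formed: it is presumably reached only when the macro's catch-clause
+// has recorded an exception instead of returning normally.
+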
+raw_tensor atg_isin(gc_tensor elements, gc_tensor test_elements, int assume_unique, int invert) {
+  PROTECT(
+    torch::Tensor results__ = torch::isin(*tensor_ptr_from_ocaml(elements), *tensor_ptr_from_ocaml(test_elements), (bool)assume_unique, (bool)invert);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isin_scalar_tensor(scalar element, gc_tensor test_elements, int assume_unique, int invert) {
+  PROTECT(
+    torch::Tensor results__ = torch::isin(*element, *tensor_ptr_from_ocaml(test_elements), (bool)assume_unique, (bool)invert);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isin_scalar_tensor_out(gc_tensor out, scalar element, gc_tensor test_elements, int assume_unique, int invert) {
+  PROTECT(
+    torch::Tensor results__ = torch::isin_out(*tensor_ptr_from_ocaml(out), *element, *tensor_ptr_from_ocaml(test_elements), (bool)assume_unique, (bool)invert);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isin_tensor_scalar(gc_tensor elements, scalar test_element, int assume_unique, int invert) {
+  PROTECT(
+    torch::Tensor results__ = torch::isin(*tensor_ptr_from_ocaml(elements), *test_element, (bool)assume_unique, (bool)invert);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isin_tensor_scalar_out(gc_tensor out, gc_tensor elements, scalar test_element, int assume_unique, int invert) {
+  PROTECT(
+    torch::Tensor results__ = torch::isin_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(elements), *test_element, (bool)assume_unique, (bool)invert);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isin_tensor_tensor_out(gc_tensor out, gc_tensor elements, gc_tensor test_elements, int assume_unique, int invert) {
+  PROTECT(
+    torch::Tensor results__ = torch::isin_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(elements), *tensor_ptr_from_ocaml(test_elements), (bool)assume_unique, (bool)invert);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isinf(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isinf(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isinf_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isinf_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isnan(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isnan(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isnan_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isnan_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isneginf(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isneginf(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isneginf_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isneginf_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isposinf(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isposinf(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isposinf_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isposinf_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_isreal(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::isreal(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_istft(gc_tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, gc_tensor window, int center, int normalized, int onesided, int64_t length_v, int length_null, int return_complex) {
+  PROTECT(
+    torch::Tensor results__ = torch::istft(*tensor_ptr_from_ocaml(self), n_fft, hop_length_null ? c10::nullopt : c10::optional(hop_length_v), win_length_null ? c10::nullopt : c10::optional(win_length_v), (window ? tensor_from_ocaml(window) : torch::Tensor()), (bool)center, (bool)normalized, (bool)onesided, length_null ? c10::nullopt : c10::optional(length_v), (bool)return_complex);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_kaiser_window(int64_t window_length, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::kaiser_window(window_length, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_kaiser_window_beta(int64_t window_length, int periodic, double beta, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::kaiser_window(window_length, (bool)periodic, beta, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_kaiser_window_beta_out(gc_tensor out, int64_t window_length, int periodic, double beta) {
+  PROTECT(
+    torch::Tensor results__ = torch::kaiser_window_out(*tensor_ptr_from_ocaml(out), window_length, (bool)periodic, beta);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_kaiser_window_out(gc_tensor out, int64_t window_length) {
+  PROTECT(
+    torch::Tensor results__ = torch::kaiser_window_out(*tensor_ptr_from_ocaml(out), window_length);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_kaiser_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::kaiser_window(window_length, (bool)periodic, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_kaiser_window_periodic_out(gc_tensor out, int64_t window_length, int periodic) {
+  PROTECT(
+    torch::Tensor results__ = torch::kaiser_window_out(*tensor_ptr_from_ocaml(out), window_length, (bool)periodic);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_kl_div(gc_tensor self, gc_tensor target, int64_t reduction, int log_target) {
+  PROTECT(
+    torch::Tensor results__ = torch::kl_div(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, (bool)log_target);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_kron(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::kron(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_kron_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::kron_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_kthvalue(raw_tensor *out__, gc_tensor self, int64_t k, int64_t dim, int keepdim) {
+  PROTECT(
+    auto results__ = torch::kthvalue(*tensor_ptr_from_ocaml(self), k, dim, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
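+// Nullable scalars are encoded as a value/flag pair, since an OCaml
+// `int option` cannot cross the C boundary directly: atg_istft above receives
+// hop_length_v together with hop_length_null and rebuilds the C++ optional as
+// `hop_length_null ? c10::nullopt : c10::optional(hop_length_v)`.
+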
+void atg_kthvalue_values(raw_tensor *out__, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t k, int64_t dim, int keepdim) {
+  PROTECT(
+    auto results__ = torch::kthvalue_out(*tensor_ptr_from_ocaml(values), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), k, dim, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_l1_loss(gc_tensor self, gc_tensor target, int64_t reduction) {
+  PROTECT(
+    torch::Tensor results__ = torch::l1_loss(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_layer_norm(gc_tensor input, int64_t *normalized_shape_data, int normalized_shape_len, gc_tensor weight, gc_tensor bias, double eps, int cudnn_enable) {
+  PROTECT(
+    torch::Tensor results__ = torch::layer_norm(*tensor_ptr_from_ocaml(input), torch::IntArrayRef(normalized_shape_data, normalized_shape_len), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), eps, (bool)cudnn_enable);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lcm(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::lcm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lcm_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::lcm_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lcm_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::lcm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_ldexp(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::ldexp(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_ldexp_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::ldexp_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_ldexp_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::ldexp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_le(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::le(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_le_(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->le_(*other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_le_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::le_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_le_tensor(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::le(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_le_tensor_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->le_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
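+// Optional tensor arguments use the null handle as None: atg_layer_norm above
+// passes (weight ? tensor_from_ocaml(weight) : torch::Tensor()), so an absent
+// weight becomes the default-constructed, undefined torch::Tensor that the
+// C++ API accepts as "no tensor".
+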
+raw_tensor atg_le_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::le_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_leaky_relu(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::leaky_relu(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_leaky_relu_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::leaky_relu_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_leaky_relu_backward(gc_tensor grad_output, gc_tensor self, scalar negative_slope, int self_is_result) {
+  PROTECT(
+    torch::Tensor results__ = torch::leaky_relu_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *negative_slope, (bool)self_is_result);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_leaky_relu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, scalar negative_slope, int self_is_result) {
+  PROTECT(
+    torch::Tensor results__ = torch::leaky_relu_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *negative_slope, (bool)self_is_result);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_leaky_relu_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::leaky_relu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lerp(gc_tensor self, gc_tensor end, scalar weight) {
+  PROTECT(
+    torch::Tensor results__ = torch::lerp(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(end), *weight);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lerp_(gc_tensor self, gc_tensor end, scalar weight) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->lerp_(*tensor_ptr_from_ocaml(end), *weight);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lerp_scalar_out(gc_tensor out, gc_tensor self, gc_tensor end, scalar weight) {
+  PROTECT(
+    torch::Tensor results__ = torch::lerp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(end), *weight);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lerp_tensor(gc_tensor self, gc_tensor end, gc_tensor weight) {
+  PROTECT(
+    torch::Tensor results__ = torch::lerp(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(end), *tensor_ptr_from_ocaml(weight));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lerp_tensor_(gc_tensor self, gc_tensor end, gc_tensor weight) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->lerp_(*tensor_ptr_from_ocaml(end), *tensor_ptr_from_ocaml(weight));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lerp_tensor_out(gc_tensor out, gc_tensor self, gc_tensor end, gc_tensor weight) {
+  PROTECT(
+    torch::Tensor results__ = torch::lerp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(end), *tensor_ptr_from_ocaml(weight));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::less(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->less_(*other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_equal(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::less_equal(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_equal_(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->less_equal_(*other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_equal_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::less_equal_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_equal_tensor(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::less_equal(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_equal_tensor_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->less_equal_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_equal_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::less_equal_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::less_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_tensor(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::less(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_tensor_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->less_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_less_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::less_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lgamma(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::lgamma(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lgamma_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->lgamma_();
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lgamma_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::lgamma_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lift(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::lift(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lift_fresh(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::lift_fresh(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lift_fresh_copy(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::lift_fresh_copy(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
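+// C++ overloads are flattened into distinct C symbols: the scalar overload of
+// torch::less_equal surfaces as atg_less_equal while the tensor overload
+// becomes atg_less_equal_tensor, and the _out and in-place forms are mangled
+// the same way. The suffixes presumably come from the overload names in the
+// declarations file that the binding generator reads.
+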
+raw_tensor atg_lift_fresh_copy_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::lift_fresh_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lift_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::lift_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_cholesky(gc_tensor self, int upper) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_cholesky(*tensor_ptr_from_ocaml(self), (bool)upper);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_linalg_cholesky_ex(raw_tensor *out__, gc_tensor self, int upper, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_cholesky_ex(*tensor_ptr_from_ocaml(self), (bool)upper, (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_cholesky_ex_l(raw_tensor *out__, gc_tensor L, gc_tensor info, gc_tensor self, int upper, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_cholesky_ex_out(*tensor_ptr_from_ocaml(L), *tensor_ptr_from_ocaml(info), *tensor_ptr_from_ocaml(self), (bool)upper, (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_linalg_cholesky_out(gc_tensor out, gc_tensor self, int upper) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_cholesky_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (bool)upper);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_cond(gc_tensor self, scalar p) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_cond(*tensor_ptr_from_ocaml(self), *p);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_cond_out(gc_tensor out, gc_tensor self, scalar p) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_cond_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *p);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_cond_p_str(gc_tensor self, char * p) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_cond(*tensor_ptr_from_ocaml(self), std::string(p));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_cond_p_str_out(gc_tensor out, gc_tensor self, char * p) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_cond_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), std::string(p));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_cross(gc_tensor self, gc_tensor other, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_cross(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_cross_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_cross_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_det(gc_tensor A) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_det(*tensor_ptr_from_ocaml(A));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_det_out(gc_tensor out, gc_tensor A) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_det_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(A));
+    return tensor_to_ocaml(results__);
+  )
+}
+
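+// String-typed parameters (p, UPLO, mode, driver, ...) arrive as char* and
+// are copied into a std::string at the call site, so the buffer passed from
+// OCaml only has to stay alive for the duration of the call.
+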
+raw_tensor atg_linalg_diagonal(gc_tensor A, int64_t offset, int64_t dim1, int64_t dim2) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_diagonal(*tensor_ptr_from_ocaml(A), offset, dim1, dim2);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_linalg_eig(raw_tensor *out__, gc_tensor self) {
+  PROTECT(
+    auto results__ = torch::linalg_eig(*tensor_ptr_from_ocaml(self));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_eig_out(raw_tensor *out__, gc_tensor eigenvalues, gc_tensor eigenvectors, gc_tensor self) {
+  PROTECT(
+    auto results__ = torch::linalg_eig_out(*tensor_ptr_from_ocaml(eigenvalues), *tensor_ptr_from_ocaml(eigenvectors), *tensor_ptr_from_ocaml(self));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_eigh(raw_tensor *out__, gc_tensor self, char * UPLO) {
+  PROTECT(
+    auto results__ = torch::linalg_eigh(*tensor_ptr_from_ocaml(self), std::string(UPLO));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_eigh_eigvals(raw_tensor *out__, gc_tensor eigvals, gc_tensor eigvecs, gc_tensor self, char * UPLO) {
+  PROTECT(
+    auto results__ = torch::linalg_eigh_out(*tensor_ptr_from_ocaml(eigvals), *tensor_ptr_from_ocaml(eigvecs), *tensor_ptr_from_ocaml(self), std::string(UPLO));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_linalg_eigvals(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_eigvals(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_eigvals_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_eigvals_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_eigvalsh(gc_tensor self, char * UPLO) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_eigvalsh(*tensor_ptr_from_ocaml(self), std::string(UPLO));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_eigvalsh_out(gc_tensor out, gc_tensor self, char * UPLO) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_eigvalsh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), std::string(UPLO));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_householder_product(gc_tensor input, gc_tensor tau) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_householder_product(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(tau));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_householder_product_out(gc_tensor out, gc_tensor input, gc_tensor tau) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_householder_product_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(tau));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_inv(gc_tensor A) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_inv(*tensor_ptr_from_ocaml(A));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_linalg_inv_ex(raw_tensor *out__, gc_tensor A, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_inv_ex(*tensor_ptr_from_ocaml(A), (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
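+// Functions whose C++ result is a tuple return void and instead fill the
+// caller-provided raw_tensor out__[] array, one slot per std::get<i> of the
+// tuple; the OCaml side presumably allocates the array, makes the call, and
+// then converts each slot into a GC tensor.
+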
+void atg_linalg_inv_ex_inverse(raw_tensor *out__, gc_tensor inverse, gc_tensor info, gc_tensor A, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_inv_ex_out(*tensor_ptr_from_ocaml(inverse), *tensor_ptr_from_ocaml(info), *tensor_ptr_from_ocaml(A), (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_linalg_inv_out(gc_tensor out, gc_tensor A) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_inv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(A));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_linalg_ldl_factor(raw_tensor *out__, gc_tensor self, int hermitian) {
+  PROTECT(
+    auto results__ = torch::linalg_ldl_factor(*tensor_ptr_from_ocaml(self), (bool)hermitian);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_ldl_factor_ex(raw_tensor *out__, gc_tensor self, int hermitian, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_ldl_factor_ex(*tensor_ptr_from_ocaml(self), (bool)hermitian, (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_linalg_ldl_factor_ex_out(raw_tensor *out__, gc_tensor LD, gc_tensor pivots, gc_tensor info, gc_tensor self, int hermitian, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_ldl_factor_ex_out(*tensor_ptr_from_ocaml(LD), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(info), *tensor_ptr_from_ocaml(self), (bool)hermitian, (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_linalg_ldl_factor_out(raw_tensor *out__, gc_tensor LD, gc_tensor pivots, gc_tensor self, int hermitian) {
+  PROTECT(
+    auto results__ = torch::linalg_ldl_factor_out(*tensor_ptr_from_ocaml(LD), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(self), (bool)hermitian);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_linalg_ldl_solve(gc_tensor LD, gc_tensor pivots, gc_tensor B, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_ldl_solve(*tensor_ptr_from_ocaml(LD), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(B), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_ldl_solve_out(gc_tensor out, gc_tensor LD, gc_tensor pivots, gc_tensor B, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_ldl_solve_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(LD), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(B), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_linalg_lstsq(raw_tensor *out__, gc_tensor self, gc_tensor b, double rcond_v, int rcond_null, char * driver) {
+  PROTECT(
+    auto results__ = torch::linalg_lstsq(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(b), rcond_null ? c10::nullopt : c10::optional(rcond_v), std::string(driver));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+    out__[3] = tensor_to_ocaml(std::get<3>(results__));
+  )
+}
+
+void atg_linalg_lstsq_out(raw_tensor *out__, gc_tensor solution, gc_tensor residuals, gc_tensor rank, gc_tensor singular_values, gc_tensor self, gc_tensor b, double rcond_v, int rcond_null, char * driver) {
+  PROTECT(
+    auto results__ = torch::linalg_lstsq_out(*tensor_ptr_from_ocaml(solution), *tensor_ptr_from_ocaml(residuals), *tensor_ptr_from_ocaml(rank), *tensor_ptr_from_ocaml(singular_values), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(b), rcond_null ? c10::nullopt : c10::optional(rcond_v), std::string(driver));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+    out__[3] = tensor_to_ocaml(std::get<3>(results__));
+  )
+}
+
+void atg_linalg_lu(raw_tensor *out__, gc_tensor A, int pivot) {
+  PROTECT(
+    auto results__ = torch::linalg_lu(*tensor_ptr_from_ocaml(A), (bool)pivot);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_linalg_lu_factor(raw_tensor *out__, gc_tensor A, int pivot) {
+  PROTECT(
+    auto results__ = torch::linalg_lu_factor(*tensor_ptr_from_ocaml(A), (bool)pivot);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_lu_factor_ex(raw_tensor *out__, gc_tensor A, int pivot, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_lu_factor_ex(*tensor_ptr_from_ocaml(A), (bool)pivot, (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_linalg_lu_factor_ex_out(raw_tensor *out__, gc_tensor LU, gc_tensor pivots, gc_tensor info, gc_tensor A, int pivot, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_lu_factor_ex_out(*tensor_ptr_from_ocaml(LU), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(info), *tensor_ptr_from_ocaml(A), (bool)pivot, (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_linalg_lu_factor_out(raw_tensor *out__, gc_tensor LU, gc_tensor pivots, gc_tensor A, int pivot) {
+  PROTECT(
+    auto results__ = torch::linalg_lu_factor_out(*tensor_ptr_from_ocaml(LU), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(A), (bool)pivot);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_lu_out(raw_tensor *out__, gc_tensor P, gc_tensor L, gc_tensor U, gc_tensor A, int pivot) {
+  PROTECT(
+    auto results__ = torch::linalg_lu_out(*tensor_ptr_from_ocaml(P), *tensor_ptr_from_ocaml(L), *tensor_ptr_from_ocaml(U), *tensor_ptr_from_ocaml(A), (bool)pivot);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+raw_tensor atg_linalg_lu_solve(gc_tensor LU, gc_tensor pivots, gc_tensor B, int left, int adjoint) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_lu_solve(*tensor_ptr_from_ocaml(LU), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(B), (bool)left, (bool)adjoint);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_lu_solve_out(gc_tensor out, gc_tensor LU, gc_tensor pivots, gc_tensor B, int left, int adjoint) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_lu_solve_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(LU), *tensor_ptr_from_ocaml(pivots), *tensor_ptr_from_ocaml(B), (bool)left, (bool)adjoint);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matmul(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matmul(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matmul_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matmul_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_exp(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_exp(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_exp_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_exp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_power(gc_tensor self, int64_t n) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_power(*tensor_ptr_from_ocaml(self), n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_power_out(gc_tensor out, gc_tensor self, int64_t n) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_power_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_rank(gc_tensor self, double tol, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_rank(*tensor_ptr_from_ocaml(self), tol, (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_rank_atol_rtol_float(gc_tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_rank(*tensor_ptr_from_ocaml(self), atol_null ? c10::nullopt : c10::optional(atol_v), rtol_null ? c10::nullopt : c10::optional(rtol_v), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_rank_atol_rtol_float_out(gc_tensor out, gc_tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_rank_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), atol_null ? c10::nullopt : c10::optional(atol_v), rtol_null ? c10::nullopt : c10::optional(rtol_v), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_rank_atol_rtol_tensor(gc_tensor input, gc_tensor atol, gc_tensor rtol, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_rank(*tensor_ptr_from_ocaml(input), (atol ? tensor_from_ocaml(atol) : torch::Tensor()), (rtol ? tensor_from_ocaml(rtol) : torch::Tensor()), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_rank_atol_rtol_tensor_out(gc_tensor out, gc_tensor input, gc_tensor atol, gc_tensor rtol, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_rank_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), (atol ? tensor_from_ocaml(atol) : torch::Tensor()), (rtol ? tensor_from_ocaml(rtol) : torch::Tensor()), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_rank_out(gc_tensor out, gc_tensor self, double tol, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_rank_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), tol, (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_rank_out_tol_tensor(gc_tensor out, gc_tensor input, gc_tensor tol, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_rank_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(tol), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_matrix_rank_tol_tensor(gc_tensor input, gc_tensor tol, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_matrix_rank(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(tol), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_multi_dot(gc_tensor *tensors_data, int tensors_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_multi_dot(of_carray_tensor(tensors_data, tensors_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_multi_dot_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_multi_dot_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_pinv(gc_tensor self, double rcond, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_pinv(*tensor_ptr_from_ocaml(self), rcond, (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_pinv_atol_rtol_float(gc_tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_pinv(*tensor_ptr_from_ocaml(self), atol_null ? c10::nullopt : c10::optional(atol_v), rtol_null ? c10::nullopt : c10::optional(rtol_v), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_pinv_atol_rtol_float_out(gc_tensor out, gc_tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_pinv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), atol_null ? c10::nullopt : c10::optional(atol_v), rtol_null ? c10::nullopt : c10::optional(rtol_v), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_pinv_atol_rtol_tensor(gc_tensor self, gc_tensor atol, gc_tensor rtol, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_pinv(*tensor_ptr_from_ocaml(self), (atol ? tensor_from_ocaml(atol) : torch::Tensor()), (rtol ? tensor_from_ocaml(rtol) : torch::Tensor()), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_pinv_atol_rtol_tensor_out(gc_tensor out, gc_tensor self, gc_tensor atol, gc_tensor rtol, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_pinv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (atol ? tensor_from_ocaml(atol) : torch::Tensor()), (rtol ? tensor_from_ocaml(rtol) : torch::Tensor()), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_pinv_out(gc_tensor out, gc_tensor self, double rcond, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_pinv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), rcond, (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_pinv_out_rcond_tensor(gc_tensor out, gc_tensor self, gc_tensor rcond, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_pinv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(rcond), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_pinv_rcond_tensor(gc_tensor self, gc_tensor rcond, int hermitian) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_pinv(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(rcond), (bool)hermitian);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_linalg_qr(raw_tensor *out__, gc_tensor A, char * mode) {
+  PROTECT(
+    auto results__ = torch::linalg_qr(*tensor_ptr_from_ocaml(A), std::string(mode));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_qr_out(raw_tensor *out__, gc_tensor Q, gc_tensor R, gc_tensor A, char * mode) {
+  PROTECT(
+    auto results__ = torch::linalg_qr_out(*tensor_ptr_from_ocaml(Q), *tensor_ptr_from_ocaml(R), *tensor_ptr_from_ocaml(A), std::string(mode));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_slogdet(raw_tensor *out__, gc_tensor A) {
+  PROTECT(
+    auto results__ = torch::linalg_slogdet(*tensor_ptr_from_ocaml(A));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_slogdet_out(raw_tensor *out__, gc_tensor sign, gc_tensor logabsdet, gc_tensor A) {
+  PROTECT(
+    auto results__ = torch::linalg_slogdet_out(*tensor_ptr_from_ocaml(sign), *tensor_ptr_from_ocaml(logabsdet), *tensor_ptr_from_ocaml(A));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_linalg_solve(gc_tensor A, gc_tensor B, int left) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_solve(*tensor_ptr_from_ocaml(A), *tensor_ptr_from_ocaml(B), (bool)left);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_linalg_solve_ex(raw_tensor *out__, gc_tensor A, gc_tensor B, int left, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_solve_ex(*tensor_ptr_from_ocaml(A), *tensor_ptr_from_ocaml(B), (bool)left, (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_linalg_solve_ex_out(raw_tensor *out__, gc_tensor result, gc_tensor info, gc_tensor A, gc_tensor B, int left, int check_errors) {
+  PROTECT(
+    auto results__ = torch::linalg_solve_ex_out(*tensor_ptr_from_ocaml(result), *tensor_ptr_from_ocaml(info), *tensor_ptr_from_ocaml(A), *tensor_ptr_from_ocaml(B), (bool)left, (bool)check_errors);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_linalg_solve_out(gc_tensor out, gc_tensor A, gc_tensor B, int left) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_solve_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(A), *tensor_ptr_from_ocaml(B), (bool)left);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_solve_triangular(gc_tensor self, gc_tensor B, int upper, int left, int unitriangular) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_solve_triangular(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(B), (bool)upper, (bool)left, (bool)unitriangular);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_solve_triangular_out(gc_tensor out, gc_tensor self, gc_tensor B, int upper, int left, int unitriangular) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_solve_triangular_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(B), (bool)upper, (bool)left, (bool)unitriangular);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_linalg_svd(raw_tensor *out__, gc_tensor A, int full_matrices, char * driver) {
+  PROTECT(
+    auto results__ = torch::linalg_svd(*tensor_ptr_from_ocaml(A), (bool)full_matrices, std::string(driver));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_linalg_svd_u(raw_tensor *out__, gc_tensor U, gc_tensor S, gc_tensor Vh, gc_tensor A, int full_matrices, char * driver) {
+  PROTECT(
+    auto results__ = torch::linalg_svd_out(*tensor_ptr_from_ocaml(U), *tensor_ptr_from_ocaml(S), *tensor_ptr_from_ocaml(Vh), *tensor_ptr_from_ocaml(A), (bool)full_matrices, std::string(driver));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+raw_tensor atg_linalg_svdvals(gc_tensor A, char * driver) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_svdvals(*tensor_ptr_from_ocaml(A), std::string(driver));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_svdvals_out(gc_tensor out, gc_tensor A, char * driver) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_svdvals_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(A), std::string(driver));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_tensorinv(gc_tensor self, int64_t ind) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_tensorinv(*tensor_ptr_from_ocaml(self), ind);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_tensorinv_out(gc_tensor out, gc_tensor self, int64_t ind) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_tensorinv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), ind);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_tensorsolve(gc_tensor self, gc_tensor other, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_tensorsolve(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), dims_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dims_data, dims_len)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_tensorsolve_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_tensorsolve_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), dims_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dims_data, dims_len)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_vander(gc_tensor x, int64_t n_v, int n_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_vander(*tensor_ptr_from_ocaml(x), n_null ? c10::nullopt : c10::optional(n_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_vecdot(gc_tensor x, gc_tensor y, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_vecdot(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(y), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linalg_vecdot_out(gc_tensor out, gc_tensor x, gc_tensor y, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::linalg_vecdot_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(y), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linear(gc_tensor input, gc_tensor weight, gc_tensor bias) {
+  PROTECT(
+    torch::Tensor results__ = torch::linear(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
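+// Nullable arrays reuse the data pointer itself as the None flag:
+// atg_linalg_tensorsolve above passes c10::nullopt when dims_data == nullptr
+// and wraps the IntArrayRef otherwise, so no separate *_null int is needed
+// the way it is for scalar options.
+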
+raw_tensor atg_linear_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias) {
+  PROTECT(
+    torch::Tensor results__ = torch::linear_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linspace(scalar start, scalar end, int64_t steps, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::linspace(*start, *end, steps, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_linspace_out(gc_tensor out, scalar start, scalar end, int64_t steps) {
+  PROTECT(
+    torch::Tensor results__ = torch::linspace_out(*tensor_ptr_from_ocaml(out), *start, *end, steps);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log10(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log10(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log10_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log10_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log10_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log10_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log1p(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log1p(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log1p_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log1p_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log1p_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log1p_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log2(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log2(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log2_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log2_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log2_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_normal(gc_tensor self, double mean, double std) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_normal(*tensor_ptr_from_ocaml(self), mean, std);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_normal_(gc_tensor self, double mean, double std) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->log_normal_(mean, std);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_normal_out(gc_tensor out, gc_tensor self, double mean, double std) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_normal_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), mean, std);
+    return tensor_to_ocaml(results__);
+  )
+}
+
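+// Factory functions receive the target dtype and device as two plain ints and
+// rebuild a TensorOptions value via
+// at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)),
+// as in atg_linspace above.
+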
+raw_tensor atg_log_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_sigmoid(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_sigmoid(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_sigmoid_backward(gc_tensor grad_output, gc_tensor self, gc_tensor buffer) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_sigmoid_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(buffer));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_sigmoid_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor buffer) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_sigmoid_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(buffer));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_sigmoid_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_sigmoid_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_softmax(gc_tensor self, int64_t dim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_softmax(*tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_log_softmax_int_out(gc_tensor out, gc_tensor self, int64_t dim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::log_softmax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logaddexp(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logaddexp(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logaddexp2(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logaddexp2(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logaddexp2_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logaddexp2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logaddexp_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logaddexp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logcumsumexp(gc_tensor self, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::logcumsumexp(*tensor_ptr_from_ocaml(self), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logcumsumexp_out(gc_tensor out, gc_tensor self, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::logcumsumexp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logdet(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::logdet(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_and(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logical_and(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
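+// When ATen exposes an in-place variant (trailing underscore) only as a
+// Tensor method, the generated code dispatches through the tensor pointer,
+// as in atg_logical_and_ below; ops that do have a torch:: free function,
+// such as atg_log10_ above, call that instead.
+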
+raw_tensor atg_logical_and_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->logical_and_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_and_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logical_and_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_not(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::logical_not(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_not_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->logical_not_();
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_not_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::logical_not_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_or(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logical_or(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_or_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->logical_or_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_or_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logical_or_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_xor(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logical_xor(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_xor_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->logical_xor_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logical_xor_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::logical_xor_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logit(gc_tensor self, double eps_v, int eps_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::logit(*tensor_ptr_from_ocaml(self), eps_null ? c10::nullopt : c10::optional(eps_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logit_(gc_tensor self, double eps_v, int eps_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::logit_(*tensor_ptr_from_ocaml(self), eps_null ? c10::nullopt : c10::optional(eps_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
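+// Optional floating-point arguments travel as a (value, is-null flag) pair:
+// a nonzero eps_null selects c10::nullopt and eps_v is ignored; otherwise
+// eps_v is wrapped in a c10::optional.
+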
+raw_tensor atg_logit_backward(gc_tensor grad_output, gc_tensor self, double eps_v, int eps_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::logit_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), eps_null ? c10::nullopt : c10::optional(eps_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logit_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, double eps_v, int eps_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::logit_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), eps_null ? c10::nullopt : c10::optional(eps_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logit_out(gc_tensor out, gc_tensor self, double eps_v, int eps_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::logit_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), eps_null ? c10::nullopt : c10::optional(eps_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logspace(scalar start, scalar end, int64_t steps, double base, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::logspace(*start, *end, steps, base, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logspace_out(gc_tensor out, scalar start, scalar end, int64_t steps, double base) {
+  PROTECT(
+    torch::Tensor results__ = torch::logspace_out(*tensor_ptr_from_ocaml(out), *start, *end, steps, base);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logsumexp(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) {
+  PROTECT(
+    torch::Tensor results__ = torch::logsumexp(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_logsumexp_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) {
+  PROTECT(
+    torch::Tensor results__ = torch::logsumexp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_lstm(raw_tensor *out__, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
+  PROTECT(
+    auto results__ = torch::lstm(*tensor_ptr_from_ocaml(input), of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
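+// Optional tensor arguments may arrive as null gc_tensor handles; a null
+// handle is mapped to a default-constructed (undefined) torch::Tensor, as
+// with the b_ih and b_hh biases of atg_lstm_cell below.
+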
+void atg_lstm_cell(raw_tensor *out__, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh) {
+  PROTECT(
+    auto results__ = torch::lstm_cell(*tensor_ptr_from_ocaml(input), of_carray_tensor(hx_data, hx_len), *tensor_ptr_from_ocaml(w_ih), *tensor_ptr_from_ocaml(w_hh), (b_ih ? tensor_from_ocaml(b_ih) : torch::Tensor()), (b_hh ? tensor_from_ocaml(b_hh) : torch::Tensor()));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_lstm_data(raw_tensor *out__, gc_tensor data, gc_tensor batch_sizes, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional) {
+  PROTECT(
+    auto results__ = torch::lstm(*tensor_ptr_from_ocaml(data), *tensor_ptr_from_ocaml(batch_sizes), of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_lstm_mps_backward(gc_tensor out0, gc_tensor *out1_data, int out1_len, gc_tensor *out2_data, int out2_len, gc_tensor grad_y, gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor z_state, gc_tensor cell_state_fwd, gc_tensor input, gc_tensor layersOutputs, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
+  PROTECT(
+    torch::lstm_mps_backward_out(*tensor_ptr_from_ocaml(out0), of_carray_tensor(out1_data, out1_len), of_carray_tensor(out2_data, out2_len), (grad_y ? tensor_from_ocaml(grad_y) : torch::Tensor()), (grad_hy ? tensor_from_ocaml(grad_hy) : torch::Tensor()), (grad_cy ? tensor_from_ocaml(grad_cy) : torch::Tensor()), *tensor_ptr_from_ocaml(z_state), *tensor_ptr_from_ocaml(cell_state_fwd), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(layersOutputs), of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
+  )
+}
+
+raw_tensor atg_lt(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::lt(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lt_(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->lt_(*other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lt_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::lt_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lt_tensor(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::lt(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lt_tensor_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->lt_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lt_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::lt_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
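+// C has no overloading, so ATen overloads that differ only in argument types
+// get distinct entry points: atg_lt above takes a scalar where atg_lt_tensor
+// takes a second tensor, and both map onto torch::lt.
+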
+raw_tensor atg_lu_solve(gc_tensor self, gc_tensor LU_data, gc_tensor LU_pivots) {
+  PROTECT(
+    torch::Tensor results__ = torch::lu_solve(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(LU_data), *tensor_ptr_from_ocaml(LU_pivots));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_lu_solve_out(gc_tensor out, gc_tensor self, gc_tensor LU_data, gc_tensor LU_pivots) {
+  PROTECT(
+    torch::Tensor results__ = torch::lu_solve_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(LU_data), *tensor_ptr_from_ocaml(LU_pivots));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_lu_unpack(raw_tensor *out__, gc_tensor LU_data, gc_tensor LU_pivots, int unpack_data, int unpack_pivots) {
+  PROTECT(
+    auto results__ = torch::lu_unpack(*tensor_ptr_from_ocaml(LU_data), *tensor_ptr_from_ocaml(LU_pivots), (bool)unpack_data, (bool)unpack_pivots);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_lu_unpack_out(raw_tensor *out__, gc_tensor P, gc_tensor L, gc_tensor U, gc_tensor LU_data, gc_tensor LU_pivots, int unpack_data, int unpack_pivots) {
+  PROTECT(
+    auto results__ = torch::lu_unpack_out(*tensor_ptr_from_ocaml(P), *tensor_ptr_from_ocaml(L), *tensor_ptr_from_ocaml(U), *tensor_ptr_from_ocaml(LU_data), *tensor_ptr_from_ocaml(LU_pivots), (bool)unpack_data, (bool)unpack_pivots);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+raw_tensor atg_margin_ranking_loss(gc_tensor input1, gc_tensor input2, gc_tensor target, double margin, int64_t reduction) {
+  PROTECT(
+    torch::Tensor results__ = torch::margin_ranking_loss(*tensor_ptr_from_ocaml(input1), *tensor_ptr_from_ocaml(input2), *tensor_ptr_from_ocaml(target), margin, reduction);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_fill(gc_tensor self, gc_tensor mask, scalar value) {
+  PROTECT(
+    torch::Tensor results__ = torch::masked_fill(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), *value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_fill_(gc_tensor self, gc_tensor mask, scalar value) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->masked_fill_(*tensor_ptr_from_ocaml(mask), *value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_fill_scalar_out(gc_tensor out, gc_tensor self, gc_tensor mask, scalar value) {
+  PROTECT(
+    torch::Tensor results__ = torch::masked_fill_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), *value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_fill_tensor(gc_tensor self, gc_tensor mask, gc_tensor value) {
+  PROTECT(
+    torch::Tensor results__ = torch::masked_fill(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), *tensor_ptr_from_ocaml(value));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_fill_tensor_(gc_tensor self, gc_tensor mask, gc_tensor value) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->masked_fill_(*tensor_ptr_from_ocaml(mask), *tensor_ptr_from_ocaml(value));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_fill_tensor_out(gc_tensor out, gc_tensor self, gc_tensor mask, gc_tensor value) {
+  PROTECT(
+    torch::Tensor results__ = torch::masked_fill_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), *tensor_ptr_from_ocaml(value));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_scatter(gc_tensor self, gc_tensor mask, gc_tensor source) {
+  PROTECT(
+    torch::Tensor results__ = torch::masked_scatter(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), *tensor_ptr_from_ocaml(source));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_scatter_(gc_tensor self, gc_tensor mask, gc_tensor source) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->masked_scatter_(*tensor_ptr_from_ocaml(mask), *tensor_ptr_from_ocaml(source));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_scatter_out(gc_tensor out, gc_tensor self, gc_tensor mask, gc_tensor source) {
+  PROTECT(
+    torch::Tensor results__ = torch::masked_scatter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask), *tensor_ptr_from_ocaml(source));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_select(gc_tensor self, gc_tensor mask) {
+  PROTECT(
+    torch::Tensor results__ = torch::masked_select(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_select_backward(gc_tensor grad, gc_tensor input, gc_tensor mask) {
+  PROTECT(
+    torch::Tensor results__ = torch::masked_select_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(mask));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_masked_select_out(gc_tensor out, gc_tensor self, gc_tensor mask) {
+  PROTECT(
+    torch::Tensor results__ = torch::masked_select_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_matmul(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::matmul(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_matmul_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::matmul_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_matrix_exp(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::matrix_exp(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_matrix_exp_backward(gc_tensor self, gc_tensor grad) {
+  PROTECT(
+    torch::Tensor results__ = torch::matrix_exp_backward(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(grad));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_matrix_h(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->matrix_H();
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_matrix_power(gc_tensor self, int64_t n) {
+  PROTECT(
+    torch::Tensor results__ = torch::matrix_power(*tensor_ptr_from_ocaml(self), n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_matrix_power_out(gc_tensor out, gc_tensor self, int64_t n) {
+  PROTECT(
+    torch::Tensor results__ = torch::matrix_power_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::max(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_max_dim(raw_tensor *out__, gc_tensor self, int64_t dim, int keepdim) {
+  PROTECT(
+    auto results__ = torch::max(*tensor_ptr_from_ocaml(self), dim, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
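+// Tuple results cannot be expressed as a single C return value, so bindings
+// like atg_max_dim above write each component into a caller-provided
+// raw_tensor array out__. Boolean flags such as keepdim are plain C ints
+// cast back to bool at the call site.
+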
+void atg_max_dim_max(raw_tensor *out__, gc_tensor max, gc_tensor max_values, gc_tensor self, int64_t dim, int keepdim) {
+  PROTECT(
+    auto results__ = torch::max_out(*tensor_ptr_from_ocaml(max), *tensor_ptr_from_ocaml(max_values), *tensor_ptr_from_ocaml(self), dim, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_max_other(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::max(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_pool1d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_pool1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_max_pool1d_with_indices(raw_tensor *out__, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    auto results__ = torch::max_pool1d_with_indices(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_max_pool2d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_pool2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_pool2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_pool2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_pool2d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_pool2d_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_max_pool2d_with_indices(raw_tensor *out__, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    auto results__ = torch::max_pool2d_with_indices(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_max_pool2d_with_indices_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, gc_tensor indices) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_pool2d_with_indices_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode, *tensor_ptr_from_ocaml(indices));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_pool2d_with_indices_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, gc_tensor indices) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_pool2d_with_indices_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode, *tensor_ptr_from_ocaml(indices));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_max_pool2d_with_indices_out(raw_tensor *out__, gc_tensor out, gc_tensor indices, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    auto results__ = torch::max_pool2d_with_indices_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
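+// List parameters such as kernel_size, stride, padding and dilation arrive
+// as (pointer, length) pairs and are wrapped in torch::IntArrayRef views;
+// no copy is made, so the buffers only need to live for the duration of the
+// call.
+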
+raw_tensor atg_max_pool3d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_pool3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_max_pool3d_with_indices(raw_tensor *out__, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    auto results__ = torch::max_pool3d_with_indices(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_max_pool3d_with_indices_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, gc_tensor indices) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_pool3d_with_indices_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode, *tensor_ptr_from_ocaml(indices));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_pool3d_with_indices_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, gc_tensor indices) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_pool3d_with_indices_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode, *tensor_ptr_from_ocaml(indices));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_max_pool3d_with_indices_out(raw_tensor *out__, gc_tensor out, gc_tensor indices, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    auto results__ = torch::max_pool3d_with_indices_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_max_unary_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_unpool2d(gc_tensor self, gc_tensor indices, int64_t *output_size_data, int output_size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_unpool2d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices), torch::IntArrayRef(output_size_data, output_size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_unpool2d_out(gc_tensor out, gc_tensor self, gc_tensor indices, int64_t *output_size_data, int output_size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_unpool2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices), torch::IntArrayRef(output_size_data, output_size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_unpool3d(gc_tensor self, gc_tensor indices, int64_t *output_size_data, int output_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_unpool3d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_max_unpool3d_out(gc_tensor out, gc_tensor self, gc_tensor indices, int64_t *output_size_data, int output_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::max_unpool3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_maximum(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::maximum(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_maximum_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::maximum_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mean(gc_tensor self, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::mean(*tensor_ptr_from_ocaml(self), torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mean_dim(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::mean(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
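+// An optional int list is encoded through its data pointer: a null dim_data
+// becomes c10::nullopt, while a non-null pointer is wrapped in an
+// IntArrayRef, as in atg_mean_dim above.
+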
+raw_tensor atg_mean_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::mean_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_median(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::median(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_median_dim(raw_tensor *out__, gc_tensor self, int64_t dim, int keepdim) {
+  PROTECT(
+    auto results__ = torch::median(*tensor_ptr_from_ocaml(self), dim, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_median_dim_values(raw_tensor *out__, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim, int keepdim) {
+  PROTECT(
+    auto results__ = torch::median_out(*tensor_ptr_from_ocaml(values), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), dim, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_median_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::median_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor *atg_meshgrid(gc_tensor *tensors_data, int tensors_len) {
+  PROTECT(
+    auto results__ = torch::meshgrid(of_carray_tensor(tensors_data, tensors_len));
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
+raw_tensor *atg_meshgrid_indexing(gc_tensor *tensors_data, int tensors_len, char * indexing) {
+  PROTECT(
+    auto results__ = torch::meshgrid(of_carray_tensor(tensors_data, tensors_len), std::string(indexing));
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
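+// Ops with a variable number of results, like the two meshgrid bindings
+// above, return a malloc'd, nullptr-terminated array of raw tensors;
+// ownership of the array and of each raw tensor passes to the OCaml caller.
+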
+raw_tensor atg_mh(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->mH();
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_min(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::min(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_min_dim(raw_tensor *out__, gc_tensor self, int64_t dim, int keepdim) {
+  PROTECT(
+    auto results__ = torch::min(*tensor_ptr_from_ocaml(self), dim, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_min_dim_min(raw_tensor *out__, gc_tensor min, gc_tensor min_indices, gc_tensor self, int64_t dim, int keepdim) {
+  PROTECT(
+    auto results__ = torch::min_out(*tensor_ptr_from_ocaml(min), *tensor_ptr_from_ocaml(min_indices), *tensor_ptr_from_ocaml(self), dim, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_min_other(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::min(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_min_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::min_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_min_unary_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::min_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_minimum(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::minimum(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_minimum_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::minimum_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_miopen_batch_norm(raw_tensor *out__, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double exponential_average_factor, double epsilon) {
+  PROTECT(
+    auto results__ = torch::miopen_batch_norm(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), (bool)training, exponential_average_factor, epsilon);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_miopen_batch_norm_backward(raw_tensor *out__, gc_tensor input, gc_tensor grad_output, gc_tensor weight, gc_tensor running_mean, gc_tensor running_var, gc_tensor save_mean, gc_tensor save_var, double epsilon) {
+  PROTECT(
+    auto results__ = torch::miopen_batch_norm_backward(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(weight), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), (save_mean ? tensor_from_ocaml(save_mean) : torch::Tensor()), (save_var ? tensor_from_ocaml(save_var) : torch::Tensor()), epsilon);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_miopen_batch_norm_backward_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor grad_output, gc_tensor weight, gc_tensor running_mean, gc_tensor running_var, gc_tensor save_mean, gc_tensor save_var, double epsilon) {
+  PROTECT(
+    auto results__ = torch::miopen_batch_norm_backward_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(weight), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), (save_mean ? tensor_from_ocaml(save_mean) : torch::Tensor()), (save_var ? tensor_from_ocaml(save_var) : torch::Tensor()), epsilon);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_miopen_batch_norm_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double exponential_average_factor, double epsilon) {
+  PROTECT(
+    auto results__ = torch::miopen_batch_norm_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), (bool)training, exponential_average_factor, epsilon);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+raw_tensor atg_miopen_convolution(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
+  PROTECT(
+    torch::Tensor results__ = torch::miopen_convolution(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_miopen_convolution_add_relu(gc_tensor self, gc_tensor weight, gc_tensor z, scalar alpha, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
+  PROTECT(
+    torch::Tensor results__ = torch::miopen_convolution_add_relu(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), *tensor_ptr_from_ocaml(z), *alpha, (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_miopen_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
+  PROTECT(
+    torch::Tensor results__ = torch::miopen_convolution_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_miopen_convolution_relu(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
+  PROTECT(
+    torch::Tensor results__ = torch::miopen_convolution_relu(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_miopen_convolution_transpose(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
+  PROTECT(
+    torch::Tensor results__ = torch::miopen_convolution_transpose(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_miopen_convolution_transpose_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
+  PROTECT(
+    torch::Tensor results__ = torch::miopen_convolution_transpose_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_miopen_depthwise_convolution(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
+  PROTECT(
+    torch::Tensor results__ = torch::miopen_depthwise_convolution(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_miopen_depthwise_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
+  PROTECT(
+    torch::Tensor results__ = torch::miopen_depthwise_convolution_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_miopen_rnn(raw_tensor *out__, gc_tensor input, gc_tensor *weight_data, int weight_len, int64_t weight_stride0, gc_tensor hx, gc_tensor cx, int64_t mode, int64_t hidden_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, gc_tensor dropout_state) {
+  PROTECT(
+    auto results__ = torch::miopen_rnn(*tensor_ptr_from_ocaml(input), of_carray_tensor(weight_data, weight_len), weight_stride0, *tensor_ptr_from_ocaml(hx), (cx ? tensor_from_ocaml(cx) : torch::Tensor()), mode, hidden_size, num_layers, (bool)batch_first, dropout, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (dropout_state ? tensor_from_ocaml(dropout_state) : torch::Tensor()));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+    out__[3] = tensor_to_ocaml(std::get<3>(results__));
+    out__[4] = tensor_to_ocaml(std::get<4>(results__));
+  )
+}
+
+void atg_miopen_rnn_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor input, gc_tensor *weight_data, int weight_len, int64_t weight_stride0, gc_tensor hx, gc_tensor cx, int64_t mode, int64_t hidden_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, gc_tensor dropout_state) {
+  PROTECT(
+    auto results__ = torch::miopen_rnn_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(out4), *tensor_ptr_from_ocaml(input), of_carray_tensor(weight_data, weight_len), weight_stride0, *tensor_ptr_from_ocaml(hx), (cx ? tensor_from_ocaml(cx) : torch::Tensor()), mode, hidden_size, num_layers, (bool)batch_first, dropout, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (dropout_state ? tensor_from_ocaml(dropout_state) : torch::Tensor()));
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+    out__[3] = tensor_to_ocaml(std::get<3>(results__));
+    out__[4] = tensor_to_ocaml(std::get<4>(results__));
+  )
+}
+
+raw_tensor atg_mish(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::mish(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mish_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::mish_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mish_backward(gc_tensor grad_output, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::mish_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mish_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::mish_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_adaptive_avg_pool2d(gc_tensor self, int64_t *output_size_data, int output_size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_adaptive_avg_pool2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_adaptive_avg_pool2d_backward(gc_tensor grad_output, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_adaptive_avg_pool2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_adaptive_avg_pool2d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_adaptive_avg_pool2d_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_adaptive_avg_pool2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_adaptive_avg_pool2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_convolution(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_convolution(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_convolution_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_linear(gc_tensor self, gc_tensor weight, gc_tensor bias) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_linear(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_linear_backward_input(int64_t *input_size_data, int input_size_len, gc_tensor grad_output, gc_tensor weight) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_linear_backward_input(torch::IntArrayRef(input_size_data, input_size_len), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(weight));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_linear_backward_input_out(gc_tensor out, int64_t *input_size_data, int input_size_len, gc_tensor grad_output, gc_tensor weight) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_linear_backward_input_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(input_size_data, input_size_len), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(weight));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_mkldnn_linear_backward_weights(raw_tensor *out__, gc_tensor grad_output, gc_tensor input, gc_tensor weight, int bias_defined) {
+  PROTECT(
+    auto results__ = torch::mkldnn_linear_backward_weights(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bool)bias_defined);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_mkldnn_linear_backward_weights_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor grad_output, gc_tensor input, gc_tensor weight, int bias_defined) {
+  PROTECT(
+    auto results__ = torch::mkldnn_linear_backward_weights_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight), (bool)bias_defined);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_mkldnn_linear_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_linear_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), (bias ? tensor_from_ocaml(bias) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_max_pool2d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_max_pool2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_max_pool2d_backward(gc_tensor grad_output, gc_tensor output, gc_tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_max_pool2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(input), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_max_pool2d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, gc_tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_max_pool2d_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(input), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_max_pool2d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_max_pool2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_mkldnn_max_pool3d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::mkldnn_max_pool3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) { + PROTECT( + torch::Tensor results__ = torch::mkldnn_max_pool3d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(input), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mkldnn_max_pool3d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, gc_tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) { + PROTECT( + torch::Tensor results__ = torch::mkldnn_max_pool3d_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(input), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mkldnn_max_pool3d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) { + PROTECT( + torch::Tensor results__ = torch::mkldnn_max_pool3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mkldnn_reorder_conv2d_weight(gc_tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int64_t *input_size_data, int input_size_len) { + PROTECT( + torch::Tensor results__ = torch::mkldnn_reorder_conv2d_weight(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, input_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(input_size_data, input_size_len))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mkldnn_reorder_conv2d_weight_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int64_t *input_size_data, int input_size_len) { + PROTECT( + torch::Tensor results__ = torch::mkldnn_reorder_conv2d_weight_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, input_size_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(input_size_data, input_size_len))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mkldnn_reorder_conv3d_weight(gc_tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::mkldnn_reorder_conv3d_weight(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mkldnn_reorder_conv3d_weight_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::mkldnn_reorder_conv3d_weight_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups); + return tensor_to_ocaml(results__); + ) +} + +void atg_mkldnn_rnn_layer(raw_tensor *out__, gc_tensor input, gc_tensor weight0, gc_tensor weight1, gc_tensor weight2, gc_tensor weight3, gc_tensor hx_, gc_tensor cx_, int reverse, int64_t *batch_sizes_data, int batch_sizes_len, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int bidirectional, int batch_first, int train) { + PROTECT( + auto results__ = torch::mkldnn_rnn_layer(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight0), *tensor_ptr_from_ocaml(weight1), *tensor_ptr_from_ocaml(weight2), *tensor_ptr_from_ocaml(weight3), *tensor_ptr_from_ocaml(hx_), *tensor_ptr_from_ocaml(cx_), (bool)reverse, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), mode, hidden_size, num_layers, (bool)has_biases, (bool)bidirectional, (bool)batch_first, (bool)train); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +void atg_mkldnn_rnn_layer_backward(raw_tensor *out__, gc_tensor input, gc_tensor weight1, gc_tensor weight2, gc_tensor weight3, gc_tensor weight4, gc_tensor hx_, gc_tensor cx_tmp, gc_tensor output, gc_tensor hy_, gc_tensor cy_, gc_tensor grad_output, gc_tensor grad_hy, gc_tensor grad_cy, int reverse, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, int batch_first, gc_tensor workspace) { + PROTECT( + auto results__ = torch::mkldnn_rnn_layer_backward(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight1), *tensor_ptr_from_ocaml(weight2), *tensor_ptr_from_ocaml(weight3), *tensor_ptr_from_ocaml(weight4), *tensor_ptr_from_ocaml(hx_), *tensor_ptr_from_ocaml(cx_tmp), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(hy_), *tensor_ptr_from_ocaml(cy_), (grad_output ? tensor_from_ocaml(grad_output) : torch::Tensor()), (grad_hy ? tensor_from_ocaml(grad_hy) : torch::Tensor()), (grad_cy ? 
tensor_from_ocaml(grad_cy) : torch::Tensor()), (bool)reverse, mode, hidden_size, num_layers, (bool)has_biases, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (bool)batch_first, *tensor_ptr_from_ocaml(workspace)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + out__[5] = tensor_to_ocaml(std::get<5>(results__)); + out__[6] = tensor_to_ocaml(std::get<6>(results__)); + ) +} + +void atg_mkldnn_rnn_layer_backward_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor out5, gc_tensor out6, gc_tensor input, gc_tensor weight1, gc_tensor weight2, gc_tensor weight3, gc_tensor weight4, gc_tensor hx_, gc_tensor cx_tmp, gc_tensor output, gc_tensor hy_, gc_tensor cy_, gc_tensor grad_output, gc_tensor grad_hy, gc_tensor grad_cy, int reverse, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, int batch_first, gc_tensor workspace) { + PROTECT( + auto results__ = torch::mkldnn_rnn_layer_backward_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(out4), *tensor_ptr_from_ocaml(out5), *tensor_ptr_from_ocaml(out6), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight1), *tensor_ptr_from_ocaml(weight2), *tensor_ptr_from_ocaml(weight3), *tensor_ptr_from_ocaml(weight4), *tensor_ptr_from_ocaml(hx_), *tensor_ptr_from_ocaml(cx_tmp), *tensor_ptr_from_ocaml(output), *tensor_ptr_from_ocaml(hy_), *tensor_ptr_from_ocaml(cy_), (grad_output ? tensor_from_ocaml(grad_output) : torch::Tensor()), (grad_hy ? tensor_from_ocaml(grad_hy) : torch::Tensor()), (grad_cy ? 
tensor_from_ocaml(grad_cy) : torch::Tensor()), (bool)reverse, mode, hidden_size, num_layers, (bool)has_biases, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (bool)batch_first, *tensor_ptr_from_ocaml(workspace)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + out__[4] = tensor_to_ocaml(std::get<4>(results__)); + out__[5] = tensor_to_ocaml(std::get<5>(results__)); + out__[6] = tensor_to_ocaml(std::get<6>(results__)); + ) +} + +void atg_mkldnn_rnn_layer_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor input, gc_tensor weight0, gc_tensor weight1, gc_tensor weight2, gc_tensor weight3, gc_tensor hx_, gc_tensor cx_, int reverse, int64_t *batch_sizes_data, int batch_sizes_len, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int bidirectional, int batch_first, int train) { + PROTECT( + auto results__ = torch::mkldnn_rnn_layer_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(out3), *tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(weight0), *tensor_ptr_from_ocaml(weight1), *tensor_ptr_from_ocaml(weight2), *tensor_ptr_from_ocaml(weight3), *tensor_ptr_from_ocaml(hx_), *tensor_ptr_from_ocaml(cx_), (bool)reverse, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), mode, hidden_size, num_layers, (bool)has_biases, (bool)bidirectional, (bool)batch_first, (bool)train); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + out__[3] = tensor_to_ocaml(std::get<3>(results__)); + ) +} + +raw_tensor atg_mm(gc_tensor self, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::mm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mm_out(gc_tensor out, gc_tensor self, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::mm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +void atg_mode(raw_tensor *out__, gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + auto results__ = torch::mode(*tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_mode_values(raw_tensor *out__, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + auto results__ = torch::mode_out(*tensor_ptr_from_ocaml(values), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_moveaxis(gc_tensor self, int64_t *source_data, int source_len, int64_t *destination_data, int destination_len) { + PROTECT( + torch::Tensor results__ = torch::moveaxis(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(source_data, source_len), torch::IntArrayRef(destination_data, destination_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_moveaxis_int(gc_tensor self, int64_t source, int64_t destination) { + PROTECT( + torch::Tensor results__ = torch::moveaxis(*tensor_ptr_from_ocaml(self), source, 
destination); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_movedim(gc_tensor self, int64_t *source_data, int source_len, int64_t *destination_data, int destination_len) { + PROTECT( + torch::Tensor results__ = torch::movedim(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(source_data, source_len), torch::IntArrayRef(destination_data, destination_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_movedim_int(gc_tensor self, int64_t source, int64_t destination) { + PROTECT( + torch::Tensor results__ = torch::movedim(*tensor_ptr_from_ocaml(self), source, destination); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mse_loss(gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::mse_loss(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mse_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::mse_loss_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mse_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::mse_loss_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mse_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::mse_loss_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_msort(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::msort(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_msort_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::msort_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mt(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->mT(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mul(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::mul(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mul_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->mul_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mul_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::mul_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mul_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::mul(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mul_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->mul_(*other); + return 
tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_mul_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::mul_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_multi_margin_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, scalar p, scalar margin, gc_tensor weight, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::multi_margin_loss_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), *p, *margin, (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_multi_margin_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, scalar p, scalar margin, gc_tensor weight, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::multi_margin_loss_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), *p, *margin, (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_multilabel_margin_loss(gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::multilabel_margin_loss(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_multilabel_margin_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, gc_tensor is_target) { + PROTECT( + torch::Tensor results__ = torch::multilabel_margin_loss_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, *tensor_ptr_from_ocaml(is_target)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_multilabel_margin_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, gc_tensor is_target) { + PROTECT( + torch::Tensor results__ = torch::multilabel_margin_loss_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, *tensor_ptr_from_ocaml(is_target)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_multilabel_margin_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::multilabel_margin_loss_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_multinomial(gc_tensor self, int64_t num_samples, int replacement) { + PROTECT( + torch::Tensor results__ = torch::multinomial(*tensor_ptr_from_ocaml(self), num_samples, (bool)replacement); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_multinomial_out(gc_tensor out, gc_tensor self, int64_t num_samples, int replacement) { + PROTECT( + torch::Tensor results__ = torch::multinomial_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), num_samples, (bool)replacement); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_multiply(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::multiply(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return 
tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_multiply_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->multiply_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_multiply_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::multiply_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_multiply_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::multiply(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_multiply_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->multiply_(*other); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_mv(gc_tensor self, gc_tensor vec) { + PROTECT( + torch::Tensor results__ = torch::mv(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(vec)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_mv_out(gc_tensor out, gc_tensor self, gc_tensor vec) { + PROTECT( + torch::Tensor results__ = torch::mv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(vec)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_mvlgamma(gc_tensor self, int64_t p) { + PROTECT( + torch::Tensor results__ = torch::mvlgamma(*tensor_ptr_from_ocaml(self), p); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_mvlgamma_(gc_tensor self, int64_t p) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->mvlgamma_(p); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_mvlgamma_out(gc_tensor out, gc_tensor self, int64_t p) { + PROTECT( + torch::Tensor results__ = torch::mvlgamma_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), p); + return tensor_to_ocaml(results__); + ) +} +
+// Optional scalar arguments are split into a value and an is-null flag; a
+// non-zero *_null flag selects c10::nullopt.
+raw_tensor atg_nan_to_num(gc_tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null) { + PROTECT( + torch::Tensor results__ = torch::nan_to_num(*tensor_ptr_from_ocaml(self), nan_null ? c10::nullopt : c10::optional(nan_v), posinf_null ? c10::nullopt : c10::optional(posinf_v), neginf_null ? c10::nullopt : c10::optional(neginf_v)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_nan_to_num_(gc_tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null) { + PROTECT( + torch::Tensor results__ = torch::nan_to_num_(*tensor_ptr_from_ocaml(self), nan_null ? c10::nullopt : c10::optional(nan_v), posinf_null ? c10::nullopt : c10::optional(posinf_v), neginf_null ? c10::nullopt : c10::optional(neginf_v)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_nan_to_num_out(gc_tensor out, gc_tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null) { + PROTECT( + torch::Tensor results__ = torch::nan_to_num_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), nan_null ? c10::nullopt : c10::optional(nan_v), posinf_null ? c10::nullopt : c10::optional(posinf_v), neginf_null ?
c10::nullopt : c10::optional(neginf_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nanmean(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::nanmean(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nanmean_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::nanmean_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nanmedian(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::nanmedian(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +void atg_nanmedian_dim(raw_tensor *out__, gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + auto results__ = torch::nanmedian(*tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_nanmedian_dim_values(raw_tensor *out__, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim, int keepdim) { + PROTECT( + auto results__ = torch::nanmedian_out(*tensor_ptr_from_ocaml(values), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), dim, (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_nanmedian_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::nanmedian_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nanquantile(gc_tensor self, gc_tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) { + PROTECT( + torch::Tensor results__ = torch::nanquantile(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(q), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nanquantile_out(gc_tensor out, gc_tensor self, gc_tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) { + PROTECT( + torch::Tensor results__ = torch::nanquantile_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(q), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nanquantile_scalar(gc_tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) { + PROTECT( + torch::Tensor results__ = torch::nanquantile(*tensor_ptr_from_ocaml(self), q, dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nanquantile_scalar_out(gc_tensor out, gc_tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) { + PROTECT( + torch::Tensor results__ = torch::nanquantile_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), q, dim_null ? 
c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nansum(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::nansum(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nansum_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::nansum_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_narrow(gc_tensor self, int64_t dim, int64_t start, int64_t length) { + PROTECT( + torch::Tensor results__ = torch::narrow(*tensor_ptr_from_ocaml(self), dim, start, length); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_narrow_copy(gc_tensor self, int64_t dim, int64_t start, int64_t length) { + PROTECT( + torch::Tensor results__ = torch::narrow_copy(*tensor_ptr_from_ocaml(self), dim, start, length); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_narrow_copy_out(gc_tensor out, gc_tensor self, int64_t dim, int64_t start, int64_t length) { + PROTECT( + torch::Tensor results__ = torch::narrow_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, start, length); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_narrow_tensor(gc_tensor self, int64_t dim, gc_tensor start, int64_t length) { + PROTECT( + torch::Tensor results__ = torch::narrow(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(start), length); + return tensor_to_ocaml(results__); + ) +} + +void atg_native_batch_norm(raw_tensor *out__, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps) { + PROTECT( + auto results__ = torch::native_batch_norm(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? tensor_from_ocaml(running_var) : torch::Tensor()), (bool)training, momentum, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_native_batch_norm_out(raw_tensor *out__, gc_tensor out, gc_tensor save_mean, gc_tensor save_invstd, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps) { + PROTECT( + auto results__ = torch::native_batch_norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(save_mean), *tensor_ptr_from_ocaml(save_invstd), *tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), (running_mean ? tensor_from_ocaml(running_mean) : torch::Tensor()), (running_var ? 
tensor_from_ocaml(running_var) : torch::Tensor()), (bool)training, momentum, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor atg_native_channel_shuffle(gc_tensor self, int64_t groups) { + PROTECT( + torch::Tensor results__ = torch::native_channel_shuffle(*tensor_ptr_from_ocaml(self), groups); + return tensor_to_ocaml(results__); + ) +} + +void atg_native_dropout(raw_tensor *out__, gc_tensor input, double p, int train) { + PROTECT( + auto results__ = torch::native_dropout(*tensor_ptr_from_ocaml(input), p, (bool)train); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_native_dropout_backward(gc_tensor grad_output, gc_tensor mask, double scale) { + PROTECT( + torch::Tensor results__ = torch::native_dropout_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(mask), scale); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_native_dropout_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor mask, double scale) { + PROTECT( + torch::Tensor results__ = torch::native_dropout_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(mask), scale); + return tensor_to_ocaml(results__); + ) +} + +void atg_native_dropout_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor input, double p, int train) { + PROTECT( + auto results__ = torch::native_dropout_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(input), p, (bool)train); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_native_group_norm(raw_tensor *out__, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t n, int64_t C, int64_t HxW, int64_t group, double eps) { + PROTECT( + auto results__ = torch::native_group_norm(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), n, C, HxW, group, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_native_group_norm_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t n, int64_t C, int64_t HxW, int64_t group, double eps) { + PROTECT( + auto results__ = torch::native_group_norm_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), n, C, HxW, group, eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_native_layer_norm(raw_tensor *out__, gc_tensor input, int64_t *normalized_shape_data, int normalized_shape_len, gc_tensor weight, gc_tensor bias, double eps) { + PROTECT( + auto results__ = torch::native_layer_norm(*tensor_ptr_from_ocaml(input), torch::IntArrayRef(normalized_shape_data, normalized_shape_len), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_native_layer_norm_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, int64_t *normalized_shape_data, int normalized_shape_len, gc_tensor weight, gc_tensor bias, double eps) { + PROTECT( + auto results__ = torch::native_layer_norm_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(input), torch::IntArrayRef(normalized_shape_data, normalized_shape_len), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), eps); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor atg_native_norm(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::native_norm(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_native_norm_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::native_norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_native_norm_scalaropt_dim_dtype(gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::native_norm(*tensor_ptr_from_ocaml(self), *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_native_norm_scalaropt_dim_dtype_out(gc_tensor out, gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::native_norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ne(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::ne(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ne_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->ne_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ne_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::ne_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ne_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::ne(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ne_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->ne_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ne_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::ne_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_neg(gc_tensor self) { + PROTECT( + torch::Tensor results__ = 
torch::neg(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_neg_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::neg_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_neg_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::neg_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_negative(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::negative(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_negative_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::negative_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_negative_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::negative_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_nested_to_padded_tensor(gc_tensor self, double padding, int64_t *output_size_data, int output_size_len) { + PROTECT( + torch::Tensor results__ = torch::nested_to_padded_tensor(*tensor_ptr_from_ocaml(self), padding, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len))); + return tensor_to_ocaml(results__); + ) +} +
+// Factory functions receive the target dtype and device as plain integers
+// (options_kind, options_device) and rebuild the TensorOptions from them.
+raw_tensor atg_new_empty(gc_tensor self, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->new_empty(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_new_empty_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::new_empty_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_new_empty_strided(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->new_empty_strided(torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_new_empty_strided_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = torch::new_empty_strided_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_new_full(gc_tensor self, int64_t *size_data, int size_len, scalar fill_value, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->new_full(torch::IntArrayRef(size_data, size_len), *fill_value, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_new_full_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, scalar fill_value) { + PROTECT( + torch::Tensor results__ =
torch::new_full_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), *fill_value); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_new_ones(gc_tensor self, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->new_ones(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_new_ones_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::new_ones_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_new_zeros(gc_tensor self, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->new_zeros(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_new_zeros_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::new_zeros_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nextafter(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::nextafter(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nextafter_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->nextafter_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nextafter_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::nextafter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nll_loss(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index) { + PROTECT( + torch::Tensor results__ = torch::nll_loss(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nll_loss2d(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index) { + PROTECT( + torch::Tensor results__ = torch::nll_loss2d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nll_loss2d_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, gc_tensor total_weight) { + PROTECT( + torch::Tensor results__ = torch::nll_loss2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? 
tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index, *tensor_ptr_from_ocaml(total_weight)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nll_loss2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, gc_tensor total_weight) { + PROTECT( + torch::Tensor results__ = torch::nll_loss2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index, *tensor_ptr_from_ocaml(total_weight)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nll_loss2d_out(gc_tensor out, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index) { + PROTECT( + torch::Tensor results__ = torch::nll_loss2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nll_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, gc_tensor total_weight) { + PROTECT( + torch::Tensor results__ = torch::nll_loss_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index, *tensor_ptr_from_ocaml(total_weight)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nll_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, gc_tensor total_weight) { + PROTECT( + torch::Tensor results__ = torch::nll_loss_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index, *tensor_ptr_from_ocaml(total_weight)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nll_loss_nd(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index) { + PROTECT( + torch::Tensor results__ = torch::nll_loss_nd(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nll_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index) { + PROTECT( + torch::Tensor results__ = torch::nll_loss_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), (weight ? 
tensor_from_ocaml(weight) : torch::Tensor()), reduction, ignore_index); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_nonzero(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::nonzero(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+// Operators returning a variable number of tensors hand back a malloc'd,
+// nullptr-terminated raw_tensor array that the caller must free.
+raw_tensor *atg_nonzero_numpy(gc_tensor self) { + PROTECT( + auto results__ = torch::nonzero_numpy(*tensor_ptr_from_ocaml(self)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} +
+raw_tensor atg_nonzero_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::nonzero_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_nonzero_static(gc_tensor self, int64_t size, int64_t fill_value) { + PROTECT( + torch::Tensor results__ = torch::nonzero_static(*tensor_ptr_from_ocaml(self), size, fill_value); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_nonzero_static_out(gc_tensor out, gc_tensor self, int64_t size, int64_t fill_value) { + PROTECT( + torch::Tensor results__ = torch::nonzero_static_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), size, fill_value); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_norm(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::norm(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_norm_dtype_out(gc_tensor out, gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_norm_except_dim(gc_tensor v, int64_t pow, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::norm_except_dim(*tensor_ptr_from_ocaml(v), pow, dim); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_norm_out(gc_tensor out, gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_norm_scalar_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_norm_scalaropt_dim(gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::norm(*tensor_ptr_from_ocaml(self), *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_norm_scalaropt_dim_dtype(gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::norm(*tensor_ptr_from_ocaml(self), *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} +
+raw_tensor atg_norm_scalaropt_dtype(gc_tensor self, scalar p, int dtype) { + PROTECT( + torch::Tensor results__ = torch::norm(*tensor_ptr_from_ocaml(self), *p, torch::ScalarType(dtype)); + return
tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_norm_scalaropt_dtype_out(gc_tensor out, gc_tensor self, scalar p, int dtype) { + PROTECT( + torch::Tensor results__ = torch::norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *p, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_normal_(gc_tensor self, double mean, double std) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->normal_(mean, std); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_normal_functional(gc_tensor self, double mean, double std) { + PROTECT( + torch::Tensor results__ = torch::normal_functional(*tensor_ptr_from_ocaml(self), mean, std); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_not_equal(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::not_equal(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_not_equal_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->not_equal_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_not_equal_scalar_out(gc_tensor out, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::not_equal_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_not_equal_tensor(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::not_equal(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_not_equal_tensor_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->not_equal_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_not_equal_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::not_equal_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nuclear_norm(gc_tensor self, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::nuclear_norm(*tensor_ptr_from_ocaml(self), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nuclear_norm_dim(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::nuclear_norm(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nuclear_norm_dim_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::nuclear_norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_nuclear_norm_out(gc_tensor out, gc_tensor self, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::nuclear_norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_numpy_t(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->numpy_T(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_one_hot(gc_tensor self, int64_t num_classes) { + PROTECT( + torch::Tensor results__ = 
torch::one_hot(*tensor_ptr_from_ocaml(self), num_classes); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ones(int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::ones(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ones_like(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::ones_like(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ones_like_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::ones_like_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ones_out(gc_tensor out, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::ones_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_orgqr(gc_tensor self, gc_tensor input2) { + PROTECT( + torch::Tensor results__ = torch::orgqr(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(input2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_orgqr_out(gc_tensor out, gc_tensor self, gc_tensor input2) { + PROTECT( + torch::Tensor results__ = torch::orgqr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(input2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ormqr(gc_tensor self, gc_tensor input2, gc_tensor input3, int left, int transpose) { + PROTECT( + torch::Tensor results__ = torch::ormqr(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(input2), *tensor_ptr_from_ocaml(input3), (bool)left, (bool)transpose); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_ormqr_out(gc_tensor out, gc_tensor self, gc_tensor input2, gc_tensor input3, int left, int transpose) { + PROTECT( + torch::Tensor results__ = torch::ormqr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(input2), *tensor_ptr_from_ocaml(input3), (bool)left, (bool)transpose); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_outer(gc_tensor self, gc_tensor vec2) { + PROTECT( + torch::Tensor results__ = torch::outer(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(vec2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_outer_out(gc_tensor out, gc_tensor self, gc_tensor vec2) { + PROTECT( + torch::Tensor results__ = torch::outer_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(vec2)); + return tensor_to_ocaml(results__); + ) +} + +int64_t atg_output_nr(gc_tensor self) { + PROTECT( + return tensor_ptr_from_ocaml(self)->output_nr(); + ) + return 0; +} + +raw_tensor atg_pad(gc_tensor self, int64_t *pad_data, int pad_len, char * mode, double value_v, int value_null) { + PROTECT( + torch::Tensor results__ = torch::pad(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(pad_data, pad_len), std::string(mode), value_null ? 
c10::nullopt : c10::optional(value_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pad_sequence(gc_tensor *sequences_data, int sequences_len, int batch_first, double padding_value) {
+  PROTECT(
+    torch::Tensor results__ = torch::pad_sequence(of_carray_tensor(sequences_data, sequences_len), (bool)batch_first, padding_value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pairwise_distance(gc_tensor x1, gc_tensor x2, double p, double eps, int keepdim) {
+  PROTECT(
+    torch::Tensor results__ = torch::pairwise_distance(*tensor_ptr_from_ocaml(x1), *tensor_ptr_from_ocaml(x2), p, eps, (bool)keepdim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pdist(gc_tensor self, double p) {
+  PROTECT(
+    torch::Tensor results__ = torch::pdist(*tensor_ptr_from_ocaml(self), p);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_permute(gc_tensor self, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::permute(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dims_data, dims_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_permute_copy(gc_tensor self, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::permute_copy(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dims_data, dims_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_permute_copy_out(gc_tensor out, gc_tensor self, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::permute_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dims_data, dims_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pin_memory(gc_tensor self, int device) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->pin_memory(device_of_int(device));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pinverse(gc_tensor self, double rcond) {
+  PROTECT(
+    torch::Tensor results__ = torch::pinverse(*tensor_ptr_from_ocaml(self), rcond);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pixel_shuffle(gc_tensor self, int64_t upscale_factor) {
+  PROTECT(
+    torch::Tensor results__ = torch::pixel_shuffle(*tensor_ptr_from_ocaml(self), upscale_factor);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pixel_shuffle_out(gc_tensor out, gc_tensor self, int64_t upscale_factor) {
+  PROTECT(
+    torch::Tensor results__ = torch::pixel_shuffle_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), upscale_factor);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pixel_unshuffle(gc_tensor self, int64_t downscale_factor) {
+  PROTECT(
+    torch::Tensor results__ = torch::pixel_unshuffle(*tensor_ptr_from_ocaml(self), downscale_factor);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pixel_unshuffle_out(gc_tensor out, gc_tensor self, int64_t downscale_factor) {
+  PROTECT(
+    torch::Tensor results__ = torch::pixel_unshuffle_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), downscale_factor);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_poisson(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::poisson(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_poisson_nll_loss(gc_tensor input, gc_tensor target, int log_input, int full, double eps, int64_t reduction) {
+  PROTECT(
+    torch::Tensor results__ = torch::poisson_nll_loss(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(target), (bool)log_input, (bool)full, eps, reduction);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_poisson_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::poisson_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_polar(gc_tensor abs, gc_tensor angle) {
+  PROTECT(
+    torch::Tensor results__ = torch::polar(*tensor_ptr_from_ocaml(abs), *tensor_ptr_from_ocaml(angle));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_polar_out(gc_tensor out, gc_tensor abs, gc_tensor angle) {
+  PROTECT(
+    torch::Tensor results__ = torch::polar_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(abs), *tensor_ptr_from_ocaml(angle));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_polygamma(int64_t n, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::polygamma(n, *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_polygamma_(gc_tensor self, int64_t n) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->polygamma_(n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_polygamma_out(gc_tensor out, int64_t n, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::polygamma_out(*tensor_ptr_from_ocaml(out), n, *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_positive(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::positive(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pow(gc_tensor self, gc_tensor exponent) {
+  PROTECT(
+    torch::Tensor results__ = torch::pow(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(exponent));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pow_(gc_tensor self, scalar exponent) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->pow_(*exponent);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pow_scalar(scalar self, gc_tensor exponent) {
+  PROTECT(
+    torch::Tensor results__ = torch::pow(*self, *tensor_ptr_from_ocaml(exponent));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pow_scalar_out(gc_tensor out, scalar self, gc_tensor exponent) {
+  PROTECT(
+    torch::Tensor results__ = torch::pow_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(exponent));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pow_tensor_(gc_tensor self, gc_tensor exponent) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->pow_(*tensor_ptr_from_ocaml(exponent));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pow_tensor_scalar(gc_tensor self, scalar exponent) {
+  PROTECT(
+    torch::Tensor results__ = torch::pow(*tensor_ptr_from_ocaml(self), *exponent);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pow_tensor_scalar_out(gc_tensor out, gc_tensor self, scalar exponent) {
+  PROTECT(
+    torch::Tensor results__ = torch::pow_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *exponent);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_pow_tensor_tensor_out(gc_tensor out, gc_tensor self, gc_tensor exponent) {
+  PROTECT(
+    torch::Tensor results__ = torch::pow_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(exponent));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_prelu(gc_tensor self, gc_tensor weight) {
+  PROTECT(
+    torch::Tensor results__ = torch::prelu(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_prod(gc_tensor self, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::prod(*tensor_ptr_from_ocaml(self), torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_prod_dim_int(gc_tensor self, int64_t dim, int keepdim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::prod(*tensor_ptr_from_ocaml(self), dim, (bool)keepdim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_prod_int_out(gc_tensor out, gc_tensor self, int64_t dim, int keepdim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::prod_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, (bool)keepdim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_prod_out(gc_tensor out, gc_tensor self, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::prod_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_put(gc_tensor self, gc_tensor index, gc_tensor source, int accumulate) {
+  PROTECT(
+    torch::Tensor results__ = torch::put(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source), (bool)accumulate);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_put_(gc_tensor self, gc_tensor index, gc_tensor source, int accumulate) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->put_(*tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source), (bool)accumulate);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_put_out(gc_tensor out, gc_tensor self, gc_tensor index, gc_tensor source, int accumulate) {
+  PROTECT(
+    torch::Tensor results__ = torch::put_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(source), (bool)accumulate);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+int64_t atg_q_per_channel_axis(gc_tensor self) {
+  PROTECT(
+    return torch::q_per_channel_axis(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+raw_tensor atg_q_per_channel_scales(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::q_per_channel_scales(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_q_per_channel_scales_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::q_per_channel_scales_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_q_per_channel_zero_points(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::q_per_channel_zero_points(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_q_per_channel_zero_points_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::q_per_channel_zero_points_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+double atg_q_scale(gc_tensor self) {
+  PROTECT(
+    return torch::q_scale(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+int64_t atg_q_zero_point(gc_tensor self) {
+  PROTECT(
+    return torch::q_zero_point(*tensor_ptr_from_ocaml(self));
+  )
+  return 0;
+}
+
+void atg_qr(raw_tensor *out__, gc_tensor self, int some) {
+  PROTECT(
+    auto results__ = torch::qr(*tensor_ptr_from_ocaml(self), (bool)some);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_qr_q(raw_tensor *out__, gc_tensor Q, gc_tensor R, gc_tensor self, int some) {
+  PROTECT(
+    auto results__ = torch::qr_out(*tensor_ptr_from_ocaml(Q), *tensor_ptr_from_ocaml(R), *tensor_ptr_from_ocaml(self), (bool)some);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_quantile(gc_tensor self, gc_tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantile(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(q), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantile_out(gc_tensor out, gc_tensor self, gc_tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantile_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(q), dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantile_scalar(gc_tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantile(*tensor_ptr_from_ocaml(self), q, dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantile_scalar_out(gc_tensor out, gc_tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantile_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), q, dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantize_per_channel(gc_tensor self, gc_tensor scales, gc_tensor zero_points, int64_t axis, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantize_per_channel(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scales), *tensor_ptr_from_ocaml(zero_points), axis, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantize_per_channel_out(gc_tensor out, gc_tensor self, gc_tensor scales, gc_tensor zero_points, int64_t axis, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantize_per_channel_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scales), *tensor_ptr_from_ocaml(zero_points), axis, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantize_per_tensor(gc_tensor self, double scale, int64_t zero_point, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantize_per_tensor(*tensor_ptr_from_ocaml(self), scale, zero_point, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantize_per_tensor_dynamic(gc_tensor self, int dtype, int reduce_range) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantize_per_tensor_dynamic(*tensor_ptr_from_ocaml(self), torch::ScalarType(dtype), (bool)reduce_range);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantize_per_tensor_dynamic_out(gc_tensor out, gc_tensor self, int dtype, int reduce_range) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantize_per_tensor_dynamic_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::ScalarType(dtype), (bool)reduce_range);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantize_per_tensor_out(gc_tensor out, gc_tensor self, double scale, int64_t zero_point, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantize_per_tensor_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), scale, zero_point, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantize_per_tensor_tensor_qparams(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantize_per_tensor(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantize_per_tensor_tensor_qparams_out(gc_tensor out, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantize_per_tensor_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(scale), *tensor_ptr_from_ocaml(zero_point), torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor *atg_quantize_per_tensor_tensors(gc_tensor *tensors_data, int tensors_len, gc_tensor scales, gc_tensor zero_points, int dtype) {
+  PROTECT(
+    auto results__ = torch::quantize_per_tensor(of_carray_tensor(tensors_data, tensors_len), *tensor_ptr_from_ocaml(scales), *tensor_ptr_from_ocaml(zero_points), torch::ScalarType(dtype));
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
+void atg_quantize_per_tensor_tensors_out(gc_tensor *out_data, int out_len, gc_tensor *tensors_data, int tensors_len, gc_tensor scales, gc_tensor zero_points, int dtype) {
+  PROTECT(
+    torch::quantize_per_tensor_out(of_carray_tensor(out_data, out_len), of_carray_tensor(tensors_data, tensors_len), *tensor_ptr_from_ocaml(scales), *tensor_ptr_from_ocaml(zero_points), torch::ScalarType(dtype));
+  )
+}
+
+raw_tensor atg_quantized_batch_norm(gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor mean, gc_tensor var, double eps, double output_scale, int64_t output_zero_point) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_batch_norm(*tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(var), eps, output_scale, output_zero_point);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantized_batch_norm_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor mean, gc_tensor var, double eps, double output_scale, int64_t output_zero_point) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_batch_norm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(input), (weight ? tensor_from_ocaml(weight) : torch::Tensor()), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), *tensor_ptr_from_ocaml(mean), *tensor_ptr_from_ocaml(var), eps, output_scale, output_zero_point);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantized_gru_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh, gc_tensor packed_ih, gc_tensor packed_hh, gc_tensor col_offsets_ih, gc_tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_gru_cell(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(hx), *tensor_ptr_from_ocaml(w_ih), *tensor_ptr_from_ocaml(w_hh), *tensor_ptr_from_ocaml(b_ih), *tensor_ptr_from_ocaml(b_hh), *tensor_ptr_from_ocaml(packed_ih), *tensor_ptr_from_ocaml(packed_hh), *tensor_ptr_from_ocaml(col_offsets_ih), *tensor_ptr_from_ocaml(col_offsets_hh), *scale_ih, *scale_hh, *zero_point_ih, *zero_point_hh);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_quantized_lstm_cell(raw_tensor *out__, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh, gc_tensor packed_ih, gc_tensor packed_hh, gc_tensor col_offsets_ih, gc_tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh) {
+  PROTECT(
+    auto results__ = torch::quantized_lstm_cell(*tensor_ptr_from_ocaml(input), of_carray_tensor(hx_data, hx_len), *tensor_ptr_from_ocaml(w_ih), *tensor_ptr_from_ocaml(w_hh), *tensor_ptr_from_ocaml(b_ih), *tensor_ptr_from_ocaml(b_hh), *tensor_ptr_from_ocaml(packed_ih), *tensor_ptr_from_ocaml(packed_hh), *tensor_ptr_from_ocaml(col_offsets_ih), *tensor_ptr_from_ocaml(col_offsets_hh), *scale_ih, *scale_hh, *zero_point_ih, *zero_point_hh);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_quantized_max_pool1d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_max_pool1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantized_max_pool1d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_max_pool1d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantized_max_pool2d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_max_pool2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantized_max_pool2d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_max_pool2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantized_max_pool3d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_max_pool3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantized_max_pool3d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_max_pool3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantized_rnn_relu_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh, gc_tensor packed_ih, gc_tensor packed_hh, gc_tensor col_offsets_ih, gc_tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_rnn_relu_cell(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(hx), *tensor_ptr_from_ocaml(w_ih), *tensor_ptr_from_ocaml(w_hh), *tensor_ptr_from_ocaml(b_ih), *tensor_ptr_from_ocaml(b_hh), *tensor_ptr_from_ocaml(packed_ih), *tensor_ptr_from_ocaml(packed_hh), *tensor_ptr_from_ocaml(col_offsets_ih), *tensor_ptr_from_ocaml(col_offsets_hh), *scale_ih, *scale_hh, *zero_point_ih, *zero_point_hh);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_quantized_rnn_tanh_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh, gc_tensor packed_ih, gc_tensor packed_hh, gc_tensor col_offsets_ih, gc_tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh) {
+  PROTECT(
+    torch::Tensor results__ = torch::quantized_rnn_tanh_cell(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(hx), *tensor_ptr_from_ocaml(w_ih), *tensor_ptr_from_ocaml(w_hh), *tensor_ptr_from_ocaml(b_ih), *tensor_ptr_from_ocaml(b_hh), *tensor_ptr_from_ocaml(packed_ih), *tensor_ptr_from_ocaml(packed_hh), *tensor_ptr_from_ocaml(col_offsets_ih), *tensor_ptr_from_ocaml(col_offsets_hh), *scale_ih, *scale_hh, *zero_point_ih, *zero_point_hh);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rad2deg(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::rad2deg(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rad2deg_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::rad2deg_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rad2deg_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::rad2deg_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rand(int64_t *size_data, int size_len, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::rand(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rand_like(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::rand_like(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rand_like_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::rand_like_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rand_out(gc_tensor out, int64_t *size_data, int size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::rand_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randint(int64_t high, int64_t *size_data, int size_len, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::randint(high, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randint_like(gc_tensor self, int64_t high) {
+  PROTECT(
+    torch::Tensor results__ = torch::randint_like(*tensor_ptr_from_ocaml(self), high);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randint_like_low_dtype(gc_tensor self, int64_t low, int64_t high) {
+  PROTECT(
+    torch::Tensor results__ = torch::randint_like(*tensor_ptr_from_ocaml(self), low, high);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randint_like_low_dtype_out(gc_tensor out, gc_tensor self, int64_t low, int64_t high) {
+  PROTECT(
+    torch::Tensor results__ = torch::randint_like_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), low, high);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randint_like_out(gc_tensor out, gc_tensor self, int64_t high) {
+  PROTECT(
+    torch::Tensor results__ = torch::randint_like_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), high);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randint_low(int64_t low, int64_t high, int64_t *size_data, int size_len, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::randint(low, high, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randint_low_out(gc_tensor out, int64_t low, int64_t high, int64_t *size_data, int size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::randint_out(*tensor_ptr_from_ocaml(out), low, high, torch::IntArrayRef(size_data, size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randint_out(gc_tensor out, int64_t high, int64_t *size_data, int size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::randint_out(*tensor_ptr_from_ocaml(out), high, torch::IntArrayRef(size_data, size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randn(int64_t *size_data, int size_len, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::randn(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randn_like(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::randn_like(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randn_like_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::randn_like_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randn_out(gc_tensor out, int64_t *size_data, int size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::randn_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_random(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::random(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_random_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->random_();
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_random_from(gc_tensor self, int64_t from, int64_t to_v, int to_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::random(*tensor_ptr_from_ocaml(self), from, to_null ? c10::nullopt : c10::optional(to_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_random_from_(gc_tensor self, int64_t from, int64_t to_v, int to_null) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->random_(from, to_null ? c10::nullopt : c10::optional(to_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_random_from_out(gc_tensor out, gc_tensor self, int64_t from, int64_t to_v, int to_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::random_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), from, to_null ? c10::nullopt : c10::optional(to_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_random_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::random_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_random_to(gc_tensor self, int64_t to) {
+  PROTECT(
+    torch::Tensor results__ = torch::random(*tensor_ptr_from_ocaml(self), to);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_random_to_(gc_tensor self, int64_t to) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->random_(to);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_random_to_out(gc_tensor out, gc_tensor self, int64_t to) {
+  PROTECT(
+    torch::Tensor results__ = torch::random_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), to);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randperm(int64_t n, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::randperm(n, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_randperm_out(gc_tensor out, int64_t n) {
+  PROTECT(
+    torch::Tensor results__ = torch::randperm_out(*tensor_ptr_from_ocaml(out), n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_range(scalar start, scalar end, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::range(*start, *end, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_range_out(gc_tensor out, scalar start, scalar end) {
+  PROTECT(
+    torch::Tensor results__ = torch::range_out(*tensor_ptr_from_ocaml(out), *start, *end);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_range_out_(gc_tensor out, scalar start, scalar end) {
+  PROTECT(
+    torch::Tensor results__ = torch::range_out(*tensor_ptr_from_ocaml(out), *start, *end);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_range_step(scalar start, scalar end, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::range(*start, *end, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_ravel(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::ravel(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_real(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::real(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reciprocal(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::reciprocal(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reciprocal_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::reciprocal_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reciprocal_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::reciprocal_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad1d(gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad1d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad1d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad1d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad1d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad1d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad2d(gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad2d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad3d(gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad3d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad3d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad3d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reflection_pad3d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reflection_pad3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_relu(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::relu(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_relu6(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::relu6(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_relu6_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::relu6_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_relu_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::relu_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_relu_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::relu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_remainder(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::remainder(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_remainder_(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->remainder_(*other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_remainder_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::remainder_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_remainder_scalar_tensor(scalar self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::remainder(*self, *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_remainder_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::remainder_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_remainder_tensor(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::remainder(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_remainder_tensor_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->remainder_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_remainder_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::remainder_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_renorm(gc_tensor self, scalar p, int64_t dim, scalar maxnorm) {
+  PROTECT(
+    torch::Tensor results__ = torch::renorm(*tensor_ptr_from_ocaml(self), *p, dim, *maxnorm);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_renorm_(gc_tensor self, scalar p, int64_t dim, scalar maxnorm) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->renorm_(*p, dim, *maxnorm);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_renorm_out(gc_tensor out, gc_tensor self, scalar p, int64_t dim, scalar maxnorm) {
+  PROTECT(
+    torch::Tensor results__ = torch::renorm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *p, dim, *maxnorm);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_repeat(gc_tensor self, int64_t *repeats_data, int repeats_len) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->repeat(torch::IntArrayRef(repeats_data, repeats_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_repeat_interleave(gc_tensor repeats, int64_t output_size_v, int output_size_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::repeat_interleave(*tensor_ptr_from_ocaml(repeats), output_size_null ? c10::nullopt : c10::optional(output_size_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_repeat_interleave_self_int(gc_tensor self, int64_t repeats, int64_t dim_v, int dim_null, int64_t output_size_v, int output_size_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::repeat_interleave(*tensor_ptr_from_ocaml(self), repeats, dim_null ? c10::nullopt : c10::optional(dim_v), output_size_null ? c10::nullopt : c10::optional(output_size_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_repeat_interleave_self_tensor(gc_tensor self, gc_tensor repeats, int64_t dim_v, int dim_null, int64_t output_size_v, int output_size_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::repeat_interleave(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(repeats), dim_null ? c10::nullopt : c10::optional(dim_v), output_size_null ? c10::nullopt : c10::optional(output_size_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_repeat_interleave_tensor_out(gc_tensor out, gc_tensor repeats, int64_t output_size_v, int output_size_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::repeat_interleave_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(repeats), output_size_null ? c10::nullopt : c10::optional(output_size_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_repeat_out(gc_tensor out, gc_tensor self, int64_t *repeats_data, int repeats_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::repeat_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(repeats_data, repeats_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad1d(gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad1d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad1d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad1d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad1d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad1d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad2d(gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad2d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad2d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad3d(gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad3d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad3d_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad3d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_replication_pad3d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::replication_pad3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(padding_data, padding_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_requires_grad_(gc_tensor self, int requires_grad) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->requires_grad_((bool)requires_grad);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reshape(gc_tensor self, int64_t *shape_data, int shape_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::reshape(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(shape_data, shape_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_reshape_as(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->reshape_as(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resize(gc_tensor self, int64_t *size_data, int size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::resize(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resize_(gc_tensor self, int64_t *size_data, int size_len) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->resize_(torch::IntArrayRef(size_data, size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resize_as(gc_tensor self, gc_tensor the_template) {
+  PROTECT(
+    torch::Tensor results__ = torch::resize_as(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(the_template));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resize_as_(gc_tensor self, gc_tensor the_template) {
+  PROTECT(
+    torch::Tensor results__ = torch::resize_as_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(the_template));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resize_as_out(gc_tensor out, gc_tensor self, gc_tensor the_template) {
+  PROTECT(
+    torch::Tensor results__ = torch::resize_as_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(the_template));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resize_as_sparse(gc_tensor self, gc_tensor the_template) {
+  PROTECT(
+    torch::Tensor results__ = torch::resize_as_sparse(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(the_template));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resize_as_sparse_(gc_tensor self, gc_tensor the_template) {
+  PROTECT(
+    torch::Tensor results__ = torch::resize_as_sparse_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(the_template));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resize_as_sparse_out(gc_tensor out, gc_tensor self, gc_tensor the_template) {
+  PROTECT(
+    torch::Tensor results__ = torch::resize_as_sparse_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(the_template));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resize_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::resize_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resolve_conj(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::resolve_conj(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_resolve_neg(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::resolve_neg(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+int atg_retains_grad(gc_tensor self) {
+  PROTECT(
+    return tensor_ptr_from_ocaml(self)->retains_grad();
+  )
+  return 0;
+}
+
+void atg_rnn_relu(raw_tensor *out__, gc_tensor input, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
+  PROTECT(
+    auto results__ = torch::rnn_relu(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(hx), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_rnn_relu_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh) {
+  PROTECT(
+    torch::Tensor results__ = torch::rnn_relu_cell(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(hx), *tensor_ptr_from_ocaml(w_ih), *tensor_ptr_from_ocaml(w_hh), (b_ih ? tensor_from_ocaml(b_ih) : torch::Tensor()), (b_hh ? tensor_from_ocaml(b_hh) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_rnn_relu_data(raw_tensor *out__, gc_tensor data, gc_tensor batch_sizes, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional) {
+  PROTECT(
+    auto results__ = torch::rnn_relu(*tensor_ptr_from_ocaml(data), *tensor_ptr_from_ocaml(batch_sizes), *tensor_ptr_from_ocaml(hx), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_rnn_tanh(raw_tensor *out__, gc_tensor input, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
+  PROTECT(
+    auto results__ = torch::rnn_tanh(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(hx), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_rnn_tanh_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh) {
+  PROTECT(
+    torch::Tensor results__ = torch::rnn_tanh_cell(*tensor_ptr_from_ocaml(input), *tensor_ptr_from_ocaml(hx), *tensor_ptr_from_ocaml(w_ih), *tensor_ptr_from_ocaml(w_hh), (b_ih ? tensor_from_ocaml(b_ih) : torch::Tensor()), (b_hh ? tensor_from_ocaml(b_hh) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_rnn_tanh_data(raw_tensor *out__, gc_tensor data, gc_tensor batch_sizes, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional) {
+  PROTECT(
+    auto results__ = torch::rnn_tanh(*tensor_ptr_from_ocaml(data), *tensor_ptr_from_ocaml(batch_sizes), *tensor_ptr_from_ocaml(hx), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_roll(gc_tensor self, int64_t *shifts_data, int shifts_len, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::roll(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(shifts_data, shifts_len), torch::IntArrayRef(dims_data, dims_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_roll_out(gc_tensor out, gc_tensor self, int64_t *shifts_data, int shifts_len, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::roll_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(shifts_data, shifts_len), torch::IntArrayRef(dims_data, dims_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rot90(gc_tensor self, int64_t k, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::rot90(*tensor_ptr_from_ocaml(self), k, torch::IntArrayRef(dims_data, dims_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rot90_out(gc_tensor out, gc_tensor self, int64_t k, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::rot90_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), k, torch::IntArrayRef(dims_data, dims_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_round(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::round(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_round_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::round_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_round_decimals(gc_tensor self, int64_t decimals) {
+  PROTECT(
+    torch::Tensor results__ = torch::round(*tensor_ptr_from_ocaml(self), decimals);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_round_decimals_(gc_tensor self, int64_t decimals) {
+  PROTECT(
+    torch::Tensor results__ = torch::round_(*tensor_ptr_from_ocaml(self), decimals);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_round_decimals_out(gc_tensor out, gc_tensor self, int64_t decimals) {
+  PROTECT(
+    torch::Tensor results__ = torch::round_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), decimals);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_round_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::round_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_row_indices(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->row_indices();
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_row_indices_copy(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::row_indices_copy(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_row_indices_copy_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::row_indices_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_row_stack(gc_tensor *tensors_data, int tensors_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::row_stack(of_carray_tensor(tensors_data, tensors_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_row_stack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::row_stack_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rrelu(gc_tensor self, int training) {
+  PROTECT(
+    torch::Tensor results__ = torch::rrelu(*tensor_ptr_from_ocaml(self), (bool)training);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rrelu_(gc_tensor self, int training) {
+  PROTECT(
+    torch::Tensor results__ = torch::rrelu_(*tensor_ptr_from_ocaml(self), (bool)training);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rrelu_with_noise(gc_tensor self, gc_tensor noise, int training) {
+  PROTECT(
+    torch::Tensor results__ = torch::rrelu_with_noise(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(noise), (bool)training);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rrelu_with_noise_(gc_tensor self, gc_tensor noise, int training) {
+  PROTECT(
+    torch::Tensor results__ = torch::rrelu_with_noise_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(noise), (bool)training);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rrelu_with_noise_backward(gc_tensor grad_output, gc_tensor self, gc_tensor noise, scalar lower, scalar upper, int training, int self_is_result) {
+  PROTECT(
+    torch::Tensor results__ = torch::rrelu_with_noise_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(noise), *lower, *upper, (bool)training, (bool)self_is_result);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rrelu_with_noise_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self, gc_tensor noise, scalar lower, scalar upper, int training, int self_is_result) {
+  PROTECT(
+    torch::Tensor results__ = torch::rrelu_with_noise_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(noise), *lower, *upper, (bool)training, (bool)self_is_result);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rrelu_with_noise_out(gc_tensor out, gc_tensor self, gc_tensor noise, int training) {
+  PROTECT(
+    torch::Tensor results__ = torch::rrelu_with_noise_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(noise), (bool)training);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rsqrt(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::rsqrt(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rsqrt_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::rsqrt_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rsqrt_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::rsqrt_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rsub(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::rsub(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rsub_scalar(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::rsub(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rsub_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::rsub_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_rsub_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::rsub_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scalar_tensor(scalar s, int options_kind, int options_device) {
+  PROTECT(
+    torch::Tensor results__ = torch::scalar_tensor(*s, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scalar_tensor_out(gc_tensor out, scalar s) {
+  PROTECT(
+    torch::Tensor results__ = torch::scalar_tensor_out(*tensor_ptr_from_ocaml(out), *s);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scaled_dot_product_attention(gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor attn_mask, double dropout_p, int is_causal, double scale_v, int scale_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::scaled_dot_product_attention(*tensor_ptr_from_ocaml(query), *tensor_ptr_from_ocaml(key), *tensor_ptr_from_ocaml(value), (attn_mask ? tensor_from_ocaml(attn_mask) : torch::Tensor()), dropout_p, (bool)is_causal, scale_null ? c10::nullopt : c10::optional(scale_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->scatter_(dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_add(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter_add(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_add_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->scatter_add_(dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_add_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter_add_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_reduce(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src), std::string(reduce));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_reduce_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->scatter_(dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src), std::string(reduce));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_reduce_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src), std::string(reduce));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_src_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *tensor_ptr_from_ocaml(src));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_value(gc_tensor self, int64_t dim, gc_tensor index, scalar value) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_value_(gc_tensor self, int64_t dim, gc_tensor index, scalar value) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->scatter_(dim, *tensor_ptr_from_ocaml(index), *value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_value_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, scalar value) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_value_reduce(gc_tensor self, int64_t dim, gc_tensor index, scalar value, char * reduce) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter(*tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *value, std::string(reduce));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_value_reduce_(gc_tensor self, int64_t dim, gc_tensor index, scalar value, char * reduce) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->scatter_(dim, *tensor_ptr_from_ocaml(index), *value, std::string(reduce));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_scatter_value_reduce_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, scalar value, char * reduce) {
+  PROTECT(
+    torch::Tensor results__ = torch::scatter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, *tensor_ptr_from_ocaml(index), *value, std::string(reduce));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_searchsorted(gc_tensor sorted_sequence, gc_tensor self, int out_int32, int right, char * side, gc_tensor sorter) {
+  PROTECT(
+    torch::Tensor results__ = torch::searchsorted(*tensor_ptr_from_ocaml(sorted_sequence), *tensor_ptr_from_ocaml(self), (bool)out_int32, (bool)right, std::string(side), (sorter ? tensor_from_ocaml(sorter) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_searchsorted_scalar(gc_tensor sorted_sequence, scalar self, int out_int32, int right, char * side, gc_tensor sorter) {
+  PROTECT(
+    torch::Tensor results__ = torch::searchsorted(*tensor_ptr_from_ocaml(sorted_sequence), *self, (bool)out_int32, (bool)right, std::string(side), (sorter ? tensor_from_ocaml(sorter) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_searchsorted_scalar_out(gc_tensor out, gc_tensor sorted_sequence, scalar self, int out_int32, int right, char * side, gc_tensor sorter) {
+  PROTECT(
+    torch::Tensor results__ = torch::searchsorted_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(sorted_sequence), *self, (bool)out_int32, (bool)right, std::string(side), (sorter ? tensor_from_ocaml(sorter) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_searchsorted_tensor_out(gc_tensor out, gc_tensor sorted_sequence, gc_tensor self, int out_int32, int right, char * side, gc_tensor sorter) {
+  PROTECT(
+    torch::Tensor results__ = torch::searchsorted_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(sorted_sequence), *tensor_ptr_from_ocaml(self), (bool)out_int32, (bool)right, std::string(side), (sorter ? tensor_from_ocaml(sorter) : torch::Tensor()));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_segment_reduce(gc_tensor data, char * reduce, gc_tensor lengths, gc_tensor indices, gc_tensor offsets, int64_t axis, int unsafe, scalar initial) {
+  PROTECT(
+    torch::Tensor results__ = torch::segment_reduce(*tensor_ptr_from_ocaml(data), std::string(reduce), (lengths ? tensor_from_ocaml(lengths) : torch::Tensor()), (indices ? tensor_from_ocaml(indices) : torch::Tensor()), (offsets ?
tensor_from_ocaml(offsets) : torch::Tensor()), axis, (bool)unsafe, *initial); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_segment_reduce_out(gc_tensor out, gc_tensor data, char * reduce, gc_tensor lengths, gc_tensor indices, gc_tensor offsets, int64_t axis, int unsafe, scalar initial) { + PROTECT( + torch::Tensor results__ = torch::segment_reduce_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(data), std::string(reduce), (lengths ? tensor_from_ocaml(lengths) : torch::Tensor()), (indices ? tensor_from_ocaml(indices) : torch::Tensor()), (offsets ? tensor_from_ocaml(offsets) : torch::Tensor()), axis, (bool)unsafe, *initial); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_select(gc_tensor self, int64_t dim, int64_t index) { + PROTECT( + torch::Tensor results__ = torch::select(*tensor_ptr_from_ocaml(self), dim, index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_select_backward(gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t index) { + PROTECT( + torch::Tensor results__ = torch::select_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_select_backward_out(gc_tensor out, gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t index) { + PROTECT( + torch::Tensor results__ = torch::select_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_select_copy(gc_tensor self, int64_t dim, int64_t index) { + PROTECT( + torch::Tensor results__ = torch::select_copy(*tensor_ptr_from_ocaml(self), dim, index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_select_copy_int_out(gc_tensor out, gc_tensor self, int64_t dim, int64_t index) { + PROTECT( + torch::Tensor results__ = torch::select_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_select_scatter(gc_tensor self, gc_tensor src, int64_t dim, int64_t index) { + PROTECT( + torch::Tensor results__ = torch::select_scatter(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), dim, index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_select_scatter_out(gc_tensor out, gc_tensor self, gc_tensor src, int64_t dim, int64_t index) { + PROTECT( + torch::Tensor results__ = torch::select_scatter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), dim, index); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_selu(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::selu(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_selu_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::selu_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_set(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::set(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_set_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->set_(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_set_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::set_out(*tensor_ptr_from_ocaml(out), 
*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_set_requires_grad(gc_tensor self, int r) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->set_requires_grad((bool)r); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_set_source_tensor(gc_tensor self, gc_tensor source) { + PROTECT( + torch::Tensor results__ = torch::set(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(source)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_set_source_tensor_(gc_tensor self, gc_tensor source) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->set_(*tensor_ptr_from_ocaml(source)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_set_source_tensor_out(gc_tensor out, gc_tensor self, gc_tensor source) { + PROTECT( + torch::Tensor results__ = torch::set_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(source)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_set_source_tensor_storage_offset_(gc_tensor self, gc_tensor source, int64_t storage_offset, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->set_(*tensor_ptr_from_ocaml(source), storage_offset, torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sgn(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sgn(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sgn_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->sgn_(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sgn_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sgn_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sigmoid(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sigmoid(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sigmoid_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sigmoid_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sigmoid_backward(gc_tensor grad_output, gc_tensor output) { + PROTECT( + torch::Tensor results__ = torch::sigmoid_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sigmoid_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor output) { + PROTECT( + torch::Tensor results__ = torch::sigmoid_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sigmoid_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sigmoid_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sign(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sign(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sign_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->sign_(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sign_out(gc_tensor out, gc_tensor self) { + PROTECT( + 
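   // Shared shape of every stub in this file: PROTECT (defined in the
+    // hand-written wrapper code, presumably a try/catch) keeps C++ exceptions
+    // from crossing the FFI boundary; gc_tensor inputs are unwrapped with
+    // tensor_ptr_from_ocaml; the result is handed back to OCaml as a
+    // raw_tensor via tensor_to_ocaml. In `_out` variants such as this one,
+    // `out` is caller-supplied storage and the returned handle shares it.
+ 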
torch::Tensor results__ = torch::sign_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_signbit(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::signbit(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_signbit_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::signbit_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_silu(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::silu(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_silu_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::silu_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_silu_backward(gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::silu_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_silu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::silu_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_silu_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::silu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sin(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sin(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sin_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sin_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sin_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sin_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sinc(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sinc(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sinc_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sinc_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sinc_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sinc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sinh(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sinh(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sinh_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sinh_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sinh_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::sinh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +int64_t atg_size(gc_tensor self, int64_t dim) { + PROTECT( + return torch::size(*tensor_ptr_from_ocaml(self), dim); + ) + return 0; +} + +raw_tensor atg_slice(gc_tensor self, int64_t dim, int64_t start_v, int start_null, int64_t 
end_v, int end_null, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::slice(*tensor_ptr_from_ocaml(self), dim, start_null ? c10::nullopt : c10::optional(start_v), end_null ? c10::nullopt : c10::optional(end_v), step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slice_backward(gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t start, int64_t end, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::slice_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, start, end, step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slice_backward_out(gc_tensor out, gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t start, int64_t end, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::slice_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, start, end, step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slice_copy(gc_tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::slice_copy(*tensor_ptr_from_ocaml(self), dim, start_null ? c10::nullopt : c10::optional(start_v), end_null ? c10::nullopt : c10::optional(end_v), step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slice_copy_tensor_out(gc_tensor out, gc_tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::slice_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, start_null ? c10::nullopt : c10::optional(start_v), end_null ? c10::nullopt : c10::optional(end_v), step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slice_scatter(gc_tensor self, gc_tensor src, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::slice_scatter(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), dim, start_null ? c10::nullopt : c10::optional(start_v), end_null ? c10::nullopt : c10::optional(end_v), step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slice_scatter_out(gc_tensor out, gc_tensor self, gc_tensor src, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::slice_scatter_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(src), dim, start_null ? c10::nullopt : c10::optional(start_v), end_null ? 
c10::nullopt : c10::optional(end_v), step); + return tensor_to_ocaml(results__); + ) +} + +void atg_slogdet(raw_tensor *out__, gc_tensor self) { + PROTECT( + auto results__ = torch::slogdet(*tensor_ptr_from_ocaml(self)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_slogdet_out(raw_tensor *out__, gc_tensor sign, gc_tensor logabsdet, gc_tensor self) { + PROTECT( + auto results__ = torch::slogdet_out(*tensor_ptr_from_ocaml(sign), *tensor_ptr_from_ocaml(logabsdet), *tensor_ptr_from_ocaml(self)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_slow_conv3d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv3d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slow_conv3d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slow_conv_dilated2d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv_dilated2d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slow_conv_dilated2d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv_dilated2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slow_conv_dilated3d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv_dilated3d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slow_conv_dilated3d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv_dilated3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slow_conv_transpose2d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv_transpose2d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slow_conv_transpose2d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv_transpose2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? 
tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slow_conv_transpose3d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv_transpose3d(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_slow_conv_transpose3d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len) { + PROTECT( + torch::Tensor results__ = torch::slow_conv_transpose3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(weight), torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? tensor_from_ocaml(bias) : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(dilation_data, dilation_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_smm(gc_tensor self, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::smm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_smooth_l1_loss(gc_tensor self, gc_tensor target, int64_t reduction, double beta) { + PROTECT( + torch::Tensor results__ = torch::smooth_l1_loss(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, beta); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_smooth_l1_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, double beta) { + PROTECT( + torch::Tensor results__ = torch::smooth_l1_loss_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, beta); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_smooth_l1_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, double beta) { + PROTECT( + torch::Tensor results__ = torch::smooth_l1_loss_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, beta); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_smooth_l1_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction, double beta) { + PROTECT( + torch::Tensor results__ = torch::smooth_l1_loss_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction, beta); + return 
tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_soft_margin_loss(gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::soft_margin_loss(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_soft_margin_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::soft_margin_loss_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_soft_margin_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::soft_margin_loss_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_soft_margin_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::soft_margin_loss_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(target), reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softmax(gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::softmax(*tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softmax_int_out(gc_tensor out, gc_tensor self, int64_t dim, int dtype) { + PROTECT( + torch::Tensor results__ = torch::softmax_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softplus(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::softplus(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softplus_backward(gc_tensor grad_output, gc_tensor self, scalar beta, scalar threshold) { + PROTECT( + torch::Tensor results__ = torch::softplus_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *beta, *threshold); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softplus_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, scalar beta, scalar threshold) { + PROTECT( + torch::Tensor results__ = torch::softplus_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *beta, *threshold); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softplus_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::softplus_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softshrink(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::softshrink(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softshrink_backward(gc_tensor grad_output, gc_tensor self, scalar lambd) { + PROTECT( + torch::Tensor results__ = torch::softshrink_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *lambd); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softshrink_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor 
self, scalar lambd) { + PROTECT( + torch::Tensor results__ = torch::softshrink_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *lambd); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_softshrink_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::softshrink_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +void atg_sort(raw_tensor *out__, gc_tensor self, int64_t dim, int descending) { + PROTECT( + auto results__ = torch::sort(*tensor_ptr_from_ocaml(self), dim, (bool)descending); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_sort_stable(raw_tensor *out__, gc_tensor self, int stable, int64_t dim, int descending) { + PROTECT( + auto results__ = torch::sort(*tensor_ptr_from_ocaml(self), (bool)stable, dim, (bool)descending); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_sort_values(raw_tensor *out__, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim, int descending) { + PROTECT( + auto results__ = torch::sort_out(*tensor_ptr_from_ocaml(values), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), dim, (bool)descending); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_sort_values_stable(raw_tensor *out__, gc_tensor values, gc_tensor indices, gc_tensor self, int stable, int64_t dim, int descending) { + PROTECT( + auto results__ = torch::sort_out(*tensor_ptr_from_ocaml(values), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), (bool)stable, dim, (bool)descending); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_sparse_bsc_tensor(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::sparse_bsc_tensor(*tensor_ptr_from_ocaml(ccol_indices), *tensor_ptr_from_ocaml(row_indices), *tensor_ptr_from_ocaml(values), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_bsc_tensor_ccol_row_value_size(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::sparse_bsc_tensor(*tensor_ptr_from_ocaml(ccol_indices), *tensor_ptr_from_ocaml(row_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_bsr_tensor(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::sparse_bsr_tensor(*tensor_ptr_from_ocaml(crow_indices), *tensor_ptr_from_ocaml(col_indices), *tensor_ptr_from_ocaml(values), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_bsr_tensor_crow_col_value_size(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + 
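  // Argument conventions on display here: each int64_t*/int pair such as
+  // (size_data, size_len) is rebuilt into a torch::IntArrayRef, and the
+  // trailing options_kind/options_device ints are decoded into the dtype and
+  // device of the TensorOptions via at::ScalarType and device_of_int (a
+  // helper from the hand-written wrapper code).
+ 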
PROTECT( + torch::Tensor results__ = torch::sparse_bsr_tensor(*tensor_ptr_from_ocaml(crow_indices), *tensor_ptr_from_ocaml(col_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_compressed_tensor(gc_tensor compressed_indices, gc_tensor plain_indices, gc_tensor values, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::sparse_compressed_tensor(*tensor_ptr_from_ocaml(compressed_indices), *tensor_ptr_from_ocaml(plain_indices), *tensor_ptr_from_ocaml(values), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_compressed_tensor_comp_plain_value_size(gc_tensor compressed_indices, gc_tensor plain_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::sparse_compressed_tensor(*tensor_ptr_from_ocaml(compressed_indices), *tensor_ptr_from_ocaml(plain_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_coo_tensor(int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::sparse_coo_tensor(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_coo_tensor_indices(gc_tensor indices, gc_tensor values, int options_kind, int options_device, int is_coalesced) { + PROTECT( + torch::Tensor results__ = torch::sparse_coo_tensor(*tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(values), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)is_coalesced); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_coo_tensor_indices_size(gc_tensor indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device, int is_coalesced) { + PROTECT( + torch::Tensor results__ = torch::sparse_coo_tensor(*tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)is_coalesced); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_coo_tensor_size_out(gc_tensor out, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::sparse_coo_tensor_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_csc_tensor(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::sparse_csc_tensor(*tensor_ptr_from_ocaml(ccol_indices), *tensor_ptr_from_ocaml(row_indices), *tensor_ptr_from_ocaml(values), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_csc_tensor_ccol_row_value_size(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = 
torch::sparse_csc_tensor(*tensor_ptr_from_ocaml(ccol_indices), *tensor_ptr_from_ocaml(row_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_csr_tensor(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::sparse_csr_tensor(*tensor_ptr_from_ocaml(crow_indices), *tensor_ptr_from_ocaml(col_indices), *tensor_ptr_from_ocaml(values), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_csr_tensor_crow_col_value_size(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::sparse_csr_tensor(*tensor_ptr_from_ocaml(crow_indices), *tensor_ptr_from_ocaml(col_indices), *tensor_ptr_from_ocaml(values), torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +int64_t atg_sparse_dim(gc_tensor self) { + PROTECT( + return tensor_ptr_from_ocaml(self)->sparse_dim(); + ) + return 0; +} + +raw_tensor atg_sparse_mask(gc_tensor self, gc_tensor mask) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->sparse_mask(*tensor_ptr_from_ocaml(mask)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_mask_out(gc_tensor out, gc_tensor self, gc_tensor mask) { + PROTECT( + torch::Tensor results__ = torch::sparse_mask_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mask)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_resize(gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { + PROTECT( + torch::Tensor results__ = torch::sparse_resize(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_resize_(gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->sparse_resize_(torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_resize_and_clear(gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { + PROTECT( + torch::Tensor results__ = torch::sparse_resize_and_clear(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_resize_and_clear_(gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->sparse_resize_and_clear_(torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_resize_and_clear_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { + PROTECT( + torch::Tensor results__ = torch::sparse_resize_and_clear_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); 
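+    // results__ shares storage with `out`; tensor_to_ocaml wraps it in a fresh
+    // raw handle, which the generated OCaml wrapper (per internals.md) turns
+    // into a GC-managed tensor before it reaches user code.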
+ return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_resize_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { + PROTECT( + torch::Tensor results__ = torch::sparse_resize_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_sampled_addmm(gc_tensor self, gc_tensor mat1, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::sparse_sampled_addmm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_sparse_sampled_addmm_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2) { + PROTECT( + torch::Tensor results__ = torch::sparse_sampled_addmm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_airy_ai(gc_tensor x) { + PROTECT( + torch::Tensor results__ = torch::special_airy_ai(*tensor_ptr_from_ocaml(x)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_airy_ai_out(gc_tensor out, gc_tensor x) { + PROTECT( + torch::Tensor results__ = torch::special_airy_ai_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_bessel_j0(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_bessel_j0(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_bessel_j0_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_bessel_j0_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_bessel_j1(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_bessel_j1(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_bessel_j1_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_bessel_j1_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_bessel_y0(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_bessel_y0(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_bessel_y0_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_bessel_y0_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_bessel_y1(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_bessel_y1(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_bessel_y1_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_bessel_y1_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_t(gc_tensor x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_t(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_t_n_scalar(gc_tensor x, scalar 
n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_t(*tensor_ptr_from_ocaml(x), *n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_t_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_t_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_t_out(gc_tensor out, gc_tensor x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_t_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_t_x_scalar(scalar x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_t(*x, *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_t_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_t_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_u(gc_tensor x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_u(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_u_n_scalar(gc_tensor x, scalar n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_u(*tensor_ptr_from_ocaml(x), *n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_u_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_u_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_u_out(gc_tensor out, gc_tensor x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_u_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_u_x_scalar(scalar x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_u(*x, *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_u_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_u_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_v(gc_tensor x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_v(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_v_n_scalar(gc_tensor x, scalar n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_v(*tensor_ptr_from_ocaml(x), *n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_v_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) { + PROTECT( + torch::Tensor results__ = 
torch::special_chebyshev_polynomial_v_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_v_out(gc_tensor out, gc_tensor x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_v_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_v_x_scalar(scalar x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_v(*x, *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_v_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_v_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_w(gc_tensor x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_w(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_w_n_scalar(gc_tensor x, scalar n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_w(*tensor_ptr_from_ocaml(x), *n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_w_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_w_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_w_out(gc_tensor out, gc_tensor x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_w_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_w_x_scalar(scalar x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_w(*x, *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_chebyshev_polynomial_w_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) { + PROTECT( + torch::Tensor results__ = torch::special_chebyshev_polynomial_w_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_digamma(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_digamma(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_digamma_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_digamma_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_entr(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_entr(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_entr_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_entr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_erf(gc_tensor self) { + PROTECT( + torch::Tensor results__ = 
torch::special_erf(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_erf_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_erf_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_erfc(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_erfc(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_erfc_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_erfc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_erfcx(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_erfcx(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_erfcx_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_erfcx_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_erfinv(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_erfinv(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_erfinv_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_erfinv_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_exp2(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_exp2(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_exp2_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_exp2_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_expit(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_expit(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_expit_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_expit_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_expm1(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_expm1(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_expm1_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::special_expm1_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_gammainc(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::special_gammainc(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_gammainc_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::special_gammainc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_special_gammaincc(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::special_gammaincc(*tensor_ptr_from_ocaml(self), 
+    torch::Tensor results__ = torch::special_gammaincc(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_gammaincc_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_gammaincc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_gammaln(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_gammaln(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_gammaln_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_gammaln_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_h(gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_h(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_h_n_scalar(gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_h(*tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_h_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_h_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_h_out(gc_tensor out, gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_h_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_h_x_scalar(scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_h(*x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_h_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_h_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_he(gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_he(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_he_n_scalar(gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_he(*tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_he_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_he_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_he_out(gc_tensor out, gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_he_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_he_x_scalar(scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_he(*x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_hermite_polynomial_he_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_hermite_polynomial_he_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_i0(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_i0(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_i0_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_i0_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_i0e(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_i0e(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_i0e_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_i0e_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_i1(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_i1(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_i1_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_i1_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_i1e(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_i1e(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_i1e_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_i1e_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_laguerre_polynomial_l(gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_laguerre_polynomial_l(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_laguerre_polynomial_l_n_scalar(gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_laguerre_polynomial_l(*tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_laguerre_polynomial_l_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_laguerre_polynomial_l_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_laguerre_polynomial_l_out(gc_tensor out, gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_laguerre_polynomial_l_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_laguerre_polynomial_l_x_scalar(scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_laguerre_polynomial_l(*x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_laguerre_polynomial_l_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_laguerre_polynomial_l_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_legendre_polynomial_p(gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_legendre_polynomial_p(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_legendre_polynomial_p_n_scalar(gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_legendre_polynomial_p(*tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_legendre_polynomial_p_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_legendre_polynomial_p_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_legendre_polynomial_p_out(gc_tensor out, gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_legendre_polynomial_p_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_legendre_polynomial_p_x_scalar(scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_legendre_polynomial_p(*x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_legendre_polynomial_p_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_legendre_polynomial_p_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_log1p(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_log1p(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_log1p_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_log1p_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_log_ndtr(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_log_ndtr(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_log_ndtr_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_log_ndtr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_log_softmax(gc_tensor self, int64_t dim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_log_softmax(*tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_logit(gc_tensor self, double eps_v, int eps_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_logit(*tensor_ptr_from_ocaml(self), eps_null ? c10::nullopt : c10::optional(eps_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_logit_out(gc_tensor out, gc_tensor self, double eps_v, int eps_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_logit_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), eps_null ? c10::nullopt : c10::optional(eps_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_logsumexp(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_logsumexp(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_logsumexp_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_logsumexp_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len), (bool)keepdim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_modified_bessel_i0(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_modified_bessel_i0(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_modified_bessel_i0_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_modified_bessel_i0_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_modified_bessel_i1(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_modified_bessel_i1(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_modified_bessel_i1_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_modified_bessel_i1_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_modified_bessel_k0(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_modified_bessel_k0(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_modified_bessel_k0_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_modified_bessel_k0_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_modified_bessel_k1(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_modified_bessel_k1(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_modified_bessel_k1_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_modified_bessel_k1_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_multigammaln(gc_tensor self, int64_t p) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_multigammaln(*tensor_ptr_from_ocaml(self), p);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_multigammaln_out(gc_tensor out, gc_tensor self, int64_t p) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_multigammaln_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), p);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_ndtr(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_ndtr(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_ndtr_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_ndtr_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_ndtri(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_ndtri(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_ndtri_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_ndtri_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_polygamma(int64_t n, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_polygamma(n, *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_polygamma_out(gc_tensor out, int64_t n, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_polygamma_out(*tensor_ptr_from_ocaml(out), n, *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_psi(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_psi(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_psi_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_psi_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_round(gc_tensor self, int64_t decimals) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_round(*tensor_ptr_from_ocaml(self), decimals);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_round_out(gc_tensor out, gc_tensor self, int64_t decimals) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_round_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), decimals);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_scaled_modified_bessel_k0(gc_tensor x) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_scaled_modified_bessel_k0(*tensor_ptr_from_ocaml(x));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_scaled_modified_bessel_k0_out(gc_tensor out, gc_tensor x) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_scaled_modified_bessel_k0_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_scaled_modified_bessel_k1(gc_tensor x) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_scaled_modified_bessel_k1(*tensor_ptr_from_ocaml(x));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_scaled_modified_bessel_k1_out(gc_tensor out, gc_tensor x) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_scaled_modified_bessel_k1_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_t(gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_t(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_t_n_scalar(gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_t(*tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_t_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_t_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
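+// A note on the shapes used by the stubs in this file (a summary of visible
+// conventions, not generator output): name suffixes pick the torch overload,
+// so `_out` variants write into a caller-provided `out` tensor,
+// `_x_scalar`/`_n_scalar` variants take a `scalar` in place of a tensor, and
+// a trailing underscore marks an in-place method call. PROTECT wraps each
+// body so that a C++ exception is assumed to be reported back to OCaml
+// rather than unwinding across the FFI boundary. Tensor arguments arrive as
+// gc_tensor handles; results leave as raw_tensor and are expected to be
+// re-wrapped on the OCaml side (with_tensor_gc) before reaching user code.
+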
+raw_tensor atg_special_shifted_chebyshev_polynomial_t_out(gc_tensor out, gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_t_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_t_x_scalar(scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_t(*x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_t_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_t_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_u(gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_u(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_u_n_scalar(gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_u(*tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_u_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_u_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_u_out(gc_tensor out, gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_u_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_u_x_scalar(scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_u(*x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_u_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_u_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_v(gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_v(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_v_n_scalar(gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_v(*tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_v_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_v_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_v_out(gc_tensor out, gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_v_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_v_x_scalar(scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_v(*x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_v_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_v_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_w(gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_w(*tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_w_n_scalar(gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_w(*tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_w_n_scalar_out(gc_tensor out, gc_tensor x, scalar n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_w_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *n);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_w_out(gc_tensor out, gc_tensor x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_w_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x), *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_w_x_scalar(scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_w(*x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_shifted_chebyshev_polynomial_w_x_scalar_out(gc_tensor out, scalar x, gc_tensor n) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_shifted_chebyshev_polynomial_w_out(*tensor_ptr_from_ocaml(out), *x, *tensor_ptr_from_ocaml(n));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_sinc(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_sinc(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_sinc_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_sinc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_softmax(gc_tensor self, int64_t dim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_softmax(*tensor_ptr_from_ocaml(self), dim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_spherical_bessel_j0(gc_tensor x) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_spherical_bessel_j0(*tensor_ptr_from_ocaml(x));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_spherical_bessel_j0_out(gc_tensor out, gc_tensor x) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_spherical_bessel_j0_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(x));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlog1py(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlog1py(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlog1py_other_scalar(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlog1py(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlog1py_other_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlog1py_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlog1py_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlog1py_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlog1py_self_scalar(scalar self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlog1py(*self, *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlog1py_self_scalar_out(gc_tensor out, scalar self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlog1py_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlogy(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlogy(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlogy_other_scalar(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlogy(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlogy_other_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlogy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlogy_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlogy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlogy_self_scalar(scalar self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlogy(*self, *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_xlogy_self_scalar_out(gc_tensor out, scalar self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_xlogy_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_zeta(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_zeta(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_zeta_other_scalar(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_zeta(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_zeta_other_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_zeta_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_zeta_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_zeta_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_zeta_self_scalar(scalar self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_zeta(*self, *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_special_zeta_self_scalar_out(gc_tensor out, scalar self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::special_zeta_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor *atg_split(gc_tensor self, int64_t split_size, int64_t dim) {
+  PROTECT(
+    auto results__ = torch::split(*tensor_ptr_from_ocaml(self), split_size, dim);
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
+raw_tensor *atg_split_copy(gc_tensor self, int64_t split_size, int64_t dim) {
+  PROTECT(
+    auto results__ = torch::split_copy(*tensor_ptr_from_ocaml(self), split_size, dim);
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
+void atg_split_copy_tensor_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t split_size, int64_t dim) {
+  PROTECT(
+    torch::split_copy_out(of_carray_tensor(out_data, out_len), *tensor_ptr_from_ocaml(self), split_size, dim);
+  )
+}
+
+raw_tensor *atg_split_sizes(gc_tensor self, int64_t *split_size_data, int split_size_len, int64_t dim) {
+  PROTECT(
+    auto results__ = torch::split(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(split_size_data, split_size_len), dim);
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
+raw_tensor *atg_split_with_sizes(gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) {
+  PROTECT(
+    auto results__ = torch::split_with_sizes(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(split_sizes_data, split_sizes_len), dim);
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
+raw_tensor *atg_split_with_sizes_copy(gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) {
+  PROTECT(
+    auto results__ = torch::split_with_sizes_copy(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(split_sizes_data, split_sizes_len), dim);
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
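+// Stubs returning a list of tensors (the atg_split* family above) use a
+// plain C convention: the result is a malloc'd array of raw_tensor handles
+// terminated by a nullptr entry. The caller, presumably the generated OCaml
+// wrapper, is assumed to walk the array up to the terminator, wrap each
+// element, and free the array itself.
+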
+void atg_split_with_sizes_copy_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) {
+  PROTECT(
+    torch::split_with_sizes_copy_out(of_carray_tensor(out_data, out_len), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(split_sizes_data, split_sizes_len), dim);
+  )
+}
+
+raw_tensor atg_sqrt(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::sqrt(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sqrt_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::sqrt_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sqrt_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::sqrt_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_square(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::square(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_square_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::square_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_square_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::square_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::squeeze(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->squeeze_();
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_copy(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::squeeze_copy(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_copy_dim(gc_tensor self, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::squeeze_copy(*tensor_ptr_from_ocaml(self), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_copy_dim_out(gc_tensor out, gc_tensor self, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::squeeze_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_copy_dims(gc_tensor self, int64_t *dim_data, int dim_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::squeeze_copy(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_copy_dims_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::squeeze_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_copy_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::squeeze_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_dim(gc_tensor self, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::squeeze(*tensor_ptr_from_ocaml(self), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_dim_(gc_tensor self, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->squeeze_(dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_dims(gc_tensor self, int64_t *dim_data, int dim_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::squeeze(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dim_data, dim_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_squeeze_dims_(gc_tensor self, int64_t *dim_data, int dim_len) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->squeeze_(torch::IntArrayRef(dim_data, dim_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sspaddmm(gc_tensor self, gc_tensor mat1, gc_tensor mat2) {
+  PROTECT(
+    torch::Tensor results__ = torch::sspaddmm(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sspaddmm_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2) {
+  PROTECT(
+    torch::Tensor results__ = torch::sspaddmm_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(mat1), *tensor_ptr_from_ocaml(mat2));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_stack(gc_tensor *tensors_data, int tensors_len, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::stack(of_carray_tensor(tensors_data, tensors_len), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_stack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim) {
+  PROTECT(
+    torch::Tensor results__ = torch::stack_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len), dim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_std(gc_tensor self, int unbiased) {
+  PROTECT(
+    torch::Tensor results__ = torch::std(*tensor_ptr_from_ocaml(self), (bool)unbiased);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_std_correction(gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
+  PROTECT(
+    torch::Tensor results__ = torch::std(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_std_correction_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
+  PROTECT(
+    torch::Tensor results__ = torch::std_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_std_dim(gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) {
+  PROTECT(
+    torch::Tensor results__ = torch::std(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_std_mean(raw_tensor *out__, gc_tensor self, int unbiased) {
+  PROTECT(
+    auto results__ = torch::std_mean(*tensor_ptr_from_ocaml(self), (bool)unbiased);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
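+// Stubs returning a fixed-size tuple (the std_mean family here, and svd,
+// topk, triangular_solve further down) take a `raw_tensor *out__` parameter
+// instead of returning a value: each component of the C++ std::tuple is
+// written into out__[0], out__[1], ... The caller is assumed to pass an
+// array at least as large as the tuple.
+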
+void atg_std_mean_correction(raw_tensor *out__, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
+  PROTECT(
+    auto results__ = torch::std_mean(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_std_mean_correction_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
+  PROTECT(
+    auto results__ = torch::std_mean_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+void atg_std_mean_dim(raw_tensor *out__, gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) {
+  PROTECT(
+    auto results__ = torch::std_mean(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+  )
+}
+
+raw_tensor atg_std_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) {
+  PROTECT(
+    torch::Tensor results__ = torch::std_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_stft(gc_tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, gc_tensor window, int normalized, int onesided, int return_complex) {
+  PROTECT(
+    torch::Tensor results__ = torch::stft(*tensor_ptr_from_ocaml(self), n_fft, hop_length_null ? c10::nullopt : c10::optional(hop_length_v), win_length_null ? c10::nullopt : c10::optional(win_length_v), (window ? tensor_from_ocaml(window) : torch::Tensor()), (bool)normalized, (bool)onesided, (bool)return_complex);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_stft_center(gc_tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, gc_tensor window, int center, char * pad_mode, int normalized, int onesided, int return_complex) {
+  PROTECT(
+    torch::Tensor results__ = torch::stft(*tensor_ptr_from_ocaml(self), n_fft, hop_length_null ? c10::nullopt : c10::optional(hop_length_v), win_length_null ? c10::nullopt : c10::optional(win_length_v), (window ? tensor_from_ocaml(window) : torch::Tensor()), (bool)center, std::string(pad_mode), (bool)normalized, (bool)onesided, (bool)return_complex);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+int64_t atg_stride(gc_tensor self, int64_t dim) {
+  PROTECT(
+    return torch::stride(*tensor_ptr_from_ocaml(self), dim);
+  )
+  return 0;
+}
+
+raw_tensor atg_sub(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::sub(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sub_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->sub_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sub_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::sub_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sub_scalar(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::sub(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sub_scalar_(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->sub_(*other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sub_scalar_out(gc_tensor out, gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::sub_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_subtract(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::subtract(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_subtract_(gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->subtract_(*tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_subtract_out(gc_tensor out, gc_tensor self, gc_tensor other) {
+  PROTECT(
+    torch::Tensor results__ = torch::subtract_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_subtract_scalar(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = torch::subtract(*tensor_ptr_from_ocaml(self), *other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_subtract_scalar_(gc_tensor self, scalar other) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->subtract_(*other);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sum(gc_tensor self, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::sum(*tensor_ptr_from_ocaml(self), torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sum_dim_intlist(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::sum(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sum_intlist_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::sum_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sum_out(gc_tensor out, gc_tensor self, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::sum_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_sum_to_size(gc_tensor self, int64_t *size_data, int size_len) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->sum_to_size(torch::IntArrayRef(size_data, size_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+void atg_svd(raw_tensor *out__, gc_tensor self, int some, int compute_uv) {
+  PROTECT(
+    auto results__ = torch::svd(*tensor_ptr_from_ocaml(self), (bool)some, (bool)compute_uv);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+void atg_svd_u(raw_tensor *out__, gc_tensor U, gc_tensor S, gc_tensor V, gc_tensor self, int some, int compute_uv) {
+  PROTECT(
+    auto results__ = torch::svd_out(*tensor_ptr_from_ocaml(U), *tensor_ptr_from_ocaml(S), *tensor_ptr_from_ocaml(V), *tensor_ptr_from_ocaml(self), (bool)some, (bool)compute_uv);
+    out__[0] = tensor_to_ocaml(std::get<0>(results__));
+    out__[1] = tensor_to_ocaml(std::get<1>(results__));
+    out__[2] = tensor_to_ocaml(std::get<2>(results__));
+  )
+}
+
+raw_tensor atg_swapaxes(gc_tensor self, int64_t axis0, int64_t axis1) {
+  PROTECT(
+    torch::Tensor results__ = torch::swapaxes(*tensor_ptr_from_ocaml(self), axis0, axis1);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_swapaxes_(gc_tensor self, int64_t axis0, int64_t axis1) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->swapaxes_(axis0, axis1);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_swapdims(gc_tensor self, int64_t dim0, int64_t dim1) {
+  PROTECT(
+    torch::Tensor results__ = torch::swapdims(*tensor_ptr_from_ocaml(self), dim0, dim1);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_swapdims_(gc_tensor self, int64_t dim0, int64_t dim1) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->swapdims_(dim0, dim1);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_t(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::t(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_t_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->t_();
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_t_copy(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::t_copy(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_t_copy_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::t_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_take(gc_tensor self, gc_tensor index) {
+  PROTECT(
+    torch::Tensor results__ = torch::take(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(index));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_take_along_dim(gc_tensor self, gc_tensor indices, int64_t dim_v, int dim_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::take_along_dim(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices), dim_null ? c10::nullopt : c10::optional(dim_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_take_along_dim_out(gc_tensor out, gc_tensor self, gc_tensor indices, int64_t dim_v, int dim_null) {
+  PROTECT(
+    torch::Tensor results__ = torch::take_along_dim_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(indices), dim_null ? c10::nullopt : c10::optional(dim_v));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_take_out(gc_tensor out, gc_tensor self, gc_tensor index) {
+  PROTECT(
+    torch::Tensor results__ = torch::take_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(index));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tan(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::tan(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tan_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::tan_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tan_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::tan_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tanh(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::tanh(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tanh_(gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::tanh_(*tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tanh_backward(gc_tensor grad_output, gc_tensor output) {
+  PROTECT(
+    torch::Tensor results__ = torch::tanh_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tanh_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor output) {
+  PROTECT(
+    torch::Tensor results__ = torch::tanh_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(output));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tanh_out(gc_tensor out, gc_tensor self) {
+  PROTECT(
+    torch::Tensor results__ = torch::tanh_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor *atg_tensor_split(gc_tensor self, int64_t sections, int64_t dim) {
+  PROTECT(
+    auto results__ = torch::tensor_split(*tensor_ptr_from_ocaml(self), sections, dim);
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
+raw_tensor *atg_tensor_split_indices(gc_tensor self, int64_t *indices_data, int indices_len, int64_t dim) {
+  PROTECT(
+    auto results__ = torch::tensor_split(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(indices_data, indices_len), dim);
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
+raw_tensor *atg_tensor_split_tensor_indices_or_sections(gc_tensor self, gc_tensor tensor_indices_or_sections, int64_t dim) {
+  PROTECT(
+    auto results__ = torch::tensor_split(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(tensor_indices_or_sections), dim);
+    int sz = results__.size();
+    raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor));
+    for (int i = 0; i < sz; ++i)
+      out__[i] = tensor_to_ocaml(results__[i]);
+    out__[sz] = nullptr;
+    return out__;
+  )
+}
+
+raw_tensor atg_tensordot(gc_tensor self, gc_tensor other, int64_t *dims_self_data, int dims_self_len, int64_t *dims_other_data, int dims_other_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::tensordot(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), torch::IntArrayRef(dims_self_data, dims_self_len), torch::IntArrayRef(dims_other_data, dims_other_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tensordot_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t *dims_self_data, int dims_self_len, int64_t *dims_other_data, int dims_other_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::tensordot_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other), torch::IntArrayRef(dims_self_data, dims_self_len), torch::IntArrayRef(dims_other_data, dims_other_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_threshold(gc_tensor self, scalar threshold, scalar value) {
+  PROTECT(
+    torch::Tensor results__ = torch::threshold(*tensor_ptr_from_ocaml(self), *threshold, *value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_threshold_(gc_tensor self, scalar threshold, scalar value) {
+  PROTECT(
+    torch::Tensor results__ = torch::threshold_(*tensor_ptr_from_ocaml(self), *threshold, *value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_threshold_backward(gc_tensor grad_output, gc_tensor self, scalar threshold) {
+  PROTECT(
+    torch::Tensor results__ = torch::threshold_backward(*tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *threshold);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_threshold_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, scalar threshold) {
+  PROTECT(
+    torch::Tensor results__ = torch::threshold_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), *tensor_ptr_from_ocaml(self), *threshold);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_threshold_out(gc_tensor out, gc_tensor self, scalar threshold, scalar value) {
+  PROTECT(
+    torch::Tensor results__ = torch::threshold_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *threshold, *value);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_tile(gc_tensor self, int64_t *dims_data, int dims_len) {
+  PROTECT(
+    torch::Tensor results__ = torch::tile(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(dims_data, dims_len));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to(gc_tensor self, int device) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->to(device_of_int(device));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_dense(gc_tensor self, int dtype, int masked_grad) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->to_dense(torch::ScalarType(dtype), (bool)masked_grad);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_dense_backward(gc_tensor grad, gc_tensor input, int masked_grad) {
+  PROTECT(
+    torch::Tensor results__ = torch::to_dense_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(input), (bool)masked_grad);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_device(gc_tensor self, int device, int dtype, int non_blocking, int copy) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->to(device_of_int(device), torch::ScalarType(dtype), (bool)non_blocking, (bool)copy);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_dtype(gc_tensor self, int dtype, int non_blocking, int copy) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->to(torch::ScalarType(dtype), (bool)non_blocking, (bool)copy);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_dtype_layout(gc_tensor self, int options_kind, int options_device, int non_blocking, int copy) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->to(at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)non_blocking, (bool)copy);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_mkldnn(gc_tensor self, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->to_mkldnn(torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_mkldnn_backward(gc_tensor grad, gc_tensor input) {
+  PROTECT(
+    torch::Tensor results__ = torch::to_mkldnn_backward(*tensor_ptr_from_ocaml(grad), *tensor_ptr_from_ocaml(input));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_mkldnn_out(gc_tensor out, gc_tensor self, int dtype) {
+  PROTECT(
+    torch::Tensor results__ = torch::to_mkldnn_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::ScalarType(dtype));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_other(gc_tensor self, gc_tensor other, int non_blocking, int copy) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->to(*tensor_ptr_from_ocaml(other), (bool)non_blocking, (bool)copy);
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_padded_tensor(gc_tensor self, double padding, int64_t *output_size_data, int output_size_len) {
+  PROTECT(
+    torch::Tensor results__ = tensor_ptr_from_ocaml(self)->to_padded_tensor(padding, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)));
+    return tensor_to_ocaml(results__);
+  )
+}
+
+raw_tensor atg_to_padded_tensor_out(gc_tensor out, gc_tensor self, double padding, int64_t *output_size_data, int output_size_len) {
+  PROTECT(
c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len))); + return tensor_to_ocaml(results__); + ) +} + +void atg_topk(raw_tensor *out__, gc_tensor self, int64_t k, int64_t dim, int largest, int sorted) { + PROTECT( + auto results__ = torch::topk(*tensor_ptr_from_ocaml(self), k, dim, (bool)largest, (bool)sorted); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_topk_values(raw_tensor *out__, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t k, int64_t dim, int largest, int sorted) { + PROTECT( + auto results__ = torch::topk_out(*tensor_ptr_from_ocaml(values), *tensor_ptr_from_ocaml(indices), *tensor_ptr_from_ocaml(self), k, dim, (bool)largest, (bool)sorted); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_totype(gc_tensor self, int scalar_type) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->toType(torch::ScalarType(scalar_type)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trace(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::trace(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trace_backward(gc_tensor grad, int64_t *sizes_data, int sizes_len) { + PROTECT( + torch::Tensor results__ = torch::trace_backward(*tensor_ptr_from_ocaml(grad), torch::IntArrayRef(sizes_data, sizes_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trace_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::trace_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_transpose(gc_tensor self, int64_t dim0, int64_t dim1) { + PROTECT( + torch::Tensor results__ = torch::transpose(*tensor_ptr_from_ocaml(self), dim0, dim1); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_transpose_(gc_tensor self, int64_t dim0, int64_t dim1) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->transpose_(dim0, dim1); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_transpose_copy(gc_tensor self, int64_t dim0, int64_t dim1) { + PROTECT( + torch::Tensor results__ = torch::transpose_copy(*tensor_ptr_from_ocaml(self), dim0, dim1); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_transpose_copy_int_out(gc_tensor out, gc_tensor self, int64_t dim0, int64_t dim1) { + PROTECT( + torch::Tensor results__ = torch::transpose_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim0, dim1); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trapezoid(gc_tensor y, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::trapezoid(*tensor_ptr_from_ocaml(y), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trapezoid_x(gc_tensor y, gc_tensor x, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::trapezoid(*tensor_ptr_from_ocaml(y), *tensor_ptr_from_ocaml(x), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trapz(gc_tensor y, gc_tensor x, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::trapz(*tensor_ptr_from_ocaml(y), *tensor_ptr_from_ocaml(x), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trapz_dx(gc_tensor y, double dx, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::trapz(*tensor_ptr_from_ocaml(y), dx, dim); + return tensor_to_ocaml(results__); + ) +} + +void 
atg_triangular_solve(raw_tensor *out__, gc_tensor self, gc_tensor A, int upper, int transpose, int unitriangular) { + PROTECT( + auto results__ = torch::triangular_solve(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(A), (bool)upper, (bool)transpose, (bool)unitriangular); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_triangular_solve_x(raw_tensor *out__, gc_tensor X, gc_tensor M, gc_tensor self, gc_tensor A, int upper, int transpose, int unitriangular) { + PROTECT( + auto results__ = torch::triangular_solve_out(*tensor_ptr_from_ocaml(X), *tensor_ptr_from_ocaml(M), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(A), (bool)upper, (bool)transpose, (bool)unitriangular); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_tril(gc_tensor self, int64_t diagonal) { + PROTECT( + torch::Tensor results__ = torch::tril(*tensor_ptr_from_ocaml(self), diagonal); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_tril_(gc_tensor self, int64_t diagonal) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->tril_(diagonal); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_tril_indices(int64_t row, int64_t col, int64_t offset, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::tril_indices(row, col, offset, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_tril_indices_out(gc_tensor out, int64_t row, int64_t col, int64_t offset) { + PROTECT( + torch::Tensor results__ = torch::tril_indices_out(*tensor_ptr_from_ocaml(out), row, col, offset); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_tril_out(gc_tensor out, gc_tensor self, int64_t diagonal) { + PROTECT( + torch::Tensor results__ = torch::tril_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), diagonal); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_triplet_margin_loss(gc_tensor anchor, gc_tensor positive, gc_tensor negative, double margin, double p, double eps, int swap, int64_t reduction) { + PROTECT( + torch::Tensor results__ = torch::triplet_margin_loss(*tensor_ptr_from_ocaml(anchor), *tensor_ptr_from_ocaml(positive), *tensor_ptr_from_ocaml(negative), margin, p, eps, (bool)swap, reduction); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_triu(gc_tensor self, int64_t diagonal) { + PROTECT( + torch::Tensor results__ = torch::triu(*tensor_ptr_from_ocaml(self), diagonal); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_triu_(gc_tensor self, int64_t diagonal) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->triu_(diagonal); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_triu_indices(int64_t row, int64_t col, int64_t offset, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::triu_indices(row, col, offset, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_triu_indices_out(gc_tensor out, int64_t row, int64_t col, int64_t offset) { + PROTECT( + torch::Tensor results__ = torch::triu_indices_out(*tensor_ptr_from_ocaml(out), row, col, offset); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_triu_out(gc_tensor out, gc_tensor self, int64_t diagonal) { + PROTECT( + torch::Tensor results__ = 
torch::triu_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), diagonal); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_true_divide(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::true_divide(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_true_divide_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->true_divide_(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_true_divide_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::true_divide_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_true_divide_scalar(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::true_divide(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_true_divide_scalar_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->true_divide_(*other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trunc(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::trunc(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trunc_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::trunc_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_trunc_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::trunc_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_type_as(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->type_as(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_unbind(gc_tensor self, int64_t dim) { + PROTECT( + auto results__ = torch::unbind(*tensor_ptr_from_ocaml(self), dim); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor *atg_unbind_copy(gc_tensor self, int64_t dim) { + PROTECT( + auto results__ = torch::unbind_copy(*tensor_ptr_from_ocaml(self), dim); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +void atg_unbind_copy_int_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t dim) { + PROTECT( + torch::unbind_copy_out(of_carray_tensor(out_data, out_len), *tensor_ptr_from_ocaml(self), dim); + ) +} + +raw_tensor atg_unflatten(gc_tensor self, int64_t dim, int64_t *sizes_data, int sizes_len) { + PROTECT( + torch::Tensor results__ = torch::unflatten(*tensor_ptr_from_ocaml(self), dim, torch::IntArrayRef(sizes_data, sizes_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_unflatten_dense_tensors(gc_tensor flat, gc_tensor *tensors_data, int tensors_len) { + PROTECT( + auto results__ = torch::unflatten_dense_tensors(*tensor_ptr_from_ocaml(flat), of_carray_tensor(tensors_data, tensors_len)); + int sz = results__.size(); + raw_tensor 
*out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_unfold(gc_tensor self, int64_t dimension, int64_t size, int64_t step) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->unfold(dimension, size, step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_unfold_backward(gc_tensor grad_in, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t size, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::unfold_backward(*tensor_ptr_from_ocaml(grad_in), torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, size, step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_unfold_backward_out(gc_tensor out, gc_tensor grad_in, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t size, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::unfold_backward_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(grad_in), torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, size, step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_unfold_copy(gc_tensor self, int64_t dimension, int64_t size, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::unfold_copy(*tensor_ptr_from_ocaml(self), dimension, size, step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_unfold_copy_out(gc_tensor out, gc_tensor self, int64_t dimension, int64_t size, int64_t step) { + PROTECT( + torch::Tensor results__ = torch::unfold_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dimension, size, step); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_uniform(gc_tensor self, double from, double to) { + PROTECT( + torch::Tensor results__ = torch::uniform(*tensor_ptr_from_ocaml(self), from, to); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_uniform_(gc_tensor self, double from, double to) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->uniform_(from, to); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_uniform_out(gc_tensor out, gc_tensor self, double from, double to) { + PROTECT( + torch::Tensor results__ = torch::uniform_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), from, to); + return tensor_to_ocaml(results__); + ) +} + +void atg_unique_consecutive(raw_tensor *out__, gc_tensor self, int return_inverse, int return_counts, int64_t dim_v, int dim_null) { + PROTECT( + auto results__ = torch::unique_consecutive(*tensor_ptr_from_ocaml(self), (bool)return_inverse, (bool)return_counts, dim_null ? c10::nullopt : c10::optional(dim_v)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_unique_consecutive_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor self, int return_inverse, int return_counts, int64_t dim_v, int dim_null) { + PROTECT( + auto results__ = torch::unique_consecutive_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(self), (bool)return_inverse, (bool)return_counts, dim_null ? 
c10::nullopt : c10::optional(dim_v)); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_unique_dim(raw_tensor *out__, gc_tensor self, int64_t dim, int sorted, int return_inverse, int return_counts) { + PROTECT( + auto results__ = torch::unique_dim(*tensor_ptr_from_ocaml(self), dim, (bool)sorted, (bool)return_inverse, (bool)return_counts); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_unique_dim_consecutive(raw_tensor *out__, gc_tensor self, int64_t dim, int return_inverse, int return_counts) { + PROTECT( + auto results__ = torch::unique_dim_consecutive(*tensor_ptr_from_ocaml(self), dim, (bool)return_inverse, (bool)return_counts); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_unique_dim_consecutive_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor self, int64_t dim, int return_inverse, int return_counts) { + PROTECT( + auto results__ = torch::unique_dim_consecutive_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(self), dim, (bool)return_inverse, (bool)return_counts); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +void atg_unique_dim_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor self, int64_t dim, int sorted, int return_inverse, int return_counts) { + PROTECT( + auto results__ = torch::unique_dim_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(out2), *tensor_ptr_from_ocaml(self), dim, (bool)sorted, (bool)return_inverse, (bool)return_counts); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + out__[2] = tensor_to_ocaml(std::get<2>(results__)); + ) +} + +raw_tensor *atg_unsafe_chunk(gc_tensor self, int64_t chunks, int64_t dim) { + PROTECT( + auto results__ = torch::unsafe_chunk(*tensor_ptr_from_ocaml(self), chunks, dim); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor *atg_unsafe_split(gc_tensor self, int64_t split_size, int64_t dim) { + PROTECT( + auto results__ = torch::unsafe_split(*tensor_ptr_from_ocaml(self), split_size, dim); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +void atg_unsafe_split_tensor_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t split_size, int64_t dim) { + PROTECT( + torch::unsafe_split_out(of_carray_tensor(out_data, out_len), *tensor_ptr_from_ocaml(self), split_size, dim); + ) +} + +raw_tensor *atg_unsafe_split_with_sizes(gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) { + PROTECT( + auto results__ = torch::unsafe_split_with_sizes(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(split_sizes_data, split_sizes_len), 
dim); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +void atg_unsafe_split_with_sizes_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) { + PROTECT( + torch::unsafe_split_with_sizes_out(of_carray_tensor(out_data, out_len), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(split_sizes_data, split_sizes_len), dim); + ) +} + +raw_tensor atg_unsqueeze(gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::unsqueeze(*tensor_ptr_from_ocaml(self), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_unsqueeze_(gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->unsqueeze_(dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_unsqueeze_copy(gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::unsqueeze_copy(*tensor_ptr_from_ocaml(self), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_unsqueeze_copy_out(gc_tensor out, gc_tensor self, int64_t dim) { + PROTECT( + torch::Tensor results__ = torch::unsqueeze_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bicubic2d(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_bicubic2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bicubic2d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_bicubic2d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bicubic2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_bicubic2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? 
c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bicubic2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_bicubic2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bicubic2d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::upsample_bicubic2d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bilinear2d(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_bilinear2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bilinear2d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_bilinear2d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bilinear2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_bilinear2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bilinear2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_bilinear2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? 
c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_bilinear2d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::upsample_bilinear2d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_linear1d(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_linear1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_linear1d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_linear1d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_linear1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_linear1d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_linear1d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_linear1d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_linear1d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::upsample_linear1d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest1d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest1d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_null ? 
c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest1d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest1d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest1d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest1d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest1d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_null ? c10::nullopt : c10::optional(scales_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest1d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest1d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest2d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest2d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest2d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest2d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? 
c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest2d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest2d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest2d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest2d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest3d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest3d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest3d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? 
c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest3d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_nearest3d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::upsample_nearest3d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_trilinear3d(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_trilinear3d(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_trilinear3d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_trilinear3d_backward(*tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? 
c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_trilinear3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_trilinear3d_backward_out(*tensor_ptr_from_ocaml(grad_input), *tensor_ptr_from_ocaml(grad_output), torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_trilinear3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) { + PROTECT( + torch::Tensor results__ = torch::upsample_trilinear3d_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_upsample_trilinear3d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) { + PROTECT( + torch::Tensor results__ = torch::upsample_trilinear3d(*tensor_ptr_from_ocaml(input), output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_value_selecting_reduction_backward(gc_tensor grad, int64_t dim, gc_tensor indices, int64_t *sizes_data, int sizes_len, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::value_selecting_reduction_backward(*tensor_ptr_from_ocaml(grad), dim, *tensor_ptr_from_ocaml(indices), torch::IntArrayRef(sizes_data, sizes_len), (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_values(gc_tensor self) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->values(); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_values_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::values_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_values_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::values_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_vander(gc_tensor x, int64_t n_v, int n_null, int increasing) { + PROTECT( + torch::Tensor results__ = torch::vander(*tensor_ptr_from_ocaml(x), n_null ? 
c10::nullopt : c10::optional(n_v), (bool)increasing); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_var(gc_tensor self, int unbiased) { + PROTECT( + torch::Tensor results__ = torch::var(*tensor_ptr_from_ocaml(self), (bool)unbiased); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_var_correction(gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::var(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_var_correction_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::var_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_var_dim(gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::var(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +void atg_var_mean(raw_tensor *out__, gc_tensor self, int unbiased) { + PROTECT( + auto results__ = torch::var_mean(*tensor_ptr_from_ocaml(self), (bool)unbiased); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_var_mean_correction(raw_tensor *out__, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) { + PROTECT( + auto results__ = torch::var_mean(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_var_mean_correction_out(raw_tensor *out__, gc_tensor out0, gc_tensor out1, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) { + PROTECT( + auto results__ = torch::var_mean_out(*tensor_ptr_from_ocaml(out0), *tensor_ptr_from_ocaml(out1), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +void atg_var_mean_dim(raw_tensor *out__, gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) { + PROTECT( + auto results__ = torch::var_mean(*tensor_ptr_from_ocaml(self), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim); + out__[0] = tensor_to_ocaml(std::get<0>(results__)); + out__[1] = tensor_to_ocaml(std::get<1>(results__)); + ) +} + +raw_tensor atg_var_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) { + PROTECT( + torch::Tensor results__ = torch::var_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), dim_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_vdot(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::vdot(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_vdot_out(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::vdot_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view(gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->view(torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_as(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->view_as(*tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_as_complex(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::view_as_complex(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_as_complex_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::view_as_complex_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_as_complex_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::view_as_complex_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_as_real(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::view_as_real(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_as_real_copy(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::view_as_real_copy(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_as_real_copy_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::view_as_real_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_copy(gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::view_copy(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_copy_dtype(gc_tensor self, int dtype) { + PROTECT( + torch::Tensor results__ = torch::view_copy(*tensor_ptr_from_ocaml(self), torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_copy_dtype_out(gc_tensor out, gc_tensor self, int dtype) { + PROTECT( + torch::Tensor results__ = torch::view_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::ScalarType(dtype)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::view_copy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_view_dtype(gc_tensor self, int dtype) { + PROTECT( + torch::Tensor results__ = tensor_ptr_from_ocaml(self)->view(torch::ScalarType(dtype)); + return 
tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_vsplit(gc_tensor self, int64_t sections) { + PROTECT( + auto results__ = torch::vsplit(*tensor_ptr_from_ocaml(self), sections); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor *atg_vsplit_array(gc_tensor self, int64_t *indices_data, int indices_len) { + PROTECT( + auto results__ = torch::vsplit(*tensor_ptr_from_ocaml(self), torch::IntArrayRef(indices_data, indices_len)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_vstack(gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::vstack(of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_vstack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len) { + PROTECT( + torch::Tensor results__ = torch::vstack_out(*tensor_ptr_from_ocaml(out), of_carray_tensor(tensors_data, tensors_len)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor *atg_where(gc_tensor condition) { + PROTECT( + auto results__ = torch::where(*tensor_ptr_from_ocaml(condition)); + int sz = results__.size(); + raw_tensor *out__ = (raw_tensor*)malloc((sz + 1) * sizeof(raw_tensor)); + for (int i = 0; i < sz; ++i) + out__[i] = tensor_to_ocaml(results__[i]); + out__[sz] = nullptr; + return out__; + ) +} + +raw_tensor atg_where_scalar(gc_tensor condition, scalar self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::where(*tensor_ptr_from_ocaml(condition), *self, *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_where_scalarother(gc_tensor condition, gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::where(*tensor_ptr_from_ocaml(condition), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_where_scalarself(gc_tensor condition, scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::where(*tensor_ptr_from_ocaml(condition), *self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_where_self(gc_tensor condition, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::where(*tensor_ptr_from_ocaml(condition), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_where_self_out(gc_tensor out, gc_tensor condition, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::where_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(condition), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_xlogy(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::xlogy(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_xlogy_(gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::xlogy_(*tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_xlogy_outscalar_other(gc_tensor out, gc_tensor self, scalar other) { + 
PROTECT( + torch::Tensor results__ = torch::xlogy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_xlogy_outscalar_self(gc_tensor out, scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::xlogy_out(*tensor_ptr_from_ocaml(out), *self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_xlogy_outtensor(gc_tensor out, gc_tensor self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::xlogy_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self), *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_xlogy_scalar_other(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::xlogy(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_xlogy_scalar_other_(gc_tensor self, scalar other) { + PROTECT( + torch::Tensor results__ = torch::xlogy_(*tensor_ptr_from_ocaml(self), *other); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_xlogy_scalar_self(scalar self, gc_tensor other) { + PROTECT( + torch::Tensor results__ = torch::xlogy(*self, *tensor_ptr_from_ocaml(other)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_zero(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::zero(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_zero_(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::zero_(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_zero_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::zero_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_zeros(int64_t *size_data, int size_len, int options_kind, int options_device) { + PROTECT( + torch::Tensor results__ = torch::zeros(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_zeros_like(gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::zeros_like(*tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_zeros_like_out(gc_tensor out, gc_tensor self) { + PROTECT( + torch::Tensor results__ = torch::zeros_like_out(*tensor_ptr_from_ocaml(out), *tensor_ptr_from_ocaml(self)); + return tensor_to_ocaml(results__); + ) +} + +raw_tensor atg_zeros_out(gc_tensor out, int64_t *size_data, int size_len) { + PROTECT( + torch::Tensor results__ = torch::zeros_out(*tensor_ptr_from_ocaml(out), torch::IntArrayRef(size_data, size_len)); + return tensor_to_ocaml(results__); + ) +} + diff --git a/src/wrapper/torch_api_generated.cpp.h b/src/wrapper/torch_api_generated.cpp.h deleted file mode 100644 index a223d65..0000000 --- a/src/wrapper/torch_api_generated.cpp.h +++ /dev/null @@ -1,18377 +0,0 @@ -// THIS FILE IS AUTOMATICALLY GENERATED, DO NOT EDIT BY HAND! 
- -void atg___and__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::__and__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___and__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::__and__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___iand__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->__iand__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___iand__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->__iand__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___ilshift__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->__ilshift__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___ilshift__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->__ilshift__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___ior__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->__ior__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___ior__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->__ior__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___irshift__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->__irshift__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___irshift__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->__irshift__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___ixor__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->__ixor__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___ixor__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->__ixor__(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___lshift__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::__lshift__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___lshift__scalar_out_(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::__lshift___out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___lshift__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::__lshift__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___lshift__tensor_out_(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::__lshift___out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___or__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::__or__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___or__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::__or__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___rshift__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::__rshift__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___rshift__scalar_out_(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = 
torch::__rshift___out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___rshift__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::__rshift__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___rshift__tensor_out_(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::__rshift___out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___xor__(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::__xor__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg___xor__tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::__xor__(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__adaptive_avg_pool2d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len) { - PROTECT( - auto outputs__ = torch::_adaptive_avg_pool2d(*self, torch::IntArrayRef(output_size_data, output_size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__adaptive_avg_pool2d_backward(tensor *out__, tensor grad_output, tensor self) { - PROTECT( - auto outputs__ = torch::_adaptive_avg_pool2d_backward(*grad_output, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__adaptive_avg_pool2d_backward_out(tensor *out__, tensor out, tensor grad_output, tensor self) { - PROTECT( - auto outputs__ = torch::_adaptive_avg_pool2d_backward_out(*out, *grad_output, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__adaptive_avg_pool2d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len) { - PROTECT( - auto outputs__ = torch::_adaptive_avg_pool2d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__adaptive_avg_pool3d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len) { - PROTECT( - auto outputs__ = torch::_adaptive_avg_pool3d(*self, torch::IntArrayRef(output_size_data, output_size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__adaptive_avg_pool3d_backward(tensor *out__, tensor grad_output, tensor self) { - PROTECT( - auto outputs__ = torch::_adaptive_avg_pool3d_backward(*grad_output, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__adaptive_avg_pool3d_backward_out(tensor *out__, tensor out, tensor grad_output, tensor self) { - PROTECT( - auto outputs__ = torch::_adaptive_avg_pool3d_backward_out(*out, *grad_output, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__adaptive_avg_pool3d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len) { - PROTECT( - auto outputs__ = torch::_adaptive_avg_pool3d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__add_batch_dim(tensor *out__, tensor self, int64_t batch_dim, int64_t level) { - PROTECT( - auto outputs__ = torch::_add_batch_dim(*self, batch_dim, level); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__add_relu(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::_add_relu(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__add_relu_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::_add_relu_(*self, *other); - out__[0] = new 
torch::Tensor(outputs__); - ) -} - -void atg__add_relu_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::_add_relu_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__add_relu_scalar(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::_add_relu(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__add_relu_scalar_(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::_add_relu_(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__add_relu_scalar_out(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::_add_relu_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__addmm_activation(tensor *out__, tensor self, tensor mat1, tensor mat2, int use_gelu) { - PROTECT( - auto outputs__ = torch::_addmm_activation(*self, *mat1, *mat2, (bool)use_gelu); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__addmm_activation_out(tensor *out__, tensor out, tensor self, tensor mat1, tensor mat2, int use_gelu) { - PROTECT( - auto outputs__ = torch::_addmm_activation_out(*out, *self, *mat1, *mat2, (bool)use_gelu); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__aminmax(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::_aminmax(*self); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__aminmax_dim(tensor *out__, tensor self, int64_t dim, int keepdim) { - PROTECT( - auto outputs__ = torch::_aminmax(*self, dim, (bool)keepdim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__aminmax_dim_out(tensor *out__, tensor out0, tensor out1, tensor self, int64_t dim, int keepdim) { - PROTECT( - auto outputs__ = torch::_aminmax_out(*out0, *out1, *self, dim, (bool)keepdim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__aminmax_out(tensor *out__, tensor out0, tensor out1, tensor self) { - PROTECT( - auto outputs__ = torch::_aminmax_out(*out0, *out1, *self); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__amp_update_scale(tensor *out__, tensor self, tensor growth_tracker, tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval) { - PROTECT( - auto outputs__ = torch::_amp_update_scale(*self, *growth_tracker, *found_inf, scale_growth_factor, scale_backoff_factor, growth_interval); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__amp_update_scale_(tensor *out__, tensor self, tensor growth_tracker, tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval) { - PROTECT( - auto outputs__ = torch::_amp_update_scale_(*self, *growth_tracker, *found_inf, scale_growth_factor, scale_backoff_factor, growth_interval); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__amp_update_scale_out(tensor *out__, tensor out, tensor self, tensor growth_tracker, tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval) { - PROTECT( - auto outputs__ = torch::_amp_update_scale_out(*out, *self, 
*growth_tracker, *found_inf, scale_growth_factor, scale_backoff_factor, growth_interval); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__assert_tensor_metadata(tensor a, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int dtype) { - PROTECT( - torch::_assert_tensor_metadata(*a, size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(size_data, size_len)), stride_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(stride_data, stride_len)), torch::ScalarType(dtype)); - ) -} - -void atg__autocast_to_full_precision(tensor *out__, tensor self, int cuda_enabled, int cpu_enabled) { - PROTECT( - auto outputs__ = self->_autocast_to_full_precision((bool)cuda_enabled, (bool)cpu_enabled); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__autocast_to_reduced_precision(tensor *out__, tensor self, int cuda_enabled, int cpu_enabled, int cuda_dtype, int cpu_dtype) { - PROTECT( - auto outputs__ = self->_autocast_to_reduced_precision((bool)cuda_enabled, (bool)cpu_enabled, torch::ScalarType(cuda_dtype), torch::ScalarType(cpu_dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cast_byte(tensor *out__, tensor self, int non_blocking) { - PROTECT( - auto outputs__ = torch::_cast_Byte(*self, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cast_char(tensor *out__, tensor self, int non_blocking) { - PROTECT( - auto outputs__ = torch::_cast_Char(*self, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cast_double(tensor *out__, tensor self, int non_blocking) { - PROTECT( - auto outputs__ = torch::_cast_Double(*self, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cast_float(tensor *out__, tensor self, int non_blocking) { - PROTECT( - auto outputs__ = torch::_cast_Float(*self, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cast_half(tensor *out__, tensor self, int non_blocking) { - PROTECT( - auto outputs__ = torch::_cast_Half(*self, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cast_int(tensor *out__, tensor self, int non_blocking) { - PROTECT( - auto outputs__ = torch::_cast_Int(*self, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cast_long(tensor *out__, tensor self, int non_blocking) { - PROTECT( - auto outputs__ = torch::_cast_Long(*self, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cast_short(tensor *out__, tensor self, int non_blocking) { - PROTECT( - auto outputs__ = torch::_cast_Short(*self, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cdist_backward(tensor *out__, tensor grad, tensor x1, tensor x2, double p, tensor cdist) { - PROTECT( - auto outputs__ = torch::_cdist_backward(*grad, *x1, *x2, p, *cdist); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cdist_backward_out(tensor *out__, tensor out, tensor grad, tensor x1, tensor x2, double p, tensor cdist) { - PROTECT( - auto outputs__ = torch::_cdist_backward_out(*out, *grad, *x1, *x2, p, *cdist); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cholesky_solve_helper(tensor *out__, tensor self, tensor A, int upper) { - PROTECT( - auto outputs__ = torch::_cholesky_solve_helper(*self, *A, (bool)upper); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cholesky_solve_helper_out(tensor *out__, tensor out, tensor self, tensor A, 
int upper) { - PROTECT( - auto outputs__ = torch::_cholesky_solve_helper_out(*out, *self, *A, (bool)upper); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__coalesce(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::_coalesce(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__coalesce_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::_coalesce_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__coalesced(tensor *out__, tensor self, int coalesced) { - PROTECT( - auto outputs__ = torch::_coalesced(*self, (bool)coalesced); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__coalesced_(tensor *out__, tensor self, int coalesced) { - PROTECT( - auto outputs__ = self->_coalesced_((bool)coalesced); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__coalesced_out(tensor *out__, tensor out, tensor self, int coalesced) { - PROTECT( - auto outputs__ = torch::_coalesced_out(*out, *self, (bool)coalesced); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__compute_linear_combination(tensor *out__, tensor input, tensor coefficients) { - PROTECT( - auto outputs__ = torch::_compute_linear_combination(*input, *coefficients); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__compute_linear_combination_out(tensor *out__, tensor out, tensor input, tensor coefficients) { - PROTECT( - auto outputs__ = torch::_compute_linear_combination_out(*out, *input, *coefficients); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__conj(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::_conj(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__conj_copy(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::_conj_copy(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__conj_copy_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::_conj_copy_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__conj_physical(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::_conj_physical(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__conj_physical_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::_conj_physical_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__conv_depthwise2d(tensor *out__, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { - PROTECT( - auto outputs__ = torch::_conv_depthwise2d(*self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__conv_depthwise2d_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { - PROTECT( - auto outputs__ = torch::_conv_depthwise2d_out(*out, *self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? 
*bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__convert_indices_from_coo_to_csr(tensor *out__, tensor self, int64_t size, int out_int32) { - PROTECT( - auto outputs__ = torch::_convert_indices_from_coo_to_csr(*self, size, (bool)out_int32); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__convert_indices_from_coo_to_csr_out(tensor *out__, tensor out, tensor self, int64_t size, int out_int32) { - PROTECT( - auto outputs__ = torch::_convert_indices_from_coo_to_csr_out(*out, *self, size, (bool)out_int32); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__convert_indices_from_csr_to_coo(tensor *out__, tensor crow_indices, tensor col_indices, int out_int32, int transpose) { - PROTECT( - auto outputs__ = torch::_convert_indices_from_csr_to_coo(*crow_indices, *col_indices, (bool)out_int32, (bool)transpose); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__convert_indices_from_csr_to_coo_out(tensor *out__, tensor out, tensor crow_indices, tensor col_indices, int out_int32, int transpose) { - PROTECT( - auto outputs__ = torch::_convert_indices_from_csr_to_coo_out(*out, *crow_indices, *col_indices, (bool)out_int32, (bool)transpose); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__convolution(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled, int allow_tf32) { - PROTECT( - auto outputs__ = torch::_convolution(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups, (bool)benchmark, (bool)deterministic, (bool)cudnn_enabled, (bool)allow_tf32); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__convolution_deprecated(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled) { - PROTECT( - auto outputs__ = torch::_convolution(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups, (bool)benchmark, (bool)deterministic, (bool)cudnn_enabled); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__convolution_mode(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::_convolution_mode(*input, *weight, (bias ? 
*bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), std::string(padding), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__convolution_out(tensor *out__, tensor out, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled, int allow_tf32) { - PROTECT( - auto outputs__ = torch::_convolution_out(*out, *input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups, (bool)benchmark, (bool)deterministic, (bool)cudnn_enabled, (bool)allow_tf32); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__copy_from(tensor *out__, tensor self, tensor dst, int non_blocking) { - PROTECT( - auto outputs__ = torch::_copy_from(*self, *dst, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__copy_from_and_resize(tensor *out__, tensor self, tensor dst) { - PROTECT( - auto outputs__ = torch::_copy_from_and_resize(*self, *dst); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__copy_from_and_resize_out(tensor *out__, tensor out, tensor self, tensor dst) { - PROTECT( - auto outputs__ = torch::_copy_from_and_resize_out(*out, *self, *dst); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__copy_from_out(tensor *out__, tensor out, tensor self, tensor dst, int non_blocking) { - PROTECT( - auto outputs__ = torch::_copy_from_out(*out, *self, *dst, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cslt_compress(tensor *out__, tensor input) { - PROTECT( - auto outputs__ = torch::_cslt_compress(*input); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cslt_sparse_mm(tensor *out__, tensor compressed_A, tensor dense_B, tensor bias, int transpose_result) { - PROTECT( - auto outputs__ = torch::_cslt_sparse_mm(*compressed_A, *dense_B, (bias ? 
*bias : torch::Tensor()), (bool)transpose_result); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__ctc_loss(tensor *out__, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int zero_infinity) { - PROTECT( - auto outputs__ = torch::_ctc_loss(*log_probs, *targets, torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, (bool)zero_infinity); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__ctc_loss_backward(tensor *out__, tensor grad, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, tensor neg_log_likelihood, tensor log_alpha, int64_t blank, int zero_infinity) { - PROTECT( - auto outputs__ = torch::_ctc_loss_backward(*grad, *log_probs, *targets, torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), *neg_log_likelihood, *log_alpha, blank, (bool)zero_infinity); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__ctc_loss_backward_out(tensor *out__, tensor out, tensor grad, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, tensor neg_log_likelihood, tensor log_alpha, int64_t blank, int zero_infinity) { - PROTECT( - auto outputs__ = torch::_ctc_loss_backward_out(*out, *grad, *log_probs, *targets, torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), *neg_log_likelihood, *log_alpha, blank, (bool)zero_infinity); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__ctc_loss_backward_tensor(tensor *out__, tensor grad, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, tensor neg_log_likelihood, tensor log_alpha, int64_t blank, int zero_infinity) { - PROTECT( - auto outputs__ = torch::_ctc_loss_backward(*grad, *log_probs, *targets, *input_lengths, *target_lengths, *neg_log_likelihood, *log_alpha, blank, (bool)zero_infinity); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__ctc_loss_out(tensor *out__, tensor out0, tensor out1, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int zero_infinity) { - PROTECT( - auto outputs__ = torch::_ctc_loss_out(*out0, *out1, *log_probs, *targets, torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, (bool)zero_infinity); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__ctc_loss_tensor(tensor *out__, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank, int zero_infinity) { - PROTECT( - auto outputs__ = torch::_ctc_loss(*log_probs, *targets, *input_lengths, *target_lengths, blank, (bool)zero_infinity); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__ctc_loss_tensor_out(tensor *out__, tensor out0, tensor out1, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank, int zero_infinity) { - PROTECT( - auto outputs__ = 
torch::_ctc_loss_out(*out0, *out1, *log_probs, *targets, *input_lengths, *target_lengths, blank, (bool)zero_infinity); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__cudnn_ctc_loss(tensor *out__, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int deterministic, int zero_infinity) { - PROTECT( - auto outputs__ = torch::_cudnn_ctc_loss(*log_probs, *targets, torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, (bool)deterministic, (bool)zero_infinity); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__cudnn_ctc_loss_out(tensor *out__, tensor out0, tensor out1, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int deterministic, int zero_infinity) { - PROTECT( - auto outputs__ = torch::_cudnn_ctc_loss_out(*out0, *out1, *log_probs, *targets, torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, (bool)deterministic, (bool)zero_infinity); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__cudnn_ctc_loss_tensor(tensor *out__, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank, int deterministic, int zero_infinity) { - PROTECT( - auto outputs__ = torch::_cudnn_ctc_loss(*log_probs, *targets, *input_lengths, *target_lengths, blank, (bool)deterministic, (bool)zero_infinity); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__cudnn_init_dropout_state(tensor *out__, double dropout, int train, int64_t dropout_seed, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::_cudnn_init_dropout_state(dropout, (bool)train, dropout_seed, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cudnn_init_dropout_state_out(tensor *out__, tensor out, double dropout, int train, int64_t dropout_seed) { - PROTECT( - auto outputs__ = torch::_cudnn_init_dropout_state_out(*out, dropout, (bool)train, dropout_seed); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cudnn_rnn(tensor *out__, tensor input, tensor *weight_data, int weight_len, int64_t weight_stride0, tensor weight_buf, tensor hx, tensor cx, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, tensor dropout_state) { - PROTECT( - auto outputs__ = torch::_cudnn_rnn(*input, of_carray_tensor(weight_data, weight_len), weight_stride0, (weight_buf ? *weight_buf : torch::Tensor()), *hx, (cx ? *cx : torch::Tensor()), mode, hidden_size, proj_size, num_layers, (bool)batch_first, dropout, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (dropout_state ? 
*dropout_state : torch::Tensor())); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - out__[4] = new torch::Tensor(std::get<4>(outputs__)); - ) -} - -void atg__cudnn_rnn_flatten_weight(tensor *out__, tensor *weight_arr_data, int weight_arr_len, int64_t weight_stride0, int64_t input_size, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, int bidirectional) { - PROTECT( - auto outputs__ = torch::_cudnn_rnn_flatten_weight(of_carray_tensor(weight_arr_data, weight_arr_len), weight_stride0, input_size, mode, hidden_size, proj_size, num_layers, (bool)batch_first, (bool)bidirectional); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cudnn_rnn_flatten_weight_out(tensor *out__, tensor out, tensor *weight_arr_data, int weight_arr_len, int64_t weight_stride0, int64_t input_size, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, int bidirectional) { - PROTECT( - auto outputs__ = torch::_cudnn_rnn_flatten_weight_out(*out, of_carray_tensor(weight_arr_data, weight_arr_len), weight_stride0, input_size, mode, hidden_size, proj_size, num_layers, (bool)batch_first, (bool)bidirectional); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__cudnn_rnn_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor input, tensor *weight_data, int weight_len, int64_t weight_stride0, tensor weight_buf, tensor hx, tensor cx, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, tensor dropout_state) { - PROTECT( - auto outputs__ = torch::_cudnn_rnn_out(*out0, *out1, *out2, *out3, *out4, *input, of_carray_tensor(weight_data, weight_len), weight_stride0, (weight_buf ? *weight_buf : torch::Tensor()), *hx, (cx ? *cx : torch::Tensor()), mode, hidden_size, proj_size, num_layers, (bool)batch_first, dropout, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (dropout_state ? 
*dropout_state : torch::Tensor())); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - out__[4] = new torch::Tensor(std::get<4>(outputs__)); - ) -} - -int64_t atg__debug_has_internal_overlap(tensor self) { - PROTECT( - return torch::_debug_has_internal_overlap(*self); - ) - return 0; -} - -void atg__dim_arange(tensor *out__, tensor like, int64_t dim) { - PROTECT( - auto outputs__ = torch::_dim_arange(*like, dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -int64_t atg__dimi(tensor self) { - PROTECT( - return self->_dimI(); - ) - return 0; -} - -int64_t atg__dimv(tensor self) { - PROTECT( - return self->_dimV(); - ) - return 0; -} - -void atg__dirichlet_grad(tensor *out__, tensor x, tensor alpha, tensor total) { - PROTECT( - auto outputs__ = torch::_dirichlet_grad(*x, *alpha, *total); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__dirichlet_grad_out(tensor *out__, tensor out, tensor x, tensor alpha, tensor total) { - PROTECT( - auto outputs__ = torch::_dirichlet_grad_out(*out, *x, *alpha, *total); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__efficient_attention_backward(tensor *out__, tensor grad_out_, tensor query, tensor key, tensor value, tensor bias, tensor out, tensor cu_seqlens_q, tensor cu_seqlens_k, int64_t max_seqlen_k, int64_t max_seqlen_q, tensor logsumexp, double dropout_p, tensor philox_seed, tensor philox_offset, int64_t custom_mask_type, int bias_requires_grad, double scale_v, int scale_null, int64_t num_splits_key_v, int num_splits_key_null) { - PROTECT( - auto outputs__ = torch::_efficient_attention_backward(*grad_out_, *query, *key, *value, (bias ? *bias : torch::Tensor()), *out, (cu_seqlens_q ? *cu_seqlens_q : torch::Tensor()), (cu_seqlens_k ? *cu_seqlens_k : torch::Tensor()), max_seqlen_k, max_seqlen_q, *logsumexp, dropout_p, *philox_seed, *philox_offset, custom_mask_type, (bool)bias_requires_grad, scale_null ? c10::nullopt : c10::optional(scale_v), num_splits_key_null ? c10::nullopt : c10::optional(num_splits_key_v)); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - ) -} - -void atg__efficientzerotensor(tensor *out__, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::_efficientzerotensor(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__efficientzerotensor_out(tensor *out__, tensor out, int64_t *size_data, int size_len) { - PROTECT( - auto outputs__ = torch::_efficientzerotensor_out(*out, torch::IntArrayRef(size_data, size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__embedding_bag(tensor *out__, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag(*weight, *indices, *offsets, (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? 
*per_sample_weights : torch::Tensor()), (bool)include_last_offset, padding_idx); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - ) -} - -void atg__embedding_bag_backward(tensor *out__, tensor grad, tensor indices, tensor offsets, tensor offset2bag, tensor bag_size, tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag_backward(*grad, *indices, *offsets, *offset2bag, *bag_size, *maximum_indices, num_weights, (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? *per_sample_weights : torch::Tensor()), padding_idx); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__embedding_bag_dense_backward(tensor *out__, tensor grad, tensor indices, tensor offset2bag, tensor bag_size, tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, tensor per_sample_weights, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag_dense_backward(*grad, *indices, *offset2bag, *bag_size, *maximum_indices, num_weights, (bool)scale_grad_by_freq, mode, (per_sample_weights ? *per_sample_weights : torch::Tensor()), padding_idx); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__embedding_bag_dense_backward_out(tensor *out__, tensor out, tensor grad, tensor indices, tensor offset2bag, tensor bag_size, tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, tensor per_sample_weights, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag_dense_backward_out(*out, *grad, *indices, *offset2bag, *bag_size, *maximum_indices, num_weights, (bool)scale_grad_by_freq, mode, (per_sample_weights ? *per_sample_weights : torch::Tensor()), padding_idx); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__embedding_bag_forward_only(tensor *out__, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag_forward_only(*weight, *indices, *offsets, (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? *per_sample_weights : torch::Tensor()), (bool)include_last_offset, padding_idx); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - ) -} - -void atg__embedding_bag_forward_only_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor out3, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag_forward_only_out(*out0, *out1, *out2, *out3, *weight, *indices, *offsets, (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? 
*per_sample_weights : torch::Tensor()), (bool)include_last_offset, padding_idx); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - ) -} - -void atg__embedding_bag_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor out3, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag_out(*out0, *out1, *out2, *out3, *weight, *indices, *offsets, (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? *per_sample_weights : torch::Tensor()), (bool)include_last_offset, padding_idx); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - ) -} - -void atg__embedding_bag_per_sample_weights_backward(tensor *out__, tensor grad, tensor weight, tensor indices, tensor offsets, tensor offset2bag, int64_t mode, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag_per_sample_weights_backward(*grad, *weight, *indices, *offsets, *offset2bag, mode, padding_idx); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__embedding_bag_per_sample_weights_backward_out(tensor *out__, tensor out, tensor grad, tensor weight, tensor indices, tensor offsets, tensor offset2bag, int64_t mode, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag_per_sample_weights_backward_out(*out, *grad, *weight, *indices, *offsets, *offset2bag, mode, padding_idx); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__embedding_bag_sparse_backward(tensor *out__, tensor grad, tensor indices, tensor offsets, tensor offset2bag, tensor bag_size, int64_t num_weights, int scale_grad_by_freq, int64_t mode, tensor per_sample_weights, int64_t padding_idx) { - PROTECT( - auto outputs__ = torch::_embedding_bag_sparse_backward(*grad, *indices, *offsets, *offset2bag, *bag_size, num_weights, (bool)scale_grad_by_freq, mode, (per_sample_weights ? 
*per_sample_weights : torch::Tensor()), padding_idx); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__empty_affine_quantized(tensor *out__, int64_t *size_data, int size_len, int options_kind, int options_device, double scale, int64_t zero_point) { - PROTECT( - auto outputs__ = torch::_empty_affine_quantized(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), scale, zero_point); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__empty_affine_quantized_out(tensor *out__, tensor out, int64_t *size_data, int size_len, double scale, int64_t zero_point) { - PROTECT( - auto outputs__ = torch::_empty_affine_quantized_out(*out, torch::IntArrayRef(size_data, size_len), scale, zero_point); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__empty_per_channel_affine_quantized(tensor *out__, int64_t *size_data, int size_len, tensor scales, tensor zero_points, int64_t axis, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::_empty_per_channel_affine_quantized(torch::IntArrayRef(size_data, size_len), *scales, *zero_points, axis, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__empty_per_channel_affine_quantized_out(tensor *out__, tensor out, int64_t *size_data, int size_len, tensor scales, tensor zero_points, int64_t axis) { - PROTECT( - auto outputs__ = torch::_empty_per_channel_affine_quantized_out(*out, torch::IntArrayRef(size_data, size_len), *scales, *zero_points, axis); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__euclidean_dist(tensor *out__, tensor x1, tensor x2) { - PROTECT( - auto outputs__ = torch::_euclidean_dist(*x1, *x2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__euclidean_dist_out(tensor *out__, tensor out, tensor x1, tensor x2) { - PROTECT( - auto outputs__ = torch::_euclidean_dist_out(*out, *x1, *x2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fake_quantize_learnable_per_channel_affine(tensor *out__, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor) { - PROTECT( - auto outputs__ = torch::_fake_quantize_learnable_per_channel_affine(*self, *scale, *zero_point, axis, quant_min, quant_max, grad_factor); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fake_quantize_learnable_per_channel_affine_backward(tensor *out__, tensor grad, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor) { - PROTECT( - auto outputs__ = torch::_fake_quantize_learnable_per_channel_affine_backward(*grad, *self, *scale, *zero_point, axis, quant_min, quant_max, grad_factor); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg__fake_quantize_learnable_per_channel_affine_out(tensor *out__, tensor out, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor) { - PROTECT( - auto outputs__ = torch::_fake_quantize_learnable_per_channel_affine_out(*out, *self, *scale, *zero_point, axis, quant_min, quant_max, grad_factor); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fake_quantize_learnable_per_tensor_affine(tensor *out__, tensor self, tensor scale, tensor zero_point, int64_t quant_min, 
int64_t quant_max, double grad_factor) { - PROTECT( - auto outputs__ = torch::_fake_quantize_learnable_per_tensor_affine(*self, *scale, *zero_point, quant_min, quant_max, grad_factor); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fake_quantize_learnable_per_tensor_affine_backward(tensor *out__, tensor grad, tensor self, tensor scale, tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor) { - PROTECT( - auto outputs__ = torch::_fake_quantize_learnable_per_tensor_affine_backward(*grad, *self, *scale, *zero_point, quant_min, quant_max, grad_factor); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg__fake_quantize_learnable_per_tensor_affine_out(tensor *out__, tensor out, tensor self, tensor scale, tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor) { - PROTECT( - auto outputs__ = torch::_fake_quantize_learnable_per_tensor_affine_out(*out, *self, *scale, *zero_point, quant_min, quant_max, grad_factor); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams(tensor *out__, tensor self, tensor scale, tensor zero_point, tensor fake_quant_enabled, int64_t quant_min, int64_t quant_max) { - PROTECT( - auto outputs__ = torch::_fake_quantize_per_tensor_affine_cachemask_tensor_qparams(*self, *scale, *zero_point, *fake_quant_enabled, quant_min, quant_max); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out(tensor *out__, tensor out0, tensor out1, tensor self, tensor scale, tensor zero_point, tensor fake_quant_enabled, int64_t quant_min, int64_t quant_max) { - PROTECT( - auto outputs__ = torch::_fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out(*out0, *out1, *self, *scale, *zero_point, *fake_quant_enabled, quant_min, quant_max); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__fft_c2c(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int forward) { - PROTECT( - auto outputs__ = torch::_fft_c2c(*self, torch::IntArrayRef(dim_data, dim_len), normalization, (bool)forward); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fft_c2c_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int forward) { - PROTECT( - auto outputs__ = torch::_fft_c2c_out(*out, *self, torch::IntArrayRef(dim_data, dim_len), normalization, (bool)forward); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fft_c2r(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int64_t last_dim_size) { - PROTECT( - auto outputs__ = torch::_fft_c2r(*self, torch::IntArrayRef(dim_data, dim_len), normalization, last_dim_size); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fft_c2r_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int64_t last_dim_size) { - PROTECT( - auto outputs__ = torch::_fft_c2r_out(*out, *self, torch::IntArrayRef(dim_data, dim_len), normalization, last_dim_size); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fft_r2c(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int onesided) { - PROTECT( - 
auto outputs__ = torch::_fft_r2c(*self, torch::IntArrayRef(dim_data, dim_len), normalization, (bool)onesided); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fft_r2c_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int onesided) { - PROTECT( - auto outputs__ = torch::_fft_r2c_out(*out, *self, torch::IntArrayRef(dim_data, dim_len), normalization, (bool)onesided); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fill_mem_eff_dropout_mask_(tensor *out__, tensor self, double dropout_p, int64_t seed, int64_t offset) { - PROTECT( - auto outputs__ = torch::_fill_mem_eff_dropout_mask_(*self, dropout_p, seed, offset); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__flash_attention_backward(tensor *out__, tensor grad_out, tensor query, tensor key, tensor value, tensor out, tensor logsumexp, tensor cum_seq_q, tensor cum_seq_k, int64_t max_q, int64_t max_k, double dropout_p, int is_causal, tensor philox_seed, tensor philox_offset, double scale_v, int scale_null) { - PROTECT( - auto outputs__ = torch::_flash_attention_backward(*grad_out, *query, *key, *value, *out, *logsumexp, *cum_seq_q, *cum_seq_k, max_q, max_k, dropout_p, (bool)is_causal, *philox_seed, *philox_offset, scale_null ? c10::nullopt : c10::optional(scale_v)); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg__foobar(tensor *out__, tensor self, int arg1, int arg2, int arg3) { - PROTECT( - auto outputs__ = torch::_foobar(*self, (bool)arg1, (bool)arg2, (bool)arg3); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__foobar_out(tensor *out__, tensor out, tensor self, int arg1, int arg2, int arg3) { - PROTECT( - auto outputs__ = torch::_foobar_out(*out, *self, (bool)arg1, (bool)arg2, (bool)arg3); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__functional_assert_async(tensor *out__, tensor self, char * assert_msg, tensor dep_token) { - PROTECT( - auto outputs__ = torch::_functional_assert_async(*self, std::string(assert_msg), *dep_token); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__functional_sym_constrain_range(tensor *out__, scalar size, int64_t min_v, int min_null, int64_t max_v, int max_null, tensor dep_token) { - PROTECT( - auto outputs__ = torch::_functional_sym_constrain_range(*size, min_null ? c10::nullopt : c10::optional(min_v), max_null ? c10::nullopt : c10::optional(max_v), *dep_token); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__functional_sym_constrain_range_for_size(tensor *out__, scalar size, int64_t min_v, int min_null, int64_t max_v, int max_null, tensor dep_token) { - PROTECT( - auto outputs__ = torch::_functional_sym_constrain_range_for_size(*size, min_null ? c10::nullopt : c10::optional(min_v), max_null ? 
c10::nullopt : c10::optional(max_v), *dep_token); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg__fused_adam(tensor *out_data, int out_len, tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf) { - PROTECT( - torch::_fused_adam_out(of_carray_tensor(out_data, out_len), of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? *grad_scale : torch::Tensor()), (found_inf ? *found_inf : torch::Tensor())); - ) -} - -void atg__fused_adam_(tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf) { - PROTECT( - torch::_fused_adam_(of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? *grad_scale : torch::Tensor()), (found_inf ? *found_inf : torch::Tensor())); - ) -} - -void atg__fused_adam_tensor_lr_(tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf) { - PROTECT( - torch::_fused_adam_(of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), *lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? *grad_scale : torch::Tensor()), (found_inf ? 
*found_inf : torch::Tensor())); - ) -} - -void atg__fused_adam_tensor_lr_out(tensor *out_data, int out_len, tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf) { - PROTECT( - torch::_fused_adam_out(of_carray_tensor(out_data, out_len), of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), *lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? *grad_scale : torch::Tensor()), (found_inf ? *found_inf : torch::Tensor())); - ) -} - -void atg__fused_adamw(tensor *out_data, int out_len, tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf) { - PROTECT( - torch::_fused_adamw_out(of_carray_tensor(out_data, out_len), of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? *grad_scale : torch::Tensor()), (found_inf ? *found_inf : torch::Tensor())); - ) -} - -void atg__fused_adamw_(tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf) { - PROTECT( - torch::_fused_adamw_(of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? *grad_scale : torch::Tensor()), (found_inf ? 
*found_inf : torch::Tensor())); - ) -} - -void atg__fused_adamw_tensor_lr_(tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf) { - PROTECT( - torch::_fused_adamw_(of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), *lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? *grad_scale : torch::Tensor()), (found_inf ? *found_inf : torch::Tensor())); - ) -} - -void atg__fused_adamw_tensor_lr_out(tensor *out_data, int out_len, tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf) { - PROTECT( - torch::_fused_adamw_out(of_carray_tensor(out_data, out_len), of_carray_tensor(self_data, self_len), of_carray_tensor(grads_data, grads_len), of_carray_tensor(exp_avgs_data, exp_avgs_len), of_carray_tensor(exp_avg_sqs_data, exp_avg_sqs_len), of_carray_tensor(max_exp_avg_sqs_data, max_exp_avg_sqs_len), of_carray_tensor(state_steps_data, state_steps_len), *lr, beta1, beta2, weight_decay, eps, (bool)amsgrad, (bool)maximize, (grad_scale ? *grad_scale : torch::Tensor()), (found_inf ? 
*found_inf : torch::Tensor())); - ) -} - -void atg__fused_dropout(tensor *out__, tensor self, double p) { - PROTECT( - auto outputs__ = torch::_fused_dropout(*self, p); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__fused_dropout_out(tensor *out__, tensor out0, tensor out1, tensor self, double p) { - PROTECT( - auto outputs__ = torch::_fused_dropout_out(*out0, *out1, *self, p); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__fused_moving_avg_obs_fq_helper(tensor *out__, tensor self, tensor observer_on, tensor fake_quant_on, tensor running_min, tensor running_max, tensor scale, tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant) { - PROTECT( - auto outputs__ = torch::_fused_moving_avg_obs_fq_helper(*self, *observer_on, *fake_quant_on, *running_min, *running_max, *scale, *zero_point, averaging_const, quant_min, quant_max, ch_axis, (bool)per_row_fake_quant, (bool)symmetric_quant); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg__fused_moving_avg_obs_fq_helper_functional(tensor *out__, tensor self, tensor observer_on, tensor fake_quant_on, tensor running_min, tensor running_max, tensor scale, tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant) { - PROTECT( - auto outputs__ = torch::_fused_moving_avg_obs_fq_helper_functional(*self, *observer_on, *fake_quant_on, *running_min, *running_max, *scale, *zero_point, averaging_const, quant_min, quant_max, ch_axis, (bool)per_row_fake_quant, (bool)symmetric_quant); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - out__[4] = new torch::Tensor(std::get<4>(outputs__)); - out__[5] = new torch::Tensor(std::get<5>(outputs__)); - ) -} - -void atg__fused_moving_avg_obs_fq_helper_out(tensor *out__, tensor out0, tensor out1, tensor self, tensor observer_on, tensor fake_quant_on, tensor running_min, tensor running_max, tensor scale, tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant) { - PROTECT( - auto outputs__ = torch::_fused_moving_avg_obs_fq_helper_out(*out0, *out1, *self, *observer_on, *fake_quant_on, *running_min, *running_max, *scale, *zero_point, averaging_const, quant_min, quant_max, ch_axis, (bool)per_row_fake_quant, (bool)symmetric_quant); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -int64_t atg__fused_sdp_choice(tensor query, tensor key, tensor value, tensor attn_mask, double dropout_p, int is_causal, double scale_v, int scale_null) { - PROTECT( - return torch::_fused_sdp_choice(*query, *key, *value, (attn_mask ? *attn_mask : torch::Tensor()), dropout_p, (bool)is_causal, scale_null ? 
-int64_t atg__fused_sdp_choice(tensor query, tensor key, tensor value, tensor attn_mask, double dropout_p, int is_causal, double scale_v, int scale_null) {
-  PROTECT(
-    return torch::_fused_sdp_choice(*query, *key, *value, (attn_mask ? *attn_mask : torch::Tensor()), dropout_p, (bool)is_causal, scale_null ? c10::nullopt : c10::optional(scale_v));
-  )
-  return 0;
-}
-
-void atg__fw_primal(tensor *out__, tensor self, int64_t level) {
-  PROTECT(
-    auto outputs__ = self->_fw_primal(level);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__fw_primal_copy(tensor *out__, tensor self, int64_t level) {
-  PROTECT(
-    auto outputs__ = torch::_fw_primal_copy(*self, level);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__fw_primal_copy_out(tensor *out__, tensor out, tensor self, int64_t level) {
-  PROTECT(
-    auto outputs__ = torch::_fw_primal_copy_out(*out, *self, level);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__gather_sparse_backward(tensor *out__, tensor self, int64_t dim, tensor index, tensor grad) {
-  PROTECT(
-    auto outputs__ = torch::_gather_sparse_backward(*self, dim, *index, *grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__grid_sampler_2d_cpu_fallback(tensor *out__, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
-  PROTECT(
-    auto outputs__ = torch::_grid_sampler_2d_cpu_fallback(*input, *grid, interpolation_mode, padding_mode, (bool)align_corners);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__grid_sampler_2d_cpu_fallback_backward(tensor *out__, tensor grad_output, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
-  PROTECT(
-    auto outputs__ = torch::_grid_sampler_2d_cpu_fallback_backward(*grad_output, *input, *grid, interpolation_mode, padding_mode, (bool)align_corners);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__grid_sampler_2d_cpu_fallback_out(tensor *out__, tensor out, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
-  PROTECT(
-    auto outputs__ = torch::_grid_sampler_2d_cpu_fallback_out(*out, *input, *grid, interpolation_mode, padding_mode, (bool)align_corners);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int atg__has_compatible_shallow_copy_type(tensor self, tensor from) {
-  PROTECT(
-    return torch::_has_compatible_shallow_copy_type(*self, *from);
-  )
-  return 0;
-}
-
-int atg__has_same_storage_numel(tensor self, tensor other) {
-  PROTECT(
-    return torch::_has_same_storage_numel(*self, *other);
-  )
-  return 0;
-}
-
-tensor *atg__histogramdd_bin_edges(tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, tensor weight, int density) {
-  PROTECT(
-    auto outputs__ = torch::_histogramdd_bin_edges(*self, torch::IntArrayRef(bins_data, bins_len), at::ArrayRef(range_data, range_len), (weight ? *weight : torch::Tensor()), (bool)density);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
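-// Stubs that return a variable-length tensor list hand back a malloc'd,
-// nullptr-terminated array of tensor pointers; the OCaml side is expected
-// to walk the array and free it.
-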
-void atg__histogramdd_bin_edges_out(tensor *out_data, int out_len, tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, tensor weight, int density) {
-  PROTECT(
-    torch::_histogramdd_bin_edges_out(of_carray_tensor(out_data, out_len), *self, torch::IntArrayRef(bins_data, bins_len), at::ArrayRef(range_data, range_len), (weight ? *weight : torch::Tensor()), (bool)density);
-  )
-}
-
-void atg__histogramdd_from_bin_cts(tensor *out__, tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, tensor weight, int density) {
-  PROTECT(
-    auto outputs__ = torch::_histogramdd_from_bin_cts(*self, torch::IntArrayRef(bins_data, bins_len), at::ArrayRef(range_data, range_len), (weight ? *weight : torch::Tensor()), (bool)density);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__histogramdd_from_bin_cts_out(tensor *out__, tensor out, tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, tensor weight, int density) {
-  PROTECT(
-    auto outputs__ = torch::_histogramdd_from_bin_cts_out(*out, *self, torch::IntArrayRef(bins_data, bins_len), at::ArrayRef(range_data, range_len), (weight ? *weight : torch::Tensor()), (bool)density);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__histogramdd_from_bin_tensors(tensor *out__, tensor self, tensor *bins_data, int bins_len, tensor weight, int density) {
-  PROTECT(
-    auto outputs__ = torch::_histogramdd_from_bin_tensors(*self, of_carray_tensor(bins_data, bins_len), (weight ? *weight : torch::Tensor()), (bool)density);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__histogramdd_from_bin_tensors_out(tensor *out__, tensor out, tensor self, tensor *bins_data, int bins_len, tensor weight, int density) {
-  PROTECT(
-    auto outputs__ = torch::_histogramdd_from_bin_tensors_out(*out, *self, of_carray_tensor(bins_data, bins_len), (weight ? *weight : torch::Tensor()), (bool)density);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__index_put_impl(tensor *out__, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate, int unsafe) {
-  PROTECT(
-    auto outputs__ = torch::_index_put_impl(*self, of_carray_tensor_opt(indices_data, indices_len), *values, (bool)accumulate, (bool)unsafe);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__index_put_impl_(tensor *out__, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate, int unsafe) {
-  PROTECT(
-    auto outputs__ = torch::_index_put_impl_(*self, of_carray_tensor_opt(indices_data, indices_len), *values, (bool)accumulate, (bool)unsafe);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__index_put_impl_out(tensor *out__, tensor out, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate, int unsafe) {
-  PROTECT(
-    auto outputs__ = torch::_index_put_impl_out(*out, *self, of_carray_tensor_opt(indices_data, indices_len), *values, (bool)accumulate, (bool)unsafe);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__indices(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->_indices();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__indices_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_indices_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__indices_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_indices_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__int_mm(tensor *out__, tensor self, tensor mat2) {
-  PROTECT(
-    auto outputs__ = torch::_int_mm(*self, *mat2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__int_mm_out(tensor *out__, tensor out, tensor self, tensor mat2) {
-  PROTECT(
-    auto outputs__ = torch::_int_mm_out(*out, *self, *mat2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__is_all_true(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_is_all_true(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__is_any_true(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_is_any_true(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int atg__is_zerotensor(tensor self) {
-  PROTECT(
-    return torch::_is_zerotensor(*self);
-  )
-  return 0;
-}
-
-void atg__linalg_check_errors(tensor info, char * api_name, int is_matrix) {
-  PROTECT(
-    torch::_linalg_check_errors(*info, std::string(api_name), (bool)is_matrix);
-  )
-}
-
-void atg__linalg_det(tensor *out__, tensor A) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_det(*A);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__linalg_det_result(tensor *out__, tensor result, tensor LU, tensor pivots, tensor A) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_det_out(*result, *LU, *pivots, *A);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__linalg_eigh(tensor *out__, tensor A, char * UPLO, int compute_v) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_eigh(*A, std::string(UPLO), (bool)compute_v);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__linalg_eigh_eigenvalues(tensor *out__, tensor eigenvalues, tensor eigenvectors, tensor A, char * UPLO, int compute_v) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_eigh_out(*eigenvalues, *eigenvectors, *A, std::string(UPLO), (bool)compute_v);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__linalg_slogdet(tensor *out__, tensor A) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_slogdet(*A);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-  )
-}
-
-void atg__linalg_slogdet_sign(tensor *out__, tensor sign, tensor logabsdet, tensor LU, tensor pivots, tensor A) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_slogdet_out(*sign, *logabsdet, *LU, *pivots, *A);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-  )
-}
-
-void atg__linalg_solve_ex(tensor *out__, tensor A, tensor B, int left, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_solve_ex(*A, *B, (bool)left, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-  )
-}
-
-void atg__linalg_solve_ex_result(tensor *out__, tensor result, tensor LU, tensor pivots, tensor info, tensor A, tensor B, int left, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_solve_ex_out(*result, *LU, *pivots, *info, *A, *B, (bool)left, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-  )
-}
-
-void atg__linalg_svd(tensor *out__, tensor A, int full_matrices, int compute_uv, char * driver) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_svd(*A, (bool)full_matrices, (bool)compute_uv, std::string(driver));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__linalg_svd_u(tensor *out__, tensor U, tensor S, tensor Vh, tensor A, int full_matrices, int compute_uv, char * driver) {
-  PROTECT(
-    auto outputs__ = torch::_linalg_svd_out(*U, *S, *Vh, *A, (bool)full_matrices, (bool)compute_uv, std::string(driver));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__log_softmax(tensor *out__, tensor self, int64_t dim, int half_to_float) {
-  PROTECT(
-    auto outputs__ = torch::_log_softmax(*self, dim, (bool)half_to_float);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__log_softmax_backward_data(tensor *out__, tensor grad_output, tensor output, int64_t dim, int input_dtype) {
-  PROTECT(
-    auto outputs__ = torch::_log_softmax_backward_data(*grad_output, *output, dim, torch::ScalarType(input_dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__log_softmax_backward_data_out(tensor *out__, tensor out, tensor grad_output, tensor output, int64_t dim, int input_dtype) {
-  PROTECT(
-    auto outputs__ = torch::_log_softmax_backward_data_out(*out, *grad_output, *output, dim, torch::ScalarType(input_dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__log_softmax_out(tensor *out__, tensor out, tensor self, int64_t dim, int half_to_float) {
-  PROTECT(
-    auto outputs__ = torch::_log_softmax_out(*out, *self, dim, (bool)half_to_float);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__logcumsumexp(tensor *out__, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_logcumsumexp(*self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__logcumsumexp_out(tensor *out__, tensor out, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_logcumsumexp_out(*out, *self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__lstm_mps(tensor *out__, tensor input, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
-  PROTECT(
-    auto outputs__ = torch::_lstm_mps(*input, of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-    out__[5] = new torch::Tensor(std::get<5>(outputs__));
-  )
-}
-
-void atg__lstm_mps_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor out5, tensor input, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
-  PROTECT(
-    auto outputs__ = torch::_lstm_mps_out(*out0, *out1, *out2, *out3, *out4, *out5, *input, of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-    out__[5] = new torch::Tensor(std::get<5>(outputs__));
-  )
-}
-
-void atg__lu_with_info(tensor *out__, tensor self, int pivot, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::_lu_with_info(*self, (bool)pivot, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__make_dep_token(tensor *out__, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::_make_dep_token(at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__make_dual(tensor *out__, tensor primal, tensor tangent, int64_t level) {
-  PROTECT(
-    auto outputs__ = torch::_make_dual(*primal, *tangent, level);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__make_dual_copy(tensor *out__, tensor primal, tensor tangent, int64_t level) {
-  PROTECT(
-    auto outputs__ = torch::_make_dual_copy(*primal, *tangent, level);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__make_dual_copy_out(tensor *out__, tensor out, tensor primal, tensor tangent, int64_t level) {
-  PROTECT(
-    auto outputs__ = torch::_make_dual_copy_out(*out, *primal, *tangent, level);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__make_per_channel_quantized_tensor(tensor *out__, tensor self, tensor scale, tensor zero_point, int64_t axis) {
-  PROTECT(
-    auto outputs__ = torch::_make_per_channel_quantized_tensor(*self, *scale, *zero_point, axis);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__make_per_channel_quantized_tensor_out(tensor *out__, tensor out, tensor self, tensor scale, tensor zero_point, int64_t axis) {
-  PROTECT(
-    auto outputs__ = torch::_make_per_channel_quantized_tensor_out(*out, *self, *scale, *zero_point, axis);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__make_per_tensor_quantized_tensor(tensor *out__, tensor self, double scale, int64_t zero_point) {
-  PROTECT(
-    auto outputs__ = torch::_make_per_tensor_quantized_tensor(*self, scale, zero_point);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__make_per_tensor_quantized_tensor_out(tensor *out__, tensor out, tensor self, double scale, int64_t zero_point) {
-  PROTECT(
-    auto outputs__ = torch::_make_per_tensor_quantized_tensor_out(*out, *self, scale, zero_point);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__masked_scale(tensor *out__, tensor self, tensor mask, double scale) {
-  PROTECT(
-    auto outputs__ = torch::_masked_scale(*self, *mask, scale);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__masked_scale_out(tensor *out__, tensor out, tensor self, tensor mask, double scale) {
-  PROTECT(
-    auto outputs__ = torch::_masked_scale_out(*out, *self, *mask, scale);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__masked_softmax(tensor *out__, tensor self, tensor mask, int64_t dim_v, int dim_null, int64_t mask_type_v, int mask_type_null) {
-  PROTECT(
-    auto outputs__ = torch::_masked_softmax(*self, *mask, dim_null ? c10::nullopt : c10::optional(dim_v), mask_type_null ? c10::nullopt : c10::optional(mask_type_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__masked_softmax_backward(tensor *out__, tensor grad_output, tensor output, tensor mask, int64_t dim_v, int dim_null) {
-  PROTECT(
-    auto outputs__ = torch::_masked_softmax_backward(*grad_output, *output, *mask, dim_null ? c10::nullopt : c10::optional(dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__masked_softmax_backward_out(tensor *out__, tensor out, tensor grad_output, tensor output, tensor mask, int64_t dim_v, int dim_null) {
-  PROTECT(
-    auto outputs__ = torch::_masked_softmax_backward_out(*out, *grad_output, *output, *mask, dim_null ? c10::nullopt : c10::optional(dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__masked_softmax_out(tensor *out__, tensor out, tensor self, tensor mask, int64_t dim_v, int dim_null, int64_t mask_type_v, int mask_type_null) {
-  PROTECT(
-    auto outputs__ = torch::_masked_softmax_out(*out, *self, *mask, dim_null ? c10::nullopt : c10::optional(dim_v), mask_type_null ? c10::nullopt : c10::optional(mask_type_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
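-// Optional integer arguments cross the FFI as a (value, is_null) pair:
-// a non-zero *_null flag selects c10::nullopt, otherwise the *_v value
-// is wrapped in c10::optional.
-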
-void atg__mkldnn_reshape(tensor *out__, tensor self, int64_t *shape_data, int shape_len) {
-  PROTECT(
-    auto outputs__ = torch::_mkldnn_reshape(*self, torch::IntArrayRef(shape_data, shape_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__mkldnn_reshape_out(tensor *out__, tensor out, tensor self, int64_t *shape_data, int shape_len) {
-  PROTECT(
-    auto outputs__ = torch::_mkldnn_reshape_out(*out, *self, torch::IntArrayRef(shape_data, shape_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__mkldnn_transpose(tensor *out__, tensor self, int64_t dim0, int64_t dim1) {
-  PROTECT(
-    auto outputs__ = torch::_mkldnn_transpose(*self, dim0, dim1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__mkldnn_transpose_(tensor *out__, tensor self, int64_t dim0, int64_t dim1) {
-  PROTECT(
-    auto outputs__ = torch::_mkldnn_transpose_(*self, dim0, dim1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__mkldnn_transpose_out(tensor *out__, tensor out, tensor self, int64_t dim0, int64_t dim1) {
-  PROTECT(
-    auto outputs__ = torch::_mkldnn_transpose_out(*out, *self, dim0, dim1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__mps_convolution(tensor *out__, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
-  PROTECT(
-    auto outputs__ = torch::_mps_convolution(*self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__mps_convolution_out(tensor *out__, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
-  PROTECT(
-    auto outputs__ = torch::_mps_convolution_out(*out, *self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__mps_convolution_transpose(tensor *out__, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
-  PROTECT(
-    auto outputs__ = torch::_mps_convolution_transpose(*self, *weight, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__mps_convolution_transpose_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
-  PROTECT(
-    auto outputs__ = torch::_mps_convolution_transpose_out(*out, *self, *weight, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__native_batch_norm_legit(tensor *out__, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps) {
-  PROTECT(
-    auto outputs__ = torch::_native_batch_norm_legit(*input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), *running_mean, *running_var, (bool)training, momentum, eps);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__native_batch_norm_legit_functional(tensor *out__, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps) {
-  PROTECT(
-    auto outputs__ = torch::_native_batch_norm_legit_functional(*input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), *running_mean, *running_var, (bool)training, momentum, eps);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-  )
-}
-
-void atg__native_batch_norm_legit_no_stats(tensor *out__, tensor input, tensor weight, tensor bias, int training, double momentum, double eps) {
-  PROTECT(
-    auto outputs__ = torch::_native_batch_norm_legit(*input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), (bool)training, momentum, eps);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__native_batch_norm_legit_no_stats_out(tensor *out__, tensor out, tensor save_mean, tensor save_invstd, tensor input, tensor weight, tensor bias, int training, double momentum, double eps) {
-  PROTECT(
-    auto outputs__ = torch::_native_batch_norm_legit_out(*out, *save_mean, *save_invstd, *input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), (bool)training, momentum, eps);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__native_batch_norm_legit_no_training(tensor *out__, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, double momentum, double eps) {
-  PROTECT(
-    auto outputs__ = torch::_native_batch_norm_legit_no_training(*input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), *running_mean, *running_var, momentum, eps);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__native_batch_norm_legit_no_training_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, double momentum, double eps) {
-  PROTECT(
-    auto outputs__ = torch::_native_batch_norm_legit_no_training_out(*out0, *out1, *out2, *input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), *running_mean, *running_var, momentum, eps);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__native_batch_norm_legit_out(tensor *out__, tensor out, tensor save_mean, tensor save_invstd, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps) {
-  PROTECT(
-    auto outputs__ = torch::_native_batch_norm_legit_out(*out, *save_mean, *save_invstd, *input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), *running_mean, *running_var, (bool)training, momentum, eps);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
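-// Optional tensor arguments use a NULL pointer as None: the stub substitutes
-// a default-constructed (undefined) torch::Tensor() when the pointer is absent.
-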
-void atg__native_multi_head_attention(tensor *out__, tensor query, tensor key, tensor value, int64_t embed_dim, int64_t num_head, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, tensor mask, int need_weights, int average_attn_weights, int64_t mask_type_v, int mask_type_null) {
-  PROTECT(
-    auto outputs__ = torch::_native_multi_head_attention(*query, *key, *value, embed_dim, num_head, *qkv_weight, *qkv_bias, *proj_weight, *proj_bias, (mask ? *mask : torch::Tensor()), (bool)need_weights, (bool)average_attn_weights, mask_type_null ? c10::nullopt : c10::optional(mask_type_v));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__native_multi_head_attention_out(tensor *out__, tensor out0, tensor out1, tensor query, tensor key, tensor value, int64_t embed_dim, int64_t num_head, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, tensor mask, int need_weights, int average_attn_weights, int64_t mask_type_v, int mask_type_null) {
-  PROTECT(
-    auto outputs__ = torch::_native_multi_head_attention_out(*out0, *out1, *query, *key, *value, embed_dim, num_head, *qkv_weight, *qkv_bias, *proj_weight, *proj_bias, (mask ? *mask : torch::Tensor()), (bool)need_weights, (bool)average_attn_weights, mask_type_null ? c10::nullopt : c10::optional(mask_type_v));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__neg_view(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_neg_view(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__neg_view_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_neg_view_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__neg_view_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_neg_view_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__nested_from_padded(tensor *out__, tensor padded, tensor cpu_nested_shape_example, int fuse_transform_0213) {
-  PROTECT(
-    auto outputs__ = torch::_nested_from_padded(*padded, *cpu_nested_shape_example, (bool)fuse_transform_0213);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__nested_from_padded_and_nested_example(tensor *out__, tensor padded, tensor nt_example) {
-  PROTECT(
-    auto outputs__ = torch::_nested_from_padded_and_nested_example(*padded, *nt_example);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__nested_from_padded_and_nested_example_out(tensor *out__, tensor out, tensor padded, tensor nt_example) {
-  PROTECT(
-    auto outputs__ = torch::_nested_from_padded_and_nested_example_out(*out, *padded, *nt_example);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__nested_from_padded_out(tensor *out__, tensor out, tensor padded, tensor cpu_nested_shape_example, int fuse_transform_0213) {
-  PROTECT(
-    auto outputs__ = torch::_nested_from_padded_out(*out, *padded, *cpu_nested_shape_example, (bool)fuse_transform_0213);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__nested_select_backward(tensor *out__, tensor grad_output, tensor self, int64_t dim, int64_t index) {
-  PROTECT(
-    auto outputs__ = torch::_nested_select_backward(*grad_output, *self, dim, index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__nested_sum_backward(tensor *out__, tensor grad, tensor self, int64_t *dim_data, int dim_len, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::_nested_sum_backward(*grad, *self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
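-// Optional integer arrays follow the same scheme with a NULL data pointer:
-// nullptr maps to c10::nullopt, anything else to an IntArrayRef view over
-// the caller's buffer, which therefore must stay alive for the whole call.
-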
-void atg__nested_view_from_buffer(tensor *out__, tensor self, tensor nested_size, tensor nested_strides, tensor offsets) {
-  PROTECT(
-    auto outputs__ = torch::_nested_view_from_buffer(*self, *nested_size, *nested_strides, *offsets);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__nested_view_from_buffer_copy(tensor *out__, tensor self, tensor nested_size, tensor nested_strides, tensor offsets) {
-  PROTECT(
-    auto outputs__ = torch::_nested_view_from_buffer_copy(*self, *nested_size, *nested_strides, *offsets);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__nested_view_from_buffer_copy_out(tensor *out__, tensor out, tensor self, tensor nested_size, tensor nested_strides, tensor offsets) {
-  PROTECT(
-    auto outputs__ = torch::_nested_view_from_buffer_copy_out(*out, *self, *nested_size, *nested_strides, *offsets);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__new_zeros_with_same_feature_meta(tensor *out__, tensor self, tensor other, int64_t self_num_batch_dims) {
-  PROTECT(
-    auto outputs__ = torch::_new_zeros_with_same_feature_meta(*self, *other, self_num_batch_dims);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__new_zeros_with_same_feature_meta_out(tensor *out__, tensor out, tensor self, tensor other, int64_t self_num_batch_dims) {
-  PROTECT(
-    auto outputs__ = torch::_new_zeros_with_same_feature_meta_out(*out, *self, *other, self_num_batch_dims);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int atg__nnpack_available() {
-  PROTECT(
-    return torch::_nnpack_available();
-  )
-  return 0;
-}
-
-void atg__nnpack_spatial_convolution(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) {
-  PROTECT(
-    auto outputs__ = torch::_nnpack_spatial_convolution(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__nnpack_spatial_convolution_out(tensor *out__, tensor out, tensor input, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) {
-  PROTECT(
-    auto outputs__ = torch::_nnpack_spatial_convolution_out(*out, *input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int64_t atg__nnz(tensor self) {
-  PROTECT(
-    return self->_nnz();
-  )
-  return 0;
-}
-
-void atg__pack_padded_sequence(tensor *out__, tensor input, tensor lengths, int batch_first) {
-  PROTECT(
-    auto outputs__ = torch::_pack_padded_sequence(*input, *lengths, (bool)batch_first);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__pack_padded_sequence_backward(tensor *out__, tensor grad, int64_t *input_size_data, int input_size_len, tensor batch_sizes, int batch_first) {
-  PROTECT(
-    auto outputs__ = torch::_pack_padded_sequence_backward(*grad, torch::IntArrayRef(input_size_data, input_size_len), *batch_sizes, (bool)batch_first);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__pack_padded_sequence_out(tensor *out__, tensor out0, tensor out1, tensor input, tensor lengths, int batch_first) {
-  PROTECT(
-    auto outputs__ = torch::_pack_padded_sequence_out(*out0, *out1, *input, *lengths, (bool)batch_first);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__pad_circular(tensor *out__, tensor self, int64_t *pad_data, int pad_len) {
-  PROTECT(
-    auto outputs__ = torch::_pad_circular(*self, torch::IntArrayRef(pad_data, pad_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__pad_enum(tensor *out__, tensor self, int64_t *pad_data, int pad_len, int64_t mode, double value_v, int value_null) {
-  PROTECT(
-    auto outputs__ = torch::_pad_enum(*self, torch::IntArrayRef(pad_data, pad_len), mode, value_null ? c10::nullopt : c10::optional(value_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__pad_packed_sequence(tensor *out__, tensor data, tensor batch_sizes, int batch_first, scalar padding_value, int64_t total_length) {
-  PROTECT(
-    auto outputs__ = torch::_pad_packed_sequence(*data, *batch_sizes, (bool)batch_first, *padding_value, total_length);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__pdist_backward(tensor *out__, tensor grad, tensor self, double p, tensor pdist) {
-  PROTECT(
-    auto outputs__ = torch::_pdist_backward(*grad, *self, p, *pdist);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__pdist_backward_out(tensor *out__, tensor out, tensor grad, tensor self, double p, tensor pdist) {
-  PROTECT(
-    auto outputs__ = torch::_pdist_backward_out(*out, *grad, *self, p, *pdist);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__pin_memory(tensor *out__, tensor self, int device) {
-  PROTECT(
-    auto outputs__ = torch::_pin_memory(*self, device_of_int(device));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__pin_memory_out(tensor *out__, tensor out, tensor self, int device) {
-  PROTECT(
-    auto outputs__ = torch::_pin_memory_out(*out, *self, device_of_int(device));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
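-// Devices cross the FFI as plain ints; device_of_int, presumably from the
-// manual wrapper code, decodes them back into an at::Device. Dtypes likewise
-// travel as ints and are cast through torch::ScalarType / at::ScalarType.
-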
-void atg__prelu_kernel(tensor *out__, tensor self, tensor weight) {
-  PROTECT(
-    auto outputs__ = torch::_prelu_kernel(*self, *weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__prelu_kernel_backward(tensor *out__, tensor grad_output, tensor self, tensor weight) {
-  PROTECT(
-    auto outputs__ = torch::_prelu_kernel_backward(*grad_output, *self, *weight);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__propagate_xla_data(tensor input, tensor output) {
-  PROTECT(
-    torch::_propagate_xla_data(*input, *output);
-  )
-}
-
-void atg__remove_batch_dim(tensor *out__, tensor self, int64_t level, int64_t batch_size, int64_t out_dim) {
-  PROTECT(
-    auto outputs__ = torch::_remove_batch_dim(*self, level, batch_size, out_dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__reshape_alias(tensor *out__, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) {
-  PROTECT(
-    auto outputs__ = torch::_reshape_alias(*self, torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__reshape_alias_copy(tensor *out__, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) {
-  PROTECT(
-    auto outputs__ = torch::_reshape_alias_copy(*self, torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__reshape_alias_copy_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) {
-  PROTECT(
-    auto outputs__ = torch::_reshape_alias_copy_out(*out, *self, torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__reshape_copy(tensor *out__, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::_reshape_copy(*self, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__reshape_from_tensor(tensor *out__, tensor self, tensor shape) {
-  PROTECT(
-    auto outputs__ = torch::_reshape_from_tensor(*self, *shape);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__resize_output(tensor *out__, tensor self, int64_t *size_data, int size_len, int device) {
-  PROTECT(
-    auto outputs__ = torch::_resize_output(*self, torch::IntArrayRef(size_data, size_len), device_of_int(device));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__resize_output_(tensor *out__, tensor self, int64_t *size_data, int size_len, int device) {
-  PROTECT(
-    auto outputs__ = torch::_resize_output_(*self, torch::IntArrayRef(size_data, size_len), device_of_int(device));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__resize_output_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len, int device) {
-  PROTECT(
-    auto outputs__ = torch::_resize_output_out(*out, *self, torch::IntArrayRef(size_data, size_len), device_of_int(device));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__rowwise_prune(tensor *out__, tensor weight, tensor mask, int compressed_indices_dtype) {
-  PROTECT(
-    auto outputs__ = torch::_rowwise_prune(*weight, *mask, torch::ScalarType(compressed_indices_dtype));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__sample_dirichlet(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_sample_dirichlet(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sample_dirichlet_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_sample_dirichlet_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__saturate_weight_to_fp16(tensor *out__, tensor weight) {
-  PROTECT(
-    auto outputs__ = torch::_saturate_weight_to_fp16(*weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__scaled_dot_product_attention_math(tensor *out__, tensor query, tensor key, tensor value, tensor attn_mask, double dropout_p, int is_causal, tensor dropout_mask, double scale_v, int scale_null) {
-  PROTECT(
-    auto outputs__ = torch::_scaled_dot_product_attention_math(*query, *key, *value, (attn_mask ? *attn_mask : torch::Tensor()), dropout_p, (bool)is_causal, (dropout_mask ? *dropout_mask : torch::Tensor()), scale_null ? c10::nullopt : c10::optional(scale_v));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__scaled_dot_product_efficient_attention(tensor *out__, tensor query, tensor key, tensor value, tensor attn_bias, int compute_log_sumexp, double dropout_p, int is_causal, double scale_v, int scale_null) {
-  PROTECT(
-    auto outputs__ = torch::_scaled_dot_product_efficient_attention(*query, *key, *value, (attn_bias ? *attn_bias : torch::Tensor()), (bool)compute_log_sumexp, dropout_p, (bool)is_causal, scale_null ? c10::nullopt : c10::optional(scale_v));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-  )
-}
-
-void atg__scaled_dot_product_flash_attention_backward(tensor *out__, tensor grad_out, tensor query, tensor key, tensor value, tensor out, tensor logsumexp, tensor cum_seq_q, tensor cum_seq_k, int64_t max_q, int64_t max_k, double dropout_p, int is_causal, tensor philox_seed, tensor philox_offset, double scale_v, int scale_null) {
-  PROTECT(
-    auto outputs__ = torch::_scaled_dot_product_flash_attention_backward(*grad_out, *query, *key, *value, *out, *logsumexp, *cum_seq_q, *cum_seq_k, max_q, max_k, dropout_p, (bool)is_causal, *philox_seed, *philox_offset, scale_null ? c10::nullopt : c10::optional(scale_v));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__scaled_mm(tensor *out__, tensor self, tensor mat2, tensor bias, int out_dtype, tensor scale_a, tensor scale_b, tensor scale_result) {
-  PROTECT(
-    auto outputs__ = torch::_scaled_mm(*self, *mat2, (bias ? *bias : torch::Tensor()), torch::ScalarType(out_dtype), (scale_a ? *scale_a : torch::Tensor()), (scale_b ? *scale_b : torch::Tensor()), (scale_result ? *scale_result : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__scaled_mm_out(tensor *out__, tensor out, tensor out_amax, tensor self, tensor mat2, tensor bias, int out_dtype, tensor scale_a, tensor scale_b, tensor scale_result) {
-  PROTECT(
-    auto outputs__ = torch::_scaled_mm_out(*out, *out_amax, *self, *mat2, (bias ? *bias : torch::Tensor()), torch::ScalarType(out_dtype), (scale_a ? *scale_a : torch::Tensor()), (scale_b ? *scale_b : torch::Tensor()), (scale_result ? *scale_result : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__scatter_reduce(tensor *out__, tensor self, int64_t dim, tensor index, tensor src, char * reduce, int include_self) {
-  PROTECT(
-    auto outputs__ = torch::scatter_reduce(*self, dim, *index, *src, std::string(reduce), (bool)include_self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__scatter_reduce_(tensor *out__, tensor self, int64_t dim, tensor index, tensor src, char * reduce, int include_self) {
-  PROTECT(
-    auto outputs__ = self->scatter_reduce_(dim, *index, *src, std::string(reduce), (bool)include_self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__scatter_reduce_two_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, tensor src, char * reduce, int include_self) {
-  PROTECT(
-    auto outputs__ = torch::scatter_reduce_out(*out, *self, dim, *index, *src, std::string(reduce), (bool)include_self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__segment_reduce_backward(tensor *out__, tensor grad, tensor output, tensor data, char * reduce, tensor lengths, tensor offsets, int64_t axis, scalar initial) {
-  PROTECT(
-    auto outputs__ = torch::_segment_reduce_backward(*grad, *output, *data, std::string(reduce), (lengths ? *lengths : torch::Tensor()), (offsets ? *offsets : torch::Tensor()), axis, *initial);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__segment_reduce_backward_out(tensor *out__, tensor out, tensor grad, tensor output, tensor data, char * reduce, tensor lengths, tensor offsets, int64_t axis, scalar initial) {
-  PROTECT(
-    auto outputs__ = torch::_segment_reduce_backward_out(*out, *grad, *output, *data, std::string(reduce), (lengths ? *lengths : torch::Tensor()), (offsets ? *offsets : torch::Tensor()), axis, *initial);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__shape_as_tensor(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_shape_as_tensor(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__slow_conv2d_backward(tensor *out__, tensor grad_input, tensor grad_weight, tensor grad_bias, tensor grad_output, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::_slow_conv2d_backward_out(*grad_input, *grad_weight, *grad_bias, *grad_output, *self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__sobol_engine_draw(tensor *out__, tensor quasi, int64_t n, tensor sobolstate, int64_t dimension, int64_t num_generated, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::_sobol_engine_draw(*quasi, n, *sobolstate, dimension, num_generated, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__sobol_engine_ff_(tensor *out__, tensor self, int64_t n, tensor sobolstate, int64_t dimension, int64_t num_generated) {
-  PROTECT(
-    auto outputs__ = torch::_sobol_engine_ff_(*self, n, *sobolstate, dimension, num_generated);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sobol_engine_initialize_state_(tensor *out__, tensor self, int64_t dimension) {
-  PROTECT(
-    auto outputs__ = torch::_sobol_engine_initialize_state_(*self, dimension);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sobol_engine_scramble_(tensor *out__, tensor self, tensor ltm, int64_t dimension) {
-  PROTECT(
-    auto outputs__ = torch::_sobol_engine_scramble_(*self, *ltm, dimension);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__softmax(tensor *out__, tensor self, int64_t dim, int half_to_float) {
-  PROTECT(
-    auto outputs__ = torch::_softmax(*self, dim, (bool)half_to_float);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__softmax_backward_data(tensor *out__, tensor grad_output, tensor output, int64_t dim, int input_dtype) {
-  PROTECT(
-    auto outputs__ = torch::_softmax_backward_data(*grad_output, *output, dim, torch::ScalarType(input_dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__softmax_backward_data_out(tensor *out__, tensor grad_input, tensor grad_output, tensor output, int64_t dim, int input_dtype) {
-  PROTECT(
-    auto outputs__ = torch::_softmax_backward_data_out(*grad_input, *grad_output, *output, dim, torch::ScalarType(input_dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__softmax_out(tensor *out__, tensor out, tensor self, int64_t dim, int half_to_float) {
-  PROTECT(
-    auto outputs__ = torch::_softmax_out(*out, *self, dim, (bool)half_to_float);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_addmm(tensor *out__, tensor self, tensor mat1, tensor mat2) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_addmm(*self, *mat1, *mat2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_addmm_out(tensor *out__, tensor out, tensor self, tensor mat1, tensor mat2) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_addmm_out(*out, *self, *mat1, *mat2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_broadcast_to(tensor *out__, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_broadcast_to(*self, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_broadcast_to_copy(tensor *out__, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_broadcast_to_copy(*self, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_broadcast_to_copy_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_broadcast_to_copy_out(*out, *self, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_bsc_tensor_unsafe(tensor *out__, tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_bsc_tensor_unsafe(*ccol_indices, *row_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_bsr_tensor_unsafe(tensor *out__, tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_bsr_tensor_unsafe(*crow_indices, *col_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_compressed_tensor_unsafe(tensor *out__, tensor compressed_indices, tensor plain_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_compressed_tensor_unsafe(*compressed_indices, *plain_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_coo_tensor_unsafe(tensor *out__, tensor indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device, int is_coalesced) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_coo_tensor_unsafe(*indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)is_coalesced);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_coo_tensor_with_dims(tensor *out__, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_coo_tensor_with_dims(sparse_dim, dense_dim, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_coo_tensor_with_dims_and_tensors(tensor *out__, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, tensor indices, tensor values, int options_kind, int options_device, int is_coalesced) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_coo_tensor_with_dims_and_tensors(sparse_dim, dense_dim, torch::IntArrayRef(size_data, size_len), *indices, *values, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)is_coalesced);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_coo_tensor_with_dims_and_tensors_out(tensor *out__, tensor out, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, tensor indices, tensor values, int is_coalesced) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_coo_tensor_with_dims_and_tensors_out(*out, sparse_dim, dense_dim, torch::IntArrayRef(size_data, size_len), *indices, *values, (bool)is_coalesced);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_coo_tensor_with_dims_out(tensor *out__, tensor out, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_coo_tensor_with_dims_out(*out, sparse_dim, dense_dim, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_csc_tensor_unsafe(tensor *out__, tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_csc_tensor_unsafe(*ccol_indices, *row_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_csr_prod(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_csr_prod(*self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_csr_prod_dim_dtype_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_csr_prod_out(*out, *self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_csr_sum(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_csr_sum(*self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_csr_sum_dim_dtype_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_csr_sum_out(*out, *self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
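-// Tensor-creating stubs take an (options_kind, options_device) int pair and
-// rebuild the at::TensorOptions with at::device(...).dtype(...), mirroring
-// the dtype/device handling above.
-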
-void atg__sparse_csr_tensor_unsafe(tensor *out__, tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_csr_tensor_unsafe(*crow_indices, *col_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_log_softmax(tensor *out__, tensor self, int64_t dim, int half_to_float) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_log_softmax(*self, dim, (bool)half_to_float);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_log_softmax_backward_data(tensor *out__, tensor grad_output, tensor output, int64_t dim, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_log_softmax_backward_data(*grad_output, *output, dim, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_log_softmax_backward_data_out(tensor *out__, tensor out, tensor grad_output, tensor output, int64_t dim, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_log_softmax_backward_data_out(*out, *grad_output, *output, dim, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_log_softmax_int(tensor *out__, tensor self, int64_t dim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_log_softmax(*self, dim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_log_softmax_out(tensor *out__, tensor out, tensor self, int64_t dim, int half_to_float) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_log_softmax_out(*out, *self, dim, (bool)half_to_float);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_mask_projection(tensor *out__, tensor self, tensor mask, int accumulate_matches) {
-  PROTECT(
-    auto outputs__ = self->_sparse_mask_projection(*mask, (bool)accumulate_matches);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_mask_projection_out(tensor *out__, tensor out, tensor self, tensor mask, int accumulate_matches) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_mask_projection_out(*out, *self, *mask, (bool)accumulate_matches);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_mm(tensor *out__, tensor sparse, tensor dense) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_mm(*sparse, *dense);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_mm_reduce(tensor *out__, tensor sparse, tensor dense, char * reduce) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_mm(*sparse, *dense, std::string(reduce));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_mm_reduce_impl(tensor *out__, tensor self, tensor other, char * reduce) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_mm_reduce_impl(*self, *other, std::string(reduce));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
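-// C strings are copied into std::string before hitting the C++ API, and some
-// ops dispatch as Tensor methods (self->...) rather than torch:: free
-// functions, presumably following each op's Declarations.yaml entry.
-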
-void atg__sparse_softmax(tensor *out__, tensor self, int64_t dim, int half_to_float) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_softmax(*self, dim, (bool)half_to_float);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_softmax_backward_data(tensor *out__, tensor grad_output, tensor output, int64_t dim, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_softmax_backward_data(*grad_output, *output, dim, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_softmax_backward_data_out(tensor *out__, tensor out, tensor grad_output, tensor output, int64_t dim, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_softmax_backward_data_out(*out, *grad_output, *output, dim, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_softmax_int(tensor *out__, tensor self, int64_t dim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_softmax(*self, dim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_softmax_out(tensor *out__, tensor out, tensor self, int64_t dim, int half_to_float) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_softmax_out(*out, *self, dim, (bool)half_to_float);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_sparse_matmul(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_sparse_matmul(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_sparse_matmul_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_sparse_matmul_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_sum(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_sum(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_sum_backward(tensor *out__, tensor grad, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_sum_backward(*grad, *self, torch::IntArrayRef(dim_data, dim_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_sum_backward_out(tensor *out__, tensor out, tensor grad, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_sum_backward_out(*out, *grad, *self, torch::IntArrayRef(dim_data, dim_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_sum_dim(tensor *out__, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_sum(*self, torch::IntArrayRef(dim_data, dim_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_sum_dim_dtype(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_sum(*self, torch::IntArrayRef(dim_data, dim_len), torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_sum_dim_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_sum_out(*out, *self, torch::IntArrayRef(dim_data, dim_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__sparse_sum_dtype(tensor *out__, tensor self, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::_sparse_sum(*self, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__spdiags(tensor *out__, tensor diagonals, tensor offsets, int64_t *shape_data, int shape_len) {
-  PROTECT(
-    auto outputs__ = torch::_spdiags(*diagonals, *offsets, torch::IntArrayRef(shape_data, shape_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__spdiags_out(tensor *out__, tensor out, tensor diagonals, tensor offsets, int64_t *shape_data, int shape_len) {
-  PROTECT(
-    auto outputs__ = torch::_spdiags_out(*out, *diagonals, *offsets, torch::IntArrayRef(shape_data, shape_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__stack(tensor *out__, tensor *tensors_data, int tensors_len, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_stack(of_carray_tensor(tensors_data, tensors_len), dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__stack_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_stack_out(*out, of_carray_tensor(tensors_data, tensors_len), dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__standard_gamma(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_standard_gamma(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__standard_gamma_grad(tensor *out__, tensor self, tensor output) {
-  PROTECT(
-    auto outputs__ = torch::_standard_gamma_grad(*self, *output);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__standard_gamma_grad_out(tensor *out__, tensor out, tensor self, tensor output) {
-  PROTECT(
-    auto outputs__ = torch::_standard_gamma_grad_out(*out, *self, *output);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__standard_gamma_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_standard_gamma_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_ambiguous_defaults(tensor *out__, tensor dummy, int64_t a, int64_t b) {
-  PROTECT(
-    auto outputs__ = torch::_test_ambiguous_defaults(*dummy, a, b);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_ambiguous_defaults_b(tensor *out__, tensor dummy, int64_t a, char * b) {
-  PROTECT(
-    auto outputs__ = torch::_test_ambiguous_defaults(*dummy, a, std::string(b));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_autograd_multiple_dispatch(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_test_autograd_multiple_dispatch(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_autograd_multiple_dispatch_fullcoverage_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_test_autograd_multiple_dispatch_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_autograd_multiple_dispatch_ntonly(tensor *out__, tensor self, int b) {
-  PROTECT(
-    auto outputs__ = torch::_test_autograd_multiple_dispatch(*self, (bool)b);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_autograd_multiple_dispatch_view(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_test_autograd_multiple_dispatch_view(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_autograd_multiple_dispatch_view_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_test_autograd_multiple_dispatch_view_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_autograd_multiple_dispatch_view_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_test_autograd_multiple_dispatch_view_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_check_tensor(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_test_check_tensor(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_functorch_fallback(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::_test_functorch_fallback(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_functorch_fallback_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::_test_functorch_fallback_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_optional_filled_intlist(tensor *out__, tensor values, int64_t *addends_data, int addends_len) {
-  PROTECT(
-    auto outputs__ = torch::_test_optional_filled_intlist(*values, addends_data == nullptr ? c10::nullopt : c10::optional<torch::IntArrayRef>(torch::IntArrayRef(addends_data, addends_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_optional_filled_intlist_out(tensor *out__, tensor out, tensor values, int64_t *addends_data, int addends_len) {
-  PROTECT(
-    auto outputs__ = torch::_test_optional_filled_intlist_out(*out, *values, addends_data == nullptr ? c10::nullopt : c10::optional<torch::IntArrayRef>(torch::IntArrayRef(addends_data, addends_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_optional_floatlist(tensor *out__, tensor values, double *addends_data, int addends_len) {
-  PROTECT(
-    auto outputs__ = torch::_test_optional_floatlist(*values, at::ArrayRef<double>(addends_data, addends_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_optional_floatlist_out(tensor *out__, tensor out, tensor values, double *addends_data, int addends_len) {
-  PROTECT(
-    auto outputs__ = torch::_test_optional_floatlist_out(*out, *values, at::ArrayRef<double>(addends_data, addends_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_optional_intlist(tensor *out__, tensor values, int64_t *addends_data, int addends_len) {
-  PROTECT(
-    auto outputs__ = torch::_test_optional_intlist(*values, addends_data == nullptr ? c10::nullopt : c10::optional<torch::IntArrayRef>(torch::IntArrayRef(addends_data, addends_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_optional_intlist_out(tensor *out__, tensor out, tensor values, int64_t *addends_data, int addends_len) {
-  PROTECT(
-    auto outputs__ = torch::_test_optional_intlist_out(*out, *values, addends_data == nullptr ? c10::nullopt : c10::optional<torch::IntArrayRef>(torch::IntArrayRef(addends_data, addends_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_serialization_subcmul(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::_test_serialization_subcmul(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_string_default(tensor *out__, tensor dummy, char * a, char * b) {
-  PROTECT(
-    auto outputs__ = torch::_test_string_default(*dummy, std::string(a), std::string(b));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_warn_in_autograd(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_test_warn_in_autograd(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__test_warn_in_autograd_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_test_warn_in_autograd_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__thnn_differentiable_gru_cell_backward(tensor *out__, tensor grad_hy, tensor input_gates, tensor hidden_gates, tensor hx, tensor input_bias, tensor hidden_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_differentiable_gru_cell_backward(*grad_hy, *input_gates, *hidden_gates, *hx, (input_bias ? *input_bias : torch::Tensor()), (hidden_bias ? *hidden_bias : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-  )
-}
-
-void atg__thnn_differentiable_lstm_cell_backward(tensor *out__, tensor grad_hy, tensor grad_cy, tensor input_gates, tensor hidden_gates, tensor input_bias, tensor hidden_bias, tensor cx, tensor cy) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_differentiable_lstm_cell_backward((grad_hy ? *grad_hy : torch::Tensor()), (grad_cy ? *grad_cy : torch::Tensor()), *input_gates, *hidden_gates, (input_bias ? *input_bias : torch::Tensor()), (hidden_bias ? *hidden_bias : torch::Tensor()), *cx, *cy);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-  )
-}
-
-void atg__thnn_fused_gru_cell(tensor *out__, tensor input_gates, tensor hidden_gates, tensor hx, tensor input_bias, tensor hidden_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_fused_gru_cell(*input_gates, *hidden_gates, *hx, (input_bias ? *input_bias : torch::Tensor()), (hidden_bias ? *hidden_bias : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__thnn_fused_gru_cell_backward(tensor *out__, tensor grad_hy, tensor workspace, int has_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_fused_gru_cell_backward(*grad_hy, *workspace, (bool)has_bias);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-  )
-}
-
-void atg__thnn_fused_gru_cell_backward_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor grad_hy, tensor workspace, int has_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_fused_gru_cell_backward_out(*out0, *out1, *out2, *out3, *out4, *grad_hy, *workspace, (bool)has_bias);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-  )
-}
-
-void atg__thnn_fused_gru_cell_out(tensor *out__, tensor out0, tensor out1, tensor input_gates, tensor hidden_gates, tensor hx, tensor input_bias, tensor hidden_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_fused_gru_cell_out(*out0, *out1, *input_gates, *hidden_gates, *hx, (input_bias ? *input_bias : torch::Tensor()), (hidden_bias ? *hidden_bias : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__thnn_fused_lstm_cell(tensor *out__, tensor input_gates, tensor hidden_gates, tensor cx, tensor input_bias, tensor hidden_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_fused_lstm_cell(*input_gates, *hidden_gates, *cx, (input_bias ? *input_bias : torch::Tensor()), (hidden_bias ? *hidden_bias : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__thnn_fused_lstm_cell_backward(tensor *out__, tensor grad_hy, tensor grad_cy, tensor cx, tensor cy, tensor workspace, int has_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_fused_lstm_cell_backward((grad_hy ? *grad_hy : torch::Tensor()), (grad_cy ? *grad_cy : torch::Tensor()), *cx, *cy, *workspace, (bool)has_bias);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-  )
-}
-
-void atg__thnn_fused_lstm_cell_backward_impl(tensor *out__, tensor grad_hy, tensor grad_cy, tensor cx, tensor cy, tensor workspace, int has_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_fused_lstm_cell_backward_impl((grad_hy ? *grad_hy : torch::Tensor()), (grad_cy ? *grad_cy : torch::Tensor()), *cx, *cy, *workspace, (bool)has_bias);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__thnn_fused_lstm_cell_backward_impl_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor grad_hy, tensor grad_cy, tensor cx, tensor cy, tensor workspace, int has_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_fused_lstm_cell_backward_impl_out(*out0, *out1, *out2, (grad_hy ? *grad_hy : torch::Tensor()), (grad_cy ? *grad_cy : torch::Tensor()), *cx, *cy, *workspace, (bool)has_bias);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__thnn_fused_lstm_cell_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor input_gates, tensor hidden_gates, tensor cx, tensor input_bias, tensor hidden_bias) {
-  PROTECT(
-    auto outputs__ = torch::_thnn_fused_lstm_cell_out(*out0, *out1, *out2, *input_gates, *hidden_gates, *cx, (input_bias ? *input_bias : torch::Tensor()), (hidden_bias ? *hidden_bias : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__to_copy(tensor *out__, tensor self, int options_kind, int options_device, int non_blocking) {
-  PROTECT(
-    auto outputs__ = torch::_to_copy(*self, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)non_blocking);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_copy_out(tensor *out__, tensor out, tensor self, int non_blocking) {
-  PROTECT(
-    auto outputs__ = torch::_to_copy_out(*out, *self, (bool)non_blocking);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-tensor *atg__to_cpu(tensor *tensors_data, int tensors_len) {
-  PROTECT(
-    auto outputs__ = torch::_to_cpu(of_carray_tensor(tensors_data, tensors_len));
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg__to_dense(tensor *out__, tensor self, int dtype, int masked_grad) {
-  PROTECT(
-    auto outputs__ = self->_to_dense(torch::ScalarType(dtype), (bool)masked_grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_dense_out(tensor *out__, tensor out, tensor self, int dtype, int masked_grad) {
-  PROTECT(
-    auto outputs__ = torch::_to_dense_out(*out, *self, torch::ScalarType(dtype), (bool)masked_grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_sparse_bsc(tensor *out__, tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null) {
-  PROTECT(
-    auto outputs__ = self->_to_sparse_bsc(torch::IntArrayRef(blocksize_data, blocksize_len), dense_dim_null ? c10::nullopt : c10::optional<int64_t>(dense_dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_sparse_bsc_out(tensor *out__, tensor out, tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null) {
-  PROTECT(
-    auto outputs__ = torch::_to_sparse_bsc_out(*out, *self, torch::IntArrayRef(blocksize_data, blocksize_len), dense_dim_null ? c10::nullopt : c10::optional<int64_t>(dense_dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_sparse_bsr(tensor *out__, tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null) {
-  PROTECT(
-    auto outputs__ = self->_to_sparse_bsr(torch::IntArrayRef(blocksize_data, blocksize_len), dense_dim_null ? c10::nullopt : c10::optional<int64_t>(dense_dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_sparse_bsr_out(tensor *out__, tensor out, tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null) {
-  PROTECT(
-    auto outputs__ = torch::_to_sparse_bsr_out(*out, *self, torch::IntArrayRef(blocksize_data, blocksize_len), dense_dim_null ? c10::nullopt : c10::optional<int64_t>(dense_dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_sparse_csc(tensor *out__, tensor self, int64_t dense_dim_v, int dense_dim_null) {
-  PROTECT(
-    auto outputs__ = self->_to_sparse_csc(dense_dim_null ? c10::nullopt : c10::optional<int64_t>(dense_dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_sparse_csc_out(tensor *out__, tensor out, tensor self, int64_t dense_dim_v, int dense_dim_null) {
-  PROTECT(
-    auto outputs__ = torch::_to_sparse_csc_out(*out, *self, dense_dim_null ? c10::nullopt : c10::optional<int64_t>(dense_dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_sparse_csr(tensor *out__, tensor self, int64_t dense_dim_v, int dense_dim_null) {
-  PROTECT(
-    auto outputs__ = self->_to_sparse_csr(dense_dim_null ? c10::nullopt : c10::optional<int64_t>(dense_dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_sparse_csr_out(tensor *out__, tensor out, tensor self, int64_t dense_dim_v, int dense_dim_null) {
-  PROTECT(
-    auto outputs__ = torch::_to_sparse_csr_out(*out, *self, dense_dim_null ? c10::nullopt : c10::optional<int64_t>(dense_dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__to_sparse_semi_structured(tensor *out__, tensor dense) {
-  PROTECT(
-    auto outputs__ = torch::_to_sparse_semi_structured(*dense);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__transform_bias_rescale_qkv(tensor *out__, tensor qkv, tensor qkv_bias, int64_t num_heads) {
-  PROTECT(
-    auto outputs__ = torch::_transform_bias_rescale_qkv(*qkv, *qkv_bias, num_heads);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__transform_bias_rescale_qkv_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor qkv, tensor qkv_bias, int64_t num_heads) {
-  PROTECT(
-    auto outputs__ = torch::_transform_bias_rescale_qkv_out(*out0, *out1, *out2, *qkv, *qkv_bias, num_heads);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__transformer_encoder_layer_fwd(tensor *out__, tensor src, int64_t embed_dim, int64_t num_heads, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, int use_gelu, int norm_first, double eps, tensor norm_weight_1, tensor norm_bias_1, tensor norm_weight_2, tensor norm_bias_2, tensor ffn_weight_1, tensor ffn_bias_1, tensor ffn_weight_2, tensor ffn_bias_2, tensor mask, int64_t mask_type_v, int mask_type_null) {
-  PROTECT(
-    auto outputs__ = torch::_transformer_encoder_layer_fwd(*src, embed_dim, num_heads, *qkv_weight, *qkv_bias, *proj_weight, *proj_bias, (bool)use_gelu, (bool)norm_first, eps, *norm_weight_1, *norm_bias_1, *norm_weight_2, *norm_bias_2, *ffn_weight_1, *ffn_bias_1, *ffn_weight_2, *ffn_bias_2, (mask ? *mask : torch::Tensor()), mask_type_null ? c10::nullopt : c10::optional<int64_t>(mask_type_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__transformer_encoder_layer_fwd_out(tensor *out__, tensor out, tensor src, int64_t embed_dim, int64_t num_heads, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, int use_gelu, int norm_first, double eps, tensor norm_weight_1, tensor norm_bias_1, tensor norm_weight_2, tensor norm_bias_2, tensor ffn_weight_1, tensor ffn_bias_1, tensor ffn_weight_2, tensor ffn_bias_2, tensor mask, int64_t mask_type_v, int mask_type_null) {
-  PROTECT(
-    auto outputs__ = torch::_transformer_encoder_layer_fwd_out(*out, *src, embed_dim, num_heads, *qkv_weight, *qkv_bias, *proj_weight, *proj_bias, (bool)use_gelu, (bool)norm_first, eps, *norm_weight_1, *norm_bias_1, *norm_weight_2, *norm_bias_2, *ffn_weight_1, *ffn_bias_1, *ffn_weight_2, *ffn_bias_2, (mask ? *mask : torch::Tensor()), mask_type_null ? c10::nullopt : c10::optional<int64_t>(mask_type_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__trilinear(tensor *out__, tensor i1, tensor i2, tensor i3, int64_t *expand1_data, int expand1_len, int64_t *expand2_data, int expand2_len, int64_t *expand3_data, int expand3_len, int64_t *sumdim_data, int sumdim_len, int64_t unroll_dim) {
-  PROTECT(
-    auto outputs__ = torch::_trilinear(*i1, *i2, *i3, torch::IntArrayRef(expand1_data, expand1_len), torch::IntArrayRef(expand2_data, expand2_len), torch::IntArrayRef(expand3_data, expand3_len), torch::IntArrayRef(sumdim_data, sumdim_len), unroll_dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__trilinear_out(tensor *out__, tensor out, tensor i1, tensor i2, tensor i3, int64_t *expand1_data, int expand1_len, int64_t *expand2_data, int expand2_len, int64_t *expand3_data, int expand3_len, int64_t *sumdim_data, int sumdim_len, int64_t unroll_dim) {
-  PROTECT(
-    auto outputs__ = torch::_trilinear_out(*out, *i1, *i2, *i3, torch::IntArrayRef(expand1_data, expand1_len), torch::IntArrayRef(expand2_data, expand2_len), torch::IntArrayRef(expand3_data, expand3_len), torch::IntArrayRef(sumdim_data, sumdim_len), unroll_dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__triton_multi_head_attention(tensor *out__, tensor query, tensor key, tensor value, int64_t embed_dim, int64_t num_head, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, tensor mask) {
-  PROTECT(
-    auto outputs__ = torch::_triton_multi_head_attention(*query, *key, *value, embed_dim, num_head, *qkv_weight, *qkv_bias, *proj_weight, *proj_bias, (mask ? *mask : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__triton_multi_head_attention_out(tensor *out__, tensor out, tensor query, tensor key, tensor value, int64_t embed_dim, int64_t num_head, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, tensor mask) {
-  PROTECT(
-    auto outputs__ = torch::_triton_multi_head_attention_out(*out, *query, *key, *value, embed_dim, num_head, *qkv_weight, *qkv_bias, *proj_weight, *proj_bias, (mask ? *mask : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__triton_scaled_dot_attention(tensor *out__, tensor q, tensor k, tensor v, double dropout_p) {
-  PROTECT(
-    auto outputs__ = torch::_triton_scaled_dot_attention(*q, *k, *v, dropout_p);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__triton_scaled_dot_attention_out(tensor *out__, tensor out, tensor q, tensor k, tensor v, double dropout_p) {
-  PROTECT(
-    auto outputs__ = torch::_triton_scaled_dot_attention_out(*out, *q, *k, *v, dropout_p);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__unique(tensor *out__, tensor self, int sorted, int return_inverse) {
-  PROTECT(
-    auto outputs__ = torch::_unique(*self, (bool)sorted, (bool)return_inverse);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__unique2(tensor *out__, tensor self, int sorted, int return_inverse, int return_counts) {
-  PROTECT(
-    auto outputs__ = torch::_unique2(*self, (bool)sorted, (bool)return_inverse, (bool)return_counts);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__unique2_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor self, int sorted, int return_inverse, int return_counts) {
-  PROTECT(
-    auto outputs__ = torch::_unique2_out(*out0, *out1, *out2, *self, (bool)sorted, (bool)return_inverse, (bool)return_counts);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg__unique_out(tensor *out__, tensor out0, tensor out1, tensor self, int sorted, int return_inverse) {
-  PROTECT(
-    auto outputs__ = torch::_unique_out(*out0, *out1, *self, (bool)sorted, (bool)return_inverse);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__unpack_dual(tensor *out__, tensor dual, int64_t level) {
-  PROTECT(
-    auto outputs__ = torch::_unpack_dual(*dual, level);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__unsafe_index(tensor *out__, tensor self, tensor *indices_data, int indices_len) {
-  PROTECT(
-    auto outputs__ = torch::_unsafe_index(*self, of_carray_tensor_opt(indices_data, indices_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__unsafe_index_put(tensor *out__, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate) {
-  PROTECT(
-    auto outputs__ = torch::_unsafe_index_put(*self, of_carray_tensor_opt(indices_data, indices_len), *values, (bool)accumulate);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__unsafe_view(tensor *out__, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::_unsafe_view(*self, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__unsafe_view_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::_unsafe_view_out(*out, *self, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bicubic2d_aa(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bicubic2d_aa(*self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bicubic2d_aa_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bicubic2d_aa_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bicubic2d_aa_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bicubic2d_aa_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bicubic2d_aa_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bicubic2d_aa_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bicubic2d_aa_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bicubic2d_aa(*input, output_size_data == nullptr ? c10::nullopt : c10::optional<torch::IntArrayRef>(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef<double>(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bilinear2d_aa(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bilinear2d_aa(*self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bilinear2d_aa_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bilinear2d_aa_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bilinear2d_aa_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bilinear2d_aa_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bilinear2d_aa_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bilinear2d_aa_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_bilinear2d_aa_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_bilinear2d_aa(*input, output_size_data == nullptr ? c10::nullopt : c10::optional<torch::IntArrayRef>(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef<double>(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact1d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact1d(*self, torch::IntArrayRef(output_size_data, output_size_len), scales_null ? c10::nullopt : c10::optional<double>(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact1d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact1d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_null ? c10::nullopt : c10::optional<double>(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact1d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact1d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_null ? c10::nullopt : c10::optional<double>(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact1d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact1d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), scales_null ? c10::nullopt : c10::optional<double>(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact1d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact1d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional<torch::IntArrayRef>(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef<double>(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact2d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact2d(*self, torch::IntArrayRef(output_size_data, output_size_len), scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact2d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact2d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact2d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact2d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact2d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact2d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact2d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact2d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional<torch::IntArrayRef>(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef<double>(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact3d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact3d(*self, torch::IntArrayRef(output_size_data, output_size_len), scales_d_null ? c10::nullopt : c10::optional<double>(scales_d_v), scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact3d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact3d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_d_null ? c10::nullopt : c10::optional<double>(scales_d_v), scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact3d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact3d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_d_null ? c10::nullopt : c10::optional<double>(scales_d_v), scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact3d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact3d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), scales_d_null ? c10::nullopt : c10::optional<double>(scales_d_v), scales_h_null ? c10::nullopt : c10::optional<double>(scales_h_v), scales_w_null ? c10::nullopt : c10::optional<double>(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__upsample_nearest_exact3d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::_upsample_nearest_exact3d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional<torch::IntArrayRef>(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef<double>(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int atg__use_cudnn_ctc_loss(tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank) {
-  PROTECT(
-    return torch::_use_cudnn_ctc_loss(*log_probs, *targets, torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank);
-  )
-  return 0;
-}
-
-int atg__use_cudnn_ctc_loss_tensor(tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank) {
-  PROTECT(
-    return torch::_use_cudnn_ctc_loss(*log_probs, *targets, *input_lengths, *target_lengths, blank);
-  )
-  return 0;
-}
-
-int atg__use_cudnn_rnn_flatten_weight() {
-  PROTECT(
-    return torch::_use_cudnn_rnn_flatten_weight();
-  )
-  return 0;
-}
-
-void atg__validate_compressed_sparse_indices(int is_crow, tensor compressed_idx, tensor plain_idx, int64_t cdim, int64_t dim, int64_t nnz) {
-  PROTECT(
-    torch::_validate_compressed_sparse_indices((bool)is_crow, *compressed_idx, *plain_idx, cdim, dim, nnz);
-  )
-}
-
-void atg__validate_sparse_bsc_tensor_args(tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len) {
-  PROTECT(
-    torch::_validate_sparse_bsc_tensor_args(*ccol_indices, *row_indices, *values, torch::IntArrayRef(size_data, size_len));
-  )
-}
-
-void atg__validate_sparse_bsr_tensor_args(tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len) {
-  PROTECT(
-    torch::_validate_sparse_bsr_tensor_args(*crow_indices, *col_indices, *values, torch::IntArrayRef(size_data, size_len));
-  )
-}
-
-void atg__validate_sparse_csc_tensor_args(tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len) {
-  PROTECT(
-    torch::_validate_sparse_csc_tensor_args(*ccol_indices, *row_indices, *values, torch::IntArrayRef(size_data, size_len));
-  )
-}
-
-void atg__values(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->_values();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__values_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_values_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__values_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::_values_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int64_t atg__version(tensor self) {
-  PROTECT(
-    return self->_version();
-  )
-  return 0;
-}
-
-void atg__weight_norm(tensor *out__, tensor v, tensor g, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_weight_norm(*v, *g, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg__weight_norm_differentiable_backward(tensor *out__, tensor grad_w, tensor saved_v, tensor saved_g, tensor saved_norms, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_weight_norm_differentiable_backward(*grad_w, *saved_v, *saved_g, *saved_norms, dim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__weight_norm_interface(tensor *out__, tensor v, tensor g, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_weight_norm_interface(*v, *g, dim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__weight_norm_interface_backward(tensor *out__, tensor grad_w, tensor saved_v, tensor saved_g, tensor saved_norms, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_weight_norm_interface_backward(*grad_w, *saved_v, *saved_g, *saved_norms, dim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__weight_norm_interface_backward_out(tensor *out__, tensor out0, tensor out1, tensor grad_w, tensor saved_v, tensor saved_g, tensor saved_norms, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_weight_norm_interface_backward_out(*out0, *out1, *grad_w, *saved_v, *saved_g, *saved_norms, dim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg__weight_norm_interface_out(tensor *out__, tensor out0, tensor out1, tensor v, tensor g, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::_weight_norm_interface_out(*out0, *out1, *v, *g, dim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_abs(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::abs(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_abs_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::abs_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_abs_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::abs_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_absolute(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::absolute(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_absolute_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->absolute_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_absolute_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::absolute_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_acos(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::acos(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_acos_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::acos_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_acos_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::acos_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_acosh(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::acosh(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_acosh_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::acosh_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_acosh_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::acosh_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_avg_pool1d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_avg_pool1d(*self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_avg_pool2d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_avg_pool2d(*self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_avg_pool2d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_avg_pool2d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_avg_pool3d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_avg_pool3d(*self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_avg_pool3d_backward(tensor *out__, tensor grad_input, tensor grad_output, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_avg_pool3d_backward_out(*grad_input, *grad_output, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_avg_pool3d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_avg_pool3d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_max_pool1d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_max_pool1d(*self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_adaptive_max_pool2d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_max_pool2d(*self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_adaptive_max_pool2d_backward(tensor *out__, tensor grad_output, tensor self, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_max_pool2d_backward(*grad_output, *self, *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_max_pool2d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_max_pool2d_backward_out(*grad_input, *grad_output, *self, *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_max_pool2d_out(tensor *out__, tensor out, tensor indices, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_max_pool2d_out(*out, *indices, *self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_adaptive_max_pool3d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_max_pool3d(*self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_adaptive_max_pool3d_backward(tensor *out__, tensor grad_output, tensor self, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_max_pool3d_backward(*grad_output, *self, *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_max_pool3d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_max_pool3d_backward_out(*grad_input, *grad_output, *self, *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_adaptive_max_pool3d_out(tensor *out__, tensor out, tensor indices, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::adaptive_max_pool3d_out(*out, *indices, *self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_add(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::add(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_add_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->add_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_add_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::add_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_add_scalar(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::add(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_add_scalar_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->add_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_add_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::add_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_addbmm(tensor *out__, tensor self, tensor batch1, tensor batch2) {
-  PROTECT(
-    auto outputs__ = torch::addbmm(*self, *batch1, *batch2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_addbmm_(tensor *out__, tensor self, tensor batch1, tensor batch2) {
-  PROTECT(
-    auto outputs__ = self->addbmm_(*batch1, *batch2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_addbmm_out(tensor *out__, tensor out, tensor self, tensor batch1, tensor batch2) {
-  PROTECT(
-    auto outputs__ = torch::addbmm_out(*out, *self, *batch1, *batch2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_addcdiv(tensor *out__, tensor self, tensor tensor1, tensor tensor2) {
-  PROTECT(
-    auto outputs__ = torch::addcdiv(*self, *tensor1, *tensor2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_addcdiv_(tensor *out__, tensor self, tensor tensor1, tensor tensor2) {
-  PROTECT(
-    auto outputs__ = self->addcdiv_(*tensor1, *tensor2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_addcdiv_out(tensor *out__, tensor out, tensor self, tensor tensor1, tensor tensor2) {
-  PROTECT(
-    auto outputs__ = torch::addcdiv_out(*out, *self, *tensor1, *tensor2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_addcmul(tensor *out__, tensor self, tensor tensor1, tensor tensor2) {
-  PROTECT(
-    auto outputs__ = torch::addcmul(*self, *tensor1, *tensor2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
*tensor2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-

[... nearly two thousand further deleted lines of this generated wrapper file, the alphabetical run of atg_* stubs from atg_addcmul_ through atg_clip_tensor_, are elided here for readability. Every removed stub has the same generated shape: it forwards its arguments to a single torch:: call inside PROTECT(...), passing tensors as *t, optional tensors as (t ? *t : torch::Tensor()), nullable ints as x_null ? c10::nullopt : c10::optional(x_v), and array arguments as torch::IntArrayRef(data, len), then boxes each tensor result as a heap-allocated new torch::Tensor in out__ (via std::get<i> for tuple returns). Stubs returning tensor lists (atg_chunk, atg_broadcast_tensors, atg_align_tensors, the atg_atleast_*_sequence family) instead malloc and return a null-terminated torch::Tensor* array, and scalar predicates such as atg_allclose and atg_can_cast return a plain int. ...]

-void atg_clip_tensor_out(tensor *out__, tensor out, tensor self, tensor min, tensor max) {
-  PROTECT(
-    auto outputs__ = torch::clip_out(*out, *self, (min ? *min : torch::Tensor()), (max ?
*max : torch::Tensor())); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_clone(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::clone(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_clone_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::clone_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_coalesce(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = self->coalesce(); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_col2im(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) { - PROTECT( - auto outputs__ = torch::col2im(*self, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(dilation_data, dilation_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_col2im_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) { - PROTECT( - auto outputs__ = torch::col2im_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(dilation_data, dilation_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_col_indices(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = self->col_indices(); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_col_indices_copy(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::col_indices_copy(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_col_indices_copy_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::col_indices_copy_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_column_stack(tensor *out__, tensor *tensors_data, int tensors_len) { - PROTECT( - auto outputs__ = torch::column_stack(of_carray_tensor(tensors_data, tensors_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_column_stack_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len) { - PROTECT( - auto outputs__ = torch::column_stack_out(*out, of_carray_tensor(tensors_data, tensors_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_combinations(tensor *out__, tensor self, int64_t r, int with_replacement) { - PROTECT( - auto outputs__ = torch::combinations(*self, r, (bool)with_replacement); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_complex(tensor *out__, tensor real, tensor imag) { - PROTECT( - auto outputs__ = torch::complex(*real, *imag); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_complex_out(tensor *out__, tensor out, tensor real, tensor imag) { - PROTECT( - auto outputs__ = torch::complex_out(*out, *real, *imag); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_concat(tensor *out__, tensor *tensors_data, int tensors_len, int64_t dim) { - PROTECT( - auto outputs__ = 
torch::concat(of_carray_tensor(tensors_data, tensors_len), dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_concat_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len, int64_t dim) { - PROTECT( - auto outputs__ = torch::concat_out(*out, of_carray_tensor(tensors_data, tensors_len), dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_concatenate(tensor *out__, tensor *tensors_data, int tensors_len, int64_t dim) { - PROTECT( - auto outputs__ = torch::concatenate(of_carray_tensor(tensors_data, tensors_len), dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_concatenate_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len, int64_t dim) { - PROTECT( - auto outputs__ = torch::concatenate_out(*out, of_carray_tensor(tensors_data, tensors_len), dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conj(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::conj(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conj_physical(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::conj_physical(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conj_physical_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::conj_physical_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conj_physical_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::conj_physical_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_constant_pad_nd(tensor *out__, tensor self, int64_t *pad_data, int pad_len) { - PROTECT( - auto outputs__ = torch::constant_pad_nd(*self, torch::IntArrayRef(pad_data, pad_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_constant_pad_nd_out(tensor *out__, tensor out, tensor self, int64_t *pad_data, int pad_len) { - PROTECT( - auto outputs__ = torch::constant_pad_nd_out(*out, *self, torch::IntArrayRef(pad_data, pad_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_contiguous(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = self->contiguous(); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv1d(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::conv1d(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv1d_padding(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::conv1d(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), std::string(padding), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv2d(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::conv2d(*input, *weight, (bias ? 
*bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv2d_padding(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::conv2d(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), std::string(padding), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv3d(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::conv3d(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv3d_padding(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::conv3d(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), std::string(padding), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv_depthwise3d(tensor *out__, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { - PROTECT( - auto outputs__ = torch::conv_depthwise3d(*self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv_depthwise3d_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) { - PROTECT( - auto outputs__ = torch::conv_depthwise3d_out(*out, *self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? 
*bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv_tbc(tensor *out__, tensor self, tensor weight, tensor bias, int64_t pad) { - PROTECT( - auto outputs__ = torch::conv_tbc(*self, *weight, *bias, pad); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv_tbc_backward(tensor *out__, tensor self, tensor input, tensor weight, tensor bias, int64_t pad) { - PROTECT( - auto outputs__ = torch::conv_tbc_backward(*self, *input, *weight, *bias, pad); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg_conv_tbc_out(tensor *out__, tensor out, tensor self, tensor weight, tensor bias, int64_t pad) { - PROTECT( - auto outputs__ = torch::conv_tbc_out(*out, *self, *weight, *bias, pad); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv_transpose1d(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len) { - PROTECT( - auto outputs__ = torch::conv_transpose1d(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), groups, torch::IntArrayRef(dilation_data, dilation_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv_transpose2d(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len) { - PROTECT( - auto outputs__ = torch::conv_transpose2d(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), groups, torch::IntArrayRef(dilation_data, dilation_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_conv_transpose3d(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len) { - PROTECT( - auto outputs__ = torch::conv_transpose3d(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), groups, torch::IntArrayRef(dilation_data, dilation_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_convolution(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::convolution(*input, *weight, (bias ? 
*bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_convolution_out(tensor *out__, tensor out, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::convolution_out(*out, *input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_convolution_overrideable(tensor *out__, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::convolution_overrideable(*input, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_convolution_overrideable_out(tensor *out__, tensor out, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::convolution_overrideable_out(*out, *input, *weight, (bias ? 
*bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)transposed, torch::IntArrayRef(output_padding_data, output_padding_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copy(tensor *out__, tensor self, tensor src, int non_blocking) { - PROTECT( - auto outputs__ = torch::copy(*self, *src, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copy_out(tensor *out__, tensor out, tensor self, tensor src, int non_blocking) { - PROTECT( - auto outputs__ = torch::copy_out(*out, *self, *src, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copy_sparse_to_sparse(tensor *out__, tensor self, tensor src, int non_blocking) { - PROTECT( - auto outputs__ = torch::copy_sparse_to_sparse(*self, *src, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copy_sparse_to_sparse_(tensor *out__, tensor self, tensor src, int non_blocking) { - PROTECT( - auto outputs__ = torch::copy_sparse_to_sparse_(*self, *src, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copy_sparse_to_sparse_out(tensor *out__, tensor out, tensor self, tensor src, int non_blocking) { - PROTECT( - auto outputs__ = torch::copy_sparse_to_sparse_out(*out, *self, *src, (bool)non_blocking); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copysign(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::copysign(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copysign_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->copysign_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copysign_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::copysign_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copysign_scalar(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::copysign(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copysign_scalar_(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->copysign_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_copysign_scalar_out(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::copysign_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_corrcoef(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::corrcoef(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cos(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::cos(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cos_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::cos_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cos_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::cos_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cosh(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::cosh(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cosh_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::cosh_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cosh_out(tensor *out__, 
tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::cosh_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cosine_embedding_loss(tensor *out__, tensor input1, tensor input2, tensor target, double margin, int64_t reduction) { - PROTECT( - auto outputs__ = torch::cosine_embedding_loss(*input1, *input2, *target, margin, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cosine_similarity(tensor *out__, tensor x1, tensor x2, int64_t dim, double eps) { - PROTECT( - auto outputs__ = torch::cosine_similarity(*x1, *x2, dim, eps); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_count_nonzero(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len) { - PROTECT( - auto outputs__ = torch::count_nonzero_out(*out, *self, torch::IntArrayRef(dim_data, dim_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_count_nonzero_out(tensor *out__, tensor out, tensor self, int64_t dim_v, int dim_null) { - PROTECT( - auto outputs__ = torch::count_nonzero_out(*out, *self, dim_null ? c10::nullopt : c10::optional(dim_v)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cov(tensor *out__, tensor self, int64_t correction, tensor fweights, tensor aweights) { - PROTECT( - auto outputs__ = torch::cov(*self, correction, (fweights ? *fweights : torch::Tensor()), (aweights ? *aweights : torch::Tensor())); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cross(tensor *out__, tensor self, tensor other, int64_t dim_v, int dim_null) { - PROTECT( - auto outputs__ = torch::cross(*self, *other, dim_null ? c10::nullopt : c10::optional(dim_v)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cross_entropy_loss(tensor *out__, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, double label_smoothing) { - PROTECT( - auto outputs__ = torch::cross_entropy_loss(*self, *target, (weight ? *weight : torch::Tensor()), reduction, ignore_index, label_smoothing); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cross_out(tensor *out__, tensor out, tensor self, tensor other, int64_t dim_v, int dim_null) { - PROTECT( - auto outputs__ = torch::cross_out(*out, *self, *other, dim_null ? 
c10::nullopt : c10::optional(dim_v)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_crow_indices(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = self->crow_indices(); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_crow_indices_copy(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::crow_indices_copy(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_crow_indices_copy_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::crow_indices_copy_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ctc_loss(tensor *out__, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int64_t reduction, int zero_infinity) { - PROTECT( - auto outputs__ = torch::ctc_loss(*log_probs, *targets, torch::IntArrayRef(input_lengths_data, input_lengths_len), torch::IntArrayRef(target_lengths_data, target_lengths_len), blank, reduction, (bool)zero_infinity); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ctc_loss_tensor(tensor *out__, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank, int64_t reduction, int zero_infinity) { - PROTECT( - auto outputs__ = torch::ctc_loss(*log_probs, *targets, *input_lengths, *target_lengths, blank, reduction, (bool)zero_infinity); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_affine_grid_generator(tensor *out__, tensor theta, int64_t n, int64_t C, int64_t H, int64_t W) { - PROTECT( - auto outputs__ = torch::cudnn_affine_grid_generator(*theta, n, C, H, W); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_affine_grid_generator_backward(tensor *out__, tensor grad, int64_t n, int64_t C, int64_t H, int64_t W) { - PROTECT( - auto outputs__ = torch::cudnn_affine_grid_generator_backward(*grad, n, C, H, W); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_affine_grid_generator_backward_out(tensor *out__, tensor out, tensor grad, int64_t n, int64_t C, int64_t H, int64_t W) { - PROTECT( - auto outputs__ = torch::cudnn_affine_grid_generator_backward_out(*out, *grad, n, C, H, W); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_affine_grid_generator_out(tensor *out__, tensor out, tensor theta, int64_t n, int64_t C, int64_t H, int64_t W) { - PROTECT( - auto outputs__ = torch::cudnn_affine_grid_generator_out(*out, *theta, n, C, H, W); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_batch_norm(tensor *out__, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double exponential_average_factor, double epsilon) { - PROTECT( - auto outputs__ = torch::cudnn_batch_norm(*input, *weight, (bias ? *bias : torch::Tensor()), (running_mean ? *running_mean : torch::Tensor()), (running_var ? 
*running_var : torch::Tensor()), (bool)training, exponential_average_factor, epsilon); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - ) -} - -void atg_cudnn_batch_norm_backward(tensor *out__, tensor input, tensor grad_output, tensor weight, tensor running_mean, tensor running_var, tensor save_mean, tensor save_var, double epsilon, tensor reserveSpace) { - PROTECT( - auto outputs__ = torch::cudnn_batch_norm_backward(*input, *grad_output, *weight, (running_mean ? *running_mean : torch::Tensor()), (running_var ? *running_var : torch::Tensor()), (save_mean ? *save_mean : torch::Tensor()), (save_var ? *save_var : torch::Tensor()), epsilon, *reserveSpace); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg_cudnn_batch_norm_backward_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor input, tensor grad_output, tensor weight, tensor running_mean, tensor running_var, tensor save_mean, tensor save_var, double epsilon, tensor reserveSpace) { - PROTECT( - auto outputs__ = torch::cudnn_batch_norm_backward_out(*out0, *out1, *out2, *input, *grad_output, *weight, (running_mean ? *running_mean : torch::Tensor()), (running_var ? *running_var : torch::Tensor()), (save_mean ? *save_mean : torch::Tensor()), (save_var ? *save_var : torch::Tensor()), epsilon, *reserveSpace); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg_cudnn_batch_norm_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor out3, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double exponential_average_factor, double epsilon) { - PROTECT( - auto outputs__ = torch::cudnn_batch_norm_out(*out0, *out1, *out2, *out3, *input, *weight, (bias ? *bias : torch::Tensor()), (running_mean ? *running_mean : torch::Tensor()), (running_var ? *running_var : torch::Tensor()), (bool)training, exponential_average_factor, epsilon); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - ) -} - -void atg_cudnn_convolution(tensor *out__, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32) { - PROTECT( - auto outputs__ = torch::cudnn_convolution(*self, *weight, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic, (bool)allow_tf32); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_convolution_add_relu(tensor *out__, tensor self, tensor weight, tensor z, scalar alpha, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::cudnn_convolution_add_relu(*self, *weight, *z, *alpha, (bias ? 
*bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_convolution_add_relu_out(tensor *out__, tensor out, tensor self, tensor weight, tensor z, scalar alpha, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::cudnn_convolution_add_relu_out(*out, *self, *weight, *z, *alpha, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_convolution_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32) { - PROTECT( - auto outputs__ = torch::cudnn_convolution_out(*out, *self, *weight, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic, (bool)allow_tf32); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_convolution_relu(tensor *out__, tensor self, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::cudnn_convolution_relu(*self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_convolution_relu_out(tensor *out__, tensor out, tensor self, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::cudnn_convolution_relu_out(*out, *self, *weight, (bias ? 
*bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_convolution_transpose(tensor *out__, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32) { - PROTECT( - auto outputs__ = torch::cudnn_convolution_transpose(*self, *weight, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic, (bool)allow_tf32); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_convolution_transpose_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32) { - PROTECT( - auto outputs__ = torch::cudnn_convolution_transpose_out(*out, *self, *weight, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic, (bool)allow_tf32); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_grid_sampler(tensor *out__, tensor self, tensor grid) { - PROTECT( - auto outputs__ = torch::cudnn_grid_sampler(*self, *grid); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cudnn_grid_sampler_backward(tensor *out__, tensor self, tensor grid, tensor grad_output) { - PROTECT( - auto outputs__ = torch::cudnn_grid_sampler_backward(*self, *grid, *grad_output); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_cudnn_grid_sampler_backward_out(tensor *out__, tensor out0, tensor out1, tensor self, tensor grid, tensor grad_output) { - PROTECT( - auto outputs__ = torch::cudnn_grid_sampler_backward_out(*out0, *out1, *self, *grid, *grad_output); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_cudnn_grid_sampler_out(tensor *out__, tensor out, tensor self, tensor grid) { - PROTECT( - auto outputs__ = torch::cudnn_grid_sampler_out(*out, *self, *grid); - out__[0] = new torch::Tensor(outputs__); - ) -} - -int atg_cudnn_is_acceptable(tensor self) { - PROTECT( - return torch::cudnn_is_acceptable(*self); - ) - return 0; -} - -void atg_cummax(tensor *out__, tensor self, int64_t dim) { - PROTECT( - auto outputs__ = torch::cummax(*self, dim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_cummax_out(tensor *out__, tensor values, tensor indices, tensor self, int64_t dim) { - PROTECT( - auto outputs__ = torch::cummax_out(*values, *indices, *self, dim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_cummaxmin_backward(tensor *out__, tensor grad, tensor input, tensor indices, int64_t dim) { - 
PROTECT( - auto outputs__ = torch::cummaxmin_backward(*grad, *input, *indices, dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cummin(tensor *out__, tensor self, int64_t dim) { - PROTECT( - auto outputs__ = torch::cummin(*self, dim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_cummin_out(tensor *out__, tensor values, tensor indices, tensor self, int64_t dim) { - PROTECT( - auto outputs__ = torch::cummin_out(*values, *indices, *self, dim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_cumprod(tensor *out__, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = torch::cumprod(*self, dim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cumprod_(tensor *out__, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = self->cumprod_(dim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cumprod_backward(tensor *out__, tensor grad, tensor input, int64_t dim, tensor output) { - PROTECT( - auto outputs__ = torch::cumprod_backward(*grad, *input, dim, *output); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cumprod_out(tensor *out__, tensor out, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = torch::cumprod_out(*out, *self, dim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cumsum(tensor *out__, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = torch::cumsum(*self, dim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cumsum_(tensor *out__, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = self->cumsum_(dim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cumsum_out(tensor *out__, tensor out, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = torch::cumsum_out(*out, *self, dim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cumulative_trapezoid(tensor *out__, tensor y, int64_t dim) { - PROTECT( - auto outputs__ = torch::cumulative_trapezoid(*y, dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_cumulative_trapezoid_x(tensor *out__, tensor y, tensor x, int64_t dim) { - PROTECT( - auto outputs__ = torch::cumulative_trapezoid(*y, *x, dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_data(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = self->data(); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_deg2rad(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::deg2rad(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_deg2rad_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::deg2rad_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_deg2rad_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::deg2rad_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -int64_t atg_dense_dim(tensor self) { - PROTECT( - return self->dense_dim(); - ) - return 0; -} - -void atg_dequantize(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::dequantize(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_dequantize_self_out(tensor *out__, tensor out, tensor self) { - PROTECT( 
- auto outputs__ = torch::dequantize_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -tensor *atg_dequantize_tensors(tensor *tensors_data, int tensors_len) { - PROTECT( - auto outputs__ = torch::dequantize(of_carray_tensor(tensors_data, tensors_len)); - int sz = outputs__.size(); - torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*)); - for (int i = 0; i < sz; ++i) - out__[i] = new torch::Tensor(outputs__[i]); - out__[sz] = nullptr; - return out__; - ) -} - -void atg_dequantize_tensors_out(tensor *out_data, int out_len, tensor *tensors_data, int tensors_len) { - PROTECT( - torch::dequantize_out(of_carray_tensor(out_data, out_len), of_carray_tensor(tensors_data, tensors_len)); - ) -} - -void atg_det(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::det(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_detach(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::detach(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_detach_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::detach_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_detach_copy(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::detach_copy(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_detach_copy_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::detach_copy_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diag(tensor *out__, tensor self, int64_t diagonal) { - PROTECT( - auto outputs__ = torch::diag(*self, diagonal); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diag_embed(tensor *out__, tensor self, int64_t offset, int64_t dim1, int64_t dim2) { - PROTECT( - auto outputs__ = torch::diag_embed(*self, offset, dim1, dim2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diag_embed_out(tensor *out__, tensor out, tensor self, int64_t offset, int64_t dim1, int64_t dim2) { - PROTECT( - auto outputs__ = torch::diag_embed_out(*out, *self, offset, dim1, dim2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diag_out(tensor *out__, tensor out, tensor self, int64_t diagonal) { - PROTECT( - auto outputs__ = torch::diag_out(*out, *self, diagonal); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diagflat(tensor *out__, tensor self, int64_t offset) { - PROTECT( - auto outputs__ = torch::diagflat(*self, offset); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diagonal(tensor *out__, tensor self, int64_t offset, int64_t dim1, int64_t dim2) { - PROTECT( - auto outputs__ = torch::diagonal(*self, offset, dim1, dim2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diagonal_backward(tensor *out__, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t offset, int64_t dim1, int64_t dim2) { - PROTECT( - auto outputs__ = torch::diagonal_backward(*grad_output, torch::IntArrayRef(input_sizes_data, input_sizes_len), offset, dim1, dim2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diagonal_backward_out(tensor *out__, tensor out, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t offset, int64_t dim1, int64_t dim2) { - PROTECT( - auto outputs__ = torch::diagonal_backward_out(*out, *grad_output, torch::IntArrayRef(input_sizes_data, input_sizes_len), offset, dim1, dim2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diagonal_copy(tensor 
*out__, tensor self, int64_t offset, int64_t dim1, int64_t dim2) { - PROTECT( - auto outputs__ = torch::diagonal_copy(*self, offset, dim1, dim2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diagonal_copy_out(tensor *out__, tensor out, tensor self, int64_t offset, int64_t dim1, int64_t dim2) { - PROTECT( - auto outputs__ = torch::diagonal_copy_out(*out, *self, offset, dim1, dim2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diagonal_scatter(tensor *out__, tensor self, tensor src, int64_t offset, int64_t dim1, int64_t dim2) { - PROTECT( - auto outputs__ = torch::diagonal_scatter(*self, *src, offset, dim1, dim2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diagonal_scatter_out(tensor *out__, tensor out, tensor self, tensor src, int64_t offset, int64_t dim1, int64_t dim2) { - PROTECT( - auto outputs__ = torch::diagonal_scatter_out(*out, *self, *src, offset, dim1, dim2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diff(tensor *out__, tensor self, int64_t n, int64_t dim, tensor prepend, tensor append) { - PROTECT( - auto outputs__ = torch::diff(*self, n, dim, (prepend ? *prepend : torch::Tensor()), (append ? *append : torch::Tensor())); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_diff_out(tensor *out__, tensor out, tensor self, int64_t n, int64_t dim, tensor prepend, tensor append) { - PROTECT( - auto outputs__ = torch::diff_out(*out, *self, n, dim, (prepend ? *prepend : torch::Tensor()), (append ? *append : torch::Tensor())); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_digamma(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::digamma(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_digamma_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = self->digamma_(); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_digamma_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::digamma_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_dist(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::dist(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_dist_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::dist_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::div(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->div_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::div_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_out_mode(tensor *out__, tensor out, tensor self, tensor other, char * rounding_mode) { - PROTECT( - auto outputs__ = torch::div_out(*out, *self, *other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_scalar(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::div(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_scalar_(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->div_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void 
atg_div_scalar_mode(tensor *out__, tensor self, scalar other, char * rounding_mode) { - PROTECT( - auto outputs__ = torch::div(*self, *other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_scalar_mode_(tensor *out__, tensor self, scalar other, char * rounding_mode) { - PROTECT( - auto outputs__ = self->div_(*other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_scalar_mode_out(tensor *out__, tensor out, tensor self, scalar other, char * rounding_mode) { - PROTECT( - auto outputs__ = torch::div_out(*out, *self, *other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_scalar_out(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::div_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_tensor_mode(tensor *out__, tensor self, tensor other, char * rounding_mode) { - PROTECT( - auto outputs__ = torch::div(*self, *other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_div_tensor_mode_(tensor *out__, tensor self, tensor other, char * rounding_mode) { - PROTECT( - auto outputs__ = self->div_(*other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::divide(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->divide_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::divide_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide_out_mode(tensor *out__, tensor out, tensor self, tensor other, char * rounding_mode) { - PROTECT( - auto outputs__ = torch::divide_out(*out, *self, *other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide_scalar(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::divide(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide_scalar_(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->divide_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide_scalar_mode(tensor *out__, tensor self, scalar other, char * rounding_mode) { - PROTECT( - auto outputs__ = torch::divide(*self, *other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide_scalar_mode_(tensor *out__, tensor self, scalar other, char * rounding_mode) { - PROTECT( - auto outputs__ = self->divide_(*other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide_tensor_mode(tensor *out__, tensor self, tensor other, char * rounding_mode) { - PROTECT( - auto outputs__ = torch::divide(*self, *other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_divide_tensor_mode_(tensor *out__, tensor self, tensor other, char * rounding_mode) { - PROTECT( - auto outputs__ = self->divide_(*other, std::string(rounding_mode)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_dot(tensor *out__, tensor self, tensor tensor) { - PROTECT( - auto outputs__ = torch::dot(*self, *tensor); - 
out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_dot_out(tensor *out__, tensor out, tensor self, tensor tensor) { - PROTECT( - auto outputs__ = torch::dot_out(*out, *self, *tensor); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_dropout(tensor *out__, tensor input, double p, int train) { - PROTECT( - auto outputs__ = torch::dropout(*input, p, (bool)train); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_dropout_(tensor *out__, tensor self, double p, int train) { - PROTECT( - auto outputs__ = torch::dropout_(*self, p, (bool)train); - out__[0] = new torch::Tensor(outputs__); - ) -} - -tensor *atg_dsplit(tensor self, int64_t sections) { - PROTECT( - auto outputs__ = torch::dsplit(*self, sections); - int sz = outputs__.size(); - torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*)); - for (int i = 0; i < sz; ++i) - out__[i] = new torch::Tensor(outputs__[i]); - out__[sz] = nullptr; - return out__; - ) -} - -tensor *atg_dsplit_array(tensor self, int64_t *indices_data, int indices_len) { - PROTECT( - auto outputs__ = torch::dsplit(*self, torch::IntArrayRef(indices_data, indices_len)); - int sz = outputs__.size(); - torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*)); - for (int i = 0; i < sz; ++i) - out__[i] = new torch::Tensor(outputs__[i]); - out__[sz] = nullptr; - return out__; - ) -} - -void atg_dstack(tensor *out__, tensor *tensors_data, int tensors_len) { - PROTECT( - auto outputs__ = torch::dstack(of_carray_tensor(tensors_data, tensors_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_dstack_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len) { - PROTECT( - auto outputs__ = torch::dstack_out(*out, of_carray_tensor(tensors_data, tensors_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_einsum(tensor *out__, char * equation, tensor *tensors_data, int tensors_len, int64_t *path_data, int path_len) { - PROTECT( - auto outputs__ = torch::einsum(std::string(equation), of_carray_tensor(tensors_data, tensors_len), path_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(path_data, path_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_elu(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::elu(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_elu_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::elu_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_elu_backward(tensor *out__, tensor grad_output, scalar alpha, scalar scale, scalar input_scale, int is_result, tensor self_or_result) {
-  PROTECT(
-    auto outputs__ = torch::elu_backward(*grad_output, *alpha, *scale, *input_scale, (bool)is_result, *self_or_result);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_elu_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, scalar alpha, scalar scale, scalar input_scale, int is_result, tensor self_or_result) {
-  PROTECT(
-    auto outputs__ = torch::elu_backward_out(*grad_input, *grad_output, *alpha, *scale, *input_scale, (bool)is_result, *self_or_result);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_elu_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::elu_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_embedding(tensor *out__, tensor weight, tensor indices, int64_t padding_idx, int scale_grad_by_freq, int sparse) {
-  PROTECT(
-    auto outputs__ = torch::embedding(*weight, *indices, padding_idx, (bool)scale_grad_by_freq, (bool)sparse);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_embedding_backward(tensor *out__, tensor grad, tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq, int sparse) {
-  PROTECT(
-    auto outputs__ = torch::embedding_backward(*grad, *indices, num_weights, padding_idx, (bool)scale_grad_by_freq, (bool)sparse);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_embedding_bag(tensor *out__, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset) {
-  PROTECT(
-    auto outputs__ = torch::embedding_bag(*weight, *indices, *offsets, (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? *per_sample_weights : torch::Tensor()), (bool)include_last_offset);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-  )
-}
-
-void atg_embedding_bag_padding_idx(tensor *out__, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx_v, int padding_idx_null) {
-  PROTECT(
-    auto outputs__ = torch::embedding_bag(*weight, *indices, *offsets, (bool)scale_grad_by_freq, mode, (bool)sparse, (per_sample_weights ? *per_sample_weights : torch::Tensor()), (bool)include_last_offset, padding_idx_null ? c10::nullopt : c10::optional(padding_idx_v));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-  )
-}
-
-void atg_embedding_dense_backward(tensor *out__, tensor grad_output, tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq) {
-  PROTECT(
-    auto outputs__ = torch::embedding_dense_backward(*grad_output, *indices, num_weights, padding_idx, (bool)scale_grad_by_freq);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_embedding_dense_backward_out(tensor *out__, tensor out, tensor grad_output, tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq) {
-  PROTECT(
-    auto outputs__ = torch::embedding_dense_backward_out(*out, *grad_output, *indices, num_weights, padding_idx, (bool)scale_grad_by_freq);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_embedding_out(tensor *out__, tensor out, tensor weight, tensor indices, int64_t padding_idx, int scale_grad_by_freq, int sparse) {
-  PROTECT(
-    auto outputs__ = torch::embedding_out(*out, *weight, *indices, padding_idx, (bool)scale_grad_by_freq, (bool)sparse);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_embedding_renorm(tensor *out__, tensor self, tensor indices, double max_norm, double norm_type) {
-  PROTECT(
-    auto outputs__ = torch::embedding_renorm(*self, *indices, max_norm, norm_type);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_embedding_renorm_(tensor *out__, tensor self, tensor indices, double max_norm, double norm_type) {
-  PROTECT(
-    auto outputs__ = torch::embedding_renorm_(*self, *indices, max_norm, norm_type);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_embedding_renorm_out(tensor *out__, tensor out, tensor self, tensor indices, double max_norm, double norm_type) {
-  PROTECT(
-    auto outputs__ = torch::embedding_renorm_out(*out, *self, *indices, max_norm, norm_type);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_embedding_sparse_backward(tensor *out__, tensor grad, tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq) {
-  PROTECT(
-    auto outputs__ = torch::embedding_sparse_backward(*grad, *indices, num_weights, padding_idx, (bool)scale_grad_by_freq);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty(tensor *out__, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::empty(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty_like(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::empty_like(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty_like_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::empty_like_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty_out(tensor *out__, tensor out, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::empty_out(*out, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty_permuted(tensor *out__, int64_t *size_data, int size_len, int64_t *physical_layout_data, int physical_layout_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::empty_permuted(torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(physical_layout_data, physical_layout_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty_permuted_out(tensor *out__, tensor out, int64_t *size_data, int size_len, int64_t *physical_layout_data, int physical_layout_len) {
-  PROTECT(
-    auto outputs__ = torch::empty_permuted_out(*out, torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(physical_layout_data, physical_layout_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty_quantized(tensor *out__, int64_t *size_data, int size_len, tensor qtensor, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::empty_quantized(torch::IntArrayRef(size_data, size_len), *qtensor, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty_quantized_out(tensor *out__, tensor out, int64_t *size_data, int size_len, tensor qtensor) {
-  PROTECT(
-    auto outputs__ = torch::empty_quantized_out(*out, torch::IntArrayRef(size_data, size_len), *qtensor);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty_strided(tensor *out__, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::empty_strided(torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_empty_strided_out(tensor *out__, tensor out, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) {
-  PROTECT(
-    auto outputs__ = torch::empty_strided_out(*out, torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eq(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::eq(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eq_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->eq_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eq_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::eq_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eq_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::eq(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eq_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->eq_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eq_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::eq_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int atg_equal(tensor self, tensor other) {
-  PROTECT(
-    return torch::equal(*self, *other);
-  )
-  return 0;
-}
-
-void atg_erf(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::erf(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_erf_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::erf_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_erf_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::erf_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_erfc(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::erfc(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_erfc_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::erfc_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_erfc_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::erfc_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_erfinv(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::erfinv(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_erfinv_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->erfinv_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_erfinv_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::erfinv_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_exp(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::exp(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_exp2(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::exp2(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_exp2_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::exp2_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_exp2_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::exp2_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_exp_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::exp_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_exp_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::exp_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_expand(tensor *out__, tensor self, int64_t *size_data, int size_len, int implicit) {
-  PROTECT(
-    auto outputs__ = self->expand(torch::IntArrayRef(size_data, size_len), (bool)implicit);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_expand_as(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->expand_as(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_expand_copy(tensor *out__, tensor self, int64_t *size_data, int size_len, int implicit) {
-  PROTECT(
-    auto outputs__ = torch::expand_copy(*self, torch::IntArrayRef(size_data, size_len), (bool)implicit);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_expand_copy_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len, int implicit) {
-  PROTECT(
-    auto outputs__ = torch::expand_copy_out(*out, *self, torch::IntArrayRef(size_data, size_len), (bool)implicit);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_expm1(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::expm1(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_expm1_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::expm1_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_expm1_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::expm1_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_exponential(tensor *out__, tensor self, double lambd) {
-  PROTECT(
-    auto outputs__ = torch::exponential(*self, lambd);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_exponential_(tensor *out__, tensor self, double lambd) {
-  PROTECT(
-    auto outputs__ = self->exponential_(lambd);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_exponential_out(tensor *out__, tensor out, tensor self, double lambd) {
-  PROTECT(
-    auto outputs__ = torch::exponential_out(*out, *self, lambd);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eye(tensor *out__, int64_t n, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::eye(n, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eye_m(tensor *out__, int64_t n, int64_t m, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::eye(n, m, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eye_m_out(tensor *out__, tensor out, int64_t n, int64_t m) {
-  PROTECT(
-    auto outputs__ = torch::eye_out(*out, n, m);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_eye_out(tensor *out__, tensor out, int64_t n) {
-  PROTECT(
-    auto outputs__ = torch::eye_out(*out, n);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fake_quantize_per_channel_affine(tensor *out__, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max) {
-  PROTECT(
-    auto outputs__ = torch::fake_quantize_per_channel_affine(*self, *scale, *zero_point, axis, quant_min, quant_max);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fake_quantize_per_channel_affine_cachemask(tensor *out__, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max) {
-  PROTECT(
-    auto outputs__ = torch::fake_quantize_per_channel_affine_cachemask(*self, *scale, *zero_point, axis, quant_min, quant_max);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_fake_quantize_per_channel_affine_cachemask_backward(tensor *out__, tensor grad, tensor mask) {
-  PROTECT(
-    auto outputs__ = torch::fake_quantize_per_channel_affine_cachemask_backward(*grad, *mask);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fake_quantize_per_channel_affine_cachemask_out(tensor *out__, tensor out0, tensor out1, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max) {
-  PROTECT(
-    auto outputs__ = torch::fake_quantize_per_channel_affine_cachemask_out(*out0, *out1, *self, *scale, *zero_point, axis, quant_min, quant_max);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_fake_quantize_per_tensor_affine(tensor *out__, tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max) {
-  PROTECT(
-    auto outputs__ = torch::fake_quantize_per_tensor_affine(*self, scale, zero_point, quant_min, quant_max);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fake_quantize_per_tensor_affine_cachemask(tensor *out__, tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max) {
-  PROTECT(
-    auto outputs__ = torch::fake_quantize_per_tensor_affine_cachemask(*self, scale, zero_point, quant_min, quant_max);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_fake_quantize_per_tensor_affine_cachemask_backward(tensor *out__, tensor grad, tensor mask) {
-  PROTECT(
-    auto outputs__ = torch::fake_quantize_per_tensor_affine_cachemask_backward(*grad, *mask);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fake_quantize_per_tensor_affine_cachemask_out(tensor *out__, tensor out0, tensor out1, tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max) {
-  PROTECT(
-    auto outputs__ = torch::fake_quantize_per_tensor_affine_cachemask_out(*out0, *out1, *self, scale, zero_point, quant_min, quant_max);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_fake_quantize_per_tensor_affine_tensor_qparams(tensor *out__, tensor self, tensor scale, tensor zero_point, int64_t quant_min, int64_t quant_max) {
-  PROTECT(
-    auto outputs__ = torch::fake_quantize_per_tensor_affine(*self, *scale, *zero_point, quant_min, quant_max);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fbgemm_linear_fp16_weight(tensor *out__, tensor input, tensor packed_weight, tensor bias) {
-  PROTECT(
-    auto outputs__ = torch::fbgemm_linear_fp16_weight(*input, *packed_weight, *bias);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fbgemm_linear_fp16_weight_fp32_activation(tensor *out__, tensor input, tensor packed_weight, tensor bias) {
-  PROTECT(
-    auto outputs__ = torch::fbgemm_linear_fp16_weight_fp32_activation(*input, *packed_weight, *bias);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fbgemm_linear_int8_weight(tensor *out__, tensor input, tensor weight, tensor packed, tensor col_offsets, scalar weight_scale, scalar weight_zero_point, tensor bias) {
-  PROTECT(
-    auto outputs__ = torch::fbgemm_linear_int8_weight(*input, *weight, *packed, *col_offsets, *weight_scale, *weight_zero_point, *bias);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fbgemm_linear_int8_weight_fp32_activation(tensor *out__, tensor input, tensor weight, tensor packed, tensor col_offsets, scalar weight_scale, scalar weight_zero_point, tensor bias) {
-  PROTECT(
-    auto outputs__ = torch::fbgemm_linear_int8_weight_fp32_activation(*input, *weight, *packed, *col_offsets, *weight_scale, *weight_zero_point, *bias);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fbgemm_pack_gemm_matrix_fp16(tensor *out__, tensor input) {
-  PROTECT(
-    auto outputs__ = torch::fbgemm_pack_gemm_matrix_fp16(*input);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fbgemm_pack_quantized_matrix(tensor *out__, tensor input) {
-  PROTECT(
-    auto outputs__ = torch::fbgemm_pack_quantized_matrix(*input);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fbgemm_pack_quantized_matrix_kn(tensor *out__, tensor input, int64_t K, int64_t n) {
-  PROTECT(
-    auto outputs__ = torch::fbgemm_pack_quantized_matrix(*input, K, n);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_feature_alpha_dropout(tensor *out__, tensor input, double p, int train) {
-  PROTECT(
-    auto outputs__ = torch::feature_alpha_dropout(*input, p, (bool)train);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_feature_alpha_dropout_(tensor *out__, tensor self, double p, int train) {
-  PROTECT(
-    auto outputs__ = torch::feature_alpha_dropout_(*self, p, (bool)train);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_feature_dropout(tensor *out__, tensor input, double p, int train) {
-  PROTECT(
-    auto outputs__ = torch::feature_dropout(*input, p, (bool)train);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_feature_dropout_(tensor *out__, tensor self, double p, int train) {
-  PROTECT(
-    auto outputs__ = torch::feature_dropout_(*self, p, (bool)train);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_fft(tensor *out__, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_fft(*self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_fft2(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_fft2(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_fft2_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_fft2_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_fft_out(tensor *out__, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_fft_out(*out, *self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_fftfreq(tensor *out__, int64_t n, double d, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::fft_fftfreq(n, d, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_fftfreq_out(tensor *out__, tensor out, int64_t n, double d) {
-  PROTECT(
-    auto outputs__ = torch::fft_fftfreq_out(*out, n, d);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_fftn(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_fftn(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_fftn_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_fftn_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_fftshift(tensor *out__, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = torch::fft_fftshift(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_hfft(tensor *out__, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_hfft(*self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_hfft2(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_hfft2(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_hfft2_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_hfft2_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_hfft_out(tensor *out__, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_hfft_out(*out, *self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_hfftn(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_hfftn(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_hfftn_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_hfftn_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ifft(tensor *out__, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ifft(*self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ifft2(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ifft2(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ifft2_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ifft2_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ifft_out(tensor *out__, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ifft_out(*out, *self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ifftn(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ifftn(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ifftn_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ifftn_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ifftshift(tensor *out__, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = torch::fft_ifftshift(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ihfft(tensor *out__, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ihfft(*self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ihfft2(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ihfft2(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ihfft2_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ihfft2_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ihfft_out(tensor *out__, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ihfft_out(*out, *self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ihfftn(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ihfftn(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_ihfftn_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_ihfftn_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_irfft(tensor *out__, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_irfft(*self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_irfft2(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_irfft2(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_irfft2_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_irfft2_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_irfft_out(tensor *out__, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_irfft_out(*out, *self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_irfftn(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_irfftn(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_irfftn_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_irfftn_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_rfft(tensor *out__, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_rfft(*self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_rfft2(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_rfft2(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_rfft2_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_rfft2_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), torch::IntArrayRef(dim_data, dim_len), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_rfft_out(tensor *out__, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_rfft_out(*out, *self, n_null ? c10::nullopt : c10::optional(n_v), dim, std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_rfftfreq(tensor *out__, int64_t n, double d, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::fft_rfftfreq(n, d, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_rfftfreq_out(tensor *out__, tensor out, int64_t n, double d) {
-  PROTECT(
-    auto outputs__ = torch::fft_rfftfreq_out(*out, n, d);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_rfftn(tensor *out__, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_rfftn(*self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fft_rfftn_out(tensor *out__, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm) {
-  PROTECT(
-    auto outputs__ = torch::fft_rfftn_out(*out, *self, s_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(s_data, s_len)), dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), std::string(norm));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fill(tensor *out__, tensor self, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::fill(*self, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fill_(tensor *out__, tensor self, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::fill_(*self, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fill_diagonal_(tensor *out__, tensor self, scalar fill_value, int wrap) {
-  PROTECT(
-    auto outputs__ = self->fill_diagonal_(*fill_value, (bool)wrap);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fill_scalar_out(tensor *out__, tensor out, tensor self, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::fill_out(*out, *self, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fill_tensor(tensor *out__, tensor self, tensor value) {
-  PROTECT(
-    auto outputs__ = torch::fill(*self, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fill_tensor_(tensor *out__, tensor self, tensor value) {
-  PROTECT(
-    auto outputs__ = torch::fill_(*self, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fill_tensor_out(tensor *out__, tensor out, tensor self, tensor value) {
-  PROTECT(
-    auto outputs__ = torch::fill_out(*out, *self, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fix(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::fix(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fix_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::fix_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fix_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::fix_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_flatten(tensor *out__, tensor self, int64_t start_dim, int64_t end_dim) {
-  PROTECT(
-    auto outputs__ = torch::flatten(*self, start_dim, end_dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_flatten_dense_tensors(tensor *out__, tensor *tensors_data, int tensors_len) {
-  PROTECT(
-    auto outputs__ = torch::flatten_dense_tensors(of_carray_tensor(tensors_data, tensors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_flip(tensor *out__, tensor self, int64_t *dims_data, int dims_len) {
-  PROTECT(
-    auto outputs__ = torch::flip(*self, torch::IntArrayRef(dims_data, dims_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_flip_out(tensor *out__, tensor out, tensor self, int64_t *dims_data, int dims_len) {
-  PROTECT(
-    auto outputs__ = torch::flip_out(*out, *self, torch::IntArrayRef(dims_data, dims_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fliplr(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::fliplr(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_flipud(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::flipud(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_float_power(tensor *out__, tensor self, tensor exponent) {
-  PROTECT(
-    auto outputs__ = torch::float_power(*self, *exponent);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_float_power_(tensor *out__, tensor self, scalar exponent) {
-  PROTECT(
-    auto outputs__ = self->float_power_(*exponent);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_float_power_scalar(tensor *out__, scalar self, tensor exponent) {
-  PROTECT(
-    auto outputs__ = torch::float_power(*self, *exponent);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_float_power_scalar_out(tensor *out__, tensor out, scalar self, tensor exponent) {
-  PROTECT(
-    auto outputs__ = torch::float_power_out(*out, *self, *exponent);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_float_power_tensor_(tensor *out__, tensor self, tensor exponent) {
-  PROTECT(
-    auto outputs__ = self->float_power_(*exponent);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_float_power_tensor_scalar(tensor *out__, tensor self, scalar exponent) {
-  PROTECT(
-    auto outputs__ = torch::float_power(*self, *exponent);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_float_power_tensor_scalar_out(tensor *out__, tensor out, tensor self, scalar exponent) {
-  PROTECT(
-    auto outputs__ = torch::float_power_out(*out, *self, *exponent);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_float_power_tensor_tensor_out(tensor *out__, tensor out, tensor self, tensor exponent) {
-  PROTECT(
-    auto outputs__ = torch::float_power_out(*out, *self, *exponent);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_floor(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::floor(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_floor_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::floor_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_floor_divide(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::floor_divide(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_floor_divide_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->floor_divide_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_floor_divide_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::floor_divide_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_floor_divide_scalar(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::floor_divide(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_floor_divide_scalar_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->floor_divide_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_floor_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::floor_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmax(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::fmax(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmax_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::fmax_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmin(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::fmin(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmin_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::fmin_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmod(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::fmod(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmod_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->fmod_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmod_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::fmod_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmod_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::fmod(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmod_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->fmod_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fmod_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::fmod_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_frac(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::frac(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_frac_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::frac_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_frac_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::frac_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fractional_max_pool2d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor random_samples) {
-  PROTECT(
-    auto outputs__ = torch::fractional_max_pool2d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *random_samples);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_fractional_max_pool2d_backward(tensor *out__, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::fractional_max_pool2d_backward(*grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fractional_max_pool2d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::fractional_max_pool2d_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fractional_max_pool2d_output(tensor *out__, tensor output, tensor indices, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor random_samples) {
-  PROTECT(
-    auto outputs__ = torch::fractional_max_pool2d_out(*output, *indices, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *random_samples);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_fractional_max_pool3d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor random_samples) {
-  PROTECT(
-    auto outputs__ = torch::fractional_max_pool3d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *random_samples);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_fractional_max_pool3d_backward(tensor *out__, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::fractional_max_pool3d_backward(*grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fractional_max_pool3d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::fractional_max_pool3d_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fractional_max_pool3d_output(tensor *out__, tensor output, tensor indices, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor random_samples) {
-  PROTECT(
-    auto outputs__ = torch::fractional_max_pool3d_out(*output, *indices, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(output_size_data, output_size_len), *random_samples);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_frexp(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::frexp(*self);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_frexp_tensor_out(tensor *out__, tensor mantissa, tensor exponent, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::frexp_out(*mantissa, *exponent, *self);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_frobenius_norm(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::frobenius_norm(*self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_frobenius_norm_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::frobenius_norm_out(*out, *self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_from_file(tensor *out__, char * filename, int shared, int64_t size_v, int size_null, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::from_file(std::string(filename), (bool)shared, size_null ? c10::nullopt : c10::optional(size_v), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_from_file_out(tensor *out__, tensor out, char * filename, int shared, int64_t size_v, int size_null) {
-  PROTECT(
-    auto outputs__ = torch::from_file_out(*out, std::string(filename), (bool)shared, size_null ? c10::nullopt : c10::optional(size_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_full(tensor *out__, int64_t *size_data, int size_len, scalar fill_value, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::full(torch::IntArrayRef(size_data, size_len), *fill_value, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_full_like(tensor *out__, tensor self, scalar fill_value) {
-  PROTECT(
-    auto outputs__ = torch::full_like(*self, *fill_value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_full_like_out(tensor *out__, tensor out, tensor self, scalar fill_value) {
-  PROTECT(
-    auto outputs__ = torch::full_like_out(*out, *self, *fill_value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_full_out(tensor *out__, tensor out, int64_t *size_data, int size_len, scalar fill_value) {
-  PROTECT(
-    auto outputs__ = torch::full_out(*out, torch::IntArrayRef(size_data, size_len), *fill_value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_fused_moving_avg_obs_fake_quant(tensor *out__, tensor self, tensor observer_on, tensor fake_quant_on, tensor running_min, tensor running_max, tensor scale, tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant) {
-  PROTECT(
-    auto outputs__ = torch::fused_moving_avg_obs_fake_quant(*self, *observer_on, *fake_quant_on, *running_min, *running_max, *scale, *zero_point, averaging_const, quant_min, quant_max, ch_axis, (bool)per_row_fake_quant, (bool)symmetric_quant);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gather(tensor *out__, tensor self, int64_t dim, tensor index, int sparse_grad) {
-  PROTECT(
-    auto outputs__ = torch::gather(*self, dim, *index, (bool)sparse_grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gather_backward(tensor *out__, tensor grad, tensor self, int64_t dim, tensor index, int sparse_grad) {
-  PROTECT(
-    auto outputs__ = torch::gather_backward(*grad, *self, dim, *index, (bool)sparse_grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gather_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, int sparse_grad) {
-  PROTECT(
-    auto outputs__ = torch::gather_out(*out, *self, dim, *index, (bool)sparse_grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gcd(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::gcd(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gcd_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::gcd_(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gcd_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::gcd_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ge(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::ge(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ge_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->ge_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ge_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::ge_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ge_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::ge(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ge_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->ge_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ge_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::ge_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gelu(tensor *out__, tensor self, char * approximate) {
-  PROTECT(
-    auto outputs__ = torch::gelu(*self, std::string(approximate));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gelu_(tensor *out__, tensor self, char * approximate) {
-  PROTECT(
-    auto outputs__ = torch::gelu_(*self, std::string(approximate));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gelu_backward(tensor *out__, tensor grad_output, tensor self, char * approximate) {
-  PROTECT(
-    auto outputs__ = torch::gelu_backward(*grad_output, *self, std::string(approximate));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gelu_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, char * approximate) {
-  PROTECT(
-    auto outputs__ = torch::gelu_backward_out(*grad_input, *grad_output, *self, std::string(approximate));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gelu_out(tensor *out__, tensor out, tensor self, char * approximate) {
-  PROTECT(
-    auto outputs__ = torch::gelu_out(*out, *self, std::string(approximate));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_geometric(tensor *out__, tensor self, double p) {
-  PROTECT(
-    auto outputs__ = torch::geometric(*self, p);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_geometric_(tensor *out__, tensor self, double p) {
-  PROTECT(
-    auto outputs__ = self->geometric_(p);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_geometric_out(tensor *out__, tensor out, tensor self, double p) {
-  PROTECT(
-    auto outputs__ = torch::geometric_out(*out, *self, p);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_geqrf(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::geqrf(*self);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_geqrf_a(tensor *out__, tensor a, tensor tau, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::geqrf_out(*a, *tau, *self);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_ger(tensor *out__, tensor self, tensor vec2) {
-  PROTECT(
-    auto outputs__ = torch::ger(*self, *vec2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ger_out(tensor *out__, tensor out, tensor self, tensor vec2) {
-  PROTECT(
-    auto outputs__ = torch::ger_out(*out, *self, *vec2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_glu(tensor *out__, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::glu(*self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_glu_backward(tensor *out__, tensor grad_output, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::glu_backward(*grad_output, *self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_glu_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::glu_backward_out(*grad_input, *grad_output, *self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_glu_backward_jvp(tensor *out__, tensor grad_x, tensor grad_glu, tensor x, tensor dgrad_glu, tensor dx, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::glu_backward_jvp(*grad_x, *grad_glu, *x, *dgrad_glu, *dx, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_glu_backward_jvp_out(tensor *out__, tensor out, tensor grad_x, tensor grad_glu, tensor x, tensor dgrad_glu, tensor dx, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::glu_backward_jvp_out(*out, *grad_x, *grad_glu, *x, *dgrad_glu, *dx, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_glu_jvp(tensor *out__, tensor glu, tensor x, tensor dx, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::glu_jvp(*glu, *x, *dx, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_glu_jvp_out(tensor *out__, tensor out, tensor glu, tensor x, tensor dx, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::glu_jvp_out(*out, *glu, *x, *dx, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_glu_out(tensor *out__, tensor out, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::glu_out(*out, *self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_grad(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->grad();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::greater(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->greater_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_equal(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::greater_equal(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_equal_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->greater_equal_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_equal_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::greater_equal_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_equal_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::greater_equal(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_equal_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->greater_equal_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_equal_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::greater_equal_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::greater_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::greater(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->greater_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_greater_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::greater_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_grid_sampler(tensor *out__, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
-  PROTECT(
-    auto outputs__ = torch::grid_sampler(*input, *grid, interpolation_mode, padding_mode, (bool)align_corners);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_grid_sampler_2d(tensor *out__, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
-  PROTECT(
-    auto outputs__ = torch::grid_sampler_2d(*input, *grid, interpolation_mode, padding_mode, (bool)align_corners);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_grid_sampler_2d_out(tensor *out__, tensor out, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
-  PROTECT(
-    auto outputs__ = torch::grid_sampler_2d_out(*out, *input, *grid, interpolation_mode, padding_mode, (bool)align_corners);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_grid_sampler_3d(tensor *out__, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
-  PROTECT(
-    auto outputs__ = torch::grid_sampler_3d(*input, *grid, interpolation_mode, padding_mode, (bool)align_corners);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_grid_sampler_3d_out(tensor *out__, tensor out, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners) {
-  PROTECT(
-    auto outputs__ = torch::grid_sampler_3d_out(*out, *input, *grid, interpolation_mode, padding_mode, (bool)align_corners);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_group_norm(tensor *out__, tensor input, int64_t num_groups, tensor weight, tensor bias, double eps, int cudnn_enabled) {
-  PROTECT(
-    auto outputs__ = torch::group_norm(*input, num_groups, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), eps, (bool)cudnn_enabled);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gru(tensor *out__, tensor input, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
-  PROTECT(
-    auto outputs__ = torch::gru(*input, *hx, of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_gru_cell(tensor *out__, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh) {
-  PROTECT(
-    auto outputs__ = torch::gru_cell(*input, *hx, *w_ih, *w_hh, (b_ih ? *b_ih : torch::Tensor()), (b_hh ? *b_hh : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gru_data(tensor *out__, tensor data, tensor batch_sizes, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional) {
-  PROTECT(
-    auto outputs__ = torch::gru(*data, *batch_sizes, *hx, of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_gt(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::gt(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gt_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->gt_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gt_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::gt_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gt_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::gt(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gt_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->gt_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_gt_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::gt_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hamming_window(tensor *out__, int64_t window_length, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::hamming_window(window_length, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hamming_window_out(tensor *out__, tensor out, int64_t window_length) {
-  PROTECT(
-    auto outputs__ = torch::hamming_window_out(*out, window_length);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hamming_window_periodic(tensor *out__, int64_t window_length, int periodic, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::hamming_window(window_length, (bool)periodic, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hamming_window_periodic_alpha(tensor *out__, int64_t window_length, int periodic, double alpha, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::hamming_window(window_length, (bool)periodic, alpha, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hamming_window_periodic_alpha_beta(tensor *out__, int64_t window_length, int periodic, double alpha, double beta, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::hamming_window(window_length, (bool)periodic, alpha, beta, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hamming_window_periodic_alpha_beta_out(tensor *out__, tensor out, int64_t window_length, int periodic, double alpha, double beta) {
-  PROTECT(
-    auto outputs__ = torch::hamming_window_out(*out, window_length, (bool)periodic, alpha, beta);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hamming_window_periodic_alpha_out(tensor *out__, tensor out, int64_t window_length, int periodic, double alpha) {
-  PROTECT(
-    auto outputs__ = torch::hamming_window_out(*out, window_length, (bool)periodic, alpha);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hamming_window_periodic_out(tensor *out__, tensor out, int64_t window_length, int periodic) {
-  PROTECT(
-    auto outputs__ = torch::hamming_window_out(*out, window_length, (bool)periodic);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hann_window(tensor *out__, int64_t window_length, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::hann_window(window_length, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hann_window_out(tensor *out__, tensor out, int64_t window_length) {
-  PROTECT(
-    auto outputs__ = torch::hann_window_out(*out, window_length);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hann_window_periodic(tensor *out__, int64_t window_length, int periodic, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::hann_window(window_length, (bool)periodic, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hann_window_periodic_out(tensor *out__, tensor out, int64_t window_length, int periodic) {
-  PROTECT(
-    auto outputs__ = torch::hann_window_out(*out, window_length, (bool)periodic);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hardshrink(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::hardshrink(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hardshrink_backward(tensor *out__, tensor grad_out, tensor self, scalar lambd) {
-  PROTECT(
-    auto outputs__ = torch::hardshrink_backward(*grad_out, *self, *lambd);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hardshrink_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_out, tensor self, scalar lambd) {
-  PROTECT(
-    auto outputs__ = torch::hardshrink_backward_out(*grad_input, *grad_out, *self, *lambd);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hardshrink_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__
= torch::hardshrink_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardsigmoid(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::hardsigmoid(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardsigmoid_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::hardsigmoid_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardsigmoid_backward(tensor *out__, tensor grad_output, tensor self) { - PROTECT( - auto outputs__ = torch::hardsigmoid_backward(*grad_output, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardsigmoid_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self) { - PROTECT( - auto outputs__ = torch::hardsigmoid_backward_out(*grad_input, *grad_output, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardsigmoid_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::hardsigmoid_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardswish(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::hardswish(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardswish_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::hardswish_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardswish_backward(tensor *out__, tensor grad_output, tensor self) { - PROTECT( - auto outputs__ = torch::hardswish_backward(*grad_output, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardswish_backward_out(tensor *out__, tensor out, tensor grad_output, tensor self) { - PROTECT( - auto outputs__ = torch::hardswish_backward_out(*out, *grad_output, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardswish_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::hardswish_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardtanh(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::hardtanh(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardtanh_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::hardtanh_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardtanh_backward(tensor *out__, tensor grad_output, tensor self, scalar min_val, scalar max_val) { - PROTECT( - auto outputs__ = torch::hardtanh_backward(*grad_output, *self, *min_val, *max_val); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardtanh_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, scalar min_val, scalar max_val) { - PROTECT( - auto outputs__ = torch::hardtanh_backward_out(*grad_input, *grad_output, *self, *min_val, *max_val); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hardtanh_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::hardtanh_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_heaviside(tensor *out__, tensor self, tensor values) { - PROTECT( - auto outputs__ = torch::heaviside(*self, *values); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_heaviside_(tensor *out__, tensor self, tensor values) { - PROTECT( - auto outputs__ = self->heaviside_(*values); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_heaviside_out(tensor *out__, tensor out, tensor self, tensor values) { - PROTECT( - auto outputs__ = 
torch::heaviside_out(*out, *self, *values); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hinge_embedding_loss(tensor *out__, tensor self, tensor target, double margin, int64_t reduction) { - PROTECT( - auto outputs__ = torch::hinge_embedding_loss(*self, *target, margin, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_histc(tensor *out__, tensor self, int64_t bins) { - PROTECT( - auto outputs__ = torch::histc(*self, bins); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_histc_out(tensor *out__, tensor out, tensor self, int64_t bins) { - PROTECT( - auto outputs__ = torch::histc_out(*out, *self, bins); - out__[0] = new torch::Tensor(outputs__); - ) -} - -tensor *atg_hsplit(tensor self, int64_t sections) { - PROTECT( - auto outputs__ = torch::hsplit(*self, sections); - int sz = outputs__.size(); - torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*)); - for (int i = 0; i < sz; ++i) - out__[i] = new torch::Tensor(outputs__[i]); - out__[sz] = nullptr; - return out__; - ) -} - -tensor *atg_hsplit_array(tensor self, int64_t *indices_data, int indices_len) { - PROTECT( - auto outputs__ = torch::hsplit(*self, torch::IntArrayRef(indices_data, indices_len)); - int sz = outputs__.size(); - torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*)); - for (int i = 0; i < sz; ++i) - out__[i] = new torch::Tensor(outputs__[i]); - out__[sz] = nullptr; - return out__; - ) -} - -void atg_hspmm(tensor *out__, tensor mat1, tensor mat2) { - PROTECT( - auto outputs__ = torch::hspmm(*mat1, *mat2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hspmm_out(tensor *out__, tensor out, tensor mat1, tensor mat2) { - PROTECT( - auto outputs__ = torch::hspmm_out(*out, *mat1, *mat2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hstack(tensor *out__, tensor *tensors_data, int tensors_len) { - PROTECT( - auto outputs__ = torch::hstack(of_carray_tensor(tensors_data, tensors_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hstack_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len) { - PROTECT( - auto outputs__ = torch::hstack_out(*out, of_carray_tensor(tensors_data, tensors_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_huber_loss(tensor *out__, tensor self, tensor target, int64_t reduction, double delta) { - PROTECT( - auto outputs__ = torch::huber_loss(*self, *target, reduction, delta); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_huber_loss_backward(tensor *out__, tensor grad_output, tensor self, tensor target, int64_t reduction, double delta) { - PROTECT( - auto outputs__ = torch::huber_loss_backward(*grad_output, *self, *target, reduction, delta); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_huber_loss_backward_out(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction, double delta) { - PROTECT( - auto outputs__ = torch::huber_loss_backward_out(*grad_input, *grad_output, *self, *target, reduction, delta); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_huber_loss_out(tensor *out__, tensor out, tensor self, tensor target, int64_t reduction, double delta) { - PROTECT( - auto outputs__ = torch::huber_loss_out(*out, *self, *target, reduction, delta); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_hypot(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::hypot(*self, *other); - out__[0] = new 
-  )
-}
-
-void atg_hypot_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->hypot_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_hypot_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::hypot_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_i0(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::i0(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_i0_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::i0_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_i0_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::i0_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_igamma(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::igamma(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_igamma_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->igamma_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_igamma_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::igamma_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_igammac(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::igammac(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_igammac_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->igammac_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_igammac_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::igammac_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_im2col(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) {
-  PROTECT(
-    auto outputs__ = torch::im2col(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(dilation_data, dilation_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_im2col_out(tensor *out__, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len) {
-  PROTECT(
-    auto outputs__ = torch::im2col_out(*out, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(dilation_data, dilation_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_imag(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::imag(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index(tensor *out__, tensor self, tensor *indices_data, int indices_len) {
-  PROTECT(
-    auto outputs__ = torch::index(*self, of_carray_tensor_opt(indices_data, indices_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_add(tensor *out__, tensor self, int64_t dim, tensor index, tensor source) {
-  PROTECT(
-    auto outputs__ = torch::index_add(*self, dim, *index, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_add_(tensor *out__, tensor self, int64_t dim, tensor index, tensor source) {
-  PROTECT(
-    auto outputs__ = self->index_add_(dim, *index, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_add_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, tensor source) {
-  PROTECT(
-    auto outputs__ = torch::index_add_out(*out, *self, dim, *index, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_copy(tensor *out__, tensor self, int64_t dim, tensor index, tensor source) {
-  PROTECT(
-    auto outputs__ = torch::index_copy(*self, dim, *index, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_copy_(tensor *out__, tensor self, int64_t dim, tensor index, tensor source) {
-  PROTECT(
-    auto outputs__ = self->index_copy_(dim, *index, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_copy_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, tensor source) {
-  PROTECT(
-    auto outputs__ = torch::index_copy_out(*out, *self, dim, *index, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_fill(tensor *out__, tensor self, int64_t dim, tensor index, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::index_fill(*self, dim, *index, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_fill_(tensor *out__, tensor self, int64_t dim, tensor index, scalar value) {
-  PROTECT(
-    auto outputs__ = self->index_fill_(dim, *index, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_fill_int_scalar_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::index_fill_out(*out, *self, dim, *index, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_fill_int_tensor(tensor *out__, tensor self, int64_t dim, tensor index, tensor value) {
-  PROTECT(
-    auto outputs__ = torch::index_fill(*self, dim, *index, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_fill_int_tensor_(tensor *out__, tensor self, int64_t dim, tensor index, tensor value) {
-  PROTECT(
-    auto outputs__ = self->index_fill_(dim, *index, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_fill_int_tensor_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, tensor value) {
-  PROTECT(
-    auto outputs__ = torch::index_fill_out(*out, *self, dim, *index, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_put(tensor *out__, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate) {
-  PROTECT(
-    auto outputs__ = torch::index_put(*self, of_carray_tensor_opt(indices_data, indices_len), *values, (bool)accumulate);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_put_(tensor *out__, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate) {
-  PROTECT(
-    auto outputs__ = torch::index_put_(*self, of_carray_tensor_opt(indices_data, indices_len), *values, (bool)accumulate);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_put_out(tensor *out__, tensor out, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate) {
-  PROTECT(
-    auto outputs__ = torch::index_put_out(*out, *self, of_carray_tensor_opt(indices_data, indices_len), *values, (bool)accumulate);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_reduce(tensor *out__, tensor self, int64_t dim, tensor index, tensor source, char * reduce, int include_self) {
-  PROTECT(
-    auto outputs__ = torch::index_reduce(*self, dim, *index, *source, std::string(reduce), (bool)include_self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_reduce_(tensor *out__, tensor self, int64_t dim, tensor index, tensor source, char * reduce, int include_self) {
-  PROTECT(
-    auto outputs__ = self->index_reduce_(dim, *index, *source, std::string(reduce), (bool)include_self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_reduce_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, tensor source, char * reduce, int include_self) {
-  PROTECT(
-    auto outputs__ = torch::index_reduce_out(*out, *self, dim, *index, *source, std::string(reduce), (bool)include_self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_select(tensor *out__, tensor self, int64_t dim, tensor index) {
-  PROTECT(
-    auto outputs__ = torch::index_select(*self, dim, *index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_select_backward(tensor *out__, tensor grad, int64_t *self_sizes_data, int self_sizes_len, int64_t dim, tensor index) {
-  PROTECT(
-    auto outputs__ = torch::index_select_backward(*grad, torch::IntArrayRef(self_sizes_data, self_sizes_len), dim, *index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_select_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index) {
-  PROTECT(
-    auto outputs__ = torch::index_select_out(*out, *self, dim, *index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_index_tensor_out(tensor *out__, tensor out, tensor self, tensor *indices_data, int indices_len) {
-  PROTECT(
-    auto outputs__ = torch::index_out(*out, *self, of_carray_tensor_opt(indices_data, indices_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_indices(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->indices();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_indices_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::indices_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_indices_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::indices_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_infinitely_differentiable_gelu_backward(tensor *out__, tensor grad, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::infinitely_differentiable_gelu_backward(*grad, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_inner(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::inner(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_inner_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::inner_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_instance_norm(tensor *out__, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int use_input_stats, double momentum, double eps, int cudnn_enabled) {
-  PROTECT(
-    auto outputs__ = torch::instance_norm(*input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), (running_mean ? *running_mean : torch::Tensor()), (running_var ? *running_var : torch::Tensor()), (bool)use_input_stats, momentum, eps, (bool)cudnn_enabled);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_int_repr(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::int_repr(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_int_repr_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::int_repr_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_inverse(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::inverse(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_inverse_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::inverse_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int atg_is_coalesced(tensor self) {
-  PROTECT(
-    return self->is_coalesced();
-  )
-  return 0;
-}
-
-int atg_is_complex(tensor self) {
-  PROTECT(
-    return torch::is_complex(*self);
-  )
-  return 0;
-}
-
-int atg_is_conj(tensor self) {
-  PROTECT(
-    return torch::is_conj(*self);
-  )
-  return 0;
-}
-
-int atg_is_distributed(tensor self) {
-  PROTECT(
-    return torch::is_distributed(*self);
-  )
-  return 0;
-}
-
-int atg_is_floating_point(tensor self) {
-  PROTECT(
-    return torch::is_floating_point(*self);
-  )
-  return 0;
-}
-
-int atg_is_inference(tensor self) {
-  PROTECT(
-    return torch::is_inference(*self);
-  )
-  return 0;
-}
-
-int atg_is_leaf(tensor self) {
-  PROTECT(
-    return self->is_leaf();
-  )
-  return 0;
-}
-
-int atg_is_neg(tensor self) {
-  PROTECT(
-    return torch::is_neg(*self);
-  )
-  return 0;
-}
-
-int atg_is_nonzero(tensor self) {
-  PROTECT(
-    return torch::is_nonzero(*self);
-  )
-  return 0;
-}
-
-int atg_is_pinned(tensor self, int device) {
-  PROTECT(
-    return self->is_pinned(device_of_int(device));
-  )
-  return 0;
-}
-
-int atg_is_same_size(tensor self, tensor other) {
-  PROTECT(
-    return torch::is_same_size(*self, *other);
-  )
-  return 0;
-}
-
-int atg_is_set_to(tensor self, tensor tensor) {
-  PROTECT(
-    return self->is_set_to(*tensor);
-  )
-  return 0;
-}
-
-int atg_is_signed(tensor self) {
-  PROTECT(
-    return torch::is_signed(*self);
-  )
-  return 0;
-}
-
-int atg_is_vulkan_available() {
-  PROTECT(
-    return torch::is_vulkan_available();
-  )
-  return 0;
-}
-
-void atg_isclose(tensor *out__, tensor self, tensor other, double rtol, double atol, int equal_nan) {
-  PROTECT(
-    auto outputs__ = torch::isclose(*self, *other, rtol, atol, (bool)equal_nan);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isfinite(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isfinite(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isin(tensor *out__, tensor elements, tensor test_elements, int assume_unique, int invert) {
-  PROTECT(
-    auto outputs__ = torch::isin(*elements, *test_elements, (bool)assume_unique, (bool)invert);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isin_scalar_tensor(tensor *out__, scalar element, tensor test_elements, int assume_unique, int invert) {
-  PROTECT(
-    auto outputs__ = torch::isin(*element, *test_elements, (bool)assume_unique, (bool)invert);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isin_scalar_tensor_out(tensor *out__, tensor out, scalar element, tensor test_elements, int assume_unique, int invert) {
-  PROTECT(
-    auto outputs__ = torch::isin_out(*out, *element, *test_elements, (bool)assume_unique, (bool)invert);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isin_tensor_scalar(tensor *out__, tensor elements, scalar test_element, int assume_unique, int invert) {
-  PROTECT(
-    auto outputs__ = torch::isin(*elements, *test_element, (bool)assume_unique, (bool)invert);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isin_tensor_scalar_out(tensor *out__, tensor out, tensor elements, scalar test_element, int assume_unique, int invert) {
-  PROTECT(
-    auto outputs__ = torch::isin_out(*out, *elements, *test_element, (bool)assume_unique, (bool)invert);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isin_tensor_tensor_out(tensor *out__, tensor out, tensor elements, tensor test_elements, int assume_unique, int invert) {
-  PROTECT(
-    auto outputs__ = torch::isin_out(*out, *elements, *test_elements, (bool)assume_unique, (bool)invert);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isinf(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isinf(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isinf_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isinf_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isnan(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isnan(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isnan_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isnan_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isneginf(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isneginf(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isneginf_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isneginf_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isposinf(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isposinf(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isposinf_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isposinf_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_isreal(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::isreal(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_istft(tensor *out__, tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, tensor window, int center, int normalized, int onesided, int64_t length_v, int length_null, int return_complex) {
-  PROTECT(
-    auto outputs__ = torch::istft(*self, n_fft, hop_length_null ? c10::nullopt : c10::optional(hop_length_v), win_length_null ? c10::nullopt : c10::optional(win_length_v), (window ? *window : torch::Tensor()), (bool)center, (bool)normalized, (bool)onesided, length_null ? c10::nullopt : c10::optional(length_v), (bool)return_complex);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kaiser_window(tensor *out__, int64_t window_length, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::kaiser_window(window_length, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kaiser_window_beta(tensor *out__, int64_t window_length, int periodic, double beta, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::kaiser_window(window_length, (bool)periodic, beta, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kaiser_window_beta_out(tensor *out__, tensor out, int64_t window_length, int periodic, double beta) {
-  PROTECT(
-    auto outputs__ = torch::kaiser_window_out(*out, window_length, (bool)periodic, beta);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kaiser_window_out(tensor *out__, tensor out, int64_t window_length) {
-  PROTECT(
-    auto outputs__ = torch::kaiser_window_out(*out, window_length);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kaiser_window_periodic(tensor *out__, int64_t window_length, int periodic, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::kaiser_window(window_length, (bool)periodic, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kaiser_window_periodic_out(tensor *out__, tensor out, int64_t window_length, int periodic) {
-  PROTECT(
-    auto outputs__ = torch::kaiser_window_out(*out, window_length, (bool)periodic);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kl_div(tensor *out__, tensor self, tensor target, int64_t reduction, int log_target) {
-  PROTECT(
-    auto outputs__ = torch::kl_div(*self, *target, reduction, (bool)log_target);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kron(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::kron(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kron_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::kron_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_kthvalue(tensor *out__, tensor self, int64_t k, int64_t dim, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::kthvalue(*self, k, dim, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_kthvalue_values(tensor *out__, tensor values, tensor indices, tensor self, int64_t k, int64_t dim, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::kthvalue_out(*values, *indices, *self, k, dim, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_l1_loss(tensor *out__, tensor self, tensor target, int64_t reduction) {
-  PROTECT(
-    auto outputs__ = torch::l1_loss(*self, *target, reduction);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_layer_norm(tensor *out__, tensor input, int64_t *normalized_shape_data, int normalized_shape_len, tensor weight, tensor bias, double eps, int cudnn_enable) {
-  PROTECT(
-    auto outputs__ = torch::layer_norm(*input, torch::IntArrayRef(normalized_shape_data, normalized_shape_len), (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), eps, (bool)cudnn_enable);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lcm(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::lcm(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lcm_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::lcm_(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lcm_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::lcm_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ldexp(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::ldexp(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ldexp_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::ldexp_(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ldexp_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::ldexp_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_le(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::le(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_le_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->le_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_le_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::le_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_le_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::le(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_le_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->le_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_le_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::le_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_leaky_relu(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::leaky_relu(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_leaky_relu_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::leaky_relu_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_leaky_relu_backward(tensor *out__, tensor grad_output, tensor self, scalar negative_slope, int self_is_result) {
-  PROTECT(
-    auto outputs__ = torch::leaky_relu_backward(*grad_output, *self, *negative_slope, (bool)self_is_result);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_leaky_relu_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, scalar negative_slope, int self_is_result) {
-  PROTECT(
-    auto outputs__ = torch::leaky_relu_backward_out(*grad_input, *grad_output, *self, *negative_slope, (bool)self_is_result);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_leaky_relu_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::leaky_relu_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lerp(tensor *out__, tensor self, tensor end, scalar weight) {
-  PROTECT(
-    auto outputs__ = torch::lerp(*self, *end, *weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lerp_(tensor *out__, tensor self, tensor end, scalar weight) {
-  PROTECT(
-    auto outputs__ = self->lerp_(*end, *weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lerp_scalar_out(tensor *out__, tensor out, tensor self, tensor end, scalar weight) {
-  PROTECT(
-    auto outputs__ = torch::lerp_out(*out, *self, *end, *weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lerp_tensor(tensor *out__, tensor self, tensor end, tensor weight) {
-  PROTECT(
-    auto outputs__ = torch::lerp(*self, *end, *weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lerp_tensor_(tensor *out__, tensor self, tensor end, tensor weight) {
-  PROTECT(
-    auto outputs__ = self->lerp_(*end, *weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lerp_tensor_out(tensor *out__, tensor out, tensor self, tensor end, tensor weight) {
-  PROTECT(
-    auto outputs__ = torch::lerp_out(*out, *self, *end, *weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::less(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->less_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_equal(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::less_equal(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_equal_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->less_equal_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_equal_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::less_equal_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_equal_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::less_equal(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_equal_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->less_equal_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_equal_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::less_equal_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::less_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::less(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->less_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_less_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::less_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lgamma(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::lgamma(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lgamma_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->lgamma_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lgamma_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::lgamma_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lift(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::lift(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lift_fresh(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::lift_fresh(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lift_fresh_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::lift_fresh_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lift_fresh_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::lift_fresh_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lift_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::lift_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_cholesky(tensor *out__, tensor self, int upper) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cholesky(*self, (bool)upper);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_cholesky_ex(tensor *out__, tensor self, int upper, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cholesky_ex(*self, (bool)upper, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_cholesky_ex_l(tensor *out__, tensor L, tensor info, tensor self, int upper, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cholesky_ex_out(*L, *info, *self, (bool)upper, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_cholesky_out(tensor *out__, tensor out, tensor self, int upper) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cholesky_out(*out, *self, (bool)upper);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_cond(tensor *out__, tensor self, scalar p) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cond(*self, *p);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_cond_out(tensor *out__, tensor out, tensor self, scalar p) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cond_out(*out, *self, *p);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_cond_p_str(tensor *out__, tensor self, char * p) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cond(*self, std::string(p));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_cond_p_str_out(tensor *out__, tensor out, tensor self, char * p) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cond_out(*out, *self, std::string(p));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_cross(tensor *out__, tensor self, tensor other, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cross(*self, *other, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_cross_out(tensor *out__, tensor out, tensor self, tensor other, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::linalg_cross_out(*out, *self, *other, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_det(tensor *out__, tensor A) {
-  PROTECT(
-    auto outputs__ = torch::linalg_det(*A);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_det_out(tensor *out__, tensor out, tensor A) {
-  PROTECT(
-    auto outputs__ = torch::linalg_det_out(*out, *A);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_diagonal(tensor *out__, tensor A, int64_t offset, int64_t dim1, int64_t dim2) {
-  PROTECT(
-    auto outputs__ = torch::linalg_diagonal(*A, offset, dim1, dim2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_eig(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::linalg_eig(*self);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_eig_out(tensor *out__, tensor eigenvalues, tensor eigenvectors, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::linalg_eig_out(*eigenvalues, *eigenvectors, *self);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_eigh(tensor *out__, tensor self, char * UPLO) {
-  PROTECT(
-    auto outputs__ = torch::linalg_eigh(*self, std::string(UPLO));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_eigh_eigvals(tensor *out__, tensor eigvals, tensor eigvecs, tensor self, char * UPLO) {
-  PROTECT(
-    auto outputs__ = torch::linalg_eigh_out(*eigvals, *eigvecs, *self, std::string(UPLO));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_eigvals(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::linalg_eigvals(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_eigvals_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::linalg_eigvals_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_eigvalsh(tensor *out__, tensor self, char * UPLO) {
-  PROTECT(
-    auto outputs__ = torch::linalg_eigvalsh(*self, std::string(UPLO));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_eigvalsh_out(tensor *out__, tensor out, tensor self, char * UPLO) {
-  PROTECT(
-    auto outputs__ = torch::linalg_eigvalsh_out(*out, *self, std::string(UPLO));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_householder_product(tensor *out__, tensor input, tensor tau) {
-  PROTECT(
-    auto outputs__ = torch::linalg_householder_product(*input, *tau);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_householder_product_out(tensor *out__, tensor out, tensor input, tensor tau) {
-  PROTECT(
-    auto outputs__ = torch::linalg_householder_product_out(*out, *input, *tau);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_inv(tensor *out__, tensor A) {
-  PROTECT(
-    auto outputs__ = torch::linalg_inv(*A);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_inv_ex(tensor *out__, tensor A, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::linalg_inv_ex(*A, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_inv_ex_inverse(tensor *out__, tensor inverse, tensor info, tensor A, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::linalg_inv_ex_out(*inverse, *info, *A, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_inv_out(tensor *out__, tensor out, tensor A) {
-  PROTECT(
-    auto outputs__ = torch::linalg_inv_out(*out, *A);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_ldl_factor(tensor *out__, tensor self, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_ldl_factor(*self, (bool)hermitian);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_ldl_factor_ex(tensor *out__, tensor self, int hermitian, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::linalg_ldl_factor_ex(*self, (bool)hermitian, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_linalg_ldl_factor_ex_out(tensor *out__, tensor LD, tensor pivots, tensor info, tensor self, int hermitian, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::linalg_ldl_factor_ex_out(*LD, *pivots, *info, *self, (bool)hermitian, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_linalg_ldl_factor_out(tensor *out__, tensor LD, tensor pivots, tensor self, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_ldl_factor_out(*LD, *pivots, *self, (bool)hermitian);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_ldl_solve(tensor *out__, tensor LD, tensor pivots, tensor B, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_ldl_solve(*LD, *pivots, *B, (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_ldl_solve_out(tensor *out__, tensor out, tensor LD, tensor pivots, tensor B, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_ldl_solve_out(*out, *LD, *pivots, *B, (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_lstsq(tensor *out__, tensor self, tensor b, double rcond_v, int rcond_null, char * driver) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lstsq(*self, *b, rcond_null ? c10::nullopt : c10::optional(rcond_v), std::string(driver));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-  )
-}
-
-void atg_linalg_lstsq_out(tensor *out__, tensor solution, tensor residuals, tensor rank, tensor singular_values, tensor self, tensor b, double rcond_v, int rcond_null, char * driver) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lstsq_out(*solution, *residuals, *rank, *singular_values, *self, *b, rcond_null ? c10::nullopt : c10::optional(rcond_v), std::string(driver));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-  )
-}
-
-void atg_linalg_lu(tensor *out__, tensor A, int pivot) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lu(*A, (bool)pivot);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_linalg_lu_factor(tensor *out__, tensor A, int pivot) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lu_factor(*A, (bool)pivot);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_lu_factor_ex(tensor *out__, tensor A, int pivot, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lu_factor_ex(*A, (bool)pivot, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_linalg_lu_factor_ex_out(tensor *out__, tensor LU, tensor pivots, tensor info, tensor A, int pivot, int check_errors) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lu_factor_ex_out(*LU, *pivots, *info, *A, (bool)pivot, (bool)check_errors);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_linalg_lu_factor_out(tensor *out__, tensor LU, tensor pivots, tensor A, int pivot) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lu_factor_out(*LU, *pivots, *A, (bool)pivot);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_linalg_lu_out(tensor *out__, tensor P, tensor L, tensor U, tensor A, int pivot) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lu_out(*P, *L, *U, *A, (bool)pivot);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_linalg_lu_solve(tensor *out__, tensor LU, tensor pivots, tensor B, int left, int adjoint) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lu_solve(*LU, *pivots, *B, (bool)left, (bool)adjoint);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_lu_solve_out(tensor *out__, tensor out, tensor LU, tensor pivots, tensor B, int left, int adjoint) {
-  PROTECT(
-    auto outputs__ = torch::linalg_lu_solve_out(*out, *LU, *pivots, *B, (bool)left, (bool)adjoint);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matmul(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matmul(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matmul_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matmul_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_exp(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_exp(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_exp_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_exp_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_power(tensor *out__, tensor self, int64_t n) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_power(*self, n);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_power_out(tensor *out__, tensor out, tensor self, int64_t n) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_power_out(*out, *self, n);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_rank(tensor *out__, tensor self, double tol, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_rank(*self, tol, (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_rank_atol_rtol_float(tensor *out__, tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_rank(*self, atol_null ? c10::nullopt : c10::optional(atol_v), rtol_null ? c10::nullopt : c10::optional(rtol_v), (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_rank_atol_rtol_float_out(tensor *out__, tensor out, tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_rank_out(*out, *self, atol_null ? c10::nullopt : c10::optional(atol_v), rtol_null ? c10::nullopt : c10::optional(rtol_v), (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_rank_atol_rtol_tensor(tensor *out__, tensor input, tensor atol, tensor rtol, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_rank(*input, (atol ? *atol : torch::Tensor()), (rtol ? *rtol : torch::Tensor()), (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_rank_atol_rtol_tensor_out(tensor *out__, tensor out, tensor input, tensor atol, tensor rtol, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_rank_out(*out, *input, (atol ? *atol : torch::Tensor()), (rtol ? *rtol : torch::Tensor()), (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_rank_out(tensor *out__, tensor out, tensor self, double tol, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_rank_out(*out, *self, tol, (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_rank_out_tol_tensor(tensor *out__, tensor out, tensor input, tensor tol, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_rank_out(*out, *input, *tol, (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_matrix_rank_tol_tensor(tensor *out__, tensor input, tensor tol, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_matrix_rank(*input, *tol, (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_multi_dot(tensor *out__, tensor *tensors_data, int tensors_len) {
-  PROTECT(
-    auto outputs__ = torch::linalg_multi_dot(of_carray_tensor(tensors_data, tensors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_multi_dot_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len) {
-  PROTECT(
-    auto outputs__ = torch::linalg_multi_dot_out(*out, of_carray_tensor(tensors_data, tensors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_pinv(tensor *out__, tensor self, double rcond, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_pinv(*self, rcond, (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_pinv_atol_rtol_float(tensor *out__, tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_pinv(*self, atol_null ? c10::nullopt : c10::optional(atol_v), rtol_null ? c10::nullopt : c10::optional(rtol_v), (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_pinv_atol_rtol_float_out(tensor *out__, tensor out, tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_pinv_out(*out, *self, atol_null ? c10::nullopt : c10::optional(atol_v), rtol_null ? c10::nullopt : c10::optional(rtol_v), (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_pinv_atol_rtol_tensor(tensor *out__, tensor self, tensor atol, tensor rtol, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_pinv(*self, (atol ? *atol : torch::Tensor()), (rtol ? *rtol : torch::Tensor()), (bool)hermitian);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_pinv_atol_rtol_tensor_out(tensor *out__, tensor out, tensor self, tensor atol, tensor rtol, int hermitian) {
-  PROTECT(
-    auto outputs__ = torch::linalg_pinv_out(*out, *self, (atol ? *atol : torch::Tensor()), (rtol ? *rtol : torch::Tensor()), (bool)hermitian);
*rtol : torch::Tensor()), (bool)hermitian); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_linalg_pinv_out(tensor *out__, tensor out, tensor self, double rcond, int hermitian) { - PROTECT( - auto outputs__ = torch::linalg_pinv_out(*out, *self, rcond, (bool)hermitian); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_linalg_pinv_out_rcond_tensor(tensor *out__, tensor out, tensor self, tensor rcond, int hermitian) { - PROTECT( - auto outputs__ = torch::linalg_pinv_out(*out, *self, *rcond, (bool)hermitian); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_linalg_pinv_rcond_tensor(tensor *out__, tensor self, tensor rcond, int hermitian) { - PROTECT( - auto outputs__ = torch::linalg_pinv(*self, *rcond, (bool)hermitian); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_linalg_qr(tensor *out__, tensor A, char * mode) { - PROTECT( - auto outputs__ = torch::linalg_qr(*A, std::string(mode)); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_linalg_qr_out(tensor *out__, tensor Q, tensor R, tensor A, char * mode) { - PROTECT( - auto outputs__ = torch::linalg_qr_out(*Q, *R, *A, std::string(mode)); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_linalg_slogdet(tensor *out__, tensor A) { - PROTECT( - auto outputs__ = torch::linalg_slogdet(*A); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_linalg_slogdet_out(tensor *out__, tensor sign, tensor logabsdet, tensor A) { - PROTECT( - auto outputs__ = torch::linalg_slogdet_out(*sign, *logabsdet, *A); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_linalg_solve(tensor *out__, tensor A, tensor B, int left) { - PROTECT( - auto outputs__ = torch::linalg_solve(*A, *B, (bool)left); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_linalg_solve_ex(tensor *out__, tensor A, tensor B, int left, int check_errors) { - PROTECT( - auto outputs__ = torch::linalg_solve_ex(*A, *B, (bool)left, (bool)check_errors); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_linalg_solve_ex_out(tensor *out__, tensor result, tensor info, tensor A, tensor B, int left, int check_errors) { - PROTECT( - auto outputs__ = torch::linalg_solve_ex_out(*result, *info, *A, *B, (bool)left, (bool)check_errors); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_linalg_solve_out(tensor *out__, tensor out, tensor A, tensor B, int left) { - PROTECT( - auto outputs__ = torch::linalg_solve_out(*out, *A, *B, (bool)left); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_linalg_solve_triangular(tensor *out__, tensor self, tensor B, int upper, int left, int unitriangular) { - PROTECT( - auto outputs__ = torch::linalg_solve_triangular(*self, *B, (bool)upper, (bool)left, (bool)unitriangular); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_linalg_solve_triangular_out(tensor *out__, tensor out, tensor self, tensor B, int upper, int left, int unitriangular) { - PROTECT( - auto outputs__ = torch::linalg_solve_triangular_out(*out, *self, *B, (bool)upper, (bool)left, (bool)unitriangular); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void 
-void atg_linalg_svd(tensor *out__, tensor A, int full_matrices, char * driver) {
-  PROTECT(
-    auto outputs__ = torch::linalg_svd(*A, (bool)full_matrices, std::string(driver));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_linalg_svd_u(tensor *out__, tensor U, tensor S, tensor Vh, tensor A, int full_matrices, char * driver) {
-  PROTECT(
-    auto outputs__ = torch::linalg_svd_out(*U, *S, *Vh, *A, (bool)full_matrices, std::string(driver));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_linalg_svdvals(tensor *out__, tensor A, char * driver) {
-  PROTECT(
-    auto outputs__ = torch::linalg_svdvals(*A, std::string(driver));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_svdvals_out(tensor *out__, tensor out, tensor A, char * driver) {
-  PROTECT(
-    auto outputs__ = torch::linalg_svdvals_out(*out, *A, std::string(driver));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_tensorinv(tensor *out__, tensor self, int64_t ind) {
-  PROTECT(
-    auto outputs__ = torch::linalg_tensorinv(*self, ind);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_tensorinv_out(tensor *out__, tensor out, tensor self, int64_t ind) {
-  PROTECT(
-    auto outputs__ = torch::linalg_tensorinv_out(*out, *self, ind);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_tensorsolve(tensor *out__, tensor self, tensor other, int64_t *dims_data, int dims_len) {
-  PROTECT(
-    auto outputs__ = torch::linalg_tensorsolve(*self, *other, dims_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dims_data, dims_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_tensorsolve_out(tensor *out__, tensor out, tensor self, tensor other, int64_t *dims_data, int dims_len) {
-  PROTECT(
-    auto outputs__ = torch::linalg_tensorsolve_out(*out, *self, *other, dims_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dims_data, dims_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_vander(tensor *out__, tensor x, int64_t n_v, int n_null) {
-  PROTECT(
-    auto outputs__ = torch::linalg_vander(*x, n_null ? c10::nullopt : c10::optional(n_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_vecdot(tensor *out__, tensor x, tensor y, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::linalg_vecdot(*x, *y, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linalg_vecdot_out(tensor *out__, tensor out, tensor x, tensor y, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::linalg_vecdot_out(*out, *x, *y, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linear(tensor *out__, tensor input, tensor weight, tensor bias) {
-  PROTECT(
-    auto outputs__ = torch::linear(*input, *weight, (bias ? *bias : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linear_out(tensor *out__, tensor out, tensor input, tensor weight, tensor bias) {
-  PROTECT(
-    auto outputs__ = torch::linear_out(*out, *input, *weight, (bias ? *bias : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linspace(tensor *out__, scalar start, scalar end, int64_t steps, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::linspace(*start, *end, steps, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_linspace_out(tensor *out__, tensor out, scalar start, scalar end, int64_t steps) {
-  PROTECT(
-    auto outputs__ = torch::linspace_out(*out, *start, *end, steps);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log10(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log10(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log10_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log10_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log10_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log10_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log1p(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log1p(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log1p_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log1p_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log1p_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log1p_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log2(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log2(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log2_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log2_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log2_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log2_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_normal(tensor *out__, tensor self, double mean, double std) {
-  PROTECT(
-    auto outputs__ = torch::log_normal(*self, mean, std);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_normal_(tensor *out__, tensor self, double mean, double std) {
-  PROTECT(
-    auto outputs__ = self->log_normal_(mean, std);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_normal_out(tensor *out__, tensor out, tensor self, double mean, double std) {
-  PROTECT(
-    auto outputs__ = torch::log_normal_out(*out, *self, mean, std);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_sigmoid(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log_sigmoid(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_sigmoid_backward(tensor *out__, tensor grad_output, tensor self, tensor buffer) {
-  PROTECT(
-    auto outputs__ = torch::log_sigmoid_backward(*grad_output, *self, *buffer);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_sigmoid_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor buffer) {
-  PROTECT(
-    auto outputs__ = torch::log_sigmoid_backward_out(*grad_input, *grad_output, *self, *buffer);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_sigmoid_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::log_sigmoid_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_softmax(tensor *out__, tensor self, int64_t dim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::log_softmax(*self, dim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_log_softmax_int_out(tensor *out__, tensor out, tensor self, int64_t dim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::log_softmax_out(*out, *self, dim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logaddexp(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logaddexp(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logaddexp2(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logaddexp2(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logaddexp2_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logaddexp2_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logaddexp_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logaddexp_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logcumsumexp(tensor *out__, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::logcumsumexp(*self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logcumsumexp_out(tensor *out__, tensor out, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::logcumsumexp_out(*out, *self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logdet(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::logdet(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_and(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logical_and(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_and_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->logical_and_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_and_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logical_and_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_not(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::logical_not(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_not_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->logical_not_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_not_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::logical_not_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_or(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logical_or(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_or_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->logical_or_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_or_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logical_or_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_xor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logical_xor(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_xor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->logical_xor_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logical_xor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::logical_xor_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logit(tensor *out__, tensor self, double eps_v, int eps_null) {
-  PROTECT(
-    auto outputs__ = torch::logit(*self, eps_null ? c10::nullopt : c10::optional(eps_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logit_(tensor *out__, tensor self, double eps_v, int eps_null) {
-  PROTECT(
-    auto outputs__ = torch::logit_(*self, eps_null ? c10::nullopt : c10::optional(eps_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logit_backward(tensor *out__, tensor grad_output, tensor self, double eps_v, int eps_null) {
-  PROTECT(
-    auto outputs__ = torch::logit_backward(*grad_output, *self, eps_null ? c10::nullopt : c10::optional(eps_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logit_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, double eps_v, int eps_null) {
-  PROTECT(
-    auto outputs__ = torch::logit_backward_out(*grad_input, *grad_output, *self, eps_null ? c10::nullopt : c10::optional(eps_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logit_out(tensor *out__, tensor out, tensor self, double eps_v, int eps_null) {
-  PROTECT(
-    auto outputs__ = torch::logit_out(*out, *self, eps_null ? c10::nullopt : c10::optional(eps_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logspace(tensor *out__, scalar start, scalar end, int64_t steps, double base, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::logspace(*start, *end, steps, base, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logspace_out(tensor *out__, tensor out, scalar start, scalar end, int64_t steps, double base) {
-  PROTECT(
-    auto outputs__ = torch::logspace_out(*out, *start, *end, steps, base);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logsumexp(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::logsumexp(*self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_logsumexp_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::logsumexp_out(*out, *self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lstm(tensor *out__, tensor input, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
-  PROTECT(
-    auto outputs__ = torch::lstm(*input, of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_lstm_cell(tensor *out__, tensor input, tensor *hx_data, int hx_len, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh) {
-  PROTECT(
-    auto outputs__ = torch::lstm_cell(*input, of_carray_tensor(hx_data, hx_len), *w_ih, *w_hh, (b_ih ? *b_ih : torch::Tensor()), (b_hh ? *b_hh : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_lstm_data(tensor *out__, tensor data, tensor batch_sizes, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional) {
-  PROTECT(
-    auto outputs__ = torch::lstm(*data, *batch_sizes, of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_lstm_mps_backward(tensor out0, tensor *out1_data, int out1_len, tensor *out2_data, int out2_len, tensor grad_y, tensor grad_hy, tensor grad_cy, tensor z_state, tensor cell_state_fwd, tensor input, tensor layersOutputs, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
-  PROTECT(
-    torch::lstm_mps_backward_out(*out0, of_carray_tensor(out1_data, out1_len), of_carray_tensor(out2_data, out2_len), (grad_y ? *grad_y : torch::Tensor()), (grad_hy ? *grad_hy : torch::Tensor()), (grad_cy ? *grad_cy : torch::Tensor()), *z_state, *cell_state_fwd, *input, *layersOutputs, of_carray_tensor(hx_data, hx_len), of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
-  )
-}
-
-void atg_lt(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::lt(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lt_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->lt_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lt_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::lt_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lt_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::lt(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lt_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->lt_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lt_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::lt_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lu_solve(tensor *out__, tensor self, tensor LU_data, tensor LU_pivots) {
-  PROTECT(
-    auto outputs__ = torch::lu_solve(*self, *LU_data, *LU_pivots);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lu_solve_out(tensor *out__, tensor out, tensor self, tensor LU_data, tensor LU_pivots) {
-  PROTECT(
-    auto outputs__ = torch::lu_solve_out(*out, *self, *LU_data, *LU_pivots);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_lu_unpack(tensor *out__, tensor LU_data, tensor LU_pivots, int unpack_data, int unpack_pivots) {
-  PROTECT(
-    auto outputs__ = torch::lu_unpack(*LU_data, *LU_pivots, (bool)unpack_data, (bool)unpack_pivots);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_lu_unpack_out(tensor *out__, tensor P, tensor L, tensor U, tensor LU_data, tensor LU_pivots, int unpack_data, int unpack_pivots) {
-  PROTECT(
-    auto outputs__ = torch::lu_unpack_out(*P, *L, *U, *LU_data, *LU_pivots, (bool)unpack_data, (bool)unpack_pivots);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_margin_ranking_loss(tensor *out__, tensor input1, tensor input2, tensor target, double margin, int64_t reduction) {
-  PROTECT(
-    auto outputs__ = torch::margin_ranking_loss(*input1, *input2, *target, margin, reduction);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_fill(tensor *out__, tensor self, tensor mask, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::masked_fill(*self, *mask, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_fill_(tensor *out__, tensor self, tensor mask, scalar value) {
-  PROTECT(
-    auto outputs__ = self->masked_fill_(*mask, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_fill_scalar_out(tensor *out__, tensor out, tensor self, tensor mask, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::masked_fill_out(*out, *self, *mask, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_fill_tensor(tensor *out__, tensor self, tensor mask, tensor value) {
-  PROTECT(
-    auto outputs__ = torch::masked_fill(*self, *mask, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_fill_tensor_(tensor *out__, tensor self, tensor mask, tensor value) {
-  PROTECT(
-    auto outputs__ = self->masked_fill_(*mask, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_fill_tensor_out(tensor *out__, tensor out, tensor self, tensor mask, tensor value) {
-  PROTECT(
-    auto outputs__ = torch::masked_fill_out(*out, *self, *mask, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_scatter(tensor *out__, tensor self, tensor mask, tensor source) {
-  PROTECT(
-    auto outputs__ = torch::masked_scatter(*self, *mask, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_scatter_(tensor *out__, tensor self, tensor mask, tensor source) {
-  PROTECT(
-    auto outputs__ = self->masked_scatter_(*mask, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_scatter_out(tensor *out__, tensor out, tensor self, tensor mask, tensor source) {
-  PROTECT(
-    auto outputs__ = torch::masked_scatter_out(*out, *self, *mask, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_select(tensor *out__, tensor self, tensor mask) {
-  PROTECT(
-    auto outputs__ = torch::masked_select(*self, *mask);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_select_backward(tensor *out__, tensor grad, tensor input, tensor mask) {
-  PROTECT(
-    auto outputs__ = torch::masked_select_backward(*grad, *input, *mask);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_masked_select_out(tensor *out__, tensor out, tensor self, tensor mask) {
-  PROTECT(
-    auto outputs__ = torch::masked_select_out(*out, *self, *mask);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_matmul(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::matmul(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_matmul_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::matmul_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_matrix_exp(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::matrix_exp(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_matrix_exp_backward(tensor *out__, tensor self, tensor grad) {
-  PROTECT(
-    auto outputs__ = torch::matrix_exp_backward(*self, *grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_matrix_h(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->matrix_H();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_matrix_power(tensor *out__, tensor self, int64_t n) {
-  PROTECT(
-    auto outputs__ = torch::matrix_power(*self, n);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_matrix_power_out(tensor *out__, tensor out, tensor self, int64_t n) {
-  PROTECT(
-    auto outputs__ = torch::matrix_power_out(*out, *self, n);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::max(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_dim(tensor *out__, tensor self, int64_t dim, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::max(*self, dim, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_max_dim_max(tensor *out__, tensor max, tensor max_values, tensor self, int64_t dim, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::max_out(*max, *max_values, *self, dim, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_max_other(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::max(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::max_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool1d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool1d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool1d_with_indices(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool1d_with_indices(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_max_pool2d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool2d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool2d_backward(tensor *out__, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool2d_backward(*grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool2d_backward_out(tensor *out__, tensor out, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool2d_backward_out(*out, *grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool2d_with_indices(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool2d_with_indices(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_max_pool2d_with_indices_backward(tensor *out__, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::max_pool2d_with_indices_backward(*grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode, *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool2d_with_indices_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::max_pool2d_with_indices_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode, *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool2d_with_indices_out(tensor *out__, tensor out, tensor indices, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool2d_with_indices_out(*out, *indices, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_max_pool3d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool3d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool3d_with_indices(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool3d_with_indices(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_max_pool3d_with_indices_backward(tensor *out__, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::max_pool3d_with_indices_backward(*grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode, *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool3d_with_indices_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, tensor indices) {
-  PROTECT(
-    auto outputs__ = torch::max_pool3d_with_indices_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode, *indices);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_pool3d_with_indices_out(tensor *out__, tensor out, tensor indices, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::max_pool3d_with_indices_out(*out, *indices, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_max_unary_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::max_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_unpool2d(tensor *out__, tensor self, tensor indices, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::max_unpool2d(*self, *indices, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_unpool2d_out(tensor *out__, tensor out, tensor self, tensor indices, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::max_unpool2d_out(*out, *self, *indices, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_unpool3d(tensor *out__, tensor self, tensor indices, int64_t *output_size_data, int output_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::max_unpool3d(*self, *indices, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_max_unpool3d_out(tensor *out__, tensor out, tensor self, tensor indices, int64_t *output_size_data, int output_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::max_unpool3d_out(*out, *self, *indices, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_maximum(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::maximum(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_maximum_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::maximum_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mean(tensor *out__, tensor self, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::mean(*self, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mean_dim(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::mean(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mean_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::mean_out(*out, *self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_median(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::median(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_median_dim(tensor *out__, tensor self, int64_t dim, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::median(*self, dim, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_median_dim_values(tensor *out__, tensor values, tensor indices, tensor self, int64_t dim, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::median_out(*values, *indices, *self, dim, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_median_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::median_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-tensor *atg_meshgrid(tensor *tensors_data, int tensors_len) {
-  PROTECT(
-    auto outputs__ = torch::meshgrid(of_carray_tensor(tensors_data, tensors_len));
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-tensor *atg_meshgrid_indexing(tensor *tensors_data, int tensors_len, char * indexing) {
-  PROTECT(
-    auto outputs__ = torch::meshgrid(of_carray_tensor(tensors_data, tensors_len), std::string(indexing));
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg_mh(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->mH();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_min(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::min(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_min_dim(tensor *out__, tensor self, int64_t dim, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::min(*self, dim, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_min_dim_min(tensor *out__, tensor min, tensor min_indices, tensor self, int64_t dim, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::min_out(*min, *min_indices, *self, dim, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_min_other(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::min(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_min_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::min_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_min_unary_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::min_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_minimum(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::minimum(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_minimum_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::minimum_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_miopen_batch_norm(tensor *out__, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double exponential_average_factor, double epsilon) {
-  PROTECT(
-    auto outputs__ = torch::miopen_batch_norm(*input, *weight, (bias ? *bias : torch::Tensor()), (running_mean ? *running_mean : torch::Tensor()), (running_var ? *running_var : torch::Tensor()), (bool)training, exponential_average_factor, epsilon);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_miopen_batch_norm_backward(tensor *out__, tensor input, tensor grad_output, tensor weight, tensor running_mean, tensor running_var, tensor save_mean, tensor save_var, double epsilon) {
-  PROTECT(
-    auto outputs__ = torch::miopen_batch_norm_backward(*input, *grad_output, *weight, (running_mean ? *running_mean : torch::Tensor()), (running_var ? *running_var : torch::Tensor()), (save_mean ? *save_mean : torch::Tensor()), (save_var ? *save_var : torch::Tensor()), epsilon);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_miopen_batch_norm_backward_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor input, tensor grad_output, tensor weight, tensor running_mean, tensor running_var, tensor save_mean, tensor save_var, double epsilon) {
-  PROTECT(
-    auto outputs__ = torch::miopen_batch_norm_backward_out(*out0, *out1, *out2, *input, *grad_output, *weight, (running_mean ? *running_mean : torch::Tensor()), (running_var ? *running_var : torch::Tensor()), (save_mean ? *save_mean : torch::Tensor()), (save_var ? *save_var : torch::Tensor()), epsilon);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_miopen_batch_norm_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double exponential_average_factor, double epsilon) {
-  PROTECT(
-    auto outputs__ = torch::miopen_batch_norm_out(*out0, *out1, *out2, *input, *weight, (bias ? *bias : torch::Tensor()), (running_mean ? *running_mean : torch::Tensor()), (running_var ? *running_var : torch::Tensor()), (bool)training, exponential_average_factor, epsilon);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_miopen_convolution(tensor *out__, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
-  PROTECT(
-    auto outputs__ = torch::miopen_convolution(*self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_miopen_convolution_add_relu(tensor *out__, tensor self, tensor weight, tensor z, scalar alpha, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
-  PROTECT(
-    auto outputs__ = torch::miopen_convolution_add_relu(*self, *weight, *z, *alpha, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_miopen_convolution_out(tensor *out__, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
-  PROTECT(
-    auto outputs__ = torch::miopen_convolution_out(*out, *self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_miopen_convolution_relu(tensor *out__, tensor self, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
-  PROTECT(
-    auto outputs__ = torch::miopen_convolution_relu(*self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_miopen_convolution_transpose(tensor *out__, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
-  PROTECT(
-    auto outputs__ = torch::miopen_convolution_transpose(*self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_miopen_convolution_transpose_out(tensor *out__, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
-  PROTECT(
-    auto outputs__ = torch::miopen_convolution_transpose_out(*out, *self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_miopen_depthwise_convolution(tensor *out__, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
-  PROTECT(
-    auto outputs__ = torch::miopen_depthwise_convolution(*self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_miopen_depthwise_convolution_out(tensor *out__, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic) {
-  PROTECT(
-    auto outputs__ = torch::miopen_depthwise_convolution_out(*out, *self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, (bool)benchmark, (bool)deterministic);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_miopen_rnn(tensor *out__, tensor input, tensor *weight_data, int weight_len, int64_t weight_stride0, tensor hx, tensor cx, int64_t mode, int64_t hidden_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, tensor dropout_state) {
-  PROTECT(
-    auto outputs__ = torch::miopen_rnn(*input, of_carray_tensor(weight_data, weight_len), weight_stride0, *hx, (cx ? *cx : torch::Tensor()), mode, hidden_size, num_layers, (bool)batch_first, dropout, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (dropout_state ? *dropout_state : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-  )
-}
-
-void atg_miopen_rnn_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor input, tensor *weight_data, int weight_len, int64_t weight_stride0, tensor hx, tensor cx, int64_t mode, int64_t hidden_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, tensor dropout_state) {
-  PROTECT(
-    auto outputs__ = torch::miopen_rnn_out(*out0, *out1, *out2, *out3, *out4, *input, of_carray_tensor(weight_data, weight_len), weight_stride0, *hx, (cx ? *cx : torch::Tensor()), mode, hidden_size, num_layers, (bool)batch_first, dropout, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (dropout_state ? *dropout_state : torch::Tensor()));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-    out__[3] = new torch::Tensor(std::get<3>(outputs__));
-    out__[4] = new torch::Tensor(std::get<4>(outputs__));
-  )
-}
-
-void atg_mish(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::mish(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mish_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::mish_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mish_backward(tensor *out__, tensor grad_output, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::mish_backward(*grad_output, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mish_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::mish_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_adaptive_avg_pool2d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_adaptive_avg_pool2d(*self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_adaptive_avg_pool2d_backward(tensor *out__, tensor grad_output, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_adaptive_avg_pool2d_backward(*grad_output, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_adaptive_avg_pool2d_backward_out(tensor *out__, tensor out, tensor grad_output, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_adaptive_avg_pool2d_backward_out(*out, *grad_output, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_adaptive_avg_pool2d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_adaptive_avg_pool2d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_convolution(tensor *out__, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_convolution(*self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_convolution_out(tensor *out__, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_convolution_out(*out, *self, *weight, (bias ? *bias : torch::Tensor()), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_linear(tensor *out__, tensor self, tensor weight, tensor bias) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_linear(*self, *weight, (bias ? *bias : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_linear_backward_input(tensor *out__, int64_t *input_size_data, int input_size_len, tensor grad_output, tensor weight) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_linear_backward_input(torch::IntArrayRef(input_size_data, input_size_len), *grad_output, *weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_linear_backward_input_out(tensor *out__, tensor out, int64_t *input_size_data, int input_size_len, tensor grad_output, tensor weight) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_linear_backward_input_out(*out, torch::IntArrayRef(input_size_data, input_size_len), *grad_output, *weight);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_linear_backward_weights(tensor *out__, tensor grad_output, tensor input, tensor weight, int bias_defined) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_linear_backward_weights(*grad_output, *input, *weight, (bool)bias_defined);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_mkldnn_linear_backward_weights_out(tensor *out__, tensor out0, tensor out1, tensor grad_output, tensor input, tensor weight, int bias_defined) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_linear_backward_weights_out(*out0, *out1, *grad_output, *input, *weight, (bool)bias_defined);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_mkldnn_linear_out(tensor *out__, tensor out, tensor self, tensor weight, tensor bias) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_linear_out(*out, *self, *weight, (bias ? *bias : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_max_pool2d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_max_pool2d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_max_pool2d_backward(tensor *out__, tensor grad_output, tensor output, tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_max_pool2d_backward(*grad_output, *output, *input, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_max_pool2d_backward_out(tensor *out__, tensor out, tensor grad_output, tensor output, tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_max_pool2d_backward_out(*out, *grad_output, *output, *input, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_max_pool2d_out(tensor *out__, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_max_pool2d_out(*out, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_max_pool3d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_max_pool3d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_max_pool3d_backward(tensor *out__, tensor grad_output, tensor output, tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_max_pool3d_backward(*grad_output, *output, *input, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_max_pool3d_backward_out(tensor *out__, tensor out, tensor grad_output, tensor output, tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_max_pool3d_backward_out(*out, *grad_output, *output, *input, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_max_pool3d_out(tensor *out__, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_max_pool3d_out(*out, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_mkldnn_reorder_conv2d_weight(tensor *out__, tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int64_t *input_size_data, int input_size_len) {
-  PROTECT(
-    auto outputs__ = torch::mkldnn_reorder_conv2d_weight(*self, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, input_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(input_size_data, input_size_len)));
torch::mkldnn_reorder_conv2d_weight(*self, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, input_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(input_size_data, input_size_len))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mkldnn_reorder_conv2d_weight_out(tensor *out__, tensor out, tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int64_t *input_size_data, int input_size_len) { - PROTECT( - auto outputs__ = torch::mkldnn_reorder_conv2d_weight_out(*out, *self, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups, input_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(input_size_data, input_size_len))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mkldnn_reorder_conv3d_weight(tensor *out__, tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::mkldnn_reorder_conv3d_weight(*self, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mkldnn_reorder_conv3d_weight_out(tensor *out__, tensor out, tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups) { - PROTECT( - auto outputs__ = torch::mkldnn_reorder_conv3d_weight_out(*out, *self, torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(dilation_data, dilation_len), groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mkldnn_rnn_layer(tensor *out__, tensor input, tensor weight0, tensor weight1, tensor weight2, tensor weight3, tensor hx_, tensor cx_, int reverse, int64_t *batch_sizes_data, int batch_sizes_len, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int bidirectional, int batch_first, int train) { - PROTECT( - auto outputs__ = torch::mkldnn_rnn_layer(*input, *weight0, *weight1, *weight2, *weight3, *hx_, *cx_, (bool)reverse, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), mode, hidden_size, num_layers, (bool)has_biases, (bool)bidirectional, (bool)batch_first, (bool)train); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - ) -} - -void atg_mkldnn_rnn_layer_backward(tensor *out__, tensor input, tensor weight1, tensor weight2, tensor weight3, tensor weight4, tensor hx_, tensor cx_tmp, tensor output, tensor hy_, tensor cy_, tensor grad_output, tensor grad_hy, tensor grad_cy, int reverse, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, int batch_first, tensor workspace) { - PROTECT( - auto outputs__ = torch::mkldnn_rnn_layer_backward(*input, *weight1, *weight2, *weight3, *weight4, *hx_, *cx_tmp, *output, *hy_, *cy_, (grad_output ? *grad_output : torch::Tensor()), (grad_hy ? *grad_hy : torch::Tensor()), (grad_cy ? 
*grad_cy : torch::Tensor()), (bool)reverse, mode, hidden_size, num_layers, (bool)has_biases, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (bool)batch_first, *workspace); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - out__[4] = new torch::Tensor(std::get<4>(outputs__)); - out__[5] = new torch::Tensor(std::get<5>(outputs__)); - out__[6] = new torch::Tensor(std::get<6>(outputs__)); - ) -} - -void atg_mkldnn_rnn_layer_backward_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor out5, tensor out6, tensor input, tensor weight1, tensor weight2, tensor weight3, tensor weight4, tensor hx_, tensor cx_tmp, tensor output, tensor hy_, tensor cy_, tensor grad_output, tensor grad_hy, tensor grad_cy, int reverse, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, int batch_first, tensor workspace) { - PROTECT( - auto outputs__ = torch::mkldnn_rnn_layer_backward_out(*out0, *out1, *out2, *out3, *out4, *out5, *out6, *input, *weight1, *weight2, *weight3, *weight4, *hx_, *cx_tmp, *output, *hy_, *cy_, (grad_output ? *grad_output : torch::Tensor()), (grad_hy ? *grad_hy : torch::Tensor()), (grad_cy ? *grad_cy : torch::Tensor()), (bool)reverse, mode, hidden_size, num_layers, (bool)has_biases, (bool)train, (bool)bidirectional, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), (bool)batch_first, *workspace); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - out__[4] = new torch::Tensor(std::get<4>(outputs__)); - out__[5] = new torch::Tensor(std::get<5>(outputs__)); - out__[6] = new torch::Tensor(std::get<6>(outputs__)); - ) -} - -void atg_mkldnn_rnn_layer_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor out3, tensor input, tensor weight0, tensor weight1, tensor weight2, tensor weight3, tensor hx_, tensor cx_, int reverse, int64_t *batch_sizes_data, int batch_sizes_len, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int bidirectional, int batch_first, int train) { - PROTECT( - auto outputs__ = torch::mkldnn_rnn_layer_out(*out0, *out1, *out2, *out3, *input, *weight0, *weight1, *weight2, *weight3, *hx_, *cx_, (bool)reverse, torch::IntArrayRef(batch_sizes_data, batch_sizes_len), mode, hidden_size, num_layers, (bool)has_biases, (bool)bidirectional, (bool)batch_first, (bool)train); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - out__[3] = new torch::Tensor(std::get<3>(outputs__)); - ) -} - -void atg_mm(tensor *out__, tensor self, tensor mat2) { - PROTECT( - auto outputs__ = torch::mm(*self, *mat2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mm_out(tensor *out__, tensor out, tensor self, tensor mat2) { - PROTECT( - auto outputs__ = torch::mm_out(*out, *self, *mat2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mode(tensor *out__, tensor self, int64_t dim, int keepdim) { - PROTECT( - auto outputs__ = torch::mode(*self, dim, (bool)keepdim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - 
out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_mode_values(tensor *out__, tensor values, tensor indices, tensor self, int64_t dim, int keepdim) { - PROTECT( - auto outputs__ = torch::mode_out(*values, *indices, *self, dim, (bool)keepdim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_moveaxis(tensor *out__, tensor self, int64_t *source_data, int source_len, int64_t *destination_data, int destination_len) { - PROTECT( - auto outputs__ = torch::moveaxis(*self, torch::IntArrayRef(source_data, source_len), torch::IntArrayRef(destination_data, destination_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_moveaxis_int(tensor *out__, tensor self, int64_t source, int64_t destination) { - PROTECT( - auto outputs__ = torch::moveaxis(*self, source, destination); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_movedim(tensor *out__, tensor self, int64_t *source_data, int source_len, int64_t *destination_data, int destination_len) { - PROTECT( - auto outputs__ = torch::movedim(*self, torch::IntArrayRef(source_data, source_len), torch::IntArrayRef(destination_data, destination_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_movedim_int(tensor *out__, tensor self, int64_t source, int64_t destination) { - PROTECT( - auto outputs__ = torch::movedim(*self, source, destination); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mse_loss(tensor *out__, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::mse_loss(*self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mse_loss_backward(tensor *out__, tensor grad_output, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::mse_loss_backward(*grad_output, *self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mse_loss_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::mse_loss_backward_out(*grad_input, *grad_output, *self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mse_loss_out(tensor *out__, tensor out, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::mse_loss_out(*out, *self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_msort(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::msort(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_msort_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::msort_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mt(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = self->mT(); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mul(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::mul(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mul_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->mul_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mul_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::mul_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mul_scalar(tensor *out__, tensor self, scalar 
other) { - PROTECT( - auto outputs__ = torch::mul(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mul_scalar_(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->mul_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mul_scalar_out(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::mul_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multi_margin_loss_backward(tensor *out__, tensor grad_output, tensor self, tensor target, scalar p, scalar margin, tensor weight, int64_t reduction) { - PROTECT( - auto outputs__ = torch::multi_margin_loss_backward(*grad_output, *self, *target, *p, *margin, (weight ? *weight : torch::Tensor()), reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multi_margin_loss_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor target, scalar p, scalar margin, tensor weight, int64_t reduction) { - PROTECT( - auto outputs__ = torch::multi_margin_loss_backward_out(*grad_input, *grad_output, *self, *target, *p, *margin, (weight ? *weight : torch::Tensor()), reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multilabel_margin_loss(tensor *out__, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::multilabel_margin_loss(*self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multilabel_margin_loss_backward(tensor *out__, tensor grad_output, tensor self, tensor target, int64_t reduction, tensor is_target) { - PROTECT( - auto outputs__ = torch::multilabel_margin_loss_backward(*grad_output, *self, *target, reduction, *is_target); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multilabel_margin_loss_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction, tensor is_target) { - PROTECT( - auto outputs__ = torch::multilabel_margin_loss_backward_out(*grad_input, *grad_output, *self, *target, reduction, *is_target); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multilabel_margin_loss_out(tensor *out__, tensor out, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::multilabel_margin_loss_out(*out, *self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multinomial(tensor *out__, tensor self, int64_t num_samples, int replacement) { - PROTECT( - auto outputs__ = torch::multinomial(*self, num_samples, (bool)replacement); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multinomial_out(tensor *out__, tensor out, tensor self, int64_t num_samples, int replacement) { - PROTECT( - auto outputs__ = torch::multinomial_out(*out, *self, num_samples, (bool)replacement); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multiply(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::multiply(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multiply_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->multiply_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multiply_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::multiply_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multiply_scalar(tensor *out__, tensor self, 
scalar other) { - PROTECT( - auto outputs__ = torch::multiply(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_multiply_scalar_(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->multiply_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mv(tensor *out__, tensor self, tensor vec) { - PROTECT( - auto outputs__ = torch::mv(*self, *vec); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mv_out(tensor *out__, tensor out, tensor self, tensor vec) { - PROTECT( - auto outputs__ = torch::mv_out(*out, *self, *vec); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mvlgamma(tensor *out__, tensor self, int64_t p) { - PROTECT( - auto outputs__ = torch::mvlgamma(*self, p); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mvlgamma_(tensor *out__, tensor self, int64_t p) { - PROTECT( - auto outputs__ = self->mvlgamma_(p); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_mvlgamma_out(tensor *out__, tensor out, tensor self, int64_t p) { - PROTECT( - auto outputs__ = torch::mvlgamma_out(*out, *self, p); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nan_to_num(tensor *out__, tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null) { - PROTECT( - auto outputs__ = torch::nan_to_num(*self, nan_null ? c10::nullopt : c10::optional(nan_v), posinf_null ? c10::nullopt : c10::optional(posinf_v), neginf_null ? c10::nullopt : c10::optional(neginf_v)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nan_to_num_(tensor *out__, tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null) { - PROTECT( - auto outputs__ = torch::nan_to_num_(*self, nan_null ? c10::nullopt : c10::optional(nan_v), posinf_null ? c10::nullopt : c10::optional(posinf_v), neginf_null ? c10::nullopt : c10::optional(neginf_v)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nan_to_num_out(tensor *out__, tensor out, tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null) { - PROTECT( - auto outputs__ = torch::nan_to_num_out(*out, *self, nan_null ? c10::nullopt : c10::optional(nan_v), posinf_null ? c10::nullopt : c10::optional(posinf_v), neginf_null ? c10::nullopt : c10::optional(neginf_v)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nanmean(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::nanmean(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nanmean_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::nanmean_out(*out, *self, dim_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nanmedian(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::nanmedian(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nanmedian_dim(tensor *out__, tensor self, int64_t dim, int keepdim) { - PROTECT( - auto outputs__ = torch::nanmedian(*self, dim, (bool)keepdim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_nanmedian_dim_values(tensor *out__, tensor values, tensor indices, tensor self, int64_t dim, int keepdim) { - PROTECT( - auto outputs__ = torch::nanmedian_out(*values, *indices, *self, dim, (bool)keepdim); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_nanmedian_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::nanmedian_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nanquantile(tensor *out__, tensor self, tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) { - PROTECT( - auto outputs__ = torch::nanquantile(*self, *q, dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nanquantile_out(tensor *out__, tensor out, tensor self, tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) { - PROTECT( - auto outputs__ = torch::nanquantile_out(*out, *self, *q, dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nanquantile_scalar(tensor *out__, tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) { - PROTECT( - auto outputs__ = torch::nanquantile(*self, q, dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nanquantile_scalar_out(tensor *out__, tensor out, tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) { - PROTECT( - auto outputs__ = torch::nanquantile_out(*out, *self, q, dim_null ? c10::nullopt : c10::optional(dim_v), (bool)keepdim, std::string(interpolation)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nansum(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::nansum(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nansum_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::nansum_out(*out, *self, dim_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_narrow(tensor *out__, tensor self, int64_t dim, int64_t start, int64_t length) { - PROTECT( - auto outputs__ = torch::narrow(*self, dim, start, length); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_narrow_copy(tensor *out__, tensor self, int64_t dim, int64_t start, int64_t length) { - PROTECT( - auto outputs__ = torch::narrow_copy(*self, dim, start, length); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_narrow_copy_out(tensor *out__, tensor out, tensor self, int64_t dim, int64_t start, int64_t length) { - PROTECT( - auto outputs__ = torch::narrow_copy_out(*out, *self, dim, start, length); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_narrow_tensor(tensor *out__, tensor self, int64_t dim, tensor start, int64_t length) { - PROTECT( - auto outputs__ = torch::narrow(*self, dim, *start, length); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_native_batch_norm(tensor *out__, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps) { - PROTECT( - auto outputs__ = torch::native_batch_norm(*input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), (running_mean ? *running_mean : torch::Tensor()), (running_var ? *running_var : torch::Tensor()), (bool)training, momentum, eps); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg_native_batch_norm_out(tensor *out__, tensor out, tensor save_mean, tensor save_invstd, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps) { - PROTECT( - auto outputs__ = torch::native_batch_norm_out(*out, *save_mean, *save_invstd, *input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), (running_mean ? *running_mean : torch::Tensor()), (running_var ? 
*running_var : torch::Tensor()), (bool)training, momentum, eps); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg_native_channel_shuffle(tensor *out__, tensor self, int64_t groups) { - PROTECT( - auto outputs__ = torch::native_channel_shuffle(*self, groups); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_native_dropout(tensor *out__, tensor input, double p, int train) { - PROTECT( - auto outputs__ = torch::native_dropout(*input, p, (bool)train); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_native_dropout_backward(tensor *out__, tensor grad_output, tensor mask, double scale) { - PROTECT( - auto outputs__ = torch::native_dropout_backward(*grad_output, *mask, scale); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_native_dropout_backward_out(tensor *out__, tensor out, tensor grad_output, tensor mask, double scale) { - PROTECT( - auto outputs__ = torch::native_dropout_backward_out(*out, *grad_output, *mask, scale); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_native_dropout_out(tensor *out__, tensor out0, tensor out1, tensor input, double p, int train) { - PROTECT( - auto outputs__ = torch::native_dropout_out(*out0, *out1, *input, p, (bool)train); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_native_group_norm(tensor *out__, tensor input, tensor weight, tensor bias, int64_t n, int64_t C, int64_t HxW, int64_t group, double eps) { - PROTECT( - auto outputs__ = torch::native_group_norm(*input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), n, C, HxW, group, eps); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg_native_group_norm_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor input, tensor weight, tensor bias, int64_t n, int64_t C, int64_t HxW, int64_t group, double eps) { - PROTECT( - auto outputs__ = torch::native_group_norm_out(*out0, *out1, *out2, *input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), n, C, HxW, group, eps); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg_native_layer_norm(tensor *out__, tensor input, int64_t *normalized_shape_data, int normalized_shape_len, tensor weight, tensor bias, double eps) { - PROTECT( - auto outputs__ = torch::native_layer_norm(*input, torch::IntArrayRef(normalized_shape_data, normalized_shape_len), (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), eps); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg_native_layer_norm_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor input, int64_t *normalized_shape_data, int normalized_shape_len, tensor weight, tensor bias, double eps) { - PROTECT( - auto outputs__ = torch::native_layer_norm_out(*out0, *out1, *out2, *input, torch::IntArrayRef(normalized_shape_data, normalized_shape_len), (weight ? *weight : torch::Tensor()), (bias ? 
*bias : torch::Tensor()), eps); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - out__[2] = new torch::Tensor(std::get<2>(outputs__)); - ) -} - -void atg_native_norm(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::native_norm(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_native_norm_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::native_norm_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_native_norm_scalaropt_dim_dtype(tensor *out__, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::native_norm(*self, *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_native_norm_scalaropt_dim_dtype_out(tensor *out__, tensor out, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::native_norm_out(*out, *self, *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ne(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::ne(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ne_(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->ne_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ne_scalar_out(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::ne_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ne_tensor(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::ne(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ne_tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->ne_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ne_tensor_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::ne_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_neg(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::neg(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_neg_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::neg_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_neg_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::neg_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_negative(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::negative(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_negative_(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::negative_(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_negative_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::negative_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nested_to_padded_tensor(tensor *out__, tensor self, double padding, int64_t *output_size_data, int output_size_len) { - PROTECT( - auto outputs__ = torch::nested_to_padded_tensor(*self, padding, output_size_data == nullptr ? 
c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_empty(tensor *out__, tensor self, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = self->new_empty(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_empty_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len) { - PROTECT( - auto outputs__ = torch::new_empty_out(*out, *self, torch::IntArrayRef(size_data, size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_empty_strided(tensor *out__, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = self->new_empty_strided(torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_empty_strided_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) { - PROTECT( - auto outputs__ = torch::new_empty_strided_out(*out, *self, torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_full(tensor *out__, tensor self, int64_t *size_data, int size_len, scalar fill_value, int options_kind, int options_device) { - PROTECT( - auto outputs__ = self->new_full(torch::IntArrayRef(size_data, size_len), *fill_value, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_full_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len, scalar fill_value) { - PROTECT( - auto outputs__ = torch::new_full_out(*out, *self, torch::IntArrayRef(size_data, size_len), *fill_value); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_ones(tensor *out__, tensor self, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = self->new_ones(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_ones_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len) { - PROTECT( - auto outputs__ = torch::new_ones_out(*out, *self, torch::IntArrayRef(size_data, size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_zeros(tensor *out__, tensor self, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = self->new_zeros(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_new_zeros_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len) { - PROTECT( - auto outputs__ = torch::new_zeros_out(*out, *self, torch::IntArrayRef(size_data, size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nextafter(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::nextafter(*self, *other); - out__[0] = new 
torch::Tensor(outputs__); - ) -} - -void atg_nextafter_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->nextafter_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nextafter_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::nextafter_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nll_loss(tensor *out__, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index) { - PROTECT( - auto outputs__ = torch::nll_loss(*self, *target, (weight ? *weight : torch::Tensor()), reduction, ignore_index); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nll_loss2d(tensor *out__, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index) { - PROTECT( - auto outputs__ = torch::nll_loss2d(*self, *target, (weight ? *weight : torch::Tensor()), reduction, ignore_index); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nll_loss2d_backward(tensor *out__, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, tensor total_weight) { - PROTECT( - auto outputs__ = torch::nll_loss2d_backward(*grad_output, *self, *target, (weight ? *weight : torch::Tensor()), reduction, ignore_index, *total_weight); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nll_loss2d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, tensor total_weight) { - PROTECT( - auto outputs__ = torch::nll_loss2d_backward_out(*grad_input, *grad_output, *self, *target, (weight ? *weight : torch::Tensor()), reduction, ignore_index, *total_weight); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nll_loss2d_out(tensor *out__, tensor out, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index) { - PROTECT( - auto outputs__ = torch::nll_loss2d_out(*out, *self, *target, (weight ? *weight : torch::Tensor()), reduction, ignore_index); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nll_loss_backward(tensor *out__, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, tensor total_weight) { - PROTECT( - auto outputs__ = torch::nll_loss_backward(*grad_output, *self, *target, (weight ? *weight : torch::Tensor()), reduction, ignore_index, *total_weight); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nll_loss_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, tensor total_weight) { - PROTECT( - auto outputs__ = torch::nll_loss_backward_out(*grad_input, *grad_output, *self, *target, (weight ? *weight : torch::Tensor()), reduction, ignore_index, *total_weight); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nll_loss_nd(tensor *out__, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index) { - PROTECT( - auto outputs__ = torch::nll_loss_nd(*self, *target, (weight ? *weight : torch::Tensor()), reduction, ignore_index); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nll_loss_out(tensor *out__, tensor out, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index) { - PROTECT( - auto outputs__ = torch::nll_loss_out(*out, *self, *target, (weight ? 
*weight : torch::Tensor()), reduction, ignore_index); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nonzero(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::nonzero(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -tensor *atg_nonzero_numpy(tensor self) { - PROTECT( - auto outputs__ = torch::nonzero_numpy(*self); - int sz = outputs__.size(); - torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*)); - for (int i = 0; i < sz; ++i) - out__[i] = new torch::Tensor(outputs__[i]); - out__[sz] = nullptr; - return out__; - ) -} - -void atg_nonzero_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::nonzero_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nonzero_static(tensor *out__, tensor self, int64_t size, int64_t fill_value) { - PROTECT( - auto outputs__ = torch::nonzero_static(*self, size, fill_value); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nonzero_static_out(tensor *out__, tensor out, tensor self, int64_t size, int64_t fill_value) { - PROTECT( - auto outputs__ = torch::nonzero_static_out(*out, *self, size, fill_value); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_norm(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::norm(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_norm_dtype_out(tensor *out__, tensor out, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::norm_out(*out, *self, *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_norm_except_dim(tensor *out__, tensor v, int64_t pow, int64_t dim) { - PROTECT( - auto outputs__ = torch::norm_except_dim(*v, pow, dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_norm_out(tensor *out__, tensor out, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim) { - PROTECT( - auto outputs__ = torch::norm_out(*out, *self, *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_norm_scalar_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::norm_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_norm_scalaropt_dim(tensor *out__, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim) { - PROTECT( - auto outputs__ = torch::norm(*self, *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_norm_scalaropt_dim_dtype(tensor *out__, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::norm(*self, *p, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_norm_scalaropt_dtype(tensor *out__, tensor self, scalar p, int dtype) { - PROTECT( - auto outputs__ = torch::norm(*self, *p, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_norm_scalaropt_dtype_out(tensor *out__, tensor out, tensor self, scalar p, int dtype) { - PROTECT( - auto outputs__ = torch::norm_out(*out, *self, *p, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_normal_(tensor *out__, tensor self, double mean, double std) { - PROTECT( - auto outputs__ = self->normal_(mean, 
std); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_normal_functional(tensor *out__, tensor self, double mean, double std) { - PROTECT( - auto outputs__ = torch::normal_functional(*self, mean, std); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_not_equal(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::not_equal(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_not_equal_(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = self->not_equal_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_not_equal_scalar_out(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::not_equal_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_not_equal_tensor(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::not_equal(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_not_equal_tensor_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = self->not_equal_(*other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_not_equal_tensor_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::not_equal_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nuclear_norm(tensor *out__, tensor self, int keepdim) { - PROTECT( - auto outputs__ = torch::nuclear_norm(*self, (bool)keepdim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nuclear_norm_dim(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim) { - PROTECT( - auto outputs__ = torch::nuclear_norm(*self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nuclear_norm_dim_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim) { - PROTECT( - auto outputs__ = torch::nuclear_norm_out(*out, *self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_nuclear_norm_out(tensor *out__, tensor out, tensor self, int keepdim) { - PROTECT( - auto outputs__ = torch::nuclear_norm_out(*out, *self, (bool)keepdim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_numpy_t(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = self->numpy_T(); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_one_hot(tensor *out__, tensor self, int64_t num_classes) { - PROTECT( - auto outputs__ = torch::one_hot(*self, num_classes); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ones(tensor *out__, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::ones(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ones_like(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::ones_like(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ones_like_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::ones_like_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ones_out(tensor *out__, tensor out, int64_t *size_data, int size_len) { - PROTECT( - auto outputs__ = torch::ones_out(*out, torch::IntArrayRef(size_data, size_len)); - out__[0] 
= new torch::Tensor(outputs__); - ) -} - -void atg_orgqr(tensor *out__, tensor self, tensor input2) { - PROTECT( - auto outputs__ = torch::orgqr(*self, *input2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_orgqr_out(tensor *out__, tensor out, tensor self, tensor input2) { - PROTECT( - auto outputs__ = torch::orgqr_out(*out, *self, *input2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ormqr(tensor *out__, tensor self, tensor input2, tensor input3, int left, int transpose) { - PROTECT( - auto outputs__ = torch::ormqr(*self, *input2, *input3, (bool)left, (bool)transpose); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_ormqr_out(tensor *out__, tensor out, tensor self, tensor input2, tensor input3, int left, int transpose) { - PROTECT( - auto outputs__ = torch::ormqr_out(*out, *self, *input2, *input3, (bool)left, (bool)transpose); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_outer(tensor *out__, tensor self, tensor vec2) { - PROTECT( - auto outputs__ = torch::outer(*self, *vec2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_outer_out(tensor *out__, tensor out, tensor self, tensor vec2) { - PROTECT( - auto outputs__ = torch::outer_out(*out, *self, *vec2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -int64_t atg_output_nr(tensor self) { - PROTECT( - return self->output_nr(); - ) - return 0; -} - -void atg_pad(tensor *out__, tensor self, int64_t *pad_data, int pad_len, char * mode, double value_v, int value_null) { - PROTECT( - auto outputs__ = torch::pad(*self, torch::IntArrayRef(pad_data, pad_len), std::string(mode), value_null ? c10::nullopt : c10::optional(value_v)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pad_sequence(tensor *out__, tensor *sequences_data, int sequences_len, int batch_first, double padding_value) { - PROTECT( - auto outputs__ = torch::pad_sequence(of_carray_tensor(sequences_data, sequences_len), (bool)batch_first, padding_value); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pairwise_distance(tensor *out__, tensor x1, tensor x2, double p, double eps, int keepdim) { - PROTECT( - auto outputs__ = torch::pairwise_distance(*x1, *x2, p, eps, (bool)keepdim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pdist(tensor *out__, tensor self, double p) { - PROTECT( - auto outputs__ = torch::pdist(*self, p); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_permute(tensor *out__, tensor self, int64_t *dims_data, int dims_len) { - PROTECT( - auto outputs__ = torch::permute(*self, torch::IntArrayRef(dims_data, dims_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_permute_copy(tensor *out__, tensor self, int64_t *dims_data, int dims_len) { - PROTECT( - auto outputs__ = torch::permute_copy(*self, torch::IntArrayRef(dims_data, dims_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_permute_copy_out(tensor *out__, tensor out, tensor self, int64_t *dims_data, int dims_len) { - PROTECT( - auto outputs__ = torch::permute_copy_out(*out, *self, torch::IntArrayRef(dims_data, dims_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pin_memory(tensor *out__, tensor self, int device) { - PROTECT( - auto outputs__ = self->pin_memory(device_of_int(device)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pinverse(tensor *out__, tensor self, double rcond) { - PROTECT( - auto outputs__ = torch::pinverse(*self, rcond); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void 
atg_pixel_shuffle(tensor *out__, tensor self, int64_t upscale_factor) { - PROTECT( - auto outputs__ = torch::pixel_shuffle(*self, upscale_factor); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pixel_shuffle_out(tensor *out__, tensor out, tensor self, int64_t upscale_factor) { - PROTECT( - auto outputs__ = torch::pixel_shuffle_out(*out, *self, upscale_factor); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pixel_unshuffle(tensor *out__, tensor self, int64_t downscale_factor) { - PROTECT( - auto outputs__ = torch::pixel_unshuffle(*self, downscale_factor); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pixel_unshuffle_out(tensor *out__, tensor out, tensor self, int64_t downscale_factor) { - PROTECT( - auto outputs__ = torch::pixel_unshuffle_out(*out, *self, downscale_factor); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_poisson(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::poisson(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_poisson_nll_loss(tensor *out__, tensor input, tensor target, int log_input, int full, double eps, int64_t reduction) { - PROTECT( - auto outputs__ = torch::poisson_nll_loss(*input, *target, (bool)log_input, (bool)full, eps, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_poisson_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::poisson_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_polar(tensor *out__, tensor abs, tensor angle) { - PROTECT( - auto outputs__ = torch::polar(*abs, *angle); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_polar_out(tensor *out__, tensor out, tensor abs, tensor angle) { - PROTECT( - auto outputs__ = torch::polar_out(*out, *abs, *angle); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_polygamma(tensor *out__, int64_t n, tensor self) { - PROTECT( - auto outputs__ = torch::polygamma(n, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_polygamma_(tensor *out__, tensor self, int64_t n) { - PROTECT( - auto outputs__ = self->polygamma_(n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_polygamma_out(tensor *out__, tensor out, int64_t n, tensor self) { - PROTECT( - auto outputs__ = torch::polygamma_out(*out, n, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_positive(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::positive(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pow(tensor *out__, tensor self, tensor exponent) { - PROTECT( - auto outputs__ = torch::pow(*self, *exponent); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pow_(tensor *out__, tensor self, scalar exponent) { - PROTECT( - auto outputs__ = self->pow_(*exponent); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pow_scalar(tensor *out__, scalar self, tensor exponent) { - PROTECT( - auto outputs__ = torch::pow(*self, *exponent); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pow_scalar_out(tensor *out__, tensor out, scalar self, tensor exponent) { - PROTECT( - auto outputs__ = torch::pow_out(*out, *self, *exponent); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pow_tensor_(tensor *out__, tensor self, tensor exponent) { - PROTECT( - auto outputs__ = self->pow_(*exponent); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pow_tensor_scalar(tensor *out__, tensor self, scalar exponent) { - PROTECT( - 
auto outputs__ = torch::pow(*self, *exponent); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pow_tensor_scalar_out(tensor *out__, tensor out, tensor self, scalar exponent) { - PROTECT( - auto outputs__ = torch::pow_out(*out, *self, *exponent); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_pow_tensor_tensor_out(tensor *out__, tensor out, tensor self, tensor exponent) { - PROTECT( - auto outputs__ = torch::pow_out(*out, *self, *exponent); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_prelu(tensor *out__, tensor self, tensor weight) { - PROTECT( - auto outputs__ = torch::prelu(*self, *weight); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_prod(tensor *out__, tensor self, int dtype) { - PROTECT( - auto outputs__ = torch::prod(*self, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_prod_dim_int(tensor *out__, tensor self, int64_t dim, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::prod(*self, dim, (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_prod_int_out(tensor *out__, tensor out, tensor self, int64_t dim, int keepdim, int dtype) { - PROTECT( - auto outputs__ = torch::prod_out(*out, *self, dim, (bool)keepdim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_prod_out(tensor *out__, tensor out, tensor self, int dtype) { - PROTECT( - auto outputs__ = torch::prod_out(*out, *self, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_put(tensor *out__, tensor self, tensor index, tensor source, int accumulate) { - PROTECT( - auto outputs__ = torch::put(*self, *index, *source, (bool)accumulate); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_put_(tensor *out__, tensor self, tensor index, tensor source, int accumulate) { - PROTECT( - auto outputs__ = self->put_(*index, *source, (bool)accumulate); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_put_out(tensor *out__, tensor out, tensor self, tensor index, tensor source, int accumulate) { - PROTECT( - auto outputs__ = torch::put_out(*out, *self, *index, *source, (bool)accumulate); - out__[0] = new torch::Tensor(outputs__); - ) -} - -int64_t atg_q_per_channel_axis(tensor self) { - PROTECT( - return torch::q_per_channel_axis(*self); - ) - return 0; -} - -void atg_q_per_channel_scales(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::q_per_channel_scales(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_q_per_channel_scales_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::q_per_channel_scales_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_q_per_channel_zero_points(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::q_per_channel_zero_points(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_q_per_channel_zero_points_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::q_per_channel_zero_points_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -double atg_q_scale(tensor self) { - PROTECT( - return torch::q_scale(*self); - ) - return 0; -} - -int64_t atg_q_zero_point(tensor self) { - PROTECT( - return torch::q_zero_point(*self); - ) - return 0; -} - -void atg_qr(tensor *out__, tensor self, int some) { - PROTECT( - auto outputs__ = torch::qr(*self, (bool)some); - out__[0] = new 
torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_qr_q(tensor *out__, tensor Q, tensor R, tensor self, int some) {
-  PROTECT(
-    auto outputs__ = torch::qr_out(*Q, *R, *self, (bool)some);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_quantile(tensor *out__, tensor self, tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) {
-  PROTECT(
-    auto outputs__ = torch::quantile(*self, *q, dim_null ? c10::nullopt : c10::optional<int64_t>(dim_v), (bool)keepdim, std::string(interpolation));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantile_out(tensor *out__, tensor out, tensor self, tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) {
-  PROTECT(
-    auto outputs__ = torch::quantile_out(*out, *self, *q, dim_null ? c10::nullopt : c10::optional<int64_t>(dim_v), (bool)keepdim, std::string(interpolation));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantile_scalar(tensor *out__, tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) {
-  PROTECT(
-    auto outputs__ = torch::quantile(*self, q, dim_null ? c10::nullopt : c10::optional<int64_t>(dim_v), (bool)keepdim, std::string(interpolation));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantile_scalar_out(tensor *out__, tensor out, tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation) {
-  PROTECT(
-    auto outputs__ = torch::quantile_out(*out, *self, q, dim_null ? c10::nullopt : c10::optional<int64_t>(dim_v), (bool)keepdim, std::string(interpolation));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantize_per_channel(tensor *out__, tensor self, tensor scales, tensor zero_points, int64_t axis, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::quantize_per_channel(*self, *scales, *zero_points, axis, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantize_per_channel_out(tensor *out__, tensor out, tensor self, tensor scales, tensor zero_points, int64_t axis, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::quantize_per_channel_out(*out, *self, *scales, *zero_points, axis, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantize_per_tensor(tensor *out__, tensor self, double scale, int64_t zero_point, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::quantize_per_tensor(*self, scale, zero_point, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantize_per_tensor_dynamic(tensor *out__, tensor self, int dtype, int reduce_range) {
-  PROTECT(
-    auto outputs__ = torch::quantize_per_tensor_dynamic(*self, torch::ScalarType(dtype), (bool)reduce_range);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantize_per_tensor_dynamic_out(tensor *out__, tensor out, tensor self, int dtype, int reduce_range) {
-  PROTECT(
-    auto outputs__ = torch::quantize_per_tensor_dynamic_out(*out, *self, torch::ScalarType(dtype), (bool)reduce_range);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantize_per_tensor_out(tensor *out__, tensor out, tensor self, double scale, int64_t zero_point, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::quantize_per_tensor_out(*out, *self, scale, zero_point, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantize_per_tensor_tensor_qparams(tensor *out__, tensor self, tensor scale, tensor zero_point, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::quantize_per_tensor(*self, *scale, *zero_point, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantize_per_tensor_tensor_qparams_out(tensor *out__, tensor out, tensor self, tensor scale, tensor zero_point, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::quantize_per_tensor_out(*out, *self, *scale, *zero_point, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-tensor *atg_quantize_per_tensor_tensors(tensor *tensors_data, int tensors_len, tensor scales, tensor zero_points, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::quantize_per_tensor(of_carray_tensor(tensors_data, tensors_len), *scales, *zero_points, torch::ScalarType(dtype));
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg_quantize_per_tensor_tensors_out(tensor *out_data, int out_len, tensor *tensors_data, int tensors_len, tensor scales, tensor zero_points, int dtype) {
-  PROTECT(
-    torch::quantize_per_tensor_out(of_carray_tensor(out_data, out_len), of_carray_tensor(tensors_data, tensors_len), *scales, *zero_points, torch::ScalarType(dtype));
-  )
-}
-
-void atg_quantized_batch_norm(tensor *out__, tensor input, tensor weight, tensor bias, tensor mean, tensor var, double eps, double output_scale, int64_t output_zero_point) {
-  PROTECT(
-    auto outputs__ = torch::quantized_batch_norm(*input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), *mean, *var, eps, output_scale, output_zero_point);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_batch_norm_out(tensor *out__, tensor out, tensor input, tensor weight, tensor bias, tensor mean, tensor var, double eps, double output_scale, int64_t output_zero_point) {
-  PROTECT(
-    auto outputs__ = torch::quantized_batch_norm_out(*out, *input, (weight ? *weight : torch::Tensor()), (bias ? *bias : torch::Tensor()), *mean, *var, eps, output_scale, output_zero_point);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_gru_cell(tensor *out__, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh, tensor packed_ih, tensor packed_hh, tensor col_offsets_ih, tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh) {
-  PROTECT(
-    auto outputs__ = torch::quantized_gru_cell(*input, *hx, *w_ih, *w_hh, *b_ih, *b_hh, *packed_ih, *packed_hh, *col_offsets_ih, *col_offsets_hh, *scale_ih, *scale_hh, *zero_point_ih, *zero_point_hh);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_lstm_cell(tensor *out__, tensor input, tensor *hx_data, int hx_len, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh, tensor packed_ih, tensor packed_hh, tensor col_offsets_ih, tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh) {
-  PROTECT(
-    auto outputs__ = torch::quantized_lstm_cell(*input, of_carray_tensor(hx_data, hx_len), *w_ih, *w_hh, *b_ih, *b_hh, *packed_ih, *packed_hh, *col_offsets_ih, *col_offsets_hh, *scale_ih, *scale_hh, *zero_point_ih, *zero_point_hh);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_quantized_max_pool1d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::quantized_max_pool1d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_max_pool1d_out(tensor *out__, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::quantized_max_pool1d_out(*out, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_max_pool2d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::quantized_max_pool2d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_max_pool2d_out(tensor *out__, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::quantized_max_pool2d_out(*out, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_max_pool3d(tensor *out__, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::quantized_max_pool3d(*self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_max_pool3d_out(tensor *out__, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode) {
-  PROTECT(
-    auto outputs__ = torch::quantized_max_pool3d_out(*out, *self, torch::IntArrayRef(kernel_size_data, kernel_size_len), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len), (bool)ceil_mode);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_rnn_relu_cell(tensor *out__, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh, tensor packed_ih, tensor packed_hh, tensor col_offsets_ih, tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh) {
-  PROTECT(
-    auto outputs__ = torch::quantized_rnn_relu_cell(*input, *hx, *w_ih, *w_hh, *b_ih, *b_hh, *packed_ih, *packed_hh, *col_offsets_ih, *col_offsets_hh, *scale_ih, *scale_hh, *zero_point_ih, *zero_point_hh);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_quantized_rnn_tanh_cell(tensor *out__, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh, tensor packed_ih, tensor packed_hh, tensor col_offsets_ih, tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh) {
-  PROTECT(
-    auto outputs__ = torch::quantized_rnn_tanh_cell(*input, *hx, *w_ih, *w_hh, *b_ih, *b_hh, *packed_ih, *packed_hh, *col_offsets_ih, *col_offsets_hh, *scale_ih, *scale_hh, *zero_point_ih, *zero_point_hh);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rad2deg(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::rad2deg(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rad2deg_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::rad2deg_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rad2deg_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::rad2deg_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rand(tensor *out__, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::rand(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rand_like(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::rand_like(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rand_like_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::rand_like_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rand_out(tensor *out__, tensor out, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::rand_out(*out, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randint(tensor *out__, int64_t high, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::randint(high, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randint_like(tensor *out__, tensor self, int64_t high) {
-  PROTECT(
-    auto outputs__ = torch::randint_like(*self, high);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randint_like_low_dtype(tensor *out__, tensor self, int64_t low, int64_t high) {
-  PROTECT(
-    auto outputs__ = torch::randint_like(*self, low, high);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randint_like_low_dtype_out(tensor *out__, tensor out, tensor self, int64_t low, int64_t high) {
-  PROTECT(
-    auto outputs__ = torch::randint_like_out(*out, *self, low, high);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randint_like_out(tensor *out__, tensor out, tensor self, int64_t high) {
-  PROTECT(
-    auto outputs__ = torch::randint_like_out(*out, *self, high);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randint_low(tensor *out__, int64_t low, int64_t high, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::randint(low, high, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randint_low_out(tensor *out__, tensor out, int64_t low, int64_t high, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::randint_out(*out, low, high, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randint_out(tensor *out__, tensor out, int64_t high, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::randint_out(*out, high, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randn(tensor *out__, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::randn(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randn_like(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::randn_like(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randn_like_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::randn_like_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randn_out(tensor *out__, tensor out, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::randn_out(*out, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_random(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::random(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_random_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->random_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_random_from(tensor *out__, tensor self, int64_t from, int64_t to_v, int to_null) {
-  PROTECT(
-    auto outputs__ = torch::random(*self, from, to_null ? c10::nullopt : c10::optional<int64_t>(to_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_random_from_(tensor *out__, tensor self, int64_t from, int64_t to_v, int to_null) {
-  PROTECT(
-    auto outputs__ = self->random_(from, to_null ? c10::nullopt : c10::optional<int64_t>(to_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_random_from_out(tensor *out__, tensor out, tensor self, int64_t from, int64_t to_v, int to_null) {
-  PROTECT(
-    auto outputs__ = torch::random_out(*out, *self, from, to_null ? c10::nullopt : c10::optional<int64_t>(to_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_random_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::random_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_random_to(tensor *out__, tensor self, int64_t to) {
-  PROTECT(
-    auto outputs__ = torch::random(*self, to);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_random_to_(tensor *out__, tensor self, int64_t to) {
-  PROTECT(
-    auto outputs__ = self->random_(to);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_random_to_out(tensor *out__, tensor out, tensor self, int64_t to) {
-  PROTECT(
-    auto outputs__ = torch::random_out(*out, *self, to);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randperm(tensor *out__, int64_t n, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::randperm(n, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_randperm_out(tensor *out__, tensor out, int64_t n) {
-  PROTECT(
-    auto outputs__ = torch::randperm_out(*out, n);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_range(tensor *out__, scalar start, scalar end, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::range(*start, *end, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_range_out(tensor *out__, tensor out, scalar start, scalar end) {
-  PROTECT(
-    auto outputs__ = torch::range_out(*out, *start, *end);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_range_out_(tensor *out__, tensor out, scalar start, scalar end) {
-  PROTECT(
-    auto outputs__ = torch::range_out(*out, *start, *end);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_range_step(tensor *out__, scalar start, scalar end, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::range(*start, *end, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_ravel(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::ravel(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_real(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::real(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reciprocal(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::reciprocal(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reciprocal_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::reciprocal_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reciprocal_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::reciprocal_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad1d(tensor *out__, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad1d(*self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad1d_backward(tensor *out__, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad1d_backward(*grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad1d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad1d_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad1d_out(tensor *out__, tensor out, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad1d_out(*out, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad2d(tensor *out__, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad2d(*self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad2d_backward(tensor *out__, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad2d_backward(*grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad2d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad2d_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad2d_out(tensor *out__, tensor out, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad2d_out(*out, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad3d(tensor *out__, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad3d(*self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad3d_backward(tensor *out__, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad3d_backward(*grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad3d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad3d_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reflection_pad3d_out(tensor *out__, tensor out, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::reflection_pad3d_out(*out, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_relu(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::relu(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_relu6(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::relu6(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_relu6_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::relu6_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_relu_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::relu_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_relu_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::relu_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_remainder(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::remainder(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_remainder_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->remainder_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_remainder_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::remainder_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_remainder_scalar_tensor(tensor *out__, scalar self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::remainder(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_remainder_scalar_tensor_out(tensor *out__, tensor out, scalar self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::remainder_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_remainder_tensor(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::remainder(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_remainder_tensor_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->remainder_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_remainder_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::remainder_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_renorm(tensor *out__, tensor self, scalar p, int64_t dim, scalar maxnorm) {
-  PROTECT(
-    auto outputs__ = torch::renorm(*self, *p, dim, *maxnorm);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_renorm_(tensor *out__, tensor self, scalar p, int64_t dim, scalar maxnorm) {
-  PROTECT(
-    auto outputs__ = self->renorm_(*p, dim, *maxnorm);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_renorm_out(tensor *out__, tensor out, tensor self, scalar p, int64_t dim, scalar maxnorm) {
-  PROTECT(
-    auto outputs__ = torch::renorm_out(*out, *self, *p, dim, *maxnorm);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_repeat(tensor *out__, tensor self, int64_t *repeats_data, int repeats_len) {
-  PROTECT(
-    auto outputs__ = self->repeat(torch::IntArrayRef(repeats_data, repeats_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_repeat_interleave(tensor *out__, tensor repeats, int64_t output_size_v, int output_size_null) {
-  PROTECT(
-    auto outputs__ = torch::repeat_interleave(*repeats, output_size_null ? c10::nullopt : c10::optional<int64_t>(output_size_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_repeat_interleave_self_int(tensor *out__, tensor self, int64_t repeats, int64_t dim_v, int dim_null, int64_t output_size_v, int output_size_null) {
-  PROTECT(
-    auto outputs__ = torch::repeat_interleave(*self, repeats, dim_null ? c10::nullopt : c10::optional<int64_t>(dim_v), output_size_null ? c10::nullopt : c10::optional<int64_t>(output_size_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_repeat_interleave_self_tensor(tensor *out__, tensor self, tensor repeats, int64_t dim_v, int dim_null, int64_t output_size_v, int output_size_null) {
-  PROTECT(
-    auto outputs__ = torch::repeat_interleave(*self, *repeats, dim_null ? c10::nullopt : c10::optional<int64_t>(dim_v), output_size_null ? c10::nullopt : c10::optional<int64_t>(output_size_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_repeat_interleave_tensor_out(tensor *out__, tensor out, tensor repeats, int64_t output_size_v, int output_size_null) {
-  PROTECT(
-    auto outputs__ = torch::repeat_interleave_out(*out, *repeats, output_size_null ? c10::nullopt : c10::optional<int64_t>(output_size_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_repeat_out(tensor *out__, tensor out, tensor self, int64_t *repeats_data, int repeats_len) {
-  PROTECT(
-    auto outputs__ = torch::repeat_out(*out, *self, torch::IntArrayRef(repeats_data, repeats_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad1d(tensor *out__, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad1d(*self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad1d_backward(tensor *out__, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad1d_backward(*grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad1d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad1d_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad1d_out(tensor *out__, tensor out, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad1d_out(*out, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad2d(tensor *out__, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad2d(*self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad2d_backward(tensor *out__, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad2d_backward(*grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad2d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad2d_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad2d_out(tensor *out__, tensor out, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad2d_out(*out, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad3d(tensor *out__, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad3d(*self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad3d_backward(tensor *out__, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad3d_backward(*grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad3d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad3d_backward_out(*grad_input, *grad_output, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_replication_pad3d_out(tensor *out__, tensor out, tensor self, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::replication_pad3d_out(*out, *self, torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_requires_grad_(tensor *out__, tensor self, int requires_grad) {
-  PROTECT(
-    auto outputs__ = self->requires_grad_((bool)requires_grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reshape(tensor *out__, tensor self, int64_t *shape_data, int shape_len) {
-  PROTECT(
-    auto outputs__ = torch::reshape(*self, torch::IntArrayRef(shape_data, shape_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_reshape_as(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->reshape_as(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resize(tensor *out__, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::resize(*self, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resize_(tensor *out__, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = self->resize_(torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resize_as(tensor *out__, tensor self, tensor the_template) {
-  PROTECT(
-    auto outputs__ = torch::resize_as(*self, *the_template);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resize_as_(tensor *out__, tensor self, tensor the_template) {
-  PROTECT(
-    auto outputs__ = torch::resize_as_(*self, *the_template);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resize_as_out(tensor *out__, tensor out, tensor self, tensor the_template) {
-  PROTECT(
-    auto outputs__ = torch::resize_as_out(*out, *self, *the_template);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resize_as_sparse(tensor *out__, tensor self, tensor the_template) {
-  PROTECT(
-    auto outputs__ = torch::resize_as_sparse(*self, *the_template);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resize_as_sparse_(tensor *out__, tensor self, tensor the_template) {
-  PROTECT(
-    auto outputs__ = torch::resize_as_sparse_(*self, *the_template);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resize_as_sparse_out(tensor *out__, tensor out, tensor self, tensor the_template) {
-  PROTECT(
-    auto outputs__ = torch::resize_as_sparse_out(*out, *self, *the_template);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resize_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::resize_out(*out, *self, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resolve_conj(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::resolve_conj(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_resolve_neg(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::resolve_neg(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int atg_retains_grad(tensor self) {
-  PROTECT(
-    return self->retains_grad();
-  )
-  return 0;
-}
-
-void atg_rnn_relu(tensor *out__, tensor input, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
-  PROTECT(
-    auto outputs__ = torch::rnn_relu(*input, *hx, of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_rnn_relu_cell(tensor *out__, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh) {
-  PROTECT(
-    auto outputs__ = torch::rnn_relu_cell(*input, *hx, *w_ih, *w_hh, (b_ih ? *b_ih : torch::Tensor()), (b_hh ? *b_hh : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rnn_relu_data(tensor *out__, tensor data, tensor batch_sizes, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional) {
-  PROTECT(
-    auto outputs__ = torch::rnn_relu(*data, *batch_sizes, *hx, of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_rnn_tanh(tensor *out__, tensor input, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first) {
-  PROTECT(
-    auto outputs__ = torch::rnn_tanh(*input, *hx, of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional, (bool)batch_first);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_rnn_tanh_cell(tensor *out__, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh) {
-  PROTECT(
-    auto outputs__ = torch::rnn_tanh_cell(*input, *hx, *w_ih, *w_hh, (b_ih ? *b_ih : torch::Tensor()), (b_hh ? *b_hh : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rnn_tanh_data(tensor *out__, tensor data, tensor batch_sizes, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional) {
-  PROTECT(
-    auto outputs__ = torch::rnn_tanh(*data, *batch_sizes, *hx, of_carray_tensor(params_data, params_len), (bool)has_biases, num_layers, dropout, (bool)train, (bool)bidirectional);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_roll(tensor *out__, tensor self, int64_t *shifts_data, int shifts_len, int64_t *dims_data, int dims_len) {
-  PROTECT(
-    auto outputs__ = torch::roll(*self, torch::IntArrayRef(shifts_data, shifts_len), torch::IntArrayRef(dims_data, dims_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_roll_out(tensor *out__, tensor out, tensor self, int64_t *shifts_data, int shifts_len, int64_t *dims_data, int dims_len) {
-  PROTECT(
-    auto outputs__ = torch::roll_out(*out, *self, torch::IntArrayRef(shifts_data, shifts_len), torch::IntArrayRef(dims_data, dims_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rot90(tensor *out__, tensor self, int64_t k, int64_t *dims_data, int dims_len) {
-  PROTECT(
-    auto outputs__ = torch::rot90(*self, k, torch::IntArrayRef(dims_data, dims_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rot90_out(tensor *out__, tensor out, tensor self, int64_t k, int64_t *dims_data, int dims_len) {
-  PROTECT(
-    auto outputs__ = torch::rot90_out(*out, *self, k, torch::IntArrayRef(dims_data, dims_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_round(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::round(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_round_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::round_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_round_decimals(tensor *out__, tensor self, int64_t decimals) {
-  PROTECT(
-    auto outputs__ = torch::round(*self, decimals);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_round_decimals_(tensor *out__, tensor self, int64_t decimals) {
-  PROTECT(
-    auto outputs__ = torch::round_(*self, decimals);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_round_decimals_out(tensor *out__, tensor out, tensor self, int64_t decimals) {
-  PROTECT(
-    auto outputs__ = torch::round_out(*out, *self, decimals);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_round_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::round_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_row_indices(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->row_indices();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_row_indices_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::row_indices_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_row_indices_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::row_indices_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_row_stack(tensor *out__, tensor *tensors_data, int tensors_len) {
-  PROTECT(
-    auto outputs__ = torch::row_stack(of_carray_tensor(tensors_data, tensors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_row_stack_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len) {
-  PROTECT(
-    auto outputs__ = torch::row_stack_out(*out, of_carray_tensor(tensors_data, tensors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rrelu(tensor *out__, tensor self, int training) {
-  PROTECT(
-    auto outputs__ = torch::rrelu(*self, (bool)training);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rrelu_(tensor *out__, tensor self, int training) {
-  PROTECT(
-    auto outputs__ = torch::rrelu_(*self, (bool)training);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rrelu_with_noise(tensor *out__, tensor self, tensor noise, int training) {
-  PROTECT(
-    auto outputs__ = torch::rrelu_with_noise(*self, *noise, (bool)training);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rrelu_with_noise_(tensor *out__, tensor self, tensor noise, int training) {
-  PROTECT(
-    auto outputs__ = torch::rrelu_with_noise_(*self, *noise, (bool)training);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rrelu_with_noise_backward(tensor *out__, tensor grad_output, tensor self, tensor noise, scalar lower, scalar upper, int training, int self_is_result) {
-  PROTECT(
-    auto outputs__ = torch::rrelu_with_noise_backward(*grad_output, *self, *noise, *lower, *upper, (bool)training, (bool)self_is_result);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rrelu_with_noise_backward_out(tensor *out__, tensor out, tensor grad_output, tensor self, tensor noise, scalar lower, scalar upper, int training, int self_is_result) {
-  PROTECT(
-    auto outputs__ = torch::rrelu_with_noise_backward_out(*out, *grad_output, *self, *noise, *lower, *upper, (bool)training, (bool)self_is_result);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rrelu_with_noise_out(tensor *out__, tensor out, tensor self, tensor noise, int training) {
-  PROTECT(
-    auto outputs__ = torch::rrelu_with_noise_out(*out, *self, *noise, (bool)training);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rsqrt(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::rsqrt(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rsqrt_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::rsqrt_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rsqrt_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::rsqrt_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rsub(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::rsub(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rsub_scalar(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::rsub(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rsub_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::rsub_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_rsub_tensor_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::rsub_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scalar_tensor(tensor *out__, scalar s, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::scalar_tensor(*s, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scalar_tensor_out(tensor *out__, tensor out, scalar s) {
-  PROTECT(
-    auto outputs__ = torch::scalar_tensor_out(*out, *s);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scaled_dot_product_attention(tensor *out__, tensor query, tensor key, tensor value, tensor attn_mask, double dropout_p, int is_causal, double scale_v, int scale_null) {
-  PROTECT(
-    auto outputs__ = torch::scaled_dot_product_attention(*query, *key, *value, (attn_mask ? *attn_mask : torch::Tensor()), dropout_p, (bool)is_causal, scale_null ? c10::nullopt : c10::optional<double>(scale_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter(tensor *out__, tensor self, int64_t dim, tensor index, tensor src) {
-  PROTECT(
-    auto outputs__ = torch::scatter(*self, dim, *index, *src);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_(tensor *out__, tensor self, int64_t dim, tensor index, tensor src) {
-  PROTECT(
-    auto outputs__ = self->scatter_(dim, *index, *src);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_add(tensor *out__, tensor self, int64_t dim, tensor index, tensor src) {
-  PROTECT(
-    auto outputs__ = torch::scatter_add(*self, dim, *index, *src);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_add_(tensor *out__, tensor self, int64_t dim, tensor index, tensor src) {
-  PROTECT(
-    auto outputs__ = self->scatter_add_(dim, *index, *src);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_add_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, tensor src) {
-  PROTECT(
-    auto outputs__ = torch::scatter_add_out(*out, *self, dim, *index, *src);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_reduce(tensor *out__, tensor self, int64_t dim, tensor index, tensor src, char * reduce) {
-  PROTECT(
-    auto outputs__ = torch::scatter(*self, dim, *index, *src, std::string(reduce));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_reduce_(tensor *out__, tensor self, int64_t dim, tensor index, tensor src, char * reduce) {
-  PROTECT(
-    auto outputs__ = self->scatter_(dim, *index, *src, std::string(reduce));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_reduce_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, tensor src, char * reduce) {
-  PROTECT(
-    auto outputs__ = torch::scatter_out(*out, *self, dim, *index, *src, std::string(reduce));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_src_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, tensor src) {
-  PROTECT(
-    auto outputs__ = torch::scatter_out(*out, *self, dim, *index, *src);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_value(tensor *out__, tensor self, int64_t dim, tensor index, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::scatter(*self, dim, *index, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_value_(tensor *out__, tensor self, int64_t dim, tensor index, scalar value) {
-  PROTECT(
-    auto outputs__ = self->scatter_(dim, *index, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_value_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::scatter_out(*out, *self, dim, *index, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_value_reduce(tensor *out__, tensor self, int64_t dim, tensor index, scalar value, char * reduce) {
-  PROTECT(
-    auto outputs__ = torch::scatter(*self, dim, *index, *value, std::string(reduce));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_value_reduce_(tensor *out__, tensor self, int64_t dim, tensor index, scalar value, char * reduce) {
-  PROTECT(
-    auto outputs__ = self->scatter_(dim, *index, *value, std::string(reduce));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_scatter_value_reduce_out(tensor *out__, tensor out, tensor self, int64_t dim, tensor index, scalar value, char * reduce) {
-  PROTECT(
-    auto outputs__ = torch::scatter_out(*out, *self, dim, *index, *value, std::string(reduce));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_searchsorted(tensor *out__, tensor sorted_sequence, tensor self, int out_int32, int right, char * side, tensor sorter) {
-  PROTECT(
-    auto outputs__ = torch::searchsorted(*sorted_sequence, *self, (bool)out_int32, (bool)right, std::string(side), (sorter ? *sorter : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_searchsorted_scalar(tensor *out__, tensor sorted_sequence, scalar self, int out_int32, int right, char * side, tensor sorter) {
-  PROTECT(
-    auto outputs__ = torch::searchsorted(*sorted_sequence, *self, (bool)out_int32, (bool)right, std::string(side), (sorter ? *sorter : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_searchsorted_scalar_out(tensor *out__, tensor out, tensor sorted_sequence, scalar self, int out_int32, int right, char * side, tensor sorter) {
-  PROTECT(
-    auto outputs__ = torch::searchsorted_out(*out, *sorted_sequence, *self, (bool)out_int32, (bool)right, std::string(side), (sorter ? *sorter : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_searchsorted_tensor_out(tensor *out__, tensor out, tensor sorted_sequence, tensor self, int out_int32, int right, char * side, tensor sorter) {
-  PROTECT(
-    auto outputs__ = torch::searchsorted_out(*out, *sorted_sequence, *self, (bool)out_int32, (bool)right, std::string(side), (sorter ? *sorter : torch::Tensor()));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_segment_reduce(tensor *out__, tensor data, char * reduce, tensor lengths, tensor indices, tensor offsets, int64_t axis, int unsafe, scalar initial) {
-  PROTECT(
-    auto outputs__ = torch::segment_reduce(*data, std::string(reduce), (lengths ? *lengths : torch::Tensor()), (indices ? *indices : torch::Tensor()), (offsets ? *offsets : torch::Tensor()), axis, (bool)unsafe, *initial);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_segment_reduce_out(tensor *out__, tensor out, tensor data, char * reduce, tensor lengths, tensor indices, tensor offsets, int64_t axis, int unsafe, scalar initial) {
-  PROTECT(
-    auto outputs__ = torch::segment_reduce_out(*out, *data, std::string(reduce), (lengths ? *lengths : torch::Tensor()), (indices ? *indices : torch::Tensor()), (offsets ? *offsets : torch::Tensor()), axis, (bool)unsafe, *initial);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_select(tensor *out__, tensor self, int64_t dim, int64_t index) {
-  PROTECT(
-    auto outputs__ = torch::select(*self, dim, index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_select_backward(tensor *out__, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t index) {
-  PROTECT(
-    auto outputs__ = torch::select_backward(*grad_output, torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_select_backward_out(tensor *out__, tensor out, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t index) {
-  PROTECT(
-    auto outputs__ = torch::select_backward_out(*out, *grad_output, torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_select_copy(tensor *out__, tensor self, int64_t dim, int64_t index) {
-  PROTECT(
-    auto outputs__ = torch::select_copy(*self, dim, index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_select_copy_int_out(tensor *out__, tensor out, tensor self, int64_t dim, int64_t index) {
-  PROTECT(
-    auto outputs__ = torch::select_copy_out(*out, *self, dim, index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_select_scatter(tensor *out__, tensor self, tensor src, int64_t dim, int64_t index) {
-  PROTECT(
-    auto outputs__ = torch::select_scatter(*self, *src, dim, index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_select_scatter_out(tensor *out__, tensor out, tensor self, tensor src, int64_t dim, int64_t index) {
-  PROTECT(
-    auto outputs__ = torch::select_scatter_out(*out, *self, *src, dim, index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_selu(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::selu(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_selu_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::selu_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_set(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::set(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_set_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->set_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_set_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::set_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_set_requires_grad(tensor *out__, tensor self, int r) {
-  PROTECT(
-    auto outputs__ = self->set_requires_grad((bool)r);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_set_source_tensor(tensor *out__, tensor self, tensor source) {
-  PROTECT(
-    auto outputs__ = torch::set(*self, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_set_source_tensor_(tensor *out__, tensor self, tensor source) {
-  PROTECT(
-    auto outputs__ = self->set_(*source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_set_source_tensor_out(tensor *out__, tensor out, tensor self, tensor source) {
-  PROTECT(
-    auto outputs__ = torch::set_out(*out, *self, *source);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_set_source_tensor_storage_offset_(tensor *out__, tensor self, tensor source, int64_t storage_offset, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len) {
-  PROTECT(
-    auto outputs__ = self->set_(*source, storage_offset, torch::IntArrayRef(size_data, size_len), torch::IntArrayRef(stride_data, stride_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sgn(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sgn(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sgn_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->sgn_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sgn_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sgn_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sigmoid(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sigmoid(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sigmoid_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sigmoid_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sigmoid_backward(tensor *out__, tensor grad_output, tensor output) {
-  PROTECT(
-    auto outputs__ = torch::sigmoid_backward(*grad_output, *output);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sigmoid_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor output) {
-  PROTECT(
-    auto outputs__ = torch::sigmoid_backward_out(*grad_input, *grad_output, *output);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sigmoid_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sigmoid_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sign(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sign(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sign_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->sign_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sign_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sign_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_signbit(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::signbit(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_signbit_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::signbit_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_silu(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::silu(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_silu_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::silu_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_silu_backward(tensor *out__, tensor grad_output, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::silu_backward(*grad_output, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_silu_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::silu_backward_out(*grad_input, *grad_output, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_silu_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::silu_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sin(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sin(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sin_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sin_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sin_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sin_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sinc(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sinc(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sinc_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sinc_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sinc_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sinc_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sinh(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sinh(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sinh_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sinh_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sinh_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sinh_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int64_t atg_size(tensor self, int64_t dim) {
-  PROTECT(
-    return torch::size(*self, dim);
-  )
-  return 0;
-}
-
-void atg_slice(tensor *out__, tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::slice(*self, dim, start_null ? c10::nullopt : c10::optional<int64_t>(start_v), end_null ? c10::nullopt : c10::optional<int64_t>(end_v), step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slice_backward(tensor *out__, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t start, int64_t end, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::slice_backward(*grad_output, torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, start, end, step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slice_backward_out(tensor *out__, tensor out, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t start, int64_t end, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::slice_backward_out(*out, *grad_output, torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, start, end, step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slice_copy(tensor *out__, tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::slice_copy(*self, dim, start_null ? c10::nullopt : c10::optional<int64_t>(start_v), end_null ? c10::nullopt : c10::optional<int64_t>(end_v), step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slice_copy_tensor_out(tensor *out__, tensor out, tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::slice_copy_out(*out, *self, dim, start_null ? c10::nullopt : c10::optional<int64_t>(start_v), end_null ? c10::nullopt : c10::optional<int64_t>(end_v), step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slice_scatter(tensor *out__, tensor self, tensor src, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::slice_scatter(*self, *src, dim, start_null ? c10::nullopt : c10::optional<int64_t>(start_v), end_null ? c10::nullopt : c10::optional<int64_t>(end_v), step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slice_scatter_out(tensor *out__, tensor out, tensor self, tensor src, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::slice_scatter_out(*out, *self, *src, dim, start_null ? c10::nullopt : c10::optional<int64_t>(start_v), end_null ? c10::nullopt : c10::optional<int64_t>(end_v), step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slogdet(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::slogdet(*self);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_slogdet_out(tensor *out__, tensor sign, tensor logabsdet, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::slogdet_out(*sign, *logabsdet, *self);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_slow_conv3d(tensor *out__, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::slow_conv3d(*self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slow_conv3d_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len) {
-  PROTECT(
-    auto outputs__ = torch::slow_conv3d_out(*out, *self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slow_conv_dilated2d(tensor *out__, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) {
-  PROTECT(
-    auto outputs__ = torch::slow_conv_dilated2d(*self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slow_conv_dilated2d_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) {
-  PROTECT(
-    auto outputs__ = torch::slow_conv_dilated2d_out(*out, *self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slow_conv_dilated3d(tensor *out__, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) {
-  PROTECT(
-    auto outputs__ = torch::slow_conv_dilated3d(*self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slow_conv_dilated3d_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len) {
-  PROTECT(
-    auto outputs__ = torch::slow_conv_dilated3d_out(*out, *self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(dilation_data, dilation_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slow_conv_transpose2d(tensor *out__, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len) {
-  PROTECT(
-    auto outputs__ = torch::slow_conv_transpose2d(*self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(dilation_data, dilation_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slow_conv_transpose2d_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len) {
-  PROTECT(
-    auto outputs__ = torch::slow_conv_transpose2d_out(*out, *self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(dilation_data, dilation_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_slow_conv_transpose3d(tensor *out__, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len) {
-  PROTECT(
-    auto outputs__ = torch::slow_conv_transpose3d(*self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ?
*bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(dilation_data, dilation_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_slow_conv_transpose3d_out(tensor *out__, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len) { - PROTECT( - auto outputs__ = torch::slow_conv_transpose3d_out(*out, *self, *weight, torch::IntArrayRef(kernel_size_data, kernel_size_len), (bias ? *bias : torch::Tensor()), torch::IntArrayRef(stride_data, stride_len), torch::IntArrayRef(padding_data, padding_len), torch::IntArrayRef(output_padding_data, output_padding_len), torch::IntArrayRef(dilation_data, dilation_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_smm(tensor *out__, tensor self, tensor mat2) { - PROTECT( - auto outputs__ = torch::smm(*self, *mat2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_smooth_l1_loss(tensor *out__, tensor self, tensor target, int64_t reduction, double beta) { - PROTECT( - auto outputs__ = torch::smooth_l1_loss(*self, *target, reduction, beta); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_smooth_l1_loss_backward(tensor *out__, tensor grad_output, tensor self, tensor target, int64_t reduction, double beta) { - PROTECT( - auto outputs__ = torch::smooth_l1_loss_backward(*grad_output, *self, *target, reduction, beta); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_smooth_l1_loss_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction, double beta) { - PROTECT( - auto outputs__ = torch::smooth_l1_loss_backward_out(*grad_input, *grad_output, *self, *target, reduction, beta); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_smooth_l1_loss_out(tensor *out__, tensor out, tensor self, tensor target, int64_t reduction, double beta) { - PROTECT( - auto outputs__ = torch::smooth_l1_loss_out(*out, *self, *target, reduction, beta); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_soft_margin_loss(tensor *out__, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::soft_margin_loss(*self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_soft_margin_loss_backward(tensor *out__, tensor grad_output, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::soft_margin_loss_backward(*grad_output, *self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_soft_margin_loss_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::soft_margin_loss_backward_out(*grad_input, *grad_output, *self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_soft_margin_loss_out(tensor *out__, tensor out, tensor self, tensor target, int64_t reduction) { - PROTECT( - auto outputs__ = torch::soft_margin_loss_out(*out, *self, *target, reduction); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softmax(tensor *out__, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = torch::softmax(*self, dim, 
torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softmax_int_out(tensor *out__, tensor out, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = torch::softmax_out(*out, *self, dim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softplus(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::softplus(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softplus_backward(tensor *out__, tensor grad_output, tensor self, scalar beta, scalar threshold) { - PROTECT( - auto outputs__ = torch::softplus_backward(*grad_output, *self, *beta, *threshold); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softplus_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, scalar beta, scalar threshold) { - PROTECT( - auto outputs__ = torch::softplus_backward_out(*grad_input, *grad_output, *self, *beta, *threshold); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softplus_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::softplus_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softshrink(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::softshrink(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softshrink_backward(tensor *out__, tensor grad_output, tensor self, scalar lambd) { - PROTECT( - auto outputs__ = torch::softshrink_backward(*grad_output, *self, *lambd); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softshrink_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, scalar lambd) { - PROTECT( - auto outputs__ = torch::softshrink_backward_out(*grad_input, *grad_output, *self, *lambd); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_softshrink_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::softshrink_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sort(tensor *out__, tensor self, int64_t dim, int descending) { - PROTECT( - auto outputs__ = torch::sort(*self, dim, (bool)descending); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_sort_stable(tensor *out__, tensor self, int stable, int64_t dim, int descending) { - PROTECT( - auto outputs__ = torch::sort(*self, (bool)stable, dim, (bool)descending); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_sort_values(tensor *out__, tensor values, tensor indices, tensor self, int64_t dim, int descending) { - PROTECT( - auto outputs__ = torch::sort_out(*values, *indices, *self, dim, (bool)descending); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_sort_values_stable(tensor *out__, tensor values, tensor indices, tensor self, int stable, int64_t dim, int descending) { - PROTECT( - auto outputs__ = torch::sort_out(*values, *indices, *self, (bool)stable, dim, (bool)descending); - out__[0] = new torch::Tensor(std::get<0>(outputs__)); - out__[1] = new torch::Tensor(std::get<1>(outputs__)); - ) -} - -void atg_sparse_bsc_tensor(tensor *out__, tensor ccol_indices, tensor row_indices, tensor values, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_bsc_tensor(*ccol_indices, *row_indices, 
*values, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_bsc_tensor_ccol_row_value_size(tensor *out__, tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_bsc_tensor(*ccol_indices, *row_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_bsr_tensor(tensor *out__, tensor crow_indices, tensor col_indices, tensor values, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_bsr_tensor(*crow_indices, *col_indices, *values, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_bsr_tensor_crow_col_value_size(tensor *out__, tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_bsr_tensor(*crow_indices, *col_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_compressed_tensor(tensor *out__, tensor compressed_indices, tensor plain_indices, tensor values, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_compressed_tensor(*compressed_indices, *plain_indices, *values, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_compressed_tensor_comp_plain_value_size(tensor *out__, tensor compressed_indices, tensor plain_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_compressed_tensor(*compressed_indices, *plain_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_coo_tensor(tensor *out__, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_coo_tensor(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_coo_tensor_indices(tensor *out__, tensor indices, tensor values, int options_kind, int options_device, int is_coalesced) { - PROTECT( - auto outputs__ = torch::sparse_coo_tensor(*indices, *values, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)is_coalesced); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_coo_tensor_indices_size(tensor *out__, tensor indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device, int is_coalesced) { - PROTECT( - auto outputs__ = torch::sparse_coo_tensor(*indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)is_coalesced); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_coo_tensor_size_out(tensor *out__, tensor out, int64_t *size_data, int 
size_len) { - PROTECT( - auto outputs__ = torch::sparse_coo_tensor_out(*out, torch::IntArrayRef(size_data, size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_csc_tensor(tensor *out__, tensor ccol_indices, tensor row_indices, tensor values, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_csc_tensor(*ccol_indices, *row_indices, *values, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_csc_tensor_ccol_row_value_size(tensor *out__, tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_csc_tensor(*ccol_indices, *row_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_csr_tensor(tensor *out__, tensor crow_indices, tensor col_indices, tensor values, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_csr_tensor(*crow_indices, *col_indices, *values, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_csr_tensor_crow_col_value_size(tensor *out__, tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device) { - PROTECT( - auto outputs__ = torch::sparse_csr_tensor(*crow_indices, *col_indices, *values, torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind))); - out__[0] = new torch::Tensor(outputs__); - ) -} - -int64_t atg_sparse_dim(tensor self) { - PROTECT( - return self->sparse_dim(); - ) - return 0; -} - -void atg_sparse_mask(tensor *out__, tensor self, tensor mask) { - PROTECT( - auto outputs__ = self->sparse_mask(*mask); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_mask_out(tensor *out__, tensor out, tensor self, tensor mask) { - PROTECT( - auto outputs__ = torch::sparse_mask_out(*out, *self, *mask); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_resize(tensor *out__, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { - PROTECT( - auto outputs__ = torch::sparse_resize(*self, torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_resize_(tensor *out__, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { - PROTECT( - auto outputs__ = self->sparse_resize_(torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_resize_and_clear(tensor *out__, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { - PROTECT( - auto outputs__ = torch::sparse_resize_and_clear(*self, torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_resize_and_clear_(tensor *out__, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { - PROTECT( - auto outputs__ = self->sparse_resize_and_clear_(torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void 
atg_sparse_resize_and_clear_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { - PROTECT( - auto outputs__ = torch::sparse_resize_and_clear_out(*out, *self, torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_resize_out(tensor *out__, tensor out, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim) { - PROTECT( - auto outputs__ = torch::sparse_resize_out(*out, *self, torch::IntArrayRef(size_data, size_len), sparse_dim, dense_dim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_sampled_addmm(tensor *out__, tensor self, tensor mat1, tensor mat2) { - PROTECT( - auto outputs__ = torch::sparse_sampled_addmm(*self, *mat1, *mat2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_sparse_sampled_addmm_out(tensor *out__, tensor out, tensor self, tensor mat1, tensor mat2) { - PROTECT( - auto outputs__ = torch::sparse_sampled_addmm_out(*out, *self, *mat1, *mat2); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_airy_ai(tensor *out__, tensor x) { - PROTECT( - auto outputs__ = torch::special_airy_ai(*x); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_airy_ai_out(tensor *out__, tensor out, tensor x) { - PROTECT( - auto outputs__ = torch::special_airy_ai_out(*out, *x); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_bessel_j0(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_bessel_j0(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_bessel_j0_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_bessel_j0_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_bessel_j1(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_bessel_j1(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_bessel_j1_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_bessel_j1_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_bessel_y0(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_bessel_y0(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_bessel_y0_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_bessel_y0_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_bessel_y1(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_bessel_y1(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_bessel_y1_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_bessel_y1_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_t(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_t(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_t_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_t(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_t_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = 
torch::special_chebyshev_polynomial_t_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_t_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_t_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_t_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_t(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_t_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_t_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_u(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_u(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_u_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_u(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_u_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_u_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_u_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_u_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_u_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_u(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_u_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_u_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_v(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_v(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_v_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_v(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_v_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_v_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_v_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_v_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_v_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_v(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_v_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_v_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void 
atg_special_chebyshev_polynomial_w(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_w(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_w_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_w(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_w_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_w_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_w_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_w_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_w_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_w(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_chebyshev_polynomial_w_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_chebyshev_polynomial_w_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_digamma(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_digamma(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_digamma_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_digamma_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_entr(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_entr(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_entr_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_entr_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_erf(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_erf(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_erf_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_erf_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_erfc(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_erfc(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_erfc_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_erfc_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_erfcx(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_erfcx(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_erfcx_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_erfcx_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_erfinv(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_erfinv(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_erfinv_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_erfinv_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_exp2(tensor *out__, tensor self) { - PROTECT( - auto 
outputs__ = torch::special_exp2(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_exp2_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_exp2_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_expit(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_expit(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_expit_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_expit_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_expm1(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_expm1(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_expm1_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_expm1_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_gammainc(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::special_gammainc(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_gammainc_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::special_gammainc_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_gammaincc(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::special_gammaincc(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_gammaincc_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::special_gammaincc_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_gammaln(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_gammaln(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_gammaln_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_gammaln_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_h(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_h(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_h_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_h(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_h_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_h_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_h_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_h_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_h_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_h(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_h_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_h_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void 
atg_special_hermite_polynomial_he(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_he(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_he_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_he(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_he_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_he_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_he_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_he_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_he_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_he(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_hermite_polynomial_he_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_hermite_polynomial_he_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_i0(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_i0(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_i0_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_i0_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_i0e(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_i0e(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_i0e_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_i0e_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_i1(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_i1(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_i1_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_i1_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_i1e(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_i1e(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_i1e_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_i1e_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_laguerre_polynomial_l(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_laguerre_polynomial_l(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_laguerre_polynomial_l_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_laguerre_polynomial_l(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_laguerre_polynomial_l_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_laguerre_polynomial_l_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_laguerre_polynomial_l_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_laguerre_polynomial_l_out(*out, *x, 
*n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_laguerre_polynomial_l_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_laguerre_polynomial_l(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_laguerre_polynomial_l_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_laguerre_polynomial_l_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_legendre_polynomial_p(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_legendre_polynomial_p(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_legendre_polynomial_p_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_legendre_polynomial_p(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_legendre_polynomial_p_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_legendre_polynomial_p_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_legendre_polynomial_p_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_legendre_polynomial_p_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_legendre_polynomial_p_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_legendre_polynomial_p(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_legendre_polynomial_p_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_legendre_polynomial_p_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_log1p(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_log1p(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_log1p_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_log1p_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_log_ndtr(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_log_ndtr(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_log_ndtr_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_log_ndtr_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_log_softmax(tensor *out__, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = torch::special_log_softmax(*self, dim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_logit(tensor *out__, tensor self, double eps_v, int eps_null) { - PROTECT( - auto outputs__ = torch::special_logit(*self, eps_null ? c10::nullopt : c10::optional(eps_v)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_logit_out(tensor *out__, tensor out, tensor self, double eps_v, int eps_null) { - PROTECT( - auto outputs__ = torch::special_logit_out(*out, *self, eps_null ? 
c10::nullopt : c10::optional(eps_v)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_logsumexp(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim) { - PROTECT( - auto outputs__ = torch::special_logsumexp(*self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_logsumexp_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim) { - PROTECT( - auto outputs__ = torch::special_logsumexp_out(*out, *self, torch::IntArrayRef(dim_data, dim_len), (bool)keepdim); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_modified_bessel_i0(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_modified_bessel_i0(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_modified_bessel_i0_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_modified_bessel_i0_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_modified_bessel_i1(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_modified_bessel_i1(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_modified_bessel_i1_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_modified_bessel_i1_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_modified_bessel_k0(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_modified_bessel_k0(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_modified_bessel_k0_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_modified_bessel_k0_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_modified_bessel_k1(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_modified_bessel_k1(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_modified_bessel_k1_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_modified_bessel_k1_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_multigammaln(tensor *out__, tensor self, int64_t p) { - PROTECT( - auto outputs__ = torch::special_multigammaln(*self, p); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_multigammaln_out(tensor *out__, tensor out, tensor self, int64_t p) { - PROTECT( - auto outputs__ = torch::special_multigammaln_out(*out, *self, p); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_ndtr(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_ndtr(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_ndtr_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_ndtr_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_ndtri(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_ndtri(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_ndtri_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_ndtri_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_polygamma(tensor *out__, int64_t n, tensor self) { - PROTECT( - auto outputs__ = torch::special_polygamma(n, *self); - 
out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_polygamma_out(tensor *out__, tensor out, int64_t n, tensor self) { - PROTECT( - auto outputs__ = torch::special_polygamma_out(*out, n, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_psi(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_psi(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_psi_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_psi_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_round(tensor *out__, tensor self, int64_t decimals) { - PROTECT( - auto outputs__ = torch::special_round(*self, decimals); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_round_out(tensor *out__, tensor out, tensor self, int64_t decimals) { - PROTECT( - auto outputs__ = torch::special_round_out(*out, *self, decimals); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_scaled_modified_bessel_k0(tensor *out__, tensor x) { - PROTECT( - auto outputs__ = torch::special_scaled_modified_bessel_k0(*x); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_scaled_modified_bessel_k0_out(tensor *out__, tensor out, tensor x) { - PROTECT( - auto outputs__ = torch::special_scaled_modified_bessel_k0_out(*out, *x); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_scaled_modified_bessel_k1(tensor *out__, tensor x) { - PROTECT( - auto outputs__ = torch::special_scaled_modified_bessel_k1(*x); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_scaled_modified_bessel_k1_out(tensor *out__, tensor out, tensor x) { - PROTECT( - auto outputs__ = torch::special_scaled_modified_bessel_k1_out(*out, *x); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_t(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_t(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_t_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_t(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_t_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_t_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_t_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_t_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_t_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_t(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_t_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_t_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_u(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_u(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void 
atg_special_shifted_chebyshev_polynomial_u_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_u(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_u_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_u_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_u_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_u_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_u_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_u(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_u_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_u_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_v(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_v(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_v_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_v(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_v_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_v_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_v_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_v_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_v_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_v(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_v_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_v_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_w(tensor *out__, tensor x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_w(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_w_n_scalar(tensor *out__, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_w(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_w_n_scalar_out(tensor *out__, tensor out, tensor x, scalar n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_w_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_w_out(tensor *out__, tensor out, tensor x, tensor n) { - PROTECT( - auto outputs__ = 
torch::special_shifted_chebyshev_polynomial_w_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_w_x_scalar(tensor *out__, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_w(*x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_shifted_chebyshev_polynomial_w_x_scalar_out(tensor *out__, tensor out, scalar x, tensor n) { - PROTECT( - auto outputs__ = torch::special_shifted_chebyshev_polynomial_w_out(*out, *x, *n); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_sinc(tensor *out__, tensor self) { - PROTECT( - auto outputs__ = torch::special_sinc(*self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_sinc_out(tensor *out__, tensor out, tensor self) { - PROTECT( - auto outputs__ = torch::special_sinc_out(*out, *self); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_softmax(tensor *out__, tensor self, int64_t dim, int dtype) { - PROTECT( - auto outputs__ = torch::special_softmax(*self, dim, torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_spherical_bessel_j0(tensor *out__, tensor x) { - PROTECT( - auto outputs__ = torch::special_spherical_bessel_j0(*x); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_spherical_bessel_j0_out(tensor *out__, tensor out, tensor x) { - PROTECT( - auto outputs__ = torch::special_spherical_bessel_j0_out(*out, *x); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlog1py(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::special_xlog1py(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlog1py_other_scalar(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::special_xlog1py(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlog1py_other_scalar_out(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::special_xlog1py_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlog1py_out(tensor *out__, tensor out, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::special_xlog1py_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlog1py_self_scalar(tensor *out__, scalar self, tensor other) { - PROTECT( - auto outputs__ = torch::special_xlog1py(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlog1py_self_scalar_out(tensor *out__, tensor out, scalar self, tensor other) { - PROTECT( - auto outputs__ = torch::special_xlog1py_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlogy(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::special_xlogy(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlogy_other_scalar(tensor *out__, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::special_xlogy(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlogy_other_scalar_out(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::special_xlogy_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_special_xlogy_out(tensor *out__, tensor out, tensor self, tensor other) { 
-  PROTECT(
-    auto outputs__ = torch::special_xlogy_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_special_xlogy_self_scalar(tensor *out__, scalar self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::special_xlogy(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_special_xlogy_self_scalar_out(tensor *out__, tensor out, scalar self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::special_xlogy_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_special_zeta(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::special_zeta(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_special_zeta_other_scalar(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::special_zeta(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_special_zeta_other_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::special_zeta_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_special_zeta_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::special_zeta_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_special_zeta_self_scalar(tensor *out__, scalar self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::special_zeta(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_special_zeta_self_scalar_out(tensor *out__, tensor out, scalar self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::special_zeta_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-tensor *atg_split(tensor self, int64_t split_size, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::split(*self, split_size, dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-tensor *atg_split_copy(tensor self, int64_t split_size, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::split_copy(*self, split_size, dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg_split_copy_tensor_out(tensor *out_data, int out_len, tensor self, int64_t split_size, int64_t dim) {
-  PROTECT(
-    torch::split_copy_out(of_carray_tensor(out_data, out_len), *self, split_size, dim);
-  )
-}
-
-tensor *atg_split_sizes(tensor self, int64_t *split_size_data, int split_size_len, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::split(*self, torch::IntArrayRef(split_size_data, split_size_len), dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-tensor *atg_split_with_sizes(tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::split_with_sizes(*self, torch::IntArrayRef(split_sizes_data, split_sizes_len), dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-tensor *atg_split_with_sizes_copy(tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::split_with_sizes_copy(*self, torch::IntArrayRef(split_sizes_data, split_sizes_len), dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg_split_with_sizes_copy_out(tensor *out_data, int out_len, tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) {
-  PROTECT(
-    torch::split_with_sizes_copy_out(of_carray_tensor(out_data, out_len), *self, torch::IntArrayRef(split_sizes_data, split_sizes_len), dim);
-  )
-}
-
-void atg_sqrt(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sqrt(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sqrt_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sqrt_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sqrt_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::sqrt_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_square(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::square(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_square_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::square_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_square_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::square_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::squeeze(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->squeeze_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::squeeze_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_copy_dim(tensor *out__, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::squeeze_copy(*self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_copy_dim_out(tensor *out__, tensor out, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::squeeze_copy_out(*out, *self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_copy_dims(tensor *out__, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = torch::squeeze_copy(*self, torch::IntArrayRef(dim_data, dim_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_copy_dims_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = torch::squeeze_copy_out(*out, *self, torch::IntArrayRef(dim_data, dim_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::squeeze_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_dim(tensor *out__, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::squeeze(*self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_dim_(tensor *out__, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = self->squeeze_(dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_dims(tensor *out__, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = torch::squeeze(*self, torch::IntArrayRef(dim_data, dim_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_squeeze_dims_(tensor *out__, tensor self, int64_t *dim_data, int dim_len) {
-  PROTECT(
-    auto outputs__ = self->squeeze_(torch::IntArrayRef(dim_data, dim_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sspaddmm(tensor *out__, tensor self, tensor mat1, tensor mat2) {
-  PROTECT(
-    auto outputs__ = torch::sspaddmm(*self, *mat1, *mat2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sspaddmm_out(tensor *out__, tensor out, tensor self, tensor mat1, tensor mat2) {
-  PROTECT(
-    auto outputs__ = torch::sspaddmm_out(*out, *self, *mat1, *mat2);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_stack(tensor *out__, tensor *tensors_data, int tensors_len, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::stack(of_carray_tensor(tensors_data, tensors_len), dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_stack_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::stack_out(*out, of_carray_tensor(tensors_data, tensors_len), dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_std(tensor *out__, tensor self, int unbiased) {
-  PROTECT(
-    auto outputs__ = torch::std(*self, (bool)unbiased);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_std_correction(tensor *out__, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::std(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_std_correction_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::std_out(*out, *self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_std_dim(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::std(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_std_mean(tensor *out__, tensor self, int unbiased) {
-  PROTECT(
-    auto outputs__ = torch::std_mean(*self, (bool)unbiased);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_std_mean_correction(tensor *out__, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::std_mean(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_std_mean_correction_out(tensor *out__, tensor out0, tensor out1, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::std_mean_out(*out0, *out1, *self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_std_mean_dim(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::std_mean(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_std_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::std_out(*out, *self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_stft(tensor *out__, tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, tensor window, int normalized, int onesided, int return_complex) {
-  PROTECT(
-    auto outputs__ = torch::stft(*self, n_fft, hop_length_null ? c10::nullopt : c10::optional(hop_length_v), win_length_null ? c10::nullopt : c10::optional(win_length_v), (window ? *window : torch::Tensor()), (bool)normalized, (bool)onesided, (bool)return_complex);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_stft_center(tensor *out__, tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, tensor window, int center, char * pad_mode, int normalized, int onesided, int return_complex) {
-  PROTECT(
-    auto outputs__ = torch::stft(*self, n_fft, hop_length_null ? c10::nullopt : c10::optional(hop_length_v), win_length_null ? c10::nullopt : c10::optional(win_length_v), (window ? *window : torch::Tensor()), (bool)center, std::string(pad_mode), (bool)normalized, (bool)onesided, (bool)return_complex);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-int64_t atg_stride(tensor self, int64_t dim) {
-  PROTECT(
-    return torch::stride(*self, dim);
-  )
-  return 0;
-}
-
-void atg_sub(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::sub(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sub_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->sub_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sub_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::sub_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sub_scalar(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::sub(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sub_scalar_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->sub_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sub_scalar_out(tensor *out__, tensor out, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::sub_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_subtract(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::subtract(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_subtract_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->subtract_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_subtract_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::subtract_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_subtract_scalar(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::subtract(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_subtract_scalar_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->subtract_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sum(tensor *out__, tensor self, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::sum(*self, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sum_dim_intlist(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::sum(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sum_intlist_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::sum_out(*out, *self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)keepdim, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sum_out(tensor *out__, tensor out, tensor self, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::sum_out(*out, *self, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_sum_to_size(tensor *out__, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = self->sum_to_size(torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_svd(tensor *out__, tensor self, int some, int compute_uv) {
-  PROTECT(
-    auto outputs__ = torch::svd(*self, (bool)some, (bool)compute_uv);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_svd_u(tensor *out__, tensor U, tensor S, tensor V, tensor self, int some, int compute_uv) {
-  PROTECT(
-    auto outputs__ = torch::svd_out(*U, *S, *V, *self, (bool)some, (bool)compute_uv);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_swapaxes(tensor *out__, tensor self, int64_t axis0, int64_t axis1) {
-  PROTECT(
-    auto outputs__ = torch::swapaxes(*self, axis0, axis1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_swapaxes_(tensor *out__, tensor self, int64_t axis0, int64_t axis1) {
-  PROTECT(
-    auto outputs__ = self->swapaxes_(axis0, axis1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_swapdims(tensor *out__, tensor self, int64_t dim0, int64_t dim1) {
-  PROTECT(
-    auto outputs__ = torch::swapdims(*self, dim0, dim1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_swapdims_(tensor *out__, tensor self, int64_t dim0, int64_t dim1) {
-  PROTECT(
-    auto outputs__ = self->swapdims_(dim0, dim1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_t(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::t(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_t_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->t_();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_t_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::t_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_t_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::t_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_take(tensor *out__, tensor self, tensor index) {
-  PROTECT(
-    auto outputs__ = torch::take(*self, *index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_take_along_dim(tensor *out__, tensor self, tensor indices, int64_t dim_v, int dim_null) {
-  PROTECT(
-    auto outputs__ = torch::take_along_dim(*self, *indices, dim_null ? c10::nullopt : c10::optional(dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_take_along_dim_out(tensor *out__, tensor out, tensor self, tensor indices, int64_t dim_v, int dim_null) {
-  PROTECT(
-    auto outputs__ = torch::take_along_dim_out(*out, *self, *indices, dim_null ? c10::nullopt : c10::optional(dim_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_take_out(tensor *out__, tensor out, tensor self, tensor index) {
-  PROTECT(
-    auto outputs__ = torch::take_out(*out, *self, *index);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tan(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::tan(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tan_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::tan_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tan_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::tan_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tanh(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::tanh(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tanh_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::tanh_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tanh_backward(tensor *out__, tensor grad_output, tensor output) {
-  PROTECT(
-    auto outputs__ = torch::tanh_backward(*grad_output, *output);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tanh_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor output) {
-  PROTECT(
-    auto outputs__ = torch::tanh_backward_out(*grad_input, *grad_output, *output);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tanh_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::tanh_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-tensor *atg_tensor_split(tensor self, int64_t sections, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::tensor_split(*self, sections, dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-tensor *atg_tensor_split_indices(tensor self, int64_t *indices_data, int indices_len, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::tensor_split(*self, torch::IntArrayRef(indices_data, indices_len), dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-tensor *atg_tensor_split_tensor_indices_or_sections(tensor self, tensor tensor_indices_or_sections, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::tensor_split(*self, *tensor_indices_or_sections, dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg_tensordot(tensor *out__, tensor self, tensor other, int64_t *dims_self_data, int dims_self_len, int64_t *dims_other_data, int dims_other_len) {
-  PROTECT(
-    auto outputs__ = torch::tensordot(*self, *other, torch::IntArrayRef(dims_self_data, dims_self_len), torch::IntArrayRef(dims_other_data, dims_other_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tensordot_out(tensor *out__, tensor out, tensor self, tensor other, int64_t *dims_self_data, int dims_self_len, int64_t *dims_other_data, int dims_other_len) {
-  PROTECT(
-    auto outputs__ = torch::tensordot_out(*out, *self, *other, torch::IntArrayRef(dims_self_data, dims_self_len), torch::IntArrayRef(dims_other_data, dims_other_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_threshold(tensor *out__, tensor self, scalar threshold, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::threshold(*self, *threshold, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_threshold_(tensor *out__, tensor self, scalar threshold, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::threshold_(*self, *threshold, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_threshold_backward(tensor *out__, tensor grad_output, tensor self, scalar threshold) {
-  PROTECT(
-    auto outputs__ = torch::threshold_backward(*grad_output, *self, *threshold);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_threshold_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, tensor self, scalar threshold) {
-  PROTECT(
-    auto outputs__ = torch::threshold_backward_out(*grad_input, *grad_output, *self, *threshold);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_threshold_out(tensor *out__, tensor out, tensor self, scalar threshold, scalar value) {
-  PROTECT(
-    auto outputs__ = torch::threshold_out(*out, *self, *threshold, *value);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tile(tensor *out__, tensor self, int64_t *dims_data, int dims_len) {
-  PROTECT(
-    auto outputs__ = torch::tile(*self, torch::IntArrayRef(dims_data, dims_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to(tensor *out__, tensor self, int device) {
-  PROTECT(
-    auto outputs__ = self->to(device_of_int(device));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_dense(tensor *out__, tensor self, int dtype, int masked_grad) {
-  PROTECT(
-    auto outputs__ = self->to_dense(torch::ScalarType(dtype), (bool)masked_grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_dense_backward(tensor *out__, tensor grad, tensor input, int masked_grad) {
-  PROTECT(
-    auto outputs__ = torch::to_dense_backward(*grad, *input, (bool)masked_grad);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_device(tensor *out__, tensor self, int device, int dtype, int non_blocking, int copy) {
-  PROTECT(
-    auto outputs__ = self->to(device_of_int(device), torch::ScalarType(dtype), (bool)non_blocking, (bool)copy);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_dtype(tensor *out__, tensor self, int dtype, int non_blocking, int copy) {
-  PROTECT(
-    auto outputs__ = self->to(torch::ScalarType(dtype), (bool)non_blocking, (bool)copy);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_dtype_layout(tensor *out__, tensor self, int options_kind, int options_device, int non_blocking, int copy) {
-  PROTECT(
-    auto outputs__ = self->to(at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)), (bool)non_blocking, (bool)copy);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_mkldnn(tensor *out__, tensor self, int dtype) {
-  PROTECT(
-    auto outputs__ = self->to_mkldnn(torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_mkldnn_backward(tensor *out__, tensor grad, tensor input) {
-  PROTECT(
-    auto outputs__ = torch::to_mkldnn_backward(*grad, *input);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_mkldnn_out(tensor *out__, tensor out, tensor self, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::to_mkldnn_out(*out, *self, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_other(tensor *out__, tensor self, tensor other, int non_blocking, int copy) {
-  PROTECT(
-    auto outputs__ = self->to(*other, (bool)non_blocking, (bool)copy);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_padded_tensor(tensor *out__, tensor self, double padding, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = self->to_padded_tensor(padding, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_to_padded_tensor_out(tensor *out__, tensor out, tensor self, double padding, int64_t *output_size_data, int output_size_len) {
-  PROTECT(
-    auto outputs__ = torch::to_padded_tensor_out(*out, *self, padding, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_topk(tensor *out__, tensor self, int64_t k, int64_t dim, int largest, int sorted) {
-  PROTECT(
-    auto outputs__ = torch::topk(*self, k, dim, (bool)largest, (bool)sorted);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_topk_values(tensor *out__, tensor values, tensor indices, tensor self, int64_t k, int64_t dim, int largest, int sorted) {
-  PROTECT(
-    auto outputs__ = torch::topk_out(*values, *indices, *self, k, dim, (bool)largest, (bool)sorted);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_totype(tensor *out__, tensor self, int scalar_type) {
-  PROTECT(
-    auto outputs__ = self->toType(torch::ScalarType(scalar_type));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trace(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::trace(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trace_backward(tensor *out__, tensor grad, int64_t *sizes_data, int sizes_len) {
-  PROTECT(
-    auto outputs__ = torch::trace_backward(*grad, torch::IntArrayRef(sizes_data, sizes_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trace_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::trace_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_transpose(tensor *out__, tensor self, int64_t dim0, int64_t dim1) {
-  PROTECT(
-    auto outputs__ = torch::transpose(*self, dim0, dim1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_transpose_(tensor *out__, tensor self, int64_t dim0, int64_t dim1) {
-  PROTECT(
-    auto outputs__ = self->transpose_(dim0, dim1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_transpose_copy(tensor *out__, tensor self, int64_t dim0, int64_t dim1) {
-  PROTECT(
-    auto outputs__ = torch::transpose_copy(*self, dim0, dim1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_transpose_copy_int_out(tensor *out__, tensor out, tensor self, int64_t dim0, int64_t dim1) {
-  PROTECT(
-    auto outputs__ = torch::transpose_copy_out(*out, *self, dim0, dim1);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trapezoid(tensor *out__, tensor y, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::trapezoid(*y, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trapezoid_x(tensor *out__, tensor y, tensor x, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::trapezoid(*y, *x, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trapz(tensor *out__, tensor y, tensor x, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::trapz(*y, *x, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trapz_dx(tensor *out__, tensor y, double dx, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::trapz(*y, dx, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_triangular_solve(tensor *out__, tensor self, tensor A, int upper, int transpose, int unitriangular) {
-  PROTECT(
-    auto outputs__ = torch::triangular_solve(*self, *A, (bool)upper, (bool)transpose, (bool)unitriangular);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_triangular_solve_x(tensor *out__, tensor X, tensor M, tensor self, tensor A, int upper, int transpose, int unitriangular) {
-  PROTECT(
-    auto outputs__ = torch::triangular_solve_out(*X, *M, *self, *A, (bool)upper, (bool)transpose, (bool)unitriangular);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_tril(tensor *out__, tensor self, int64_t diagonal) {
-  PROTECT(
-    auto outputs__ = torch::tril(*self, diagonal);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tril_(tensor *out__, tensor self, int64_t diagonal) {
-  PROTECT(
-    auto outputs__ = self->tril_(diagonal);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tril_indices(tensor *out__, int64_t row, int64_t col, int64_t offset, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::tril_indices(row, col, offset, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tril_indices_out(tensor *out__, tensor out, int64_t row, int64_t col, int64_t offset) {
-  PROTECT(
-    auto outputs__ = torch::tril_indices_out(*out, row, col, offset);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_tril_out(tensor *out__, tensor out, tensor self, int64_t diagonal) {
-  PROTECT(
-    auto outputs__ = torch::tril_out(*out, *self, diagonal);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_triplet_margin_loss(tensor *out__, tensor anchor, tensor positive, tensor negative, double margin, double p, double eps, int swap, int64_t reduction) {
-  PROTECT(
-    auto outputs__ = torch::triplet_margin_loss(*anchor, *positive, *negative, margin, p, eps, (bool)swap, reduction);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_triu(tensor *out__, tensor self, int64_t diagonal) {
-  PROTECT(
-    auto outputs__ = torch::triu(*self, diagonal);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_triu_(tensor *out__, tensor self, int64_t diagonal) {
-  PROTECT(
-    auto outputs__ = self->triu_(diagonal);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_triu_indices(tensor *out__, int64_t row, int64_t col, int64_t offset, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::triu_indices(row, col, offset, at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_triu_indices_out(tensor *out__, tensor out, int64_t row, int64_t col, int64_t offset) {
-  PROTECT(
-    auto outputs__ = torch::triu_indices_out(*out, row, col, offset);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_triu_out(tensor *out__, tensor out, tensor self, int64_t diagonal) {
-  PROTECT(
-    auto outputs__ = torch::triu_out(*out, *self, diagonal);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_true_divide(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::true_divide(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_true_divide_(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->true_divide_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_true_divide_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::true_divide_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_true_divide_scalar(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::true_divide(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_true_divide_scalar_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = self->true_divide_(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trunc(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::trunc(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trunc_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::trunc_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_trunc_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::trunc_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_type_as(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->type_as(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-tensor *atg_unbind(tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::unbind(*self, dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-tensor *atg_unbind_copy(tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::unbind_copy(*self, dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg_unbind_copy_int_out(tensor *out_data, int out_len, tensor self, int64_t dim) {
-  PROTECT(
-    torch::unbind_copy_out(of_carray_tensor(out_data, out_len), *self, dim);
-  )
-}
-
-void atg_unflatten(tensor *out__, tensor self, int64_t dim, int64_t *sizes_data, int sizes_len) {
-  PROTECT(
-    auto outputs__ = torch::unflatten(*self, dim, torch::IntArrayRef(sizes_data, sizes_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-tensor *atg_unflatten_dense_tensors(tensor flat, tensor *tensors_data, int tensors_len) {
-  PROTECT(
-    auto outputs__ = torch::unflatten_dense_tensors(*flat, of_carray_tensor(tensors_data, tensors_len));
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg_unfold(tensor *out__, tensor self, int64_t dimension, int64_t size, int64_t step) {
-  PROTECT(
-    auto outputs__ = self->unfold(dimension, size, step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_unfold_backward(tensor *out__, tensor grad_in, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t size, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::unfold_backward(*grad_in, torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, size, step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_unfold_backward_out(tensor *out__, tensor out, tensor grad_in, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t size, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::unfold_backward_out(*out, *grad_in, torch::IntArrayRef(input_sizes_data, input_sizes_len), dim, size, step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_unfold_copy(tensor *out__, tensor self, int64_t dimension, int64_t size, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::unfold_copy(*self, dimension, size, step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_unfold_copy_out(tensor *out__, tensor out, tensor self, int64_t dimension, int64_t size, int64_t step) {
-  PROTECT(
-    auto outputs__ = torch::unfold_copy_out(*out, *self, dimension, size, step);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_uniform(tensor *out__, tensor self, double from, double to) {
-  PROTECT(
-    auto outputs__ = torch::uniform(*self, from, to);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_uniform_(tensor *out__, tensor self, double from, double to) {
-  PROTECT(
-    auto outputs__ = self->uniform_(from, to);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_uniform_out(tensor *out__, tensor out, tensor self, double from, double to) {
-  PROTECT(
-    auto outputs__ = torch::uniform_out(*out, *self, from, to);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_unique_consecutive(tensor *out__, tensor self, int return_inverse, int return_counts, int64_t dim_v, int dim_null) {
-  PROTECT(
-    auto outputs__ = torch::unique_consecutive(*self, (bool)return_inverse, (bool)return_counts, dim_null ? c10::nullopt : c10::optional(dim_v));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_unique_consecutive_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor self, int return_inverse, int return_counts, int64_t dim_v, int dim_null) {
-  PROTECT(
-    auto outputs__ = torch::unique_consecutive_out(*out0, *out1, *out2, *self, (bool)return_inverse, (bool)return_counts, dim_null ? c10::nullopt : c10::optional(dim_v));
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_unique_dim(tensor *out__, tensor self, int64_t dim, int sorted, int return_inverse, int return_counts) {
-  PROTECT(
-    auto outputs__ = torch::unique_dim(*self, dim, (bool)sorted, (bool)return_inverse, (bool)return_counts);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_unique_dim_consecutive(tensor *out__, tensor self, int64_t dim, int return_inverse, int return_counts) {
-  PROTECT(
-    auto outputs__ = torch::unique_dim_consecutive(*self, dim, (bool)return_inverse, (bool)return_counts);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_unique_dim_consecutive_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor self, int64_t dim, int return_inverse, int return_counts) {
-  PROTECT(
-    auto outputs__ = torch::unique_dim_consecutive_out(*out0, *out1, *out2, *self, dim, (bool)return_inverse, (bool)return_counts);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-void atg_unique_dim_out(tensor *out__, tensor out0, tensor out1, tensor out2, tensor self, int64_t dim, int sorted, int return_inverse, int return_counts) {
-  PROTECT(
-    auto outputs__ = torch::unique_dim_out(*out0, *out1, *out2, *self, dim, (bool)sorted, (bool)return_inverse, (bool)return_counts);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-    out__[2] = new torch::Tensor(std::get<2>(outputs__));
-  )
-}
-
-tensor *atg_unsafe_chunk(tensor self, int64_t chunks, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::unsafe_chunk(*self, chunks, dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-tensor *atg_unsafe_split(tensor self, int64_t split_size, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::unsafe_split(*self, split_size, dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg_unsafe_split_tensor_out(tensor *out_data, int out_len, tensor self, int64_t split_size, int64_t dim) {
-  PROTECT(
-    torch::unsafe_split_out(of_carray_tensor(out_data, out_len), *self, split_size, dim);
-  )
-}
-
-tensor *atg_unsafe_split_with_sizes(tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::unsafe_split_with_sizes(*self, torch::IntArrayRef(split_sizes_data, split_sizes_len), dim);
-    int sz = outputs__.size();
-    torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*));
-    for (int i = 0; i < sz; ++i)
-      out__[i] = new torch::Tensor(outputs__[i]);
-    out__[sz] = nullptr;
-    return out__;
-  )
-}
-
-void atg_unsafe_split_with_sizes_out(tensor *out_data, int out_len, tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim) {
-  PROTECT(
-    torch::unsafe_split_with_sizes_out(of_carray_tensor(out_data, out_len), *self, torch::IntArrayRef(split_sizes_data, split_sizes_len), dim);
-  )
-}
-
-void atg_unsqueeze(tensor *out__, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::unsqueeze(*self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_unsqueeze_(tensor *out__, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = self->unsqueeze_(dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_unsqueeze_copy(tensor *out__, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::unsqueeze_copy(*self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_unsqueeze_copy_out(tensor *out__, tensor out, tensor self, int64_t dim) {
-  PROTECT(
-    auto outputs__ = torch::unsqueeze_copy_out(*out, *self, dim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bicubic2d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bicubic2d(*self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bicubic2d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bicubic2d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bicubic2d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bicubic2d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bicubic2d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bicubic2d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bicubic2d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bicubic2d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bilinear2d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bilinear2d(*self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bilinear2d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bilinear2d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bilinear2d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bilinear2d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bilinear2d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bilinear2d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_bilinear2d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::upsample_bilinear2d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_linear1d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_linear1d(*self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_null ? c10::nullopt : c10::optional(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_linear1d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_linear1d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_null ? c10::nullopt : c10::optional(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_linear1d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_linear1d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_null ? c10::nullopt : c10::optional(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_linear1d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_linear1d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_null ? c10::nullopt : c10::optional(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_linear1d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::upsample_linear1d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest1d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest1d(*self, torch::IntArrayRef(output_size_data, output_size_len), scales_null ? c10::nullopt : c10::optional(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest1d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest1d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_null ? c10::nullopt : c10::optional(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest1d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest1d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_null ? c10::nullopt : c10::optional(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest1d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest1d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), scales_null ? c10::nullopt : c10::optional(scales_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest1d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest1d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest2d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest2d(*self, torch::IntArrayRef(output_size_data, output_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest2d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest2d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest2d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest2d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest2d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest2d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest2d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest2d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest3d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest3d(*self, torch::IntArrayRef(output_size_data, output_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest3d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest3d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest3d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest3d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest3d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest3d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_nearest3d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::upsample_nearest3d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), at::ArrayRef(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_trilinear3d(tensor *out__, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_trilinear3d(*self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_trilinear3d_backward(tensor *out__, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_trilinear3d_backward(*grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_trilinear3d_backward_grad_input(tensor *out__, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_trilinear3d_backward_out(*grad_input, *grad_output, torch::IntArrayRef(output_size_data, output_size_len), torch::IntArrayRef(input_size_data, input_size_len), (bool)align_corners, scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_trilinear3d_out(tensor *out__, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null) {
-  PROTECT(
-    auto outputs__ = torch::upsample_trilinear3d_out(*out, *self, torch::IntArrayRef(output_size_data, output_size_len), (bool)align_corners, scales_d_null ? c10::nullopt : c10::optional(scales_d_v), scales_h_null ? c10::nullopt : c10::optional(scales_h_v), scales_w_null ? c10::nullopt : c10::optional(scales_w_v));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_upsample_trilinear3d_vec(tensor *out__, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len) {
-  PROTECT(
-    auto outputs__ = torch::upsample_trilinear3d(*input, output_size_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(output_size_data, output_size_len)), (bool)align_corners, at::ArrayRef(scale_factors_data, scale_factors_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_value_selecting_reduction_backward(tensor *out__, tensor grad, int64_t dim, tensor indices, int64_t *sizes_data, int sizes_len, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::value_selecting_reduction_backward(*grad, dim, *indices, torch::IntArrayRef(sizes_data, sizes_len), (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_values(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = self->values();
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_values_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::values_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_values_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::values_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_vander(tensor *out__, tensor x, int64_t n_v, int n_null, int increasing) {
-  PROTECT(
-    auto outputs__ = torch::vander(*x, n_null ? c10::nullopt : c10::optional(n_v), (bool)increasing);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_var(tensor *out__, tensor self, int unbiased) {
-  PROTECT(
-    auto outputs__ = torch::var(*self, (bool)unbiased);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_var_correction(tensor *out__, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::var(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_var_correction_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::var_out(*out, *self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_var_dim(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::var(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_var_mean(tensor *out__, tensor self, int unbiased) {
-  PROTECT(
-    auto outputs__ = torch::var_mean(*self, (bool)unbiased);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_var_mean_correction(tensor *out__, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::var_mean(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_var_mean_correction_out(tensor *out__, tensor out0, tensor out1, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::var_mean_out(*out0, *out1, *self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), *correction, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_var_mean_dim(tensor *out__, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::var_mean(*self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim);
-    out__[0] = new torch::Tensor(std::get<0>(outputs__));
-    out__[1] = new torch::Tensor(std::get<1>(outputs__));
-  )
-}
-
-void atg_var_out(tensor *out__, tensor out, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim) {
-  PROTECT(
-    auto outputs__ = torch::var_out(*out, *self, dim_data == nullptr ? c10::nullopt : c10::optional(torch::IntArrayRef(dim_data, dim_len)), (bool)unbiased, (bool)keepdim);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_vdot(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::vdot(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_vdot_out(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::vdot_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view(tensor *out__, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = self->view(torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_as(tensor *out__, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = self->view_as(*other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_as_complex(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::view_as_complex(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_as_complex_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::view_as_complex_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_as_complex_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::view_as_complex_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_as_real(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::view_as_real(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_as_real_copy(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::view_as_real_copy(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_as_real_copy_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::view_as_real_copy_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_copy(tensor *out__, tensor self, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::view_copy(*self, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_copy_dtype(tensor *out__, tensor self, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::view_copy(*self, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_copy_dtype_out(tensor *out__, tensor out, tensor self, int dtype) {
-  PROTECT(
-    auto outputs__ = torch::view_copy_out(*out, *self, torch::ScalarType(dtype));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_view_copy_out(tensor *out__, tensor out, tensor self, int64_t *size_data,
int size_len) { - PROTECT( - auto outputs__ = torch::view_copy_out(*out, *self, torch::IntArrayRef(size_data, size_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_view_dtype(tensor *out__, tensor self, int dtype) { - PROTECT( - auto outputs__ = self->view(torch::ScalarType(dtype)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -tensor *atg_vsplit(tensor self, int64_t sections) { - PROTECT( - auto outputs__ = torch::vsplit(*self, sections); - int sz = outputs__.size(); - torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*)); - for (int i = 0; i < sz; ++i) - out__[i] = new torch::Tensor(outputs__[i]); - out__[sz] = nullptr; - return out__; - ) -} - -tensor *atg_vsplit_array(tensor self, int64_t *indices_data, int indices_len) { - PROTECT( - auto outputs__ = torch::vsplit(*self, torch::IntArrayRef(indices_data, indices_len)); - int sz = outputs__.size(); - torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*)); - for (int i = 0; i < sz; ++i) - out__[i] = new torch::Tensor(outputs__[i]); - out__[sz] = nullptr; - return out__; - ) -} - -void atg_vstack(tensor *out__, tensor *tensors_data, int tensors_len) { - PROTECT( - auto outputs__ = torch::vstack(of_carray_tensor(tensors_data, tensors_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_vstack_out(tensor *out__, tensor out, tensor *tensors_data, int tensors_len) { - PROTECT( - auto outputs__ = torch::vstack_out(*out, of_carray_tensor(tensors_data, tensors_len)); - out__[0] = new torch::Tensor(outputs__); - ) -} - -tensor *atg_where(tensor condition) { - PROTECT( - auto outputs__ = torch::where(*condition); - int sz = outputs__.size(); - torch::Tensor **out__ = (torch::Tensor**)malloc((sz + 1) * sizeof(torch::Tensor*)); - for (int i = 0; i < sz; ++i) - out__[i] = new torch::Tensor(outputs__[i]); - out__[sz] = nullptr; - return out__; - ) -} - -void atg_where_scalar(tensor *out__, tensor condition, scalar self, scalar other) { - PROTECT( - auto outputs__ = torch::where(*condition, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_where_scalarother(tensor *out__, tensor condition, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::where(*condition, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_where_scalarself(tensor *out__, tensor condition, scalar self, tensor other) { - PROTECT( - auto outputs__ = torch::where(*condition, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_where_self(tensor *out__, tensor condition, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::where(*condition, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_where_self_out(tensor *out__, tensor out, tensor condition, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::where_out(*out, *condition, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_xlogy(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::xlogy(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_xlogy_(tensor *out__, tensor self, tensor other) { - PROTECT( - auto outputs__ = torch::xlogy_(*self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void atg_xlogy_outscalar_other(tensor *out__, tensor out, tensor self, scalar other) { - PROTECT( - auto outputs__ = torch::xlogy_out(*out, *self, *other); - out__[0] = new torch::Tensor(outputs__); - ) -} - -void 
atg_xlogy_outscalar_self(tensor *out__, tensor out, scalar self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::xlogy_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_xlogy_outtensor(tensor *out__, tensor out, tensor self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::xlogy_out(*out, *self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_xlogy_scalar_other(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::xlogy(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_xlogy_scalar_other_(tensor *out__, tensor self, scalar other) {
-  PROTECT(
-    auto outputs__ = torch::xlogy_(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_xlogy_scalar_self(tensor *out__, scalar self, tensor other) {
-  PROTECT(
-    auto outputs__ = torch::xlogy(*self, *other);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_zero(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::zero(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_zero_(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::zero_(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_zero_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::zero_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_zeros(tensor *out__, int64_t *size_data, int size_len, int options_kind, int options_device) {
-  PROTECT(
-    auto outputs__ = torch::zeros(torch::IntArrayRef(size_data, size_len), at::device(device_of_int(options_device)).dtype(at::ScalarType(options_kind)));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_zeros_like(tensor *out__, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::zeros_like(*self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_zeros_like_out(tensor *out__, tensor out, tensor self) {
-  PROTECT(
-    auto outputs__ = torch::zeros_like_out(*out, *self);
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
-void atg_zeros_out(tensor *out__, tensor out, int64_t *size_data, int size_len) {
-  PROTECT(
-    auto outputs__ = torch::zeros_out(*out, torch::IntArrayRef(size_data, size_len));
-    out__[0] = new torch::Tensor(outputs__);
-  )
-}
-
diff --git a/src/wrapper/torch_api_generated.h b/src/wrapper/torch_api_generated.h
index b63341d..bab5428 100644
--- a/src/wrapper/torch_api_generated.h
+++ b/src/wrapper/torch_api_generated.h
@@ -1,2551 +1,2551 @@
 // THIS FILE IS AUTOMATICALLY GENERATED, DO NOT EDIT BY HAND!
-void atg___and__(tensor *, tensor self, scalar other); -void atg___and__tensor_(tensor *, tensor self, tensor other); -void atg___iand__(tensor *, tensor self, scalar other); -void atg___iand__tensor_(tensor *, tensor self, tensor other); -void atg___ilshift__(tensor *, tensor self, scalar other); -void atg___ilshift__tensor_(tensor *, tensor self, tensor other); -void atg___ior__(tensor *, tensor self, scalar other); -void atg___ior__tensor_(tensor *, tensor self, tensor other); -void atg___irshift__(tensor *, tensor self, scalar other); -void atg___irshift__tensor_(tensor *, tensor self, tensor other); -void atg___ixor__(tensor *, tensor self, scalar other); -void atg___ixor__tensor_(tensor *, tensor self, tensor other); -void atg___lshift__(tensor *, tensor self, scalar other); -void atg___lshift__scalar_out_(tensor *, tensor out, tensor self, scalar other); -void atg___lshift__tensor_(tensor *, tensor self, tensor other); -void atg___lshift__tensor_out_(tensor *, tensor out, tensor self, tensor other); -void atg___or__(tensor *, tensor self, scalar other); -void atg___or__tensor_(tensor *, tensor self, tensor other); -void atg___rshift__(tensor *, tensor self, scalar other); -void atg___rshift__scalar_out_(tensor *, tensor out, tensor self, scalar other); -void atg___rshift__tensor_(tensor *, tensor self, tensor other); -void atg___rshift__tensor_out_(tensor *, tensor out, tensor self, tensor other); -void atg___xor__(tensor *, tensor self, scalar other); -void atg___xor__tensor_(tensor *, tensor self, tensor other); -void atg__adaptive_avg_pool2d(tensor *, tensor self, int64_t *output_size_data, int output_size_len); -void atg__adaptive_avg_pool2d_backward(tensor *, tensor grad_output, tensor self); -void atg__adaptive_avg_pool2d_backward_out(tensor *, tensor out, tensor grad_output, tensor self); -void atg__adaptive_avg_pool2d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len); -void atg__adaptive_avg_pool3d(tensor *, tensor self, int64_t *output_size_data, int output_size_len); -void atg__adaptive_avg_pool3d_backward(tensor *, tensor grad_output, tensor self); -void atg__adaptive_avg_pool3d_backward_out(tensor *, tensor out, tensor grad_output, tensor self); -void atg__adaptive_avg_pool3d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len); -void atg__add_batch_dim(tensor *, tensor self, int64_t batch_dim, int64_t level); -void atg__add_relu(tensor *, tensor self, tensor other); -void atg__add_relu_(tensor *, tensor self, tensor other); -void atg__add_relu_out(tensor *, tensor out, tensor self, tensor other); -void atg__add_relu_scalar(tensor *, tensor self, scalar other); -void atg__add_relu_scalar_(tensor *, tensor self, scalar other); -void atg__add_relu_scalar_out(tensor *, tensor out, tensor self, scalar other); -void atg__addmm_activation(tensor *, tensor self, tensor mat1, tensor mat2, int use_gelu); -void atg__addmm_activation_out(tensor *, tensor out, tensor self, tensor mat1, tensor mat2, int use_gelu); -void atg__aminmax(tensor *, tensor self); -void atg__aminmax_dim(tensor *, tensor self, int64_t dim, int keepdim); -void atg__aminmax_dim_out(tensor *, tensor out0, tensor out1, tensor self, int64_t dim, int keepdim); -void atg__aminmax_out(tensor *, tensor out0, tensor out1, tensor self); -void atg__amp_update_scale(tensor *, tensor self, tensor growth_tracker, tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval); -void atg__amp_update_scale_(tensor *, 
tensor self, tensor growth_tracker, tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval); -void atg__amp_update_scale_out(tensor *, tensor out, tensor self, tensor growth_tracker, tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval); -void atg__assert_tensor_metadata(tensor a, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int dtype); -void atg__autocast_to_full_precision(tensor *, tensor self, int cuda_enabled, int cpu_enabled); -void atg__autocast_to_reduced_precision(tensor *, tensor self, int cuda_enabled, int cpu_enabled, int cuda_dtype, int cpu_dtype); -void atg__cast_byte(tensor *, tensor self, int non_blocking); -void atg__cast_char(tensor *, tensor self, int non_blocking); -void atg__cast_double(tensor *, tensor self, int non_blocking); -void atg__cast_float(tensor *, tensor self, int non_blocking); -void atg__cast_half(tensor *, tensor self, int non_blocking); -void atg__cast_int(tensor *, tensor self, int non_blocking); -void atg__cast_long(tensor *, tensor self, int non_blocking); -void atg__cast_short(tensor *, tensor self, int non_blocking); -void atg__cdist_backward(tensor *, tensor grad, tensor x1, tensor x2, double p, tensor cdist); -void atg__cdist_backward_out(tensor *, tensor out, tensor grad, tensor x1, tensor x2, double p, tensor cdist); -void atg__cholesky_solve_helper(tensor *, tensor self, tensor A, int upper); -void atg__cholesky_solve_helper_out(tensor *, tensor out, tensor self, tensor A, int upper); -void atg__coalesce(tensor *, tensor self); -void atg__coalesce_out(tensor *, tensor out, tensor self); -void atg__coalesced(tensor *, tensor self, int coalesced); -void atg__coalesced_(tensor *, tensor self, int coalesced); -void atg__coalesced_out(tensor *, tensor out, tensor self, int coalesced); -void atg__compute_linear_combination(tensor *, tensor input, tensor coefficients); -void atg__compute_linear_combination_out(tensor *, tensor out, tensor input, tensor coefficients); -void atg__conj(tensor *, tensor self); -void atg__conj_copy(tensor *, tensor self); -void atg__conj_copy_out(tensor *, tensor out, tensor self); -void atg__conj_physical(tensor *, tensor self); -void atg__conj_physical_out(tensor *, tensor out, tensor self); -void atg__conv_depthwise2d(tensor *, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); -void atg__conv_depthwise2d_out(tensor *, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); -void atg__convert_indices_from_coo_to_csr(tensor *, tensor self, int64_t size, int out_int32); -void atg__convert_indices_from_coo_to_csr_out(tensor *, tensor out, tensor self, int64_t size, int out_int32); -void atg__convert_indices_from_csr_to_coo(tensor *, tensor crow_indices, tensor col_indices, int out_int32, int transpose); -void atg__convert_indices_from_csr_to_coo_out(tensor *, tensor out, tensor crow_indices, tensor col_indices, int out_int32, int transpose); -void atg__convolution(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int 
output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled, int allow_tf32); -void atg__convolution_deprecated(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled); -void atg__convolution_mode(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups); -void atg__convolution_out(tensor *, tensor out, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled, int allow_tf32); -void atg__copy_from(tensor *, tensor self, tensor dst, int non_blocking); -void atg__copy_from_and_resize(tensor *, tensor self, tensor dst); -void atg__copy_from_and_resize_out(tensor *, tensor out, tensor self, tensor dst); -void atg__copy_from_out(tensor *, tensor out, tensor self, tensor dst, int non_blocking); -void atg__cslt_compress(tensor *, tensor input); -void atg__cslt_sparse_mm(tensor *, tensor compressed_A, tensor dense_B, tensor bias, int transpose_result); -void atg__ctc_loss(tensor *, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int zero_infinity); -void atg__ctc_loss_backward(tensor *, tensor grad, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, tensor neg_log_likelihood, tensor log_alpha, int64_t blank, int zero_infinity); -void atg__ctc_loss_backward_out(tensor *, tensor out, tensor grad, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, tensor neg_log_likelihood, tensor log_alpha, int64_t blank, int zero_infinity); -void atg__ctc_loss_backward_tensor(tensor *, tensor grad, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, tensor neg_log_likelihood, tensor log_alpha, int64_t blank, int zero_infinity); -void atg__ctc_loss_out(tensor *, tensor out0, tensor out1, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int zero_infinity); -void atg__ctc_loss_tensor(tensor *, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank, int zero_infinity); -void atg__ctc_loss_tensor_out(tensor *, tensor out0, tensor out1, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank, int zero_infinity); -void atg__cudnn_ctc_loss(tensor *, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int deterministic, int zero_infinity); -void atg__cudnn_ctc_loss_out(tensor *, tensor out0, tensor out1, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int deterministic, int zero_infinity); -void 
atg__cudnn_ctc_loss_tensor(tensor *, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank, int deterministic, int zero_infinity); -void atg__cudnn_init_dropout_state(tensor *, double dropout, int train, int64_t dropout_seed, int options_kind, int options_device); -void atg__cudnn_init_dropout_state_out(tensor *, tensor out, double dropout, int train, int64_t dropout_seed); -void atg__cudnn_rnn(tensor *, tensor input, tensor *weight_data, int weight_len, int64_t weight_stride0, tensor weight_buf, tensor hx, tensor cx, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, tensor dropout_state); -void atg__cudnn_rnn_flatten_weight(tensor *, tensor *weight_arr_data, int weight_arr_len, int64_t weight_stride0, int64_t input_size, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, int bidirectional); -void atg__cudnn_rnn_flatten_weight_out(tensor *, tensor out, tensor *weight_arr_data, int weight_arr_len, int64_t weight_stride0, int64_t input_size, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, int bidirectional); -void atg__cudnn_rnn_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor input, tensor *weight_data, int weight_len, int64_t weight_stride0, tensor weight_buf, tensor hx, tensor cx, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, tensor dropout_state); -int64_t atg__debug_has_internal_overlap(tensor self); -void atg__dim_arange(tensor *, tensor like, int64_t dim); -int64_t atg__dimi(tensor self); -int64_t atg__dimv(tensor self); -void atg__dirichlet_grad(tensor *, tensor x, tensor alpha, tensor total); -void atg__dirichlet_grad_out(tensor *, tensor out, tensor x, tensor alpha, tensor total); -void atg__efficient_attention_backward(tensor *, tensor grad_out_, tensor query, tensor key, tensor value, tensor bias, tensor out, tensor cu_seqlens_q, tensor cu_seqlens_k, int64_t max_seqlen_k, int64_t max_seqlen_q, tensor logsumexp, double dropout_p, tensor philox_seed, tensor philox_offset, int64_t custom_mask_type, int bias_requires_grad, double scale_v, int scale_null, int64_t num_splits_key_v, int num_splits_key_null); -void atg__efficientzerotensor(tensor *, int64_t *size_data, int size_len, int options_kind, int options_device); -void atg__efficientzerotensor_out(tensor *, tensor out, int64_t *size_data, int size_len); -void atg__embedding_bag(tensor *, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx); -void atg__embedding_bag_backward(tensor *, tensor grad, tensor indices, tensor offsets, tensor offset2bag, tensor bag_size, tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int64_t padding_idx); -void atg__embedding_bag_dense_backward(tensor *, tensor grad, tensor indices, tensor offset2bag, tensor bag_size, tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, tensor per_sample_weights, int64_t padding_idx); -void atg__embedding_bag_dense_backward_out(tensor *, tensor out, tensor grad, tensor indices, tensor offset2bag, tensor bag_size, tensor 
maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, tensor per_sample_weights, int64_t padding_idx); -void atg__embedding_bag_forward_only(tensor *, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx); -void atg__embedding_bag_forward_only_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx); -void atg__embedding_bag_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx); -void atg__embedding_bag_per_sample_weights_backward(tensor *, tensor grad, tensor weight, tensor indices, tensor offsets, tensor offset2bag, int64_t mode, int64_t padding_idx); -void atg__embedding_bag_per_sample_weights_backward_out(tensor *, tensor out, tensor grad, tensor weight, tensor indices, tensor offsets, tensor offset2bag, int64_t mode, int64_t padding_idx); -void atg__embedding_bag_sparse_backward(tensor *, tensor grad, tensor indices, tensor offsets, tensor offset2bag, tensor bag_size, int64_t num_weights, int scale_grad_by_freq, int64_t mode, tensor per_sample_weights, int64_t padding_idx); -void atg__empty_affine_quantized(tensor *, int64_t *size_data, int size_len, int options_kind, int options_device, double scale, int64_t zero_point); -void atg__empty_affine_quantized_out(tensor *, tensor out, int64_t *size_data, int size_len, double scale, int64_t zero_point); -void atg__empty_per_channel_affine_quantized(tensor *, int64_t *size_data, int size_len, tensor scales, tensor zero_points, int64_t axis, int options_kind, int options_device); -void atg__empty_per_channel_affine_quantized_out(tensor *, tensor out, int64_t *size_data, int size_len, tensor scales, tensor zero_points, int64_t axis); -void atg__euclidean_dist(tensor *, tensor x1, tensor x2); -void atg__euclidean_dist_out(tensor *, tensor out, tensor x1, tensor x2); -void atg__fake_quantize_learnable_per_channel_affine(tensor *, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor); -void atg__fake_quantize_learnable_per_channel_affine_backward(tensor *, tensor grad, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor); -void atg__fake_quantize_learnable_per_channel_affine_out(tensor *, tensor out, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor); -void atg__fake_quantize_learnable_per_tensor_affine(tensor *, tensor self, tensor scale, tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor); -void atg__fake_quantize_learnable_per_tensor_affine_backward(tensor *, tensor grad, tensor self, tensor scale, tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor); -void atg__fake_quantize_learnable_per_tensor_affine_out(tensor *, tensor out, tensor self, tensor scale, tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor); -void atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams(tensor *, tensor self, tensor scale, tensor zero_point, tensor fake_quant_enabled, int64_t quant_min, int64_t 
quant_max); -void atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out(tensor *, tensor out0, tensor out1, tensor self, tensor scale, tensor zero_point, tensor fake_quant_enabled, int64_t quant_min, int64_t quant_max); -void atg__fft_c2c(tensor *, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int forward); -void atg__fft_c2c_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int forward); -void atg__fft_c2r(tensor *, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int64_t last_dim_size); -void atg__fft_c2r_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int64_t last_dim_size); -void atg__fft_r2c(tensor *, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int onesided); -void atg__fft_r2c_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int onesided); -void atg__fill_mem_eff_dropout_mask_(tensor *, tensor self, double dropout_p, int64_t seed, int64_t offset); -void atg__flash_attention_backward(tensor *, tensor grad_out, tensor query, tensor key, tensor value, tensor out, tensor logsumexp, tensor cum_seq_q, tensor cum_seq_k, int64_t max_q, int64_t max_k, double dropout_p, int is_causal, tensor philox_seed, tensor philox_offset, double scale_v, int scale_null); -void atg__foobar(tensor *, tensor self, int arg1, int arg2, int arg3); -void atg__foobar_out(tensor *, tensor out, tensor self, int arg1, int arg2, int arg3); -void atg__functional_assert_async(tensor *, tensor self, char * assert_msg, tensor dep_token); -void atg__functional_sym_constrain_range(tensor *, scalar size, int64_t min_v, int min_null, int64_t max_v, int max_null, tensor dep_token); -void atg__functional_sym_constrain_range_for_size(tensor *, scalar size, int64_t min_v, int min_null, int64_t max_v, int max_null, tensor dep_token); -void atg__fused_adam(tensor *out_data, int out_len, tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf); -void atg__fused_adam_(tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf); -void atg__fused_adam_tensor_lr_(tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf); -void atg__fused_adam_tensor_lr_out(tensor *out_data, int out_len, tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, 
tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf); -void atg__fused_adamw(tensor *out_data, int out_len, tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf); -void atg__fused_adamw_(tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf); -void atg__fused_adamw_tensor_lr_(tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf); -void atg__fused_adamw_tensor_lr_out(tensor *out_data, int out_len, tensor *self_data, int self_len, tensor *grads_data, int grads_len, tensor *exp_avgs_data, int exp_avgs_len, tensor *exp_avg_sqs_data, int exp_avg_sqs_len, tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, tensor *state_steps_data, int state_steps_len, tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, tensor grad_scale, tensor found_inf); -void atg__fused_dropout(tensor *, tensor self, double p); -void atg__fused_dropout_out(tensor *, tensor out0, tensor out1, tensor self, double p); -void atg__fused_moving_avg_obs_fq_helper(tensor *, tensor self, tensor observer_on, tensor fake_quant_on, tensor running_min, tensor running_max, tensor scale, tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant); -void atg__fused_moving_avg_obs_fq_helper_functional(tensor *, tensor self, tensor observer_on, tensor fake_quant_on, tensor running_min, tensor running_max, tensor scale, tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant); -void atg__fused_moving_avg_obs_fq_helper_out(tensor *, tensor out0, tensor out1, tensor self, tensor observer_on, tensor fake_quant_on, tensor running_min, tensor running_max, tensor scale, tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant); -int64_t atg__fused_sdp_choice(tensor query, tensor key, tensor value, tensor attn_mask, double dropout_p, int is_causal, double scale_v, int scale_null); -void atg__fw_primal(tensor *, tensor self, int64_t level); -void atg__fw_primal_copy(tensor *, tensor self, int64_t level); -void atg__fw_primal_copy_out(tensor *, tensor out, tensor self, int64_t level); -void atg__gather_sparse_backward(tensor *, tensor self, int64_t dim, tensor index, tensor grad); -void atg__grid_sampler_2d_cpu_fallback(tensor *, tensor input, tensor grid, int64_t 
interpolation_mode, int64_t padding_mode, int align_corners); -void atg__grid_sampler_2d_cpu_fallback_backward(tensor *, tensor grad_output, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners); -void atg__grid_sampler_2d_cpu_fallback_out(tensor *, tensor out, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners); -int atg__has_compatible_shallow_copy_type(tensor self, tensor from); -int atg__has_same_storage_numel(tensor self, tensor other); -tensor *atg__histogramdd_bin_edges(tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, tensor weight, int density); -void atg__histogramdd_bin_edges_out(tensor *out_data, int out_len, tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, tensor weight, int density); -void atg__histogramdd_from_bin_cts(tensor *, tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, tensor weight, int density); -void atg__histogramdd_from_bin_cts_out(tensor *, tensor out, tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, tensor weight, int density); -void atg__histogramdd_from_bin_tensors(tensor *, tensor self, tensor *bins_data, int bins_len, tensor weight, int density); -void atg__histogramdd_from_bin_tensors_out(tensor *, tensor out, tensor self, tensor *bins_data, int bins_len, tensor weight, int density); -void atg__index_put_impl(tensor *, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate, int unsafe); -void atg__index_put_impl_(tensor *, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate, int unsafe); -void atg__index_put_impl_out(tensor *, tensor out, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate, int unsafe); -void atg__indices(tensor *, tensor self); -void atg__indices_copy(tensor *, tensor self); -void atg__indices_copy_out(tensor *, tensor out, tensor self); -void atg__int_mm(tensor *, tensor self, tensor mat2); -void atg__int_mm_out(tensor *, tensor out, tensor self, tensor mat2); -void atg__is_all_true(tensor *, tensor self); -void atg__is_any_true(tensor *, tensor self); -int atg__is_zerotensor(tensor self); -void atg__linalg_check_errors(tensor info, char * api_name, int is_matrix); -void atg__linalg_det(tensor *, tensor A); -void atg__linalg_det_result(tensor *, tensor result, tensor LU, tensor pivots, tensor A); -void atg__linalg_eigh(tensor *, tensor A, char * UPLO, int compute_v); -void atg__linalg_eigh_eigenvalues(tensor *, tensor eigenvalues, tensor eigenvectors, tensor A, char * UPLO, int compute_v); -void atg__linalg_slogdet(tensor *, tensor A); -void atg__linalg_slogdet_sign(tensor *, tensor sign, tensor logabsdet, tensor LU, tensor pivots, tensor A); -void atg__linalg_solve_ex(tensor *, tensor A, tensor B, int left, int check_errors); -void atg__linalg_solve_ex_result(tensor *, tensor result, tensor LU, tensor pivots, tensor info, tensor A, tensor B, int left, int check_errors); -void atg__linalg_svd(tensor *, tensor A, int full_matrices, int compute_uv, char * driver); -void atg__linalg_svd_u(tensor *, tensor U, tensor S, tensor Vh, tensor A, int full_matrices, int compute_uv, char * driver); -void atg__log_softmax(tensor *, tensor self, int64_t dim, int half_to_float); -void atg__log_softmax_backward_data(tensor *, tensor grad_output, tensor output, int64_t dim, int input_dtype); -void atg__log_softmax_backward_data_out(tensor *, tensor out, 
tensor grad_output, tensor output, int64_t dim, int input_dtype); -void atg__log_softmax_out(tensor *, tensor out, tensor self, int64_t dim, int half_to_float); -void atg__logcumsumexp(tensor *, tensor self, int64_t dim); -void atg__logcumsumexp_out(tensor *, tensor out, tensor self, int64_t dim); -void atg__lstm_mps(tensor *, tensor input, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first); -void atg__lstm_mps_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor out5, tensor input, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first); -void atg__lu_with_info(tensor *, tensor self, int pivot, int check_errors); -void atg__make_dep_token(tensor *, int options_kind, int options_device); -void atg__make_dual(tensor *, tensor primal, tensor tangent, int64_t level); -void atg__make_dual_copy(tensor *, tensor primal, tensor tangent, int64_t level); -void atg__make_dual_copy_out(tensor *, tensor out, tensor primal, tensor tangent, int64_t level); -void atg__make_per_channel_quantized_tensor(tensor *, tensor self, tensor scale, tensor zero_point, int64_t axis); -void atg__make_per_channel_quantized_tensor_out(tensor *, tensor out, tensor self, tensor scale, tensor zero_point, int64_t axis); -void atg__make_per_tensor_quantized_tensor(tensor *, tensor self, double scale, int64_t zero_point); -void atg__make_per_tensor_quantized_tensor_out(tensor *, tensor out, tensor self, double scale, int64_t zero_point); -void atg__masked_scale(tensor *, tensor self, tensor mask, double scale); -void atg__masked_scale_out(tensor *, tensor out, tensor self, tensor mask, double scale); -void atg__masked_softmax(tensor *, tensor self, tensor mask, int64_t dim_v, int dim_null, int64_t mask_type_v, int mask_type_null); -void atg__masked_softmax_backward(tensor *, tensor grad_output, tensor output, tensor mask, int64_t dim_v, int dim_null); -void atg__masked_softmax_backward_out(tensor *, tensor out, tensor grad_output, tensor output, tensor mask, int64_t dim_v, int dim_null); -void atg__masked_softmax_out(tensor *, tensor out, tensor self, tensor mask, int64_t dim_v, int dim_null, int64_t mask_type_v, int mask_type_null); -void atg__mkldnn_reshape(tensor *, tensor self, int64_t *shape_data, int shape_len); -void atg__mkldnn_reshape_out(tensor *, tensor out, tensor self, int64_t *shape_data, int shape_len); -void atg__mkldnn_transpose(tensor *, tensor self, int64_t dim0, int64_t dim1); -void atg__mkldnn_transpose_(tensor *, tensor self, int64_t dim0, int64_t dim1); -void atg__mkldnn_transpose_out(tensor *, tensor out, tensor self, int64_t dim0, int64_t dim1); -void atg__mps_convolution(tensor *, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups); -void atg__mps_convolution_out(tensor *, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups); -void atg__mps_convolution_transpose(tensor *, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups); -void 
atg__mps_convolution_transpose_out(tensor *, tensor out, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups); -void atg__native_batch_norm_legit(tensor *, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps); -void atg__native_batch_norm_legit_functional(tensor *, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps); -void atg__native_batch_norm_legit_no_stats(tensor *, tensor input, tensor weight, tensor bias, int training, double momentum, double eps); -void atg__native_batch_norm_legit_no_stats_out(tensor *, tensor out, tensor save_mean, tensor save_invstd, tensor input, tensor weight, tensor bias, int training, double momentum, double eps); -void atg__native_batch_norm_legit_no_training(tensor *, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, double momentum, double eps); -void atg__native_batch_norm_legit_no_training_out(tensor *, tensor out0, tensor out1, tensor out2, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, double momentum, double eps); -void atg__native_batch_norm_legit_out(tensor *, tensor out, tensor save_mean, tensor save_invstd, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps); -void atg__native_multi_head_attention(tensor *, tensor query, tensor key, tensor value, int64_t embed_dim, int64_t num_head, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, tensor mask, int need_weights, int average_attn_weights, int64_t mask_type_v, int mask_type_null); -void atg__native_multi_head_attention_out(tensor *, tensor out0, tensor out1, tensor query, tensor key, tensor value, int64_t embed_dim, int64_t num_head, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, tensor mask, int need_weights, int average_attn_weights, int64_t mask_type_v, int mask_type_null); -void atg__neg_view(tensor *, tensor self); -void atg__neg_view_copy(tensor *, tensor self); -void atg__neg_view_copy_out(tensor *, tensor out, tensor self); -void atg__nested_from_padded(tensor *, tensor padded, tensor cpu_nested_shape_example, int fuse_transform_0213); -void atg__nested_from_padded_and_nested_example(tensor *, tensor padded, tensor nt_example); -void atg__nested_from_padded_and_nested_example_out(tensor *, tensor out, tensor padded, tensor nt_example); -void atg__nested_from_padded_out(tensor *, tensor out, tensor padded, tensor cpu_nested_shape_example, int fuse_transform_0213); -void atg__nested_select_backward(tensor *, tensor grad_output, tensor self, int64_t dim, int64_t index); -void atg__nested_sum_backward(tensor *, tensor grad, tensor self, int64_t *dim_data, int dim_len, int keepdim); -void atg__nested_view_from_buffer(tensor *, tensor self, tensor nested_size, tensor nested_strides, tensor offsets); -void atg__nested_view_from_buffer_copy(tensor *, tensor self, tensor nested_size, tensor nested_strides, tensor offsets); -void atg__nested_view_from_buffer_copy_out(tensor *, tensor out, tensor self, tensor nested_size, tensor nested_strides, tensor offsets); -void atg__new_zeros_with_same_feature_meta(tensor *, tensor self, tensor other, int64_t self_num_batch_dims); -void 
atg__new_zeros_with_same_feature_meta_out(tensor *, tensor out, tensor self, tensor other, int64_t self_num_batch_dims);
+raw_tensor atg___and__(gc_tensor self, scalar other);
+raw_tensor atg___and__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg___iand__(gc_tensor self, scalar other);
+raw_tensor atg___iand__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg___ilshift__(gc_tensor self, scalar other);
+raw_tensor atg___ilshift__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg___ior__(gc_tensor self, scalar other);
+raw_tensor atg___ior__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg___irshift__(gc_tensor self, scalar other);
+raw_tensor atg___irshift__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg___ixor__(gc_tensor self, scalar other);
+raw_tensor atg___ixor__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg___lshift__(gc_tensor self, scalar other);
+raw_tensor atg___lshift__scalar_out_(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg___lshift__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg___lshift__tensor_out_(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg___or__(gc_tensor self, scalar other);
+raw_tensor atg___or__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg___rshift__(gc_tensor self, scalar other);
+raw_tensor atg___rshift__scalar_out_(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg___rshift__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg___rshift__tensor_out_(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg___xor__(gc_tensor self, scalar other);
+raw_tensor atg___xor__tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg__adaptive_avg_pool2d(gc_tensor self, int64_t *output_size_data, int output_size_len);
+raw_tensor atg__adaptive_avg_pool2d_backward(gc_tensor grad_output, gc_tensor self);
+raw_tensor atg__adaptive_avg_pool2d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self);
+raw_tensor atg__adaptive_avg_pool2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len);
+raw_tensor atg__adaptive_avg_pool3d(gc_tensor self, int64_t *output_size_data, int output_size_len);
+raw_tensor atg__adaptive_avg_pool3d_backward(gc_tensor grad_output, gc_tensor self);
+raw_tensor atg__adaptive_avg_pool3d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self);
+raw_tensor atg__adaptive_avg_pool3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len);
+raw_tensor atg__add_batch_dim(gc_tensor self, int64_t batch_dim, int64_t level);
+raw_tensor atg__add_relu(gc_tensor self, gc_tensor other);
+raw_tensor atg__add_relu_(gc_tensor self, gc_tensor other);
+raw_tensor atg__add_relu_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg__add_relu_scalar(gc_tensor self, scalar other);
+raw_tensor atg__add_relu_scalar_(gc_tensor self, scalar other);
+raw_tensor atg__add_relu_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg__addmm_activation(gc_tensor self, gc_tensor mat1, gc_tensor mat2, int use_gelu);
+raw_tensor atg__addmm_activation_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2, int use_gelu);
+void atg__aminmax(raw_tensor *, gc_tensor self);
+void atg__aminmax_dim(raw_tensor *, gc_tensor self, int64_t dim, int keepdim);
+void atg__aminmax_dim_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, int64_t dim, int keepdim);
+void atg__aminmax_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor 
self); +void atg__amp_update_scale(raw_tensor *, gc_tensor self, gc_tensor growth_tracker, gc_tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval); +raw_tensor atg__amp_update_scale_(gc_tensor self, gc_tensor growth_tracker, gc_tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval); +raw_tensor atg__amp_update_scale_out(gc_tensor out, gc_tensor self, gc_tensor growth_tracker, gc_tensor found_inf, double scale_growth_factor, double scale_backoff_factor, int64_t growth_interval); +void atg__assert_tensor_metadata(gc_tensor a, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int dtype); +raw_tensor atg__autocast_to_full_precision(gc_tensor self, int cuda_enabled, int cpu_enabled); +raw_tensor atg__autocast_to_reduced_precision(gc_tensor self, int cuda_enabled, int cpu_enabled, int cuda_dtype, int cpu_dtype); +raw_tensor atg__cast_byte(gc_tensor self, int non_blocking); +raw_tensor atg__cast_char(gc_tensor self, int non_blocking); +raw_tensor atg__cast_double(gc_tensor self, int non_blocking); +raw_tensor atg__cast_float(gc_tensor self, int non_blocking); +raw_tensor atg__cast_half(gc_tensor self, int non_blocking); +raw_tensor atg__cast_int(gc_tensor self, int non_blocking); +raw_tensor atg__cast_long(gc_tensor self, int non_blocking); +raw_tensor atg__cast_short(gc_tensor self, int non_blocking); +raw_tensor atg__cdist_backward(gc_tensor grad, gc_tensor x1, gc_tensor x2, double p, gc_tensor cdist); +raw_tensor atg__cdist_backward_out(gc_tensor out, gc_tensor grad, gc_tensor x1, gc_tensor x2, double p, gc_tensor cdist); +raw_tensor atg__cholesky_solve_helper(gc_tensor self, gc_tensor A, int upper); +raw_tensor atg__cholesky_solve_helper_out(gc_tensor out, gc_tensor self, gc_tensor A, int upper); +raw_tensor atg__coalesce(gc_tensor self); +raw_tensor atg__coalesce_out(gc_tensor out, gc_tensor self); +raw_tensor atg__coalesced(gc_tensor self, int coalesced); +raw_tensor atg__coalesced_(gc_tensor self, int coalesced); +raw_tensor atg__coalesced_out(gc_tensor out, gc_tensor self, int coalesced); +raw_tensor atg__compute_linear_combination(gc_tensor input, gc_tensor coefficients); +raw_tensor atg__compute_linear_combination_out(gc_tensor out, gc_tensor input, gc_tensor coefficients); +raw_tensor atg__conj(gc_tensor self); +raw_tensor atg__conj_copy(gc_tensor self); +raw_tensor atg__conj_copy_out(gc_tensor out, gc_tensor self); +raw_tensor atg__conj_physical(gc_tensor self); +raw_tensor atg__conj_physical_out(gc_tensor out, gc_tensor self); +raw_tensor atg__conv_depthwise2d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg__conv_depthwise2d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg__convert_indices_from_coo_to_csr(gc_tensor self, int64_t size, int out_int32); +raw_tensor atg__convert_indices_from_coo_to_csr_out(gc_tensor out, gc_tensor self, int64_t size, int out_int32); +raw_tensor atg__convert_indices_from_csr_to_coo(gc_tensor crow_indices, gc_tensor col_indices, int out_int32, int transpose); +raw_tensor atg__convert_indices_from_csr_to_coo_out(gc_tensor out, gc_tensor crow_indices, gc_tensor 
col_indices, int out_int32, int transpose);
+raw_tensor atg__convolution(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled, int allow_tf32);
+raw_tensor atg__convolution_deprecated(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled);
+raw_tensor atg__convolution_mode(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups);
+raw_tensor atg__convolution_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups, int benchmark, int deterministic, int cudnn_enabled, int allow_tf32);
+raw_tensor atg__copy_from(gc_tensor self, gc_tensor dst, int non_blocking);
+raw_tensor atg__copy_from_and_resize(gc_tensor self, gc_tensor dst);
+raw_tensor atg__copy_from_and_resize_out(gc_tensor out, gc_tensor self, gc_tensor dst);
+raw_tensor atg__copy_from_out(gc_tensor out, gc_tensor self, gc_tensor dst, int non_blocking);
+raw_tensor atg__cslt_compress(gc_tensor input);
+raw_tensor atg__cslt_sparse_mm(gc_tensor compressed_A, gc_tensor dense_B, gc_tensor bias, int transpose_result);
+void atg__ctc_loss(raw_tensor *, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int zero_infinity);
+raw_tensor atg__ctc_loss_backward(gc_tensor grad, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, gc_tensor neg_log_likelihood, gc_tensor log_alpha, int64_t blank, int zero_infinity);
+raw_tensor atg__ctc_loss_backward_out(gc_tensor out, gc_tensor grad, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, gc_tensor neg_log_likelihood, gc_tensor log_alpha, int64_t blank, int zero_infinity);
+raw_tensor atg__ctc_loss_backward_tensor(gc_tensor grad, gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, gc_tensor neg_log_likelihood, gc_tensor log_alpha, int64_t blank, int zero_infinity);
+void atg__ctc_loss_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int zero_infinity);
+void atg__ctc_loss_tensor(raw_tensor *, gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank, int zero_infinity);
+void atg__ctc_loss_tensor_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank, int zero_infinity);
+void atg__cudnn_ctc_loss(raw_tensor *, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int deterministic, int zero_infinity);
+void atg__cudnn_ctc_loss_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int deterministic, int zero_infinity);
+void atg__cudnn_ctc_loss_tensor(raw_tensor *, gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank, int deterministic, int zero_infinity);
+raw_tensor atg__cudnn_init_dropout_state(double dropout, int train, int64_t dropout_seed, int options_kind, int options_device);
+raw_tensor atg__cudnn_init_dropout_state_out(gc_tensor out, double dropout, int train, int64_t dropout_seed);
+void atg__cudnn_rnn(raw_tensor *, gc_tensor input, gc_tensor *weight_data, int weight_len, int64_t weight_stride0, gc_tensor weight_buf, gc_tensor hx, gc_tensor cx, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, gc_tensor dropout_state);
+raw_tensor atg__cudnn_rnn_flatten_weight(gc_tensor *weight_arr_data, int weight_arr_len, int64_t weight_stride0, int64_t input_size, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, int bidirectional);
+raw_tensor atg__cudnn_rnn_flatten_weight_out(gc_tensor out, gc_tensor *weight_arr_data, int weight_arr_len, int64_t weight_stride0, int64_t input_size, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, int bidirectional);
+void atg__cudnn_rnn_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor input, gc_tensor *weight_data, int weight_len, int64_t weight_stride0, gc_tensor weight_buf, gc_tensor hx, gc_tensor cx, int64_t mode, int64_t hidden_size, int64_t proj_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, gc_tensor dropout_state);
+int64_t atg__debug_has_internal_overlap(gc_tensor self);
+raw_tensor atg__dim_arange(gc_tensor like, int64_t dim);
+int64_t atg__dimi(gc_tensor self);
+int64_t atg__dimv(gc_tensor self);
+raw_tensor atg__dirichlet_grad(gc_tensor x, gc_tensor alpha, gc_tensor total);
+raw_tensor atg__dirichlet_grad_out(gc_tensor out, gc_tensor x, gc_tensor alpha, gc_tensor total);
+void atg__efficient_attention_backward(raw_tensor *, gc_tensor grad_out_, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor bias, gc_tensor out, gc_tensor cu_seqlens_q, gc_tensor cu_seqlens_k, int64_t max_seqlen_k, int64_t max_seqlen_q, gc_tensor logsumexp, double dropout_p, gc_tensor philox_seed, gc_tensor philox_offset, int64_t custom_mask_type, int bias_requires_grad, double scale_v, int scale_null, int64_t num_splits_key_v, int num_splits_key_null);
+raw_tensor atg__efficientzerotensor(int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg__efficientzerotensor_out(gc_tensor out, int64_t *size_data, int size_len);
+void atg__embedding_bag(raw_tensor *, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx);
+raw_tensor atg__embedding_bag_backward(gc_tensor grad, gc_tensor indices, gc_tensor offsets, gc_tensor offset2bag, gc_tensor bag_size, gc_tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int64_t padding_idx);
+raw_tensor atg__embedding_bag_dense_backward(gc_tensor grad, gc_tensor indices, gc_tensor offset2bag, gc_tensor bag_size, gc_tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, gc_tensor per_sample_weights, int64_t padding_idx);
+raw_tensor atg__embedding_bag_dense_backward_out(gc_tensor out, gc_tensor grad, gc_tensor indices, gc_tensor offset2bag, gc_tensor bag_size, gc_tensor maximum_indices, int64_t num_weights, int scale_grad_by_freq, int64_t mode, gc_tensor per_sample_weights, int64_t padding_idx);
+void atg__embedding_bag_forward_only(raw_tensor *, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx);
+void atg__embedding_bag_forward_only_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx);
+void atg__embedding_bag_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx);
+raw_tensor atg__embedding_bag_per_sample_weights_backward(gc_tensor grad, gc_tensor weight, gc_tensor indices, gc_tensor offsets, gc_tensor offset2bag, int64_t mode, int64_t padding_idx);
+raw_tensor atg__embedding_bag_per_sample_weights_backward_out(gc_tensor out, gc_tensor grad, gc_tensor weight, gc_tensor indices, gc_tensor offsets, gc_tensor offset2bag, int64_t mode, int64_t padding_idx);
+raw_tensor atg__embedding_bag_sparse_backward(gc_tensor grad, gc_tensor indices, gc_tensor offsets, gc_tensor offset2bag, gc_tensor bag_size, int64_t num_weights, int scale_grad_by_freq, int64_t mode, gc_tensor per_sample_weights, int64_t padding_idx);
+raw_tensor atg__empty_affine_quantized(int64_t *size_data, int size_len, int options_kind, int options_device, double scale, int64_t zero_point);
+raw_tensor atg__empty_affine_quantized_out(gc_tensor out, int64_t *size_data, int size_len, double scale, int64_t zero_point);
+raw_tensor atg__empty_per_channel_affine_quantized(int64_t *size_data, int size_len, gc_tensor scales, gc_tensor zero_points, int64_t axis, int options_kind, int options_device);
+raw_tensor atg__empty_per_channel_affine_quantized_out(gc_tensor out, int64_t *size_data, int size_len, gc_tensor scales, gc_tensor zero_points, int64_t axis);
+raw_tensor atg__euclidean_dist(gc_tensor x1, gc_tensor x2);
+raw_tensor atg__euclidean_dist_out(gc_tensor out, gc_tensor x1, gc_tensor x2);
+raw_tensor atg__fake_quantize_learnable_per_channel_affine(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor);
+void atg__fake_quantize_learnable_per_channel_affine_backward(raw_tensor *, gc_tensor grad, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor);
+raw_tensor atg__fake_quantize_learnable_per_channel_affine_out(gc_tensor out, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max, double grad_factor);
+raw_tensor atg__fake_quantize_learnable_per_tensor_affine(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor);
+void atg__fake_quantize_learnable_per_tensor_affine_backward(raw_tensor *, gc_tensor grad, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor);
+raw_tensor atg__fake_quantize_learnable_per_tensor_affine_out(gc_tensor out, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t quant_min, int64_t quant_max, double grad_factor);
+void atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams(raw_tensor *, gc_tensor self, gc_tensor scale, gc_tensor zero_point, gc_tensor fake_quant_enabled, int64_t quant_min, int64_t quant_max);
+void atg__fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, gc_tensor scale, gc_tensor zero_point, gc_tensor fake_quant_enabled, int64_t quant_min, int64_t quant_max);
+raw_tensor atg__fft_c2c(gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int forward);
+raw_tensor atg__fft_c2c_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int forward);
+raw_tensor atg__fft_c2r(gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int64_t last_dim_size);
+raw_tensor atg__fft_c2r_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int64_t last_dim_size);
+raw_tensor atg__fft_r2c(gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int onesided);
+raw_tensor atg__fft_r2c_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int64_t normalization, int onesided);
+raw_tensor atg__fill_mem_eff_dropout_mask_(gc_tensor self, double dropout_p, int64_t seed, int64_t offset);
+void atg__flash_attention_backward(raw_tensor *, gc_tensor grad_out, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor out, gc_tensor logsumexp, gc_tensor cum_seq_q, gc_tensor cum_seq_k, int64_t max_q, int64_t max_k, double dropout_p, int is_causal, gc_tensor philox_seed, gc_tensor philox_offset, double scale_v, int scale_null);
+raw_tensor atg__foobar(gc_tensor self, int arg1, int arg2, int arg3);
+raw_tensor atg__foobar_out(gc_tensor out, gc_tensor self, int arg1, int arg2, int arg3);
+raw_tensor atg__functional_assert_async(gc_tensor self, char * assert_msg, gc_tensor dep_token);
+raw_tensor atg__functional_sym_constrain_range(scalar size, int64_t min_v, int min_null, int64_t max_v, int max_null, gc_tensor dep_token);
+raw_tensor atg__functional_sym_constrain_range_for_size(scalar size, int64_t min_v, int min_null, int64_t max_v, int max_null, gc_tensor dep_token);
+void atg__fused_adam(gc_tensor *out_data, int out_len, gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf);
+void atg__fused_adam_(gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf);
+void atg__fused_adam_tensor_lr_(gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, gc_tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf);
+void atg__fused_adam_tensor_lr_out(gc_tensor *out_data, int out_len, gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, gc_tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf);
+void atg__fused_adamw(gc_tensor *out_data, int out_len, gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf);
+void atg__fused_adamw_(gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, double lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf);
+void atg__fused_adamw_tensor_lr_(gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, gc_tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf);
+void atg__fused_adamw_tensor_lr_out(gc_tensor *out_data, int out_len, gc_tensor *self_data, int self_len, gc_tensor *grads_data, int grads_len, gc_tensor *exp_avgs_data, int exp_avgs_len, gc_tensor *exp_avg_sqs_data, int exp_avg_sqs_len, gc_tensor *max_exp_avg_sqs_data, int max_exp_avg_sqs_len, gc_tensor *state_steps_data, int state_steps_len, gc_tensor lr, double beta1, double beta2, double weight_decay, double eps, int amsgrad, int maximize, gc_tensor grad_scale, gc_tensor found_inf);
+void atg__fused_dropout(raw_tensor *, gc_tensor self, double p);
+void atg__fused_dropout_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, double p);
+void atg__fused_moving_avg_obs_fq_helper(raw_tensor *, gc_tensor self, gc_tensor observer_on, gc_tensor fake_quant_on, gc_tensor running_min, gc_tensor running_max, gc_tensor scale, gc_tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant);
+void atg__fused_moving_avg_obs_fq_helper_functional(raw_tensor *, gc_tensor self, gc_tensor observer_on, gc_tensor fake_quant_on, gc_tensor running_min, gc_tensor running_max, gc_tensor scale, gc_tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant);
+void atg__fused_moving_avg_obs_fq_helper_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, gc_tensor observer_on, gc_tensor fake_quant_on, gc_tensor running_min, gc_tensor running_max, gc_tensor scale, gc_tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant);
+int64_t atg__fused_sdp_choice(gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor attn_mask, double dropout_p, int is_causal, double scale_v, int scale_null);
+raw_tensor atg__fw_primal(gc_tensor self, int64_t level);
+raw_tensor atg__fw_primal_copy(gc_tensor self, int64_t level);
+raw_tensor atg__fw_primal_copy_out(gc_tensor out, gc_tensor self, int64_t level);
+raw_tensor atg__gather_sparse_backward(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor grad);
+raw_tensor atg__grid_sampler_2d_cpu_fallback(gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners);
+void atg__grid_sampler_2d_cpu_fallback_backward(raw_tensor *, gc_tensor grad_output, gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners);
+raw_tensor atg__grid_sampler_2d_cpu_fallback_out(gc_tensor out, gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners);
+int atg__has_compatible_shallow_copy_type(gc_tensor self, gc_tensor from);
+int atg__has_same_storage_numel(gc_tensor self, gc_tensor other);
+raw_tensor *atg__histogramdd_bin_edges(gc_tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, gc_tensor weight, int density);
+void atg__histogramdd_bin_edges_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, gc_tensor weight, int density);
+raw_tensor atg__histogramdd_from_bin_cts(gc_tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, gc_tensor weight, int density);
+raw_tensor atg__histogramdd_from_bin_cts_out(gc_tensor out, gc_tensor self, int64_t *bins_data, int bins_len, double *range_data, int range_len, gc_tensor weight, int density);
+raw_tensor atg__histogramdd_from_bin_tensors(gc_tensor self, gc_tensor *bins_data, int bins_len, gc_tensor weight, int density);
+raw_tensor atg__histogramdd_from_bin_tensors_out(gc_tensor out, gc_tensor self, gc_tensor *bins_data, int bins_len, gc_tensor weight, int density);
+raw_tensor atg__index_put_impl(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate, int unsafe);
+raw_tensor atg__index_put_impl_(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate, int unsafe);
+raw_tensor atg__index_put_impl_out(gc_tensor out, gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate, int unsafe);
+raw_tensor atg__indices(gc_tensor self);
+raw_tensor atg__indices_copy(gc_tensor self);
+raw_tensor atg__indices_copy_out(gc_tensor out, gc_tensor self);
+raw_tensor atg__int_mm(gc_tensor self, gc_tensor mat2);
+raw_tensor atg__int_mm_out(gc_tensor out, gc_tensor self, gc_tensor mat2);
+raw_tensor atg__is_all_true(gc_tensor self);
+raw_tensor atg__is_any_true(gc_tensor self);
+int atg__is_zerotensor(gc_tensor self);
+void atg__linalg_check_errors(gc_tensor info, char * api_name, int is_matrix);
+void atg__linalg_det(raw_tensor *, gc_tensor A);
+void atg__linalg_det_result(raw_tensor *, gc_tensor result, gc_tensor LU, gc_tensor pivots, gc_tensor A);
+void atg__linalg_eigh(raw_tensor *, gc_tensor A, char * UPLO, int compute_v);
+void atg__linalg_eigh_eigenvalues(raw_tensor *, gc_tensor eigenvalues, gc_tensor eigenvectors, gc_tensor A, char * UPLO, int compute_v);
+void atg__linalg_slogdet(raw_tensor *, gc_tensor A);
+void atg__linalg_slogdet_sign(raw_tensor *, gc_tensor sign, gc_tensor logabsdet, gc_tensor LU, gc_tensor pivots, gc_tensor A);
+void atg__linalg_solve_ex(raw_tensor *, gc_tensor A, gc_tensor B, int left, int check_errors);
+void atg__linalg_solve_ex_result(raw_tensor *, gc_tensor result, gc_tensor LU, gc_tensor pivots, gc_tensor info, gc_tensor A, gc_tensor B, int left, int check_errors);
+void atg__linalg_svd(raw_tensor *, gc_tensor A, int full_matrices, int compute_uv, char * driver);
+void atg__linalg_svd_u(raw_tensor *, gc_tensor U, gc_tensor S, gc_tensor Vh, gc_tensor A, int full_matrices, int compute_uv, char * driver);
+raw_tensor atg__log_softmax(gc_tensor self, int64_t dim, int half_to_float);
+raw_tensor atg__log_softmax_backward_data(gc_tensor grad_output, gc_tensor output, int64_t dim, int input_dtype);
+raw_tensor atg__log_softmax_backward_data_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, int64_t dim, int input_dtype);
+raw_tensor atg__log_softmax_out(gc_tensor out, gc_tensor self, int64_t dim, int half_to_float);
+raw_tensor atg__logcumsumexp(gc_tensor self, int64_t dim);
+raw_tensor atg__logcumsumexp_out(gc_tensor out, gc_tensor self, int64_t dim);
+void atg__lstm_mps(raw_tensor *, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first);
+void atg__lstm_mps_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor out5, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first);
+void atg__lu_with_info(raw_tensor *, gc_tensor self, int pivot, int check_errors);
+raw_tensor atg__make_dep_token(int options_kind, int options_device);
+raw_tensor atg__make_dual(gc_tensor primal, gc_tensor tangent, int64_t level);
+raw_tensor atg__make_dual_copy(gc_tensor primal, gc_tensor tangent, int64_t level);
+raw_tensor atg__make_dual_copy_out(gc_tensor out, gc_tensor primal, gc_tensor tangent, int64_t level);
+raw_tensor atg__make_per_channel_quantized_tensor(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis);
+raw_tensor atg__make_per_channel_quantized_tensor_out(gc_tensor out, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis);
+raw_tensor atg__make_per_tensor_quantized_tensor(gc_tensor self, double scale, int64_t zero_point);
+raw_tensor atg__make_per_tensor_quantized_tensor_out(gc_tensor out, gc_tensor self, double scale, int64_t zero_point);
+raw_tensor atg__masked_scale(gc_tensor self, gc_tensor mask, double scale);
+raw_tensor atg__masked_scale_out(gc_tensor out, gc_tensor self, gc_tensor mask, double scale);
+raw_tensor atg__masked_softmax(gc_tensor self, gc_tensor mask, int64_t dim_v, int dim_null, int64_t mask_type_v, int mask_type_null);
+raw_tensor atg__masked_softmax_backward(gc_tensor grad_output, gc_tensor output, gc_tensor mask, int64_t dim_v, int dim_null);
+raw_tensor atg__masked_softmax_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, gc_tensor mask, int64_t dim_v, int dim_null);
+raw_tensor atg__masked_softmax_out(gc_tensor out, gc_tensor self, gc_tensor mask, int64_t dim_v, int dim_null, int64_t mask_type_v, int mask_type_null);
+raw_tensor atg__mkldnn_reshape(gc_tensor self, int64_t *shape_data, int shape_len);
+raw_tensor atg__mkldnn_reshape_out(gc_tensor out, gc_tensor self, int64_t *shape_data, int shape_len);
+raw_tensor atg__mkldnn_transpose(gc_tensor self, int64_t dim0, int64_t dim1);
+raw_tensor atg__mkldnn_transpose_(gc_tensor self, int64_t dim0, int64_t dim1);
+raw_tensor atg__mkldnn_transpose_out(gc_tensor out, gc_tensor self, int64_t dim0, int64_t dim1);
+raw_tensor atg__mps_convolution(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+raw_tensor atg__mps_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+raw_tensor atg__mps_convolution_transpose(gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+raw_tensor atg__mps_convolution_transpose_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+void atg__native_batch_norm_legit(raw_tensor *, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps);
+void atg__native_batch_norm_legit_functional(raw_tensor *, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps);
+void atg__native_batch_norm_legit_no_stats(raw_tensor *, gc_tensor input, gc_tensor weight, gc_tensor bias, int training, double momentum, double eps);
+void atg__native_batch_norm_legit_no_stats_out(raw_tensor *, gc_tensor out, gc_tensor save_mean, gc_tensor save_invstd, gc_tensor input, gc_tensor weight, gc_tensor bias, int training, double momentum, double eps);
+void atg__native_batch_norm_legit_no_training(raw_tensor *, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps);
+void atg__native_batch_norm_legit_no_training_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps);
+void atg__native_batch_norm_legit_out(raw_tensor *, gc_tensor out, gc_tensor save_mean, gc_tensor save_invstd, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps);
+void atg__native_multi_head_attention(raw_tensor *, gc_tensor query, gc_tensor key, gc_tensor value, int64_t embed_dim, int64_t num_head, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, gc_tensor mask, int need_weights, int average_attn_weights, int64_t mask_type_v, int mask_type_null);
+void atg__native_multi_head_attention_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor query, gc_tensor key, gc_tensor value, int64_t embed_dim, int64_t num_head, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, gc_tensor mask, int need_weights, int average_attn_weights, int64_t mask_type_v, int mask_type_null);
+raw_tensor atg__neg_view(gc_tensor self);
+raw_tensor atg__neg_view_copy(gc_tensor self);
+raw_tensor atg__neg_view_copy_out(gc_tensor out, gc_tensor self);
+raw_tensor atg__nested_from_padded(gc_tensor padded, gc_tensor cpu_nested_shape_example, int fuse_transform_0213);
+raw_tensor atg__nested_from_padded_and_nested_example(gc_tensor padded, gc_tensor nt_example);
+raw_tensor atg__nested_from_padded_and_nested_example_out(gc_tensor out, gc_tensor padded, gc_tensor nt_example);
+raw_tensor atg__nested_from_padded_out(gc_tensor out, gc_tensor padded, gc_tensor cpu_nested_shape_example, int fuse_transform_0213);
+raw_tensor atg__nested_select_backward(gc_tensor grad_output, gc_tensor self, int64_t dim, int64_t index);
+raw_tensor atg__nested_sum_backward(gc_tensor grad, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim);
+raw_tensor atg__nested_view_from_buffer(gc_tensor self, gc_tensor nested_size, gc_tensor nested_strides, gc_tensor offsets);
+raw_tensor atg__nested_view_from_buffer_copy(gc_tensor self, gc_tensor nested_size, gc_tensor nested_strides, gc_tensor offsets);
+raw_tensor atg__nested_view_from_buffer_copy_out(gc_tensor out, gc_tensor self, gc_tensor nested_size, gc_tensor nested_strides, gc_tensor offsets);
+raw_tensor atg__new_zeros_with_same_feature_meta(gc_tensor self, gc_tensor other, int64_t self_num_batch_dims);
+raw_tensor atg__new_zeros_with_same_feature_meta_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t self_num_batch_dims);
 int atg__nnpack_available();
-void atg__nnpack_spatial_convolution(tensor *, tensor input, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len);
-void atg__nnpack_spatial_convolution_out(tensor *, tensor out, tensor input, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len);
-int64_t atg__nnz(tensor self);
-void atg__pack_padded_sequence(tensor *, tensor input, tensor lengths, int batch_first);
-void atg__pack_padded_sequence_backward(tensor *, tensor grad, int64_t *input_size_data, int input_size_len, tensor batch_sizes, int batch_first);
-void atg__pack_padded_sequence_out(tensor *, tensor out0, tensor out1, tensor input, tensor lengths, int batch_first);
-void atg__pad_circular(tensor *, tensor self, int64_t *pad_data, int pad_len);
-void atg__pad_enum(tensor *, tensor self, int64_t *pad_data, int pad_len, int64_t mode, double value_v, int value_null);
-void atg__pad_packed_sequence(tensor *, tensor data, tensor batch_sizes, int batch_first, scalar padding_value, int64_t total_length);
-void atg__pdist_backward(tensor *, tensor grad, tensor self, double p, tensor pdist);
-void atg__pdist_backward_out(tensor *, tensor out, tensor grad, tensor self, double p, tensor pdist);
-void atg__pin_memory(tensor *, tensor self, int device);
-void atg__pin_memory_out(tensor *, tensor out, tensor self, int device);
-void atg__prelu_kernel(tensor *, tensor self, tensor weight);
-void atg__prelu_kernel_backward(tensor *, tensor grad_output, tensor self, tensor weight);
-void atg__propagate_xla_data(tensor input, tensor output);
-void atg__remove_batch_dim(tensor *, tensor self, int64_t level, int64_t batch_size, int64_t out_dim);
-void atg__reshape_alias(tensor *, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len);
-void atg__reshape_alias_copy(tensor *, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len);
-void atg__reshape_alias_copy_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len);
-void atg__reshape_copy(tensor *, tensor self, int64_t *size_data, int size_len);
-void atg__reshape_from_tensor(tensor *, tensor self, tensor shape);
-void atg__resize_output(tensor *, tensor self, int64_t *size_data, int size_len, int device);
-void atg__resize_output_(tensor *, tensor self, int64_t *size_data, int size_len, int device);
-void atg__resize_output_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len, int device);
-void atg__rowwise_prune(tensor *, tensor weight, tensor mask, int compressed_indices_dtype);
-void atg__sample_dirichlet(tensor *, tensor self);
-void atg__sample_dirichlet_out(tensor *, tensor out, tensor self);
-void atg__saturate_weight_to_fp16(tensor *, tensor weight);
-void atg__scaled_dot_product_attention_math(tensor *, tensor query, tensor key, tensor value, tensor attn_mask, double dropout_p, int is_causal, tensor dropout_mask, double scale_v, int scale_null);
-void atg__scaled_dot_product_efficient_attention(tensor *, tensor query, tensor key, tensor value, tensor attn_bias, int compute_log_sumexp, double dropout_p, int is_causal, double scale_v, int scale_null);
-void atg__scaled_dot_product_flash_attention_backward(tensor *, tensor grad_out, tensor query, tensor key, tensor value, tensor out, tensor logsumexp, tensor cum_seq_q, tensor cum_seq_k, int64_t max_q, int64_t max_k, double dropout_p, int is_causal, tensor philox_seed, tensor philox_offset, double scale_v, int scale_null);
-void atg__scaled_mm(tensor *, tensor self, tensor mat2, tensor bias, int out_dtype, tensor scale_a, tensor scale_b, tensor scale_result);
-void atg__scaled_mm_out(tensor *, tensor out, tensor out_amax, tensor self, tensor mat2, tensor bias, int out_dtype, tensor scale_a, tensor scale_b, tensor scale_result);
-void atg__scatter_reduce(tensor *, tensor self, int64_t dim, tensor index, tensor src, char * reduce, int include_self);
-void atg__scatter_reduce_(tensor *, tensor self, int64_t dim, tensor index, tensor src, char * reduce, int include_self);
-void atg__scatter_reduce_two_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, tensor src, char * reduce, int include_self);
-void atg__segment_reduce_backward(tensor *, tensor grad, tensor output, tensor data, char * reduce, tensor lengths, tensor offsets, int64_t axis, scalar initial);
-void atg__segment_reduce_backward_out(tensor *, tensor out, tensor grad, tensor output, tensor data, char * reduce, tensor lengths, tensor offsets, int64_t axis, scalar initial);
-void atg__shape_as_tensor(tensor *, tensor self);
-void atg__slow_conv2d_backward(tensor *, tensor grad_input, tensor grad_weight, tensor grad_bias, tensor grad_output, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len);
-void atg__sobol_engine_draw(tensor *, tensor quasi, int64_t n, tensor sobolstate, int64_t dimension, int64_t num_generated, int dtype);
-void atg__sobol_engine_ff_(tensor *, tensor self, int64_t n, tensor sobolstate, int64_t dimension, int64_t num_generated);
-void atg__sobol_engine_initialize_state_(tensor *, tensor self, int64_t dimension);
-void atg__sobol_engine_scramble_(tensor *, tensor self, tensor ltm, int64_t dimension);
-void atg__softmax(tensor *, tensor self, int64_t dim, int half_to_float);
-void atg__softmax_backward_data(tensor *, tensor grad_output, tensor output, int64_t dim, int input_dtype);
-void atg__softmax_backward_data_out(tensor *, tensor grad_input, tensor grad_output, tensor output, int64_t dim, int input_dtype);
-void atg__softmax_out(tensor *, tensor out, tensor self, int64_t dim, int half_to_float);
-void atg__sparse_addmm(tensor *, tensor self, tensor mat1, tensor mat2);
-void atg__sparse_addmm_out(tensor *, tensor out, tensor self, tensor mat1, tensor mat2);
-void atg__sparse_broadcast_to(tensor *, tensor self, int64_t *size_data, int size_len);
-void atg__sparse_broadcast_to_copy(tensor *, tensor self, int64_t *size_data, int size_len);
-void atg__sparse_broadcast_to_copy_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len);
-void atg__sparse_bsc_tensor_unsafe(tensor *, tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg__sparse_bsr_tensor_unsafe(tensor *, tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg__sparse_compressed_tensor_unsafe(tensor *, tensor compressed_indices, tensor plain_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg__sparse_coo_tensor_unsafe(tensor *, tensor indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device, int is_coalesced);
-void atg__sparse_coo_tensor_with_dims(tensor *, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg__sparse_coo_tensor_with_dims_and_tensors(tensor *, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, tensor indices, tensor values, int options_kind, int options_device, int is_coalesced);
-void atg__sparse_coo_tensor_with_dims_and_tensors_out(tensor *, tensor out, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, tensor indices, tensor values, int is_coalesced);
-void atg__sparse_coo_tensor_with_dims_out(tensor *, tensor out, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len);
-void atg__sparse_csc_tensor_unsafe(tensor *, tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg__sparse_csr_prod(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg__sparse_csr_prod_dim_dtype_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg__sparse_csr_sum(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg__sparse_csr_sum_dim_dtype_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg__sparse_csr_tensor_unsafe(tensor *, tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg__sparse_log_softmax(tensor *, tensor self, int64_t dim, int half_to_float);
-void atg__sparse_log_softmax_backward_data(tensor *, tensor grad_output, tensor output, int64_t dim, tensor self);
-void atg__sparse_log_softmax_backward_data_out(tensor *, tensor out, tensor grad_output, tensor output, int64_t dim, tensor self);
-void atg__sparse_log_softmax_int(tensor *, tensor self, int64_t dim, int dtype);
-void atg__sparse_log_softmax_out(tensor *, tensor out, tensor self, int64_t dim, int half_to_float);
-void atg__sparse_mask_projection(tensor *, tensor self, tensor mask, int accumulate_matches);
-void atg__sparse_mask_projection_out(tensor *, tensor out, tensor self, tensor mask, int accumulate_matches);
-void atg__sparse_mm(tensor *, tensor sparse, tensor dense);
-void atg__sparse_mm_reduce(tensor *, tensor sparse, tensor dense, char * reduce);
-void atg__sparse_mm_reduce_impl(tensor *, tensor self, tensor other, char * reduce);
-void atg__sparse_semi_structured_linear(tensor *, tensor input, tensor weight, tensor meta, tensor bias, char * activation);
-void atg__sparse_softmax(tensor *, tensor self, int64_t dim, int half_to_float);
-void atg__sparse_softmax_backward_data(tensor *, tensor grad_output, tensor output, int64_t dim, tensor self);
-void atg__sparse_softmax_backward_data_out(tensor *, tensor out, tensor grad_output, tensor output, int64_t dim, tensor self);
-void atg__sparse_softmax_int(tensor *, tensor self, int64_t dim, int dtype);
-void atg__sparse_softmax_out(tensor *, tensor out, tensor self, int64_t dim, int half_to_float);
-void atg__sparse_sparse_matmul(tensor *, tensor self, tensor other);
-void atg__sparse_sparse_matmul_out(tensor *, tensor out, tensor self, tensor other);
-void atg__sparse_sum(tensor *, tensor self);
-void atg__sparse_sum_backward(tensor *, tensor grad, tensor self, int64_t *dim_data, int dim_len);
-void atg__sparse_sum_backward_out(tensor *, tensor out, tensor grad, tensor self, int64_t *dim_data, int dim_len);
-void atg__sparse_sum_dim(tensor *, tensor self, int64_t *dim_data, int dim_len);
-void atg__sparse_sum_dim_dtype(tensor *, tensor self, int64_t *dim_data, int dim_len, int dtype);
-void atg__sparse_sum_dim_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len);
-void atg__sparse_sum_dtype(tensor *, tensor self, int dtype);
-void atg__spdiags(tensor *, tensor diagonals, tensor offsets, int64_t *shape_data, int shape_len);
-void atg__spdiags_out(tensor *, tensor out, tensor diagonals, tensor offsets, int64_t *shape_data, int shape_len);
-void atg__stack(tensor *, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg__stack_out(tensor *, tensor out, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg__standard_gamma(tensor *, tensor self);
-void atg__standard_gamma_grad(tensor *, tensor self, tensor output);
-void atg__standard_gamma_grad_out(tensor *, tensor out, tensor self, tensor output);
-void atg__standard_gamma_out(tensor *, tensor out, tensor self);
-void atg__test_ambiguous_defaults(tensor *, tensor dummy, int64_t a, int64_t b);
-void atg__test_ambiguous_defaults_b(tensor *, tensor dummy, int64_t a, char * b);
-void atg__test_autograd_multiple_dispatch(tensor *, tensor self);
-void atg__test_autograd_multiple_dispatch_fullcoverage_out(tensor *, tensor out, tensor self);
-void atg__test_autograd_multiple_dispatch_ntonly(tensor *, tensor self, int b);
-void atg__test_autograd_multiple_dispatch_view(tensor *, tensor self);
-void atg__test_autograd_multiple_dispatch_view_copy(tensor *, tensor self);
-void atg__test_autograd_multiple_dispatch_view_copy_out(tensor *, tensor out, tensor self);
-void atg__test_check_tensor(tensor *, tensor self);
-void atg__test_functorch_fallback(tensor *, tensor self, tensor other);
-void atg__test_functorch_fallback_out(tensor *, tensor out, tensor self, tensor other);
-void atg__test_optional_filled_intlist(tensor *, tensor values, int64_t *addends_data, int addends_len);
-void atg__test_optional_filled_intlist_out(tensor *, tensor out, tensor values, int64_t *addends_data, int addends_len);
-void atg__test_optional_floatlist(tensor *, tensor values, double *addends_data, int addends_len);
-void atg__test_optional_floatlist_out(tensor *, tensor out, tensor values, double *addends_data, int addends_len);
-void atg__test_optional_intlist(tensor *, tensor values, int64_t *addends_data, int addends_len);
-void atg__test_optional_intlist_out(tensor *, tensor out, tensor values, int64_t *addends_data, int addends_len);
-void atg__test_serialization_subcmul(tensor *, tensor self, tensor other);
-void atg__test_string_default(tensor *, tensor dummy, char * a, char * b);
-void atg__test_warn_in_autograd(tensor *, tensor self);
-void atg__test_warn_in_autograd_out(tensor *, tensor out, tensor self);
-void atg__thnn_differentiable_gru_cell_backward(tensor *, tensor grad_hy, tensor input_gates, tensor hidden_gates, tensor hx, tensor input_bias, tensor hidden_bias);
-void atg__thnn_differentiable_lstm_cell_backward(tensor *, tensor grad_hy, tensor grad_cy, tensor input_gates, tensor hidden_gates, tensor input_bias, tensor hidden_bias, tensor cx, tensor cy);
-void atg__thnn_fused_gru_cell(tensor *, tensor input_gates, tensor hidden_gates, tensor hx, tensor input_bias, tensor hidden_bias);
-void atg__thnn_fused_gru_cell_backward(tensor *, tensor grad_hy, tensor workspace, int has_bias);
-void atg__thnn_fused_gru_cell_backward_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor grad_hy, tensor workspace, int has_bias);
-void atg__thnn_fused_gru_cell_out(tensor *, tensor out0, tensor out1, tensor input_gates, tensor hidden_gates, tensor hx, tensor input_bias, tensor hidden_bias);
-void atg__thnn_fused_lstm_cell(tensor *, tensor input_gates, tensor hidden_gates, tensor cx, tensor input_bias, tensor hidden_bias);
-void atg__thnn_fused_lstm_cell_backward(tensor *, tensor grad_hy, tensor grad_cy, tensor cx, tensor cy, tensor workspace, int has_bias);
-void atg__thnn_fused_lstm_cell_backward_impl(tensor *, tensor grad_hy, tensor grad_cy, tensor cx, tensor cy, tensor workspace, int has_bias);
-void atg__thnn_fused_lstm_cell_backward_impl_out(tensor *, tensor out0, tensor out1, tensor out2, tensor grad_hy, tensor grad_cy, tensor cx, tensor cy, tensor workspace, int has_bias);
-void atg__thnn_fused_lstm_cell_out(tensor *, tensor out0, tensor out1, tensor out2, tensor input_gates, tensor hidden_gates, tensor cx, tensor input_bias, tensor hidden_bias);
-void atg__to_copy(tensor *, tensor self, int options_kind, int options_device, int non_blocking);
-void atg__to_copy_out(tensor *, tensor out, tensor self, int non_blocking);
-tensor *atg__to_cpu(tensor *tensors_data, int tensors_len);
-void atg__to_dense(tensor *, tensor self, int dtype, int masked_grad);
-void atg__to_dense_out(tensor *, tensor out, tensor self, int dtype, int masked_grad);
-void atg__to_sparse_bsc(tensor *, tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null);
-void atg__to_sparse_bsc_out(tensor *, tensor out, tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null);
-void atg__to_sparse_bsr(tensor *, tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null);
-void atg__to_sparse_bsr_out(tensor *, tensor out, tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null);
-void atg__to_sparse_csc(tensor *, tensor self, int64_t dense_dim_v, int dense_dim_null);
-void atg__to_sparse_csc_out(tensor *, tensor out, tensor self, int64_t dense_dim_v, int dense_dim_null);
-void atg__to_sparse_csr(tensor *, tensor self, int64_t dense_dim_v, int dense_dim_null);
-void atg__to_sparse_csr_out(tensor *, tensor out, tensor self, int64_t dense_dim_v, int dense_dim_null);
-void atg__to_sparse_semi_structured(tensor *, tensor dense);
-void atg__transform_bias_rescale_qkv(tensor *, tensor qkv, tensor qkv_bias, int64_t num_heads);
-void atg__transform_bias_rescale_qkv_out(tensor *, tensor out0, tensor out1, tensor out2, tensor qkv, tensor qkv_bias, int64_t num_heads);
-void atg__transformer_encoder_layer_fwd(tensor *, tensor src, int64_t embed_dim, int64_t num_heads, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, int use_gelu, int norm_first, double eps, tensor norm_weight_1, tensor norm_bias_1, tensor norm_weight_2, tensor norm_bias_2, tensor ffn_weight_1, tensor ffn_bias_1, tensor ffn_weight_2, tensor ffn_bias_2, tensor mask, int64_t mask_type_v, int mask_type_null);
-void atg__transformer_encoder_layer_fwd_out(tensor *, tensor out, tensor src, int64_t embed_dim, int64_t num_heads, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, int use_gelu, int norm_first, double eps, tensor norm_weight_1, tensor norm_bias_1, tensor norm_weight_2, tensor norm_bias_2, tensor ffn_weight_1, tensor ffn_bias_1, tensor ffn_weight_2, tensor ffn_bias_2, tensor mask, int64_t mask_type_v, int mask_type_null);
-void atg__trilinear(tensor *, tensor i1, tensor i2, tensor i3, int64_t *expand1_data, int expand1_len, int64_t *expand2_data, int expand2_len, int64_t *expand3_data, int expand3_len, int64_t *sumdim_data, int sumdim_len, int64_t unroll_dim);
-void atg__trilinear_out(tensor *, tensor out, tensor i1, tensor i2, tensor i3, int64_t *expand1_data, int expand1_len, int64_t *expand2_data, int expand2_len, int64_t *expand3_data, int expand3_len, int64_t *sumdim_data, int sumdim_len, int64_t unroll_dim);
-void atg__triton_multi_head_attention(tensor *, tensor query, tensor key, tensor value, int64_t embed_dim, int64_t num_head, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, tensor mask);
-void atg__triton_multi_head_attention_out(tensor *, tensor out, tensor query, tensor key, tensor value, int64_t embed_dim, int64_t num_head, tensor qkv_weight, tensor qkv_bias, tensor proj_weight, tensor proj_bias, tensor mask);
-void atg__triton_scaled_dot_attention(tensor *, tensor q, tensor k, tensor v, double dropout_p);
-void atg__triton_scaled_dot_attention_out(tensor *, tensor out, tensor q, tensor k, tensor v, double dropout_p);
-void atg__unique(tensor *, tensor self, int sorted, int return_inverse);
-void atg__unique2(tensor *, tensor self, int sorted, int return_inverse, int return_counts);
-void atg__unique2_out(tensor *, tensor out0, tensor out1, tensor out2, tensor self, int sorted, int return_inverse, int return_counts);
-void atg__unique_out(tensor *, tensor out0, tensor out1, tensor self, int sorted, int return_inverse);
-void atg__unpack_dual(tensor *, tensor dual, int64_t level);
-void atg__unsafe_index(tensor *, tensor self, tensor *indices_data, int indices_len);
-void atg__unsafe_index_put(tensor *, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate);
-void atg__unsafe_view(tensor *, tensor self, int64_t *size_data, int size_len);
-void atg__unsafe_view_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len);
-void atg__upsample_bicubic2d_aa(tensor *, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_bicubic2d_aa_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_bicubic2d_aa_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_bicubic2d_aa_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_bicubic2d_aa_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len);
-void atg__upsample_bilinear2d_aa(tensor *, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_bilinear2d_aa_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_bilinear2d_aa_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_bilinear2d_aa_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_bilinear2d_aa_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len);
-void atg__upsample_nearest_exact1d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null);
-void atg__upsample_nearest_exact1d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null);
-void atg__upsample_nearest_exact1d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null);
-void atg__upsample_nearest_exact1d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null);
-void atg__upsample_nearest_exact1d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len);
-void atg__upsample_nearest_exact2d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_nearest_exact2d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_nearest_exact2d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_nearest_exact2d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_nearest_exact2d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len);
-void atg__upsample_nearest_exact3d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_nearest_exact3d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_nearest_exact3d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_nearest_exact3d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg__upsample_nearest_exact3d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len);
-int atg__use_cudnn_ctc_loss(tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank);
-int atg__use_cudnn_ctc_loss_tensor(tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank);
+raw_tensor atg__nnpack_spatial_convolution(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len);
+raw_tensor atg__nnpack_spatial_convolution_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len);
+int64_t atg__nnz(gc_tensor self);
+void atg__pack_padded_sequence(raw_tensor *, gc_tensor input, gc_tensor lengths, int batch_first);
+raw_tensor atg__pack_padded_sequence_backward(gc_tensor grad, int64_t *input_size_data, int input_size_len, gc_tensor batch_sizes, int batch_first);
+void atg__pack_padded_sequence_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor input, gc_tensor lengths, int batch_first);
+raw_tensor atg__pad_circular(gc_tensor self, int64_t *pad_data, int pad_len);
+raw_tensor atg__pad_enum(gc_tensor self, int64_t *pad_data, int pad_len, int64_t mode, double value_v, int value_null);
+void atg__pad_packed_sequence(raw_tensor *, gc_tensor data, gc_tensor batch_sizes, int batch_first, scalar padding_value, int64_t total_length);
+raw_tensor atg__pdist_backward(gc_tensor grad, gc_tensor self, double p, gc_tensor pdist);
+raw_tensor atg__pdist_backward_out(gc_tensor out, gc_tensor grad, gc_tensor self, double p, gc_tensor pdist);
+raw_tensor atg__pin_memory(gc_tensor self, int device);
+raw_tensor atg__pin_memory_out(gc_tensor out, gc_tensor self, int device);
+raw_tensor atg__prelu_kernel(gc_tensor self, gc_tensor weight);
+void atg__prelu_kernel_backward(raw_tensor *, gc_tensor grad_output, gc_tensor self, gc_tensor weight);
+void atg__propagate_xla_data(gc_tensor input, gc_tensor output);
+raw_tensor atg__remove_batch_dim(gc_tensor self, int64_t level, int64_t batch_size, int64_t out_dim);
+raw_tensor atg__reshape_alias(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len);
+raw_tensor atg__reshape_alias_copy(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len);
+raw_tensor atg__reshape_alias_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len);
+raw_tensor atg__reshape_copy(gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg__reshape_from_tensor(gc_tensor self, gc_tensor shape);
+raw_tensor atg__resize_output(gc_tensor self, int64_t *size_data, int size_len, int device);
+raw_tensor atg__resize_output_(gc_tensor self, int64_t *size_data, int size_len, int device);
+raw_tensor atg__resize_output_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int device);
+void atg__rowwise_prune(raw_tensor *, gc_tensor weight, gc_tensor mask, int compressed_indices_dtype);
+raw_tensor atg__sample_dirichlet(gc_tensor self);
+raw_tensor atg__sample_dirichlet_out(gc_tensor out, gc_tensor self);
+raw_tensor atg__saturate_weight_to_fp16(gc_tensor weight);
+void atg__scaled_dot_product_attention_math(raw_tensor *, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor attn_mask, double dropout_p, int is_causal, gc_tensor dropout_mask, double scale_v, int scale_null);
+void atg__scaled_dot_product_efficient_attention(raw_tensor *, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor attn_bias, int compute_log_sumexp, double dropout_p, int is_causal, double scale_v, int scale_null);
+void atg__scaled_dot_product_flash_attention_backward(raw_tensor *, gc_tensor grad_out, gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor out, gc_tensor logsumexp, gc_tensor cum_seq_q, gc_tensor cum_seq_k, int64_t max_q, int64_t max_k, double dropout_p, int is_causal, gc_tensor philox_seed, gc_tensor philox_offset, double scale_v, int scale_null);
+void atg__scaled_mm(raw_tensor *, gc_tensor self, gc_tensor mat2, gc_tensor bias, int out_dtype, gc_tensor scale_a, gc_tensor scale_b, gc_tensor scale_result);
+void atg__scaled_mm_out(raw_tensor *, gc_tensor out, gc_tensor out_amax, gc_tensor self, gc_tensor mat2, gc_tensor bias, int out_dtype, gc_tensor scale_a, gc_tensor scale_b, gc_tensor scale_result);
+raw_tensor atg__scatter_reduce(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce, int include_self);
+raw_tensor atg__scatter_reduce_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce, int include_self);
+raw_tensor atg__scatter_reduce_two_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce, int include_self);
+raw_tensor atg__segment_reduce_backward(gc_tensor grad, gc_tensor output, gc_tensor data, char * reduce, gc_tensor lengths, gc_tensor offsets, int64_t axis, scalar initial);
+raw_tensor atg__segment_reduce_backward_out(gc_tensor out, gc_tensor grad, gc_tensor output, gc_tensor data, char * reduce, gc_tensor lengths, gc_tensor offsets, int64_t axis, scalar initial);
+raw_tensor atg__shape_as_tensor(gc_tensor self);
+void atg__slow_conv2d_backward(raw_tensor *, gc_tensor grad_input, gc_tensor grad_weight, gc_tensor grad_bias, gc_tensor grad_output, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len);
+void atg__sobol_engine_draw(raw_tensor *, gc_tensor quasi, int64_t n, gc_tensor sobolstate, int64_t dimension, int64_t num_generated, int dtype);
+raw_tensor atg__sobol_engine_ff_(gc_tensor self, int64_t n, gc_tensor sobolstate, int64_t dimension, int64_t num_generated);
+raw_tensor atg__sobol_engine_initialize_state_(gc_tensor self, int64_t dimension);
+raw_tensor atg__sobol_engine_scramble_(gc_tensor self, gc_tensor ltm, int64_t dimension);
+raw_tensor atg__softmax(gc_tensor self, int64_t dim, int half_to_float);
+raw_tensor atg__softmax_backward_data(gc_tensor grad_output, gc_tensor output, int64_t dim, int input_dtype);
+raw_tensor atg__softmax_backward_data_out(gc_tensor grad_input, gc_tensor grad_output, gc_tensor output, int64_t dim, int input_dtype);
+raw_tensor atg__softmax_out(gc_tensor out, gc_tensor self, int64_t dim, int half_to_float);
+raw_tensor atg__sparse_addmm(gc_tensor self, gc_tensor mat1, gc_tensor mat2);
+raw_tensor atg__sparse_addmm_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2);
+raw_tensor atg__sparse_broadcast_to(gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg__sparse_broadcast_to_copy(gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg__sparse_broadcast_to_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg__sparse_bsc_tensor_unsafe(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg__sparse_bsr_tensor_unsafe(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg__sparse_compressed_tensor_unsafe(gc_tensor compressed_indices, gc_tensor plain_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg__sparse_coo_tensor_unsafe(gc_tensor indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device, int is_coalesced);
+raw_tensor atg__sparse_coo_tensor_with_dims(int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg__sparse_coo_tensor_with_dims_and_tensors(int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, gc_tensor indices, gc_tensor values, int options_kind, int options_device, int is_coalesced);
+raw_tensor atg__sparse_coo_tensor_with_dims_and_tensors_out(gc_tensor out, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len, gc_tensor indices, gc_tensor values, int is_coalesced);
+raw_tensor atg__sparse_coo_tensor_with_dims_out(gc_tensor out, int64_t sparse_dim, int64_t dense_dim, int64_t *size_data, int size_len);
+raw_tensor atg__sparse_csc_tensor_unsafe(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg__sparse_csr_prod(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg__sparse_csr_prod_dim_dtype_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg__sparse_csr_sum(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg__sparse_csr_sum_dim_dtype_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg__sparse_csr_tensor_unsafe(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg__sparse_log_softmax(gc_tensor self, int64_t dim, int half_to_float);
+raw_tensor atg__sparse_log_softmax_backward_data(gc_tensor grad_output, gc_tensor output, int64_t dim, gc_tensor self);
+raw_tensor atg__sparse_log_softmax_backward_data_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, int64_t dim, gc_tensor self);
+raw_tensor atg__sparse_log_softmax_int(gc_tensor self, int64_t dim, int dtype);
+raw_tensor atg__sparse_log_softmax_out(gc_tensor out, gc_tensor self, int64_t dim, int half_to_float);
+raw_tensor atg__sparse_mask_projection(gc_tensor self, gc_tensor mask, int accumulate_matches);
+raw_tensor atg__sparse_mask_projection_out(gc_tensor out, gc_tensor self, gc_tensor mask, int accumulate_matches);
+raw_tensor atg__sparse_mm(gc_tensor sparse, gc_tensor dense);
+raw_tensor atg__sparse_mm_reduce(gc_tensor sparse, gc_tensor dense, char * reduce);
+void atg__sparse_mm_reduce_impl(raw_tensor *, gc_tensor self, gc_tensor other, char * reduce);
+raw_tensor atg__sparse_semi_structured_linear(gc_tensor input, gc_tensor weight, gc_tensor meta, gc_tensor bias, char * activation);
+raw_tensor atg__sparse_softmax(gc_tensor self, int64_t dim, int half_to_float);
+raw_tensor atg__sparse_softmax_backward_data(gc_tensor grad_output, gc_tensor output, int64_t dim, gc_tensor self);
+raw_tensor atg__sparse_softmax_backward_data_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, int64_t dim, gc_tensor self);
+raw_tensor atg__sparse_softmax_int(gc_tensor self, int64_t dim, int dtype);
+raw_tensor atg__sparse_softmax_out(gc_tensor out, gc_tensor self, int64_t dim, int half_to_float);
+raw_tensor atg__sparse_sparse_matmul(gc_tensor self, gc_tensor other);
+raw_tensor atg__sparse_sparse_matmul_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg__sparse_sum(gc_tensor self);
+raw_tensor atg__sparse_sum_backward(gc_tensor grad, gc_tensor self, int64_t *dim_data, int dim_len);
+raw_tensor atg__sparse_sum_backward_out(gc_tensor out, gc_tensor grad, gc_tensor self, int64_t *dim_data, int dim_len);
+raw_tensor atg__sparse_sum_dim(gc_tensor self, int64_t *dim_data, int dim_len);
+raw_tensor atg__sparse_sum_dim_dtype(gc_tensor self, int64_t *dim_data, int dim_len, int dtype);
+raw_tensor atg__sparse_sum_dim_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len);
+raw_tensor atg__sparse_sum_dtype(gc_tensor self, int dtype);
+raw_tensor atg__spdiags(gc_tensor diagonals, gc_tensor offsets, int64_t *shape_data, int shape_len);
+raw_tensor atg__spdiags_out(gc_tensor out, gc_tensor diagonals, gc_tensor offsets, int64_t *shape_data, int shape_len);
+raw_tensor atg__stack(gc_tensor *tensors_data, int tensors_len, int64_t dim);
+raw_tensor atg__stack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim);
+raw_tensor atg__standard_gamma(gc_tensor self);
+raw_tensor atg__standard_gamma_grad(gc_tensor self, gc_tensor output);
+raw_tensor atg__standard_gamma_grad_out(gc_tensor out, gc_tensor self, gc_tensor output);
+raw_tensor atg__standard_gamma_out(gc_tensor out, gc_tensor self);
+raw_tensor atg__test_ambiguous_defaults(gc_tensor dummy, int64_t a, int64_t b);
+raw_tensor atg__test_ambiguous_defaults_b(gc_tensor dummy, int64_t a, char * b);
+raw_tensor atg__test_autograd_multiple_dispatch(gc_tensor self);
+raw_tensor atg__test_autograd_multiple_dispatch_fullcoverage_out(gc_tensor out, gc_tensor self);
+raw_tensor atg__test_autograd_multiple_dispatch_ntonly(gc_tensor self, int b);
+raw_tensor atg__test_autograd_multiple_dispatch_view(gc_tensor self);
+raw_tensor atg__test_autograd_multiple_dispatch_view_copy(gc_tensor self);
+raw_tensor atg__test_autograd_multiple_dispatch_view_copy_out(gc_tensor out, gc_tensor self);
+raw_tensor atg__test_check_tensor(gc_tensor self);
+raw_tensor atg__test_functorch_fallback(gc_tensor self, gc_tensor other);
+raw_tensor atg__test_functorch_fallback_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg__test_optional_filled_intlist(gc_tensor values, int64_t *addends_data, int addends_len);
+raw_tensor atg__test_optional_filled_intlist_out(gc_tensor out, gc_tensor values, int64_t *addends_data, int addends_len);
+raw_tensor atg__test_optional_floatlist(gc_tensor values, double *addends_data, int addends_len);
+raw_tensor atg__test_optional_floatlist_out(gc_tensor out, gc_tensor values, double *addends_data, int addends_len);
+raw_tensor atg__test_optional_intlist(gc_tensor values, int64_t *addends_data, int addends_len);
+raw_tensor atg__test_optional_intlist_out(gc_tensor out, gc_tensor values, int64_t *addends_data, int addends_len);
+raw_tensor atg__test_serialization_subcmul(gc_tensor self, gc_tensor other);
+raw_tensor atg__test_string_default(gc_tensor dummy, char * a, char * b);
+raw_tensor atg__test_warn_in_autograd(gc_tensor self);
+raw_tensor atg__test_warn_in_autograd_out(gc_tensor out, gc_tensor self);
+void atg__thnn_differentiable_gru_cell_backward(raw_tensor *, gc_tensor grad_hy, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor hx, gc_tensor input_bias, gc_tensor hidden_bias);
+void atg__thnn_differentiable_lstm_cell_backward(raw_tensor *, gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor input_bias, gc_tensor hidden_bias, gc_tensor cx, gc_tensor cy);
+void atg__thnn_fused_gru_cell(raw_tensor *, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor hx, gc_tensor input_bias, gc_tensor hidden_bias);
+void atg__thnn_fused_gru_cell_backward(raw_tensor *, gc_tensor grad_hy, gc_tensor workspace, int has_bias);
+void atg__thnn_fused_gru_cell_backward_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor grad_hy, gc_tensor workspace, int has_bias);
+void atg__thnn_fused_gru_cell_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor hx, gc_tensor input_bias, gc_tensor hidden_bias);
+void atg__thnn_fused_lstm_cell(raw_tensor *, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor cx, gc_tensor input_bias, gc_tensor hidden_bias);
+void atg__thnn_fused_lstm_cell_backward(raw_tensor *, gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor cx, gc_tensor cy, gc_tensor workspace, int has_bias);
+void atg__thnn_fused_lstm_cell_backward_impl(raw_tensor *,
gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor cx, gc_tensor cy, gc_tensor workspace, int has_bias); +void atg__thnn_fused_lstm_cell_backward_impl_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor cx, gc_tensor cy, gc_tensor workspace, int has_bias); +void atg__thnn_fused_lstm_cell_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input_gates, gc_tensor hidden_gates, gc_tensor cx, gc_tensor input_bias, gc_tensor hidden_bias); +raw_tensor atg__to_copy(gc_tensor self, int options_kind, int options_device, int non_blocking); +raw_tensor atg__to_copy_out(gc_tensor out, gc_tensor self, int non_blocking); +raw_tensor *atg__to_cpu(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg__to_dense(gc_tensor self, int dtype, int masked_grad); +raw_tensor atg__to_dense_out(gc_tensor out, gc_tensor self, int dtype, int masked_grad); +raw_tensor atg__to_sparse_bsc(gc_tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null); +raw_tensor atg__to_sparse_bsc_out(gc_tensor out, gc_tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null); +raw_tensor atg__to_sparse_bsr(gc_tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null); +raw_tensor atg__to_sparse_bsr_out(gc_tensor out, gc_tensor self, int64_t *blocksize_data, int blocksize_len, int64_t dense_dim_v, int dense_dim_null); +raw_tensor atg__to_sparse_csc(gc_tensor self, int64_t dense_dim_v, int dense_dim_null); +raw_tensor atg__to_sparse_csc_out(gc_tensor out, gc_tensor self, int64_t dense_dim_v, int dense_dim_null); +raw_tensor atg__to_sparse_csr(gc_tensor self, int64_t dense_dim_v, int dense_dim_null); +raw_tensor atg__to_sparse_csr_out(gc_tensor out, gc_tensor self, int64_t dense_dim_v, int dense_dim_null); +void atg__to_sparse_semi_structured(raw_tensor *, gc_tensor dense); +void atg__transform_bias_rescale_qkv(raw_tensor *, gc_tensor qkv, gc_tensor qkv_bias, int64_t num_heads); +void atg__transform_bias_rescale_qkv_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor qkv, gc_tensor qkv_bias, int64_t num_heads); +raw_tensor atg__transformer_encoder_layer_fwd(gc_tensor src, int64_t embed_dim, int64_t num_heads, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, int use_gelu, int norm_first, double eps, gc_tensor norm_weight_1, gc_tensor norm_bias_1, gc_tensor norm_weight_2, gc_tensor norm_bias_2, gc_tensor ffn_weight_1, gc_tensor ffn_bias_1, gc_tensor ffn_weight_2, gc_tensor ffn_bias_2, gc_tensor mask, int64_t mask_type_v, int mask_type_null); +raw_tensor atg__transformer_encoder_layer_fwd_out(gc_tensor out, gc_tensor src, int64_t embed_dim, int64_t num_heads, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, int use_gelu, int norm_first, double eps, gc_tensor norm_weight_1, gc_tensor norm_bias_1, gc_tensor norm_weight_2, gc_tensor norm_bias_2, gc_tensor ffn_weight_1, gc_tensor ffn_bias_1, gc_tensor ffn_weight_2, gc_tensor ffn_bias_2, gc_tensor mask, int64_t mask_type_v, int mask_type_null); +raw_tensor atg__trilinear(gc_tensor i1, gc_tensor i2, gc_tensor i3, int64_t *expand1_data, int expand1_len, int64_t *expand2_data, int expand2_len, int64_t *expand3_data, int expand3_len, int64_t *sumdim_data, int sumdim_len, int64_t unroll_dim); +raw_tensor atg__trilinear_out(gc_tensor out, gc_tensor i1, gc_tensor i2, gc_tensor i3, int64_t *expand1_data, 
int expand1_len, int64_t *expand2_data, int expand2_len, int64_t *expand3_data, int expand3_len, int64_t *sumdim_data, int sumdim_len, int64_t unroll_dim); +raw_tensor atg__triton_multi_head_attention(gc_tensor query, gc_tensor key, gc_tensor value, int64_t embed_dim, int64_t num_head, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, gc_tensor mask); +raw_tensor atg__triton_multi_head_attention_out(gc_tensor out, gc_tensor query, gc_tensor key, gc_tensor value, int64_t embed_dim, int64_t num_head, gc_tensor qkv_weight, gc_tensor qkv_bias, gc_tensor proj_weight, gc_tensor proj_bias, gc_tensor mask); +raw_tensor atg__triton_scaled_dot_attention(gc_tensor q, gc_tensor k, gc_tensor v, double dropout_p); +raw_tensor atg__triton_scaled_dot_attention_out(gc_tensor out, gc_tensor q, gc_tensor k, gc_tensor v, double dropout_p); +void atg__unique(raw_tensor *, gc_tensor self, int sorted, int return_inverse); +void atg__unique2(raw_tensor *, gc_tensor self, int sorted, int return_inverse, int return_counts); +void atg__unique2_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor self, int sorted, int return_inverse, int return_counts); +void atg__unique_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, int sorted, int return_inverse); +void atg__unpack_dual(raw_tensor *, gc_tensor dual, int64_t level); +raw_tensor atg__unsafe_index(gc_tensor self, gc_tensor *indices_data, int indices_len); +raw_tensor atg__unsafe_index_put(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate); +raw_tensor atg__unsafe_view(gc_tensor self, int64_t *size_data, int size_len); +raw_tensor atg__unsafe_view_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len); +raw_tensor atg__upsample_bicubic2d_aa(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_bicubic2d_aa_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_bicubic2d_aa_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_bicubic2d_aa_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_bicubic2d_aa_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len); +raw_tensor atg__upsample_bilinear2d_aa(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_bilinear2d_aa_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_bilinear2d_aa_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, 
int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_bilinear2d_aa_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_bilinear2d_aa_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len); +raw_tensor atg__upsample_nearest_exact1d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null); +raw_tensor atg__upsample_nearest_exact1d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null); +raw_tensor atg__upsample_nearest_exact1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null); +raw_tensor atg__upsample_nearest_exact1d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null); +raw_tensor atg__upsample_nearest_exact1d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len); +raw_tensor atg__upsample_nearest_exact2d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_nearest_exact2d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_nearest_exact2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_nearest_exact2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_nearest_exact2d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len); +raw_tensor atg__upsample_nearest_exact3d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_nearest_exact3d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_nearest_exact3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg__upsample_nearest_exact3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, 
int scales_w_null); +raw_tensor atg__upsample_nearest_exact3d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len); +int atg__use_cudnn_ctc_loss(gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank); +int atg__use_cudnn_ctc_loss_tensor(gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank); int atg__use_cudnn_rnn_flatten_weight(); -void atg__validate_compressed_sparse_indices(int is_crow, tensor compressed_idx, tensor plain_idx, int64_t cdim, int64_t dim, int64_t nnz); -void atg__validate_sparse_bsc_tensor_args(tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len); -void atg__validate_sparse_bsr_tensor_args(tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len); -void atg__validate_sparse_csc_tensor_args(tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len); -void atg__values(tensor *, tensor self); -void atg__values_copy(tensor *, tensor self); -void atg__values_copy_out(tensor *, tensor out, tensor self); -int64_t atg__version(tensor self); -void atg__weight_norm(tensor *, tensor v, tensor g, int64_t dim); -void atg__weight_norm_differentiable_backward(tensor *, tensor grad_w, tensor saved_v, tensor saved_g, tensor saved_norms, int64_t dim); -void atg__weight_norm_interface(tensor *, tensor v, tensor g, int64_t dim); -void atg__weight_norm_interface_backward(tensor *, tensor grad_w, tensor saved_v, tensor saved_g, tensor saved_norms, int64_t dim); -void atg__weight_norm_interface_backward_out(tensor *, tensor out0, tensor out1, tensor grad_w, tensor saved_v, tensor saved_g, tensor saved_norms, int64_t dim); -void atg__weight_norm_interface_out(tensor *, tensor out0, tensor out1, tensor v, tensor g, int64_t dim); -void atg_abs(tensor *, tensor self); -void atg_abs_(tensor *, tensor self); -void atg_abs_out(tensor *, tensor out, tensor self); -void atg_absolute(tensor *, tensor self); -void atg_absolute_(tensor *, tensor self); -void atg_absolute_out(tensor *, tensor out, tensor self); -void atg_acos(tensor *, tensor self); -void atg_acos_(tensor *, tensor self); -void atg_acos_out(tensor *, tensor out, tensor self); -void atg_acosh(tensor *, tensor self); -void atg_acosh_(tensor *, tensor self); -void atg_acosh_out(tensor *, tensor out, tensor self); -void atg_adaptive_avg_pool1d(tensor *, tensor self, int64_t *output_size_data, int output_size_len); -void atg_adaptive_avg_pool2d(tensor *, tensor self, int64_t *output_size_data, int output_size_len); -void atg_adaptive_avg_pool2d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len); -void atg_adaptive_avg_pool3d(tensor *, tensor self, int64_t *output_size_data, int output_size_len); -void atg_adaptive_avg_pool3d_backward(tensor *, tensor grad_input, tensor grad_output, tensor self); -void atg_adaptive_avg_pool3d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len); -void atg_adaptive_max_pool1d(tensor *, tensor self, int64_t *output_size_data, int output_size_len); -void atg_adaptive_max_pool2d(tensor *, tensor self, int64_t *output_size_data, int output_size_len); -void atg_adaptive_max_pool2d_backward(tensor *, tensor grad_output, tensor self, tensor indices); -void atg_adaptive_max_pool2d_backward_grad_input(tensor *, tensor grad_input, tensor 
grad_output, tensor self, tensor indices); -void atg_adaptive_max_pool2d_out(tensor *, tensor out, tensor indices, tensor self, int64_t *output_size_data, int output_size_len); -void atg_adaptive_max_pool3d(tensor *, tensor self, int64_t *output_size_data, int output_size_len); -void atg_adaptive_max_pool3d_backward(tensor *, tensor grad_output, tensor self, tensor indices); -void atg_adaptive_max_pool3d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor indices); -void atg_adaptive_max_pool3d_out(tensor *, tensor out, tensor indices, tensor self, int64_t *output_size_data, int output_size_len); -void atg_add(tensor *, tensor self, tensor other); -void atg_add_(tensor *, tensor self, tensor other); -void atg_add_out(tensor *, tensor out, tensor self, tensor other); -void atg_add_scalar(tensor *, tensor self, scalar other); -void atg_add_scalar_(tensor *, tensor self, scalar other); -void atg_add_scalar_out(tensor *, tensor out, tensor self, scalar other); -void atg_addbmm(tensor *, tensor self, tensor batch1, tensor batch2); -void atg_addbmm_(tensor *, tensor self, tensor batch1, tensor batch2); -void atg_addbmm_out(tensor *, tensor out, tensor self, tensor batch1, tensor batch2); -void atg_addcdiv(tensor *, tensor self, tensor tensor1, tensor tensor2); -void atg_addcdiv_(tensor *, tensor self, tensor tensor1, tensor tensor2); -void atg_addcdiv_out(tensor *, tensor out, tensor self, tensor tensor1, tensor tensor2); -void atg_addcmul(tensor *, tensor self, tensor tensor1, tensor tensor2); -void atg_addcmul_(tensor *, tensor self, tensor tensor1, tensor tensor2); -void atg_addcmul_out(tensor *, tensor out, tensor self, tensor tensor1, tensor tensor2); -void atg_addmm(tensor *, tensor self, tensor mat1, tensor mat2); -void atg_addmm_(tensor *, tensor self, tensor mat1, tensor mat2); -void atg_addmm_out(tensor *, tensor out, tensor self, tensor mat1, tensor mat2); -void atg_addmv(tensor *, tensor self, tensor mat, tensor vec); -void atg_addmv_(tensor *, tensor self, tensor mat, tensor vec); -void atg_addmv_out(tensor *, tensor out, tensor self, tensor mat, tensor vec); -void atg_addr(tensor *, tensor self, tensor vec1, tensor vec2); -void atg_addr_(tensor *, tensor self, tensor vec1, tensor vec2); -void atg_addr_out(tensor *, tensor out, tensor self, tensor vec1, tensor vec2); -void atg_adjoint(tensor *, tensor self); -void atg_affine_grid_generator(tensor *, tensor theta, int64_t *size_data, int size_len, int align_corners); -void atg_affine_grid_generator_backward(tensor *, tensor grad, int64_t *size_data, int size_len, int align_corners); -void atg_affine_grid_generator_out(tensor *, tensor out, tensor theta, int64_t *size_data, int size_len, int align_corners); -void atg_alias(tensor *, tensor self); -void atg_alias_copy(tensor *, tensor self); -void atg_alias_copy_out(tensor *, tensor out, tensor self); -void atg_align_as(tensor *, tensor self, tensor other); -tensor *atg_align_tensors(tensor *tensors_data, int tensors_len); -void atg_all(tensor *, tensor self); -void atg_all_all_out(tensor *, tensor out, tensor self); -void atg_all_dim(tensor *, tensor self, int64_t dim, int keepdim); -void atg_all_out(tensor *, tensor out, tensor self, int64_t dim, int keepdim); -int atg_allclose(tensor self, tensor other, double rtol, double atol, int equal_nan); -void atg_alpha_dropout(tensor *, tensor input, double p, int train); -void atg_alpha_dropout_(tensor *, tensor self, double p, int train); -void atg_amax(tensor *, tensor self, int64_t *dim_data, int 
dim_len, int keepdim); -void atg_amax_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim); -void atg_amin(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim); -void atg_amin_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim); -void atg_aminmax(tensor *, tensor self, int64_t dim_v, int dim_null, int keepdim); -void atg_aminmax_out(tensor *, tensor min, tensor max, tensor self, int64_t dim_v, int dim_null, int keepdim); -void atg_angle(tensor *, tensor self); -void atg_angle_out(tensor *, tensor out, tensor self); -void atg_any(tensor *, tensor self); -void atg_any_all_out(tensor *, tensor out, tensor self); -void atg_any_dim(tensor *, tensor self, int64_t dim, int keepdim); -void atg_any_out(tensor *, tensor out, tensor self, int64_t dim, int keepdim); -void atg_arange(tensor *, scalar end, int options_kind, int options_device); -void atg_arange_start(tensor *, scalar start, scalar end, int options_kind, int options_device); -void atg_arange_start_step(tensor *, scalar start, scalar end, int options_kind, int options_device); -void atg_arccos(tensor *, tensor self); -void atg_arccos_(tensor *, tensor self); -void atg_arccos_out(tensor *, tensor out, tensor self); -void atg_arccosh(tensor *, tensor self); -void atg_arccosh_(tensor *, tensor self); -void atg_arccosh_out(tensor *, tensor out, tensor self); -void atg_arcsin(tensor *, tensor self); -void atg_arcsin_(tensor *, tensor self); -void atg_arcsin_out(tensor *, tensor out, tensor self); -void atg_arcsinh(tensor *, tensor self); -void atg_arcsinh_(tensor *, tensor self); -void atg_arcsinh_out(tensor *, tensor out, tensor self); -void atg_arctan(tensor *, tensor self); -void atg_arctan2(tensor *, tensor self, tensor other); -void atg_arctan2_(tensor *, tensor self, tensor other); -void atg_arctan2_out(tensor *, tensor out, tensor self, tensor other); -void atg_arctan_(tensor *, tensor self); -void atg_arctan_out(tensor *, tensor out, tensor self); -void atg_arctanh(tensor *, tensor self); -void atg_arctanh_(tensor *, tensor self); -void atg_arctanh_out(tensor *, tensor out, tensor self); -void atg_argmax(tensor *, tensor self, int64_t dim_v, int dim_null, int keepdim); -void atg_argmax_out(tensor *, tensor out, tensor self, int64_t dim_v, int dim_null, int keepdim); -void atg_argmin(tensor *, tensor self, int64_t dim_v, int dim_null, int keepdim); -void atg_argmin_out(tensor *, tensor out, tensor self, int64_t dim_v, int dim_null, int keepdim); -void atg_argsort(tensor *, tensor self, int64_t dim, int descending); -void atg_argsort_stable(tensor *, tensor self, int stable, int64_t dim, int descending); -void atg_argsort_stable_out(tensor *, tensor out, tensor self, int stable, int64_t dim, int descending); -void atg_argwhere(tensor *, tensor self); -void atg_as_strided(tensor *, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); -void atg_as_strided_(tensor *, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); -void atg_as_strided_copy(tensor *, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); -void atg_as_strided_copy_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); -void 
atg_as_strided_scatter(tensor *, tensor self, tensor src, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); -void atg_as_strided_scatter_out(tensor *, tensor out, tensor self, tensor src, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); -void atg_asin(tensor *, tensor self); -void atg_asin_(tensor *, tensor self); -void atg_asin_out(tensor *, tensor out, tensor self); -void atg_asinh(tensor *, tensor self); -void atg_asinh_(tensor *, tensor self); -void atg_asinh_out(tensor *, tensor out, tensor self); -void atg_atan(tensor *, tensor self); -void atg_atan2(tensor *, tensor self, tensor other); -void atg_atan2_(tensor *, tensor self, tensor other); -void atg_atan2_out(tensor *, tensor out, tensor self, tensor other); -void atg_atan_(tensor *, tensor self); -void atg_atan_out(tensor *, tensor out, tensor self); -void atg_atanh(tensor *, tensor self); -void atg_atanh_(tensor *, tensor self); -void atg_atanh_out(tensor *, tensor out, tensor self); -void atg_atleast_1d(tensor *, tensor self); -tensor *atg_atleast_1d_sequence(tensor *tensors_data, int tensors_len); -void atg_atleast_2d(tensor *, tensor self); -tensor *atg_atleast_2d_sequence(tensor *tensors_data, int tensors_len); -void atg_atleast_3d(tensor *, tensor self); -tensor *atg_atleast_3d_sequence(tensor *tensors_data, int tensors_len); -void atg_avg_pool1d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad); -void atg_avg_pool2d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); -void atg_avg_pool2d_backward(tensor *, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); -void atg_avg_pool2d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); -void atg_avg_pool2d_out(tensor *, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); -void atg_avg_pool3d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); -void atg_avg_pool3d_backward(tensor *, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); -void atg_avg_pool3d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t 
*stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); -void atg_avg_pool3d_out(tensor *, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); -void atg_baddbmm(tensor *, tensor self, tensor batch1, tensor batch2); -void atg_baddbmm_(tensor *, tensor self, tensor batch1, tensor batch2); -void atg_baddbmm_out(tensor *, tensor out, tensor self, tensor batch1, tensor batch2); -void atg_bartlett_window(tensor *, int64_t window_length, int options_kind, int options_device); -void atg_bartlett_window_out(tensor *, tensor out, int64_t window_length); -void atg_bartlett_window_periodic(tensor *, int64_t window_length, int periodic, int options_kind, int options_device); -void atg_bartlett_window_periodic_out(tensor *, tensor out, int64_t window_length, int periodic); -void atg_batch_norm(tensor *, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps, int cudnn_enabled); -void atg_batch_norm_backward_elemt(tensor *, tensor grad_out, tensor input, tensor mean, tensor invstd, tensor weight, tensor sum_dy, tensor sum_dy_xmu, tensor count); -void atg_batch_norm_backward_elemt_out(tensor *, tensor out, tensor grad_out, tensor input, tensor mean, tensor invstd, tensor weight, tensor sum_dy, tensor sum_dy_xmu, tensor count); -void atg_batch_norm_backward_reduce(tensor *, tensor grad_out, tensor input, tensor mean, tensor invstd, tensor weight, int input_g, int weight_g, int bias_g); -void atg_batch_norm_backward_reduce_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor grad_out, tensor input, tensor mean, tensor invstd, tensor weight, int input_g, int weight_g, int bias_g); -void atg_batch_norm_elemt(tensor *, tensor input, tensor weight, tensor bias, tensor mean, tensor invstd, double eps); -void atg_batch_norm_elemt_out(tensor *, tensor out, tensor input, tensor weight, tensor bias, tensor mean, tensor invstd, double eps); -void atg_batch_norm_gather_stats(tensor *, tensor input, tensor mean, tensor invstd, tensor running_mean, tensor running_var, double momentum, double eps, int64_t count); -void atg_batch_norm_gather_stats_out(tensor *, tensor out0, tensor out1, tensor input, tensor mean, tensor invstd, tensor running_mean, tensor running_var, double momentum, double eps, int64_t count); -void atg_batch_norm_gather_stats_with_counts(tensor *, tensor input, tensor mean, tensor invstd, tensor running_mean, tensor running_var, double momentum, double eps, tensor counts); -void atg_batch_norm_gather_stats_with_counts_out(tensor *, tensor out0, tensor out1, tensor input, tensor mean, tensor invstd, tensor running_mean, tensor running_var, double momentum, double eps, tensor counts); -void atg_batch_norm_stats(tensor *, tensor input, double eps); -void atg_batch_norm_stats_out(tensor *, tensor out0, tensor out1, tensor input, double eps); -void atg_batch_norm_update_stats(tensor *, tensor input, tensor running_mean, tensor running_var, double momentum); -void atg_batch_norm_update_stats_out(tensor *, tensor out0, tensor out1, tensor input, tensor running_mean, tensor running_var, double momentum); -void atg_bernoulli(tensor *, tensor self); -void atg_bernoulli_(tensor *, tensor self, tensor p); -void 
atg_bernoulli_float_(tensor *, tensor self, double p); -void atg_bernoulli_p(tensor *, tensor self, double p); -void atg_bernoulli_tensor(tensor *, tensor self, tensor p); -void atg_bilinear(tensor *, tensor input1, tensor input2, tensor weight, tensor bias); -void atg_binary_cross_entropy(tensor *, tensor self, tensor target, tensor weight, int64_t reduction); -void atg_binary_cross_entropy_backward(tensor *, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction); -void atg_binary_cross_entropy_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction); -void atg_binary_cross_entropy_out(tensor *, tensor out, tensor self, tensor target, tensor weight, int64_t reduction); -void atg_binary_cross_entropy_with_logits(tensor *, tensor self, tensor target, tensor weight, tensor pos_weight, int64_t reduction); -void atg_binary_cross_entropy_with_logits_out(tensor *, tensor out, tensor self, tensor target, tensor weight, tensor pos_weight, int64_t reduction); -void atg_bincount(tensor *, tensor self, tensor weights, int64_t minlength); -void atg_bincount_out(tensor *, tensor out, tensor self, tensor weights, int64_t minlength); -void atg_binomial(tensor *, tensor count, tensor prob); -void atg_binomial_out(tensor *, tensor out, tensor count, tensor prob); -void atg_bitwise_and(tensor *, tensor self, scalar other); -void atg_bitwise_and_(tensor *, tensor self, scalar other); -void atg_bitwise_and_scalar_out(tensor *, tensor out, tensor self, scalar other); -void atg_bitwise_and_scalar_tensor(tensor *, scalar self, tensor other); -void atg_bitwise_and_scalar_tensor_out(tensor *, tensor out, scalar self, tensor other); -void atg_bitwise_and_tensor(tensor *, tensor self, tensor other); -void atg_bitwise_and_tensor_(tensor *, tensor self, tensor other); -void atg_bitwise_and_tensor_out(tensor *, tensor out, tensor self, tensor other); -void atg_bitwise_left_shift(tensor *, tensor self, tensor other); -void atg_bitwise_left_shift_(tensor *, tensor self, tensor other); -void atg_bitwise_left_shift_scalar_tensor(tensor *, scalar self, tensor other); -void atg_bitwise_left_shift_scalar_tensor_out(tensor *, tensor out, scalar self, tensor other); -void atg_bitwise_left_shift_tensor_out(tensor *, tensor out, tensor self, tensor other); -void atg_bitwise_left_shift_tensor_scalar(tensor *, tensor self, scalar other); -void atg_bitwise_left_shift_tensor_scalar_(tensor *, tensor self, scalar other); -void atg_bitwise_left_shift_tensor_scalar_out(tensor *, tensor out, tensor self, scalar other); -void atg_bitwise_not(tensor *, tensor self); -void atg_bitwise_not_(tensor *, tensor self); -void atg_bitwise_not_out(tensor *, tensor out, tensor self); -void atg_bitwise_or(tensor *, tensor self, scalar other); -void atg_bitwise_or_(tensor *, tensor self, scalar other); -void atg_bitwise_or_scalar_out(tensor *, tensor out, tensor self, scalar other); -void atg_bitwise_or_scalar_tensor(tensor *, scalar self, tensor other); -void atg_bitwise_or_scalar_tensor_out(tensor *, tensor out, scalar self, tensor other); -void atg_bitwise_or_tensor(tensor *, tensor self, tensor other); -void atg_bitwise_or_tensor_(tensor *, tensor self, tensor other); -void atg_bitwise_or_tensor_out(tensor *, tensor out, tensor self, tensor other); -void atg_bitwise_right_shift(tensor *, tensor self, tensor other); -void atg_bitwise_right_shift_(tensor *, tensor self, tensor other); -void atg_bitwise_right_shift_scalar_tensor(tensor *, scalar self, 
tensor other); -void atg_bitwise_right_shift_scalar_tensor_out(tensor *, tensor out, scalar self, tensor other); -void atg_bitwise_right_shift_tensor_out(tensor *, tensor out, tensor self, tensor other); -void atg_bitwise_right_shift_tensor_scalar(tensor *, tensor self, scalar other); -void atg_bitwise_right_shift_tensor_scalar_(tensor *, tensor self, scalar other); -void atg_bitwise_right_shift_tensor_scalar_out(tensor *, tensor out, tensor self, scalar other); -void atg_bitwise_xor(tensor *, tensor self, scalar other); -void atg_bitwise_xor_(tensor *, tensor self, scalar other); -void atg_bitwise_xor_scalar_out(tensor *, tensor out, tensor self, scalar other); -void atg_bitwise_xor_scalar_tensor(tensor *, scalar self, tensor other); -void atg_bitwise_xor_scalar_tensor_out(tensor *, tensor out, scalar self, tensor other); -void atg_bitwise_xor_tensor(tensor *, tensor self, tensor other); -void atg_bitwise_xor_tensor_(tensor *, tensor self, tensor other); -void atg_bitwise_xor_tensor_out(tensor *, tensor out, tensor self, tensor other); -void atg_blackman_window(tensor *, int64_t window_length, int options_kind, int options_device); -void atg_blackman_window_out(tensor *, tensor out, int64_t window_length); -void atg_blackman_window_periodic(tensor *, int64_t window_length, int periodic, int options_kind, int options_device); -void atg_blackman_window_periodic_out(tensor *, tensor out, int64_t window_length, int periodic); -void atg_block_diag(tensor *, tensor *tensors_data, int tensors_len); -void atg_block_diag_out(tensor *, tensor out, tensor *tensors_data, int tensors_len); -void atg_bmm(tensor *, tensor self, tensor mat2); -void atg_bmm_out(tensor *, tensor out, tensor self, tensor mat2); -tensor *atg_broadcast_tensors(tensor *tensors_data, int tensors_len); -void atg_broadcast_to(tensor *, tensor self, int64_t *size_data, int size_len); -void atg_bucketize(tensor *, tensor self, tensor boundaries, int out_int32, int right); -void atg_bucketize_scalar(tensor *, scalar self, tensor boundaries, int out_int32, int right); -void atg_bucketize_scalar_out(tensor *, tensor out, scalar self, tensor boundaries, int out_int32, int right); -void atg_bucketize_tensor_out(tensor *, tensor out, tensor self, tensor boundaries, int out_int32, int right); +void atg__validate_compressed_sparse_indices(int is_crow, gc_tensor compressed_idx, gc_tensor plain_idx, int64_t cdim, int64_t dim, int64_t nnz); +void atg__validate_sparse_bsc_tensor_args(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len); +void atg__validate_sparse_bsr_tensor_args(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len); +void atg__validate_sparse_csc_tensor_args(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len); +raw_tensor atg__values(gc_tensor self); +raw_tensor atg__values_copy(gc_tensor self); +raw_tensor atg__values_copy_out(gc_tensor out, gc_tensor self); +int64_t atg__version(gc_tensor self); +raw_tensor atg__weight_norm(gc_tensor v, gc_tensor g, int64_t dim); +void atg__weight_norm_differentiable_backward(raw_tensor *, gc_tensor grad_w, gc_tensor saved_v, gc_tensor saved_g, gc_tensor saved_norms, int64_t dim); +void atg__weight_norm_interface(raw_tensor *, gc_tensor v, gc_tensor g, int64_t dim); +void atg__weight_norm_interface_backward(raw_tensor *, gc_tensor grad_w, gc_tensor saved_v, gc_tensor saved_g, gc_tensor saved_norms, int64_t dim); +void 
atg__weight_norm_interface_backward_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor grad_w, gc_tensor saved_v, gc_tensor saved_g, gc_tensor saved_norms, int64_t dim); +void atg__weight_norm_interface_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor v, gc_tensor g, int64_t dim); +raw_tensor atg_abs(gc_tensor self); +raw_tensor atg_abs_(gc_tensor self); +raw_tensor atg_abs_out(gc_tensor out, gc_tensor self); +raw_tensor atg_absolute(gc_tensor self); +raw_tensor atg_absolute_(gc_tensor self); +raw_tensor atg_absolute_out(gc_tensor out, gc_tensor self); +raw_tensor atg_acos(gc_tensor self); +raw_tensor atg_acos_(gc_tensor self); +raw_tensor atg_acos_out(gc_tensor out, gc_tensor self); +raw_tensor atg_acosh(gc_tensor self); +raw_tensor atg_acosh_(gc_tensor self); +raw_tensor atg_acosh_out(gc_tensor out, gc_tensor self); +raw_tensor atg_adaptive_avg_pool1d(gc_tensor self, int64_t *output_size_data, int output_size_len); +raw_tensor atg_adaptive_avg_pool2d(gc_tensor self, int64_t *output_size_data, int output_size_len); +raw_tensor atg_adaptive_avg_pool2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len); +raw_tensor atg_adaptive_avg_pool3d(gc_tensor self, int64_t *output_size_data, int output_size_len); +raw_tensor atg_adaptive_avg_pool3d_backward(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self); +raw_tensor atg_adaptive_avg_pool3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len); +void atg_adaptive_max_pool1d(raw_tensor *, gc_tensor self, int64_t *output_size_data, int output_size_len); +void atg_adaptive_max_pool2d(raw_tensor *, gc_tensor self, int64_t *output_size_data, int output_size_len); +raw_tensor atg_adaptive_max_pool2d_backward(gc_tensor grad_output, gc_tensor self, gc_tensor indices); +raw_tensor atg_adaptive_max_pool2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor indices); +void atg_adaptive_max_pool2d_out(raw_tensor *, gc_tensor out, gc_tensor indices, gc_tensor self, int64_t *output_size_data, int output_size_len); +void atg_adaptive_max_pool3d(raw_tensor *, gc_tensor self, int64_t *output_size_data, int output_size_len); +raw_tensor atg_adaptive_max_pool3d_backward(gc_tensor grad_output, gc_tensor self, gc_tensor indices); +raw_tensor atg_adaptive_max_pool3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor indices); +void atg_adaptive_max_pool3d_out(raw_tensor *, gc_tensor out, gc_tensor indices, gc_tensor self, int64_t *output_size_data, int output_size_len); +raw_tensor atg_add(gc_tensor self, gc_tensor other); +raw_tensor atg_add_(gc_tensor self, gc_tensor other); +raw_tensor atg_add_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_add_scalar(gc_tensor self, scalar other); +raw_tensor atg_add_scalar_(gc_tensor self, scalar other); +raw_tensor atg_add_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_addbmm(gc_tensor self, gc_tensor batch1, gc_tensor batch2); +raw_tensor atg_addbmm_(gc_tensor self, gc_tensor batch1, gc_tensor batch2); +raw_tensor atg_addbmm_out(gc_tensor out, gc_tensor self, gc_tensor batch1, gc_tensor batch2); +raw_tensor atg_addcdiv(gc_tensor self, gc_tensor tensor1, gc_tensor tensor2); +raw_tensor atg_addcdiv_(gc_tensor self, gc_tensor tensor1, gc_tensor tensor2); +raw_tensor atg_addcdiv_out(gc_tensor out, gc_tensor self, gc_tensor tensor1, gc_tensor tensor2); +raw_tensor atg_addcmul(gc_tensor self, gc_tensor tensor1, gc_tensor 
tensor2); +raw_tensor atg_addcmul_(gc_tensor self, gc_tensor tensor1, gc_tensor tensor2); +raw_tensor atg_addcmul_out(gc_tensor out, gc_tensor self, gc_tensor tensor1, gc_tensor tensor2); +raw_tensor atg_addmm(gc_tensor self, gc_tensor mat1, gc_tensor mat2); +raw_tensor atg_addmm_(gc_tensor self, gc_tensor mat1, gc_tensor mat2); +raw_tensor atg_addmm_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2); +raw_tensor atg_addmv(gc_tensor self, gc_tensor mat, gc_tensor vec); +raw_tensor atg_addmv_(gc_tensor self, gc_tensor mat, gc_tensor vec); +raw_tensor atg_addmv_out(gc_tensor out, gc_tensor self, gc_tensor mat, gc_tensor vec); +raw_tensor atg_addr(gc_tensor self, gc_tensor vec1, gc_tensor vec2); +raw_tensor atg_addr_(gc_tensor self, gc_tensor vec1, gc_tensor vec2); +raw_tensor atg_addr_out(gc_tensor out, gc_tensor self, gc_tensor vec1, gc_tensor vec2); +raw_tensor atg_adjoint(gc_tensor self); +raw_tensor atg_affine_grid_generator(gc_tensor theta, int64_t *size_data, int size_len, int align_corners); +raw_tensor atg_affine_grid_generator_backward(gc_tensor grad, int64_t *size_data, int size_len, int align_corners); +raw_tensor atg_affine_grid_generator_out(gc_tensor out, gc_tensor theta, int64_t *size_data, int size_len, int align_corners); +raw_tensor atg_alias(gc_tensor self); +raw_tensor atg_alias_copy(gc_tensor self); +raw_tensor atg_alias_copy_out(gc_tensor out, gc_tensor self); +raw_tensor atg_align_as(gc_tensor self, gc_tensor other); +raw_tensor *atg_align_tensors(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_all(gc_tensor self); +raw_tensor atg_all_all_out(gc_tensor out, gc_tensor self); +raw_tensor atg_all_dim(gc_tensor self, int64_t dim, int keepdim); +raw_tensor atg_all_out(gc_tensor out, gc_tensor self, int64_t dim, int keepdim); +int atg_allclose(gc_tensor self, gc_tensor other, double rtol, double atol, int equal_nan); +raw_tensor atg_alpha_dropout(gc_tensor input, double p, int train); +raw_tensor atg_alpha_dropout_(gc_tensor self, double p, int train); +raw_tensor atg_amax(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +raw_tensor atg_amax_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +raw_tensor atg_amin(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +raw_tensor atg_amin_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +void atg_aminmax(raw_tensor *, gc_tensor self, int64_t dim_v, int dim_null, int keepdim); +void atg_aminmax_out(raw_tensor *, gc_tensor min, gc_tensor max, gc_tensor self, int64_t dim_v, int dim_null, int keepdim); +raw_tensor atg_angle(gc_tensor self); +raw_tensor atg_angle_out(gc_tensor out, gc_tensor self); +raw_tensor atg_any(gc_tensor self); +raw_tensor atg_any_all_out(gc_tensor out, gc_tensor self); +raw_tensor atg_any_dim(gc_tensor self, int64_t dim, int keepdim); +raw_tensor atg_any_out(gc_tensor out, gc_tensor self, int64_t dim, int keepdim); +raw_tensor atg_arange(scalar end, int options_kind, int options_device); +raw_tensor atg_arange_start(scalar start, scalar end, int options_kind, int options_device); +raw_tensor atg_arange_start_step(scalar start, scalar end, int options_kind, int options_device); +raw_tensor atg_arccos(gc_tensor self); +raw_tensor atg_arccos_(gc_tensor self); +raw_tensor atg_arccos_out(gc_tensor out, gc_tensor self); +raw_tensor atg_arccosh(gc_tensor self); +raw_tensor atg_arccosh_(gc_tensor self); +raw_tensor atg_arccosh_out(gc_tensor out, gc_tensor self); +raw_tensor atg_arcsin(gc_tensor self); 
+raw_tensor atg_arcsin_(gc_tensor self); +raw_tensor atg_arcsin_out(gc_tensor out, gc_tensor self); +raw_tensor atg_arcsinh(gc_tensor self); +raw_tensor atg_arcsinh_(gc_tensor self); +raw_tensor atg_arcsinh_out(gc_tensor out, gc_tensor self); +raw_tensor atg_arctan(gc_tensor self); +raw_tensor atg_arctan2(gc_tensor self, gc_tensor other); +raw_tensor atg_arctan2_(gc_tensor self, gc_tensor other); +raw_tensor atg_arctan2_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_arctan_(gc_tensor self); +raw_tensor atg_arctan_out(gc_tensor out, gc_tensor self); +raw_tensor atg_arctanh(gc_tensor self); +raw_tensor atg_arctanh_(gc_tensor self); +raw_tensor atg_arctanh_out(gc_tensor out, gc_tensor self); +raw_tensor atg_argmax(gc_tensor self, int64_t dim_v, int dim_null, int keepdim); +raw_tensor atg_argmax_out(gc_tensor out, gc_tensor self, int64_t dim_v, int dim_null, int keepdim); +raw_tensor atg_argmin(gc_tensor self, int64_t dim_v, int dim_null, int keepdim); +raw_tensor atg_argmin_out(gc_tensor out, gc_tensor self, int64_t dim_v, int dim_null, int keepdim); +raw_tensor atg_argsort(gc_tensor self, int64_t dim, int descending); +raw_tensor atg_argsort_stable(gc_tensor self, int stable, int64_t dim, int descending); +raw_tensor atg_argsort_stable_out(gc_tensor out, gc_tensor self, int stable, int64_t dim, int descending); +raw_tensor atg_argwhere(gc_tensor self); +raw_tensor atg_as_strided(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); +raw_tensor atg_as_strided_(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); +raw_tensor atg_as_strided_copy(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); +raw_tensor atg_as_strided_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); +raw_tensor atg_as_strided_scatter(gc_tensor self, gc_tensor src, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); +raw_tensor atg_as_strided_scatter_out(gc_tensor out, gc_tensor self, gc_tensor src, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int64_t storage_offset_v, int storage_offset_null); +raw_tensor atg_asin(gc_tensor self); +raw_tensor atg_asin_(gc_tensor self); +raw_tensor atg_asin_out(gc_tensor out, gc_tensor self); +raw_tensor atg_asinh(gc_tensor self); +raw_tensor atg_asinh_(gc_tensor self); +raw_tensor atg_asinh_out(gc_tensor out, gc_tensor self); +raw_tensor atg_atan(gc_tensor self); +raw_tensor atg_atan2(gc_tensor self, gc_tensor other); +raw_tensor atg_atan2_(gc_tensor self, gc_tensor other); +raw_tensor atg_atan2_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_atan_(gc_tensor self); +raw_tensor atg_atan_out(gc_tensor out, gc_tensor self); +raw_tensor atg_atanh(gc_tensor self); +raw_tensor atg_atanh_(gc_tensor self); +raw_tensor atg_atanh_out(gc_tensor out, gc_tensor self); +raw_tensor atg_atleast_1d(gc_tensor self); +raw_tensor *atg_atleast_1d_sequence(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_atleast_2d(gc_tensor self); +raw_tensor *atg_atleast_2d_sequence(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_atleast_3d(gc_tensor self); +raw_tensor 
*atg_atleast_3d_sequence(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_avg_pool1d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad); +raw_tensor atg_avg_pool2d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); +raw_tensor atg_avg_pool2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); +raw_tensor atg_avg_pool2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); +raw_tensor atg_avg_pool2d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); +raw_tensor atg_avg_pool3d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); +raw_tensor atg_avg_pool3d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); +raw_tensor atg_avg_pool3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); +raw_tensor atg_avg_pool3d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int ceil_mode, int count_include_pad, int64_t divisor_override_v, int divisor_override_null); +raw_tensor atg_baddbmm(gc_tensor self, gc_tensor batch1, gc_tensor batch2); +raw_tensor atg_baddbmm_(gc_tensor self, gc_tensor batch1, gc_tensor batch2); +raw_tensor atg_baddbmm_out(gc_tensor out, gc_tensor self, gc_tensor batch1, gc_tensor batch2); +raw_tensor atg_bartlett_window(int64_t window_length, int options_kind, int options_device); +raw_tensor atg_bartlett_window_out(gc_tensor out, int64_t window_length); +raw_tensor atg_bartlett_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device); +raw_tensor atg_bartlett_window_periodic_out(gc_tensor out, int64_t window_length, int periodic); +raw_tensor atg_batch_norm(gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps, int cudnn_enabled); +raw_tensor atg_batch_norm_backward_elemt(gc_tensor grad_out, gc_tensor input, gc_tensor mean, gc_tensor invstd, 
gc_tensor weight, gc_tensor sum_dy, gc_tensor sum_dy_xmu, gc_tensor count); +raw_tensor atg_batch_norm_backward_elemt_out(gc_tensor out, gc_tensor grad_out, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor weight, gc_tensor sum_dy, gc_tensor sum_dy_xmu, gc_tensor count); +void atg_batch_norm_backward_reduce(raw_tensor *, gc_tensor grad_out, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor weight, int input_g, int weight_g, int bias_g); +void atg_batch_norm_backward_reduce_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor grad_out, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor weight, int input_g, int weight_g, int bias_g); +raw_tensor atg_batch_norm_elemt(gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor mean, gc_tensor invstd, double eps); +raw_tensor atg_batch_norm_elemt_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor mean, gc_tensor invstd, double eps); +void atg_batch_norm_gather_stats(raw_tensor *, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps, int64_t count); +void atg_batch_norm_gather_stats_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps, int64_t count); +void atg_batch_norm_gather_stats_with_counts(raw_tensor *, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps, gc_tensor counts); +void atg_batch_norm_gather_stats_with_counts_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor input, gc_tensor mean, gc_tensor invstd, gc_tensor running_mean, gc_tensor running_var, double momentum, double eps, gc_tensor counts); +void atg_batch_norm_stats(raw_tensor *, gc_tensor input, double eps); +void atg_batch_norm_stats_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor input, double eps); +void atg_batch_norm_update_stats(raw_tensor *, gc_tensor input, gc_tensor running_mean, gc_tensor running_var, double momentum); +void atg_batch_norm_update_stats_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor input, gc_tensor running_mean, gc_tensor running_var, double momentum); +raw_tensor atg_bernoulli(gc_tensor self); +raw_tensor atg_bernoulli_(gc_tensor self, gc_tensor p); +raw_tensor atg_bernoulli_float_(gc_tensor self, double p); +raw_tensor atg_bernoulli_p(gc_tensor self, double p); +raw_tensor atg_bernoulli_tensor(gc_tensor self, gc_tensor p); +raw_tensor atg_bilinear(gc_tensor input1, gc_tensor input2, gc_tensor weight, gc_tensor bias); +raw_tensor atg_binary_cross_entropy(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction); +raw_tensor atg_binary_cross_entropy_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction); +raw_tensor atg_binary_cross_entropy_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction); +raw_tensor atg_binary_cross_entropy_out(gc_tensor out, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction); +raw_tensor atg_binary_cross_entropy_with_logits(gc_tensor self, gc_tensor target, gc_tensor weight, gc_tensor pos_weight, int64_t reduction); +raw_tensor atg_binary_cross_entropy_with_logits_out(gc_tensor out, gc_tensor self, gc_tensor target, gc_tensor weight, gc_tensor pos_weight, int64_t 
+raw_tensor atg_bincount(gc_tensor self, gc_tensor weights, int64_t minlength);
+raw_tensor atg_bincount_out(gc_tensor out, gc_tensor self, gc_tensor weights, int64_t minlength);
+raw_tensor atg_binomial(gc_tensor count, gc_tensor prob);
+raw_tensor atg_binomial_out(gc_tensor out, gc_tensor count, gc_tensor prob);
+raw_tensor atg_bitwise_and(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_and_(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_and_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_bitwise_and_scalar_tensor(scalar self, gc_tensor other);
+raw_tensor atg_bitwise_and_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other);
+raw_tensor atg_bitwise_and_tensor(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_and_tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_and_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_left_shift(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_left_shift_(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_left_shift_scalar_tensor(scalar self, gc_tensor other);
+raw_tensor atg_bitwise_left_shift_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other);
+raw_tensor atg_bitwise_left_shift_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_left_shift_tensor_scalar(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_left_shift_tensor_scalar_(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_left_shift_tensor_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_bitwise_not(gc_tensor self);
+raw_tensor atg_bitwise_not_(gc_tensor self);
+raw_tensor atg_bitwise_not_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_bitwise_or(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_or_(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_or_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_bitwise_or_scalar_tensor(scalar self, gc_tensor other);
+raw_tensor atg_bitwise_or_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other);
+raw_tensor atg_bitwise_or_tensor(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_or_tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_or_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_right_shift(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_right_shift_(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_right_shift_scalar_tensor(scalar self, gc_tensor other);
+raw_tensor atg_bitwise_right_shift_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other);
+raw_tensor atg_bitwise_right_shift_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_right_shift_tensor_scalar(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_right_shift_tensor_scalar_(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_right_shift_tensor_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_bitwise_xor(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_xor_(gc_tensor self, scalar other);
+raw_tensor atg_bitwise_xor_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_bitwise_xor_scalar_tensor(scalar self, gc_tensor other);
+raw_tensor atg_bitwise_xor_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other);
+raw_tensor atg_bitwise_xor_tensor(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_xor_tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg_bitwise_xor_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_blackman_window(int64_t window_length, int options_kind, int options_device);
+raw_tensor atg_blackman_window_out(gc_tensor out, int64_t window_length);
+raw_tensor atg_blackman_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device);
+raw_tensor atg_blackman_window_periodic_out(gc_tensor out, int64_t window_length, int periodic);
+raw_tensor atg_block_diag(gc_tensor *tensors_data, int tensors_len);
+raw_tensor atg_block_diag_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len);
+raw_tensor atg_bmm(gc_tensor self, gc_tensor mat2);
+raw_tensor atg_bmm_out(gc_tensor out, gc_tensor self, gc_tensor mat2);
+raw_tensor *atg_broadcast_tensors(gc_tensor *tensors_data, int tensors_len);
+raw_tensor atg_broadcast_to(gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg_bucketize(gc_tensor self, gc_tensor boundaries, int out_int32, int right);
+raw_tensor atg_bucketize_scalar(scalar self, gc_tensor boundaries, int out_int32, int right);
+raw_tensor atg_bucketize_scalar_out(gc_tensor out, scalar self, gc_tensor boundaries, int out_int32, int right);
+raw_tensor atg_bucketize_tensor_out(gc_tensor out, gc_tensor self, gc_tensor boundaries, int out_int32, int right);
 int atg_can_cast(int from, int to);
-void atg_cartesian_prod(tensor *, tensor *tensors_data, int tensors_len);
-void atg_cat(tensor *, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg_cat_out(tensor *, tensor out, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg_cauchy(tensor *, tensor self, double median, double sigma);
-void atg_cauchy_(tensor *, tensor self, double median, double sigma);
-void atg_cauchy_out(tensor *, tensor out, tensor self, double median, double sigma);
-void atg_ccol_indices(tensor *, tensor self);
-void atg_ccol_indices_copy(tensor *, tensor self);
-void atg_ccol_indices_copy_out(tensor *, tensor out, tensor self);
-void atg_cdist(tensor *, tensor x1, tensor x2, double p, int64_t compute_mode_v, int compute_mode_null);
-void atg_ceil(tensor *, tensor self);
-void atg_ceil_(tensor *, tensor self);
-void atg_ceil_out(tensor *, tensor out, tensor self);
-void atg_celu(tensor *, tensor self);
-void atg_celu_(tensor *, tensor self);
-void atg_celu_out(tensor *, tensor out, tensor self);
-void atg_chain_matmul(tensor *, tensor *matrices_data, int matrices_len);
-void atg_chain_matmul_out(tensor *, tensor out, tensor *matrices_data, int matrices_len);
-void atg_chalf(tensor *, tensor self);
-void atg_channel_shuffle(tensor *, tensor self, int64_t groups);
-void atg_channel_shuffle_out(tensor *, tensor out, tensor self, int64_t groups);
-void atg_cholesky(tensor *, tensor self, int upper);
-void atg_cholesky_inverse(tensor *, tensor self, int upper);
-void atg_cholesky_inverse_out(tensor *, tensor out, tensor self, int upper);
-void atg_cholesky_out(tensor *, tensor out, tensor self, int upper);
-void atg_cholesky_solve(tensor *, tensor self, tensor input2, int upper);
-void atg_cholesky_solve_out(tensor *, tensor out, tensor self, tensor input2, int upper);
-void atg_choose_qparams_optimized(tensor *, tensor input, int64_t numel, int64_t n_bins, double ratio, int64_t bit_width);
-tensor *atg_chunk(tensor self, int64_t chunks, int64_t dim);
-void atg_clamp(tensor *, tensor self, scalar min, scalar max);
-void atg_clamp_(tensor *, tensor self, scalar min, scalar max);
-void atg_clamp_max(tensor *, tensor self, scalar max);
-void atg_clamp_max_(tensor *, tensor self, scalar max);
-void atg_clamp_max_out(tensor *, tensor out, tensor self, scalar max);
-void atg_clamp_max_tensor(tensor *, tensor self, tensor max);
-void atg_clamp_max_tensor_(tensor *, tensor self, tensor max);
-void atg_clamp_max_tensor_out(tensor *, tensor out, tensor self, tensor max);
-void atg_clamp_min(tensor *, tensor self, scalar min);
-void atg_clamp_min_(tensor *, tensor self, scalar min);
-void atg_clamp_min_out(tensor *, tensor out, tensor self, scalar min);
-void atg_clamp_min_tensor(tensor *, tensor self, tensor min);
-void atg_clamp_min_tensor_(tensor *, tensor self, tensor min);
-void atg_clamp_min_tensor_out(tensor *, tensor out, tensor self, tensor min);
-void atg_clamp_out(tensor *, tensor out, tensor self, scalar min, scalar max);
-void atg_clamp_tensor(tensor *, tensor self, tensor min, tensor max);
-void atg_clamp_tensor_(tensor *, tensor self, tensor min, tensor max);
-void atg_clamp_tensor_out(tensor *, tensor out, tensor self, tensor min, tensor max);
-void atg_clip(tensor *, tensor self, scalar min, scalar max);
-void atg_clip_(tensor *, tensor self, scalar min, scalar max);
-void atg_clip_out(tensor *, tensor out, tensor self, scalar min, scalar max);
-void atg_clip_tensor(tensor *, tensor self, tensor min, tensor max);
-void atg_clip_tensor_(tensor *, tensor self, tensor min, tensor max);
-void atg_clip_tensor_out(tensor *, tensor out, tensor self, tensor min, tensor max);
-void atg_clone(tensor *, tensor self);
-void atg_clone_out(tensor *, tensor out, tensor self);
-void atg_coalesce(tensor *, tensor self);
-void atg_col2im(tensor *, tensor self, int64_t *output_size_data, int output_size_len, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len);
-void atg_col2im_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len);
-void atg_col_indices(tensor *, tensor self);
-void atg_col_indices_copy(tensor *, tensor self);
-void atg_col_indices_copy_out(tensor *, tensor out, tensor self);
-void atg_column_stack(tensor *, tensor *tensors_data, int tensors_len);
-void atg_column_stack_out(tensor *, tensor out, tensor *tensors_data, int tensors_len);
-void atg_combinations(tensor *, tensor self, int64_t r, int with_replacement);
-void atg_complex(tensor *, tensor real, tensor imag);
-void atg_complex_out(tensor *, tensor out, tensor real, tensor imag);
-void atg_concat(tensor *, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg_concat_out(tensor *, tensor out, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg_concatenate(tensor *, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg_concatenate_out(tensor *, tensor out, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg_conj(tensor *, tensor self);
-void atg_conj_physical(tensor *, tensor self);
-void atg_conj_physical_(tensor *, tensor self);
-void atg_conj_physical_out(tensor *, tensor out, tensor self);
-void atg_constant_pad_nd(tensor *, tensor self, int64_t *pad_data, int pad_len);
-void atg_constant_pad_nd_out(tensor *, tensor out, tensor self, int64_t *pad_data, int pad_len);
-void atg_contiguous(tensor *, tensor self);
-void atg_conv1d(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_conv1d_padding(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_conv2d(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_conv2d_padding(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_conv3d(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_conv3d_padding(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_conv_depthwise3d(tensor *, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len);
-void atg_conv_depthwise3d_out(tensor *, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len);
-void atg_conv_tbc(tensor *, tensor self, tensor weight, tensor bias, int64_t pad);
-void atg_conv_tbc_backward(tensor *, tensor self, tensor input, tensor weight, tensor bias, int64_t pad);
-void atg_conv_tbc_out(tensor *, tensor out, tensor self, tensor weight, tensor bias, int64_t pad);
-void atg_conv_transpose1d(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len);
-void atg_conv_transpose2d(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len);
-void atg_conv_transpose3d(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len);
-void atg_convolution(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups);
-void atg_convolution_out(tensor *, tensor out, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups);
-void atg_convolution_overrideable(tensor *, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups);
-void atg_convolution_overrideable_out(tensor *, tensor out, tensor input, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups);
-void atg_copy(tensor *, tensor self, tensor src, int non_blocking);
-void atg_copy_out(tensor *, tensor out, tensor self, tensor src, int non_blocking);
-void atg_copy_sparse_to_sparse(tensor *, tensor self, tensor src, int non_blocking);
-void atg_copy_sparse_to_sparse_(tensor *, tensor self, tensor src, int non_blocking);
-void atg_copy_sparse_to_sparse_out(tensor *, tensor out, tensor self, tensor src, int non_blocking);
-void atg_copysign(tensor *, tensor self, tensor other);
-void atg_copysign_(tensor *, tensor self, tensor other);
-void atg_copysign_out(tensor *, tensor out, tensor self, tensor other);
-void atg_copysign_scalar(tensor *, tensor self, scalar other);
-void atg_copysign_scalar_(tensor *, tensor self, scalar other);
-void atg_copysign_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_corrcoef(tensor *, tensor self);
-void atg_cos(tensor *, tensor self);
-void atg_cos_(tensor *, tensor self);
-void atg_cos_out(tensor *, tensor out, tensor self);
-void atg_cosh(tensor *, tensor self);
-void atg_cosh_(tensor *, tensor self);
-void atg_cosh_out(tensor *, tensor out, tensor self);
-void atg_cosine_embedding_loss(tensor *, tensor input1, tensor input2, tensor target, double margin, int64_t reduction);
-void atg_cosine_similarity(tensor *, tensor x1, tensor x2, int64_t dim, double eps);
-void atg_count_nonzero(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len);
-void atg_count_nonzero_out(tensor *, tensor out, tensor self, int64_t dim_v, int dim_null);
-void atg_cov(tensor *, tensor self, int64_t correction, tensor fweights, tensor aweights);
-void atg_cross(tensor *, tensor self, tensor other, int64_t dim_v, int dim_null);
-void atg_cross_entropy_loss(tensor *, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, double label_smoothing);
-void atg_cross_out(tensor *, tensor out, tensor self, tensor other, int64_t dim_v, int dim_null);
-void atg_crow_indices(tensor *, tensor self);
-void atg_crow_indices_copy(tensor *, tensor self);
-void atg_crow_indices_copy_out(tensor *, tensor out, tensor self);
-void atg_ctc_loss(tensor *, tensor log_probs, tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int64_t reduction, int zero_infinity);
-void atg_ctc_loss_tensor(tensor *, tensor log_probs, tensor targets, tensor input_lengths, tensor target_lengths, int64_t blank, int64_t reduction, int zero_infinity);
-void atg_cudnn_affine_grid_generator(tensor *, tensor theta, int64_t n, int64_t C, int64_t H, int64_t W);
-void atg_cudnn_affine_grid_generator_backward(tensor *, tensor grad, int64_t n, int64_t C, int64_t H, int64_t W);
-void atg_cudnn_affine_grid_generator_backward_out(tensor *, tensor out, tensor grad, int64_t n, int64_t C, int64_t H, int64_t W);
-void atg_cudnn_affine_grid_generator_out(tensor *, tensor out, tensor theta, int64_t n, int64_t C, int64_t H, int64_t W);
-void atg_cudnn_batch_norm(tensor *, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double exponential_average_factor, double epsilon);
-void atg_cudnn_batch_norm_backward(tensor *, tensor input, tensor grad_output, tensor weight, tensor running_mean, tensor running_var, tensor save_mean, tensor save_var, double epsilon, tensor reserveSpace);
-void atg_cudnn_batch_norm_backward_out(tensor *, tensor out0, tensor out1, tensor out2, tensor input, tensor grad_output, tensor weight, tensor running_mean, tensor running_var, tensor save_mean, tensor save_var, double epsilon, tensor reserveSpace);
-void atg_cudnn_batch_norm_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double exponential_average_factor, double epsilon);
-void atg_cudnn_convolution(tensor *, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32);
-void atg_cudnn_convolution_add_relu(tensor *, tensor self, tensor weight, tensor z, scalar alpha, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_cudnn_convolution_add_relu_out(tensor *, tensor out, tensor self, tensor weight, tensor z, scalar alpha, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_cudnn_convolution_out(tensor *, tensor out, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32);
-void atg_cudnn_convolution_relu(tensor *, tensor self, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_cudnn_convolution_relu_out(tensor *, tensor out, tensor self, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_cudnn_convolution_transpose(tensor *, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32);
-void atg_cudnn_convolution_transpose_out(tensor *, tensor out, tensor self, tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32);
-void atg_cudnn_grid_sampler(tensor *, tensor self, tensor grid);
-void atg_cudnn_grid_sampler_backward(tensor *, tensor self, tensor grid, tensor grad_output);
-void atg_cudnn_grid_sampler_backward_out(tensor *, tensor out0, tensor out1, tensor self, tensor grid, tensor grad_output);
-void atg_cudnn_grid_sampler_out(tensor *, tensor out, tensor self, tensor grid);
-int atg_cudnn_is_acceptable(tensor self);
-void atg_cummax(tensor *, tensor self, int64_t dim);
-void atg_cummax_out(tensor *, tensor values, tensor indices, tensor self, int64_t dim);
-void atg_cummaxmin_backward(tensor *, tensor grad, tensor input, tensor indices, int64_t dim);
-void atg_cummin(tensor *, tensor self, int64_t dim);
-void atg_cummin_out(tensor *, tensor values, tensor indices, tensor self, int64_t dim);
-void atg_cumprod(tensor *, tensor self, int64_t dim, int dtype);
-void atg_cumprod_(tensor *, tensor self, int64_t dim, int dtype);
-void atg_cumprod_backward(tensor *, tensor grad, tensor input, int64_t dim, tensor output);
-void atg_cumprod_out(tensor *, tensor out, tensor self, int64_t dim, int dtype);
-void atg_cumsum(tensor *, tensor self, int64_t dim, int dtype);
-void atg_cumsum_(tensor *, tensor self, int64_t dim, int dtype);
-void atg_cumsum_out(tensor *, tensor out, tensor self, int64_t dim, int dtype);
-void atg_cumulative_trapezoid(tensor *, tensor y, int64_t dim);
-void atg_cumulative_trapezoid_x(tensor *, tensor y, tensor x, int64_t dim);
-void atg_data(tensor *, tensor self);
-void atg_deg2rad(tensor *, tensor self);
-void atg_deg2rad_(tensor *, tensor self);
-void atg_deg2rad_out(tensor *, tensor out, tensor self);
-int64_t atg_dense_dim(tensor self);
-void atg_dequantize(tensor *, tensor self);
-void atg_dequantize_self_out(tensor *, tensor out, tensor self);
-tensor *atg_dequantize_tensors(tensor *tensors_data, int tensors_len);
-void atg_dequantize_tensors_out(tensor *out_data, int out_len, tensor *tensors_data, int tensors_len);
-void atg_det(tensor *, tensor self);
-void atg_detach(tensor *, tensor self);
-void atg_detach_(tensor *, tensor self);
-void atg_detach_copy(tensor *, tensor self);
-void atg_detach_copy_out(tensor *, tensor out, tensor self);
-void atg_diag(tensor *, tensor self, int64_t diagonal);
-void atg_diag_embed(tensor *, tensor self, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_diag_embed_out(tensor *, tensor out, tensor self, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_diag_out(tensor *, tensor out, tensor self, int64_t diagonal);
-void atg_diagflat(tensor *, tensor self, int64_t offset);
-void atg_diagonal(tensor *, tensor self, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_diagonal_backward(tensor *, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_diagonal_backward_out(tensor *, tensor out, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_diagonal_copy(tensor *, tensor self, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_diagonal_copy_out(tensor *, tensor out, tensor self, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_diagonal_scatter(tensor *, tensor self, tensor src, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_diagonal_scatter_out(tensor *, tensor out, tensor self, tensor src, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_diff(tensor *, tensor self, int64_t n, int64_t dim, tensor prepend, tensor append);
-void atg_diff_out(tensor *, tensor out, tensor self, int64_t n, int64_t dim, tensor prepend, tensor append);
-void atg_digamma(tensor *, tensor self);
-void atg_digamma_(tensor *, tensor self);
-void atg_digamma_out(tensor *, tensor out, tensor self);
-void atg_dist(tensor *, tensor self, tensor other);
-void atg_dist_out(tensor *, tensor out, tensor self, tensor other);
-void atg_div(tensor *, tensor self, tensor other);
-void atg_div_(tensor *, tensor self, tensor other);
-void atg_div_out(tensor *, tensor out, tensor self, tensor other);
-void atg_div_out_mode(tensor *, tensor out, tensor self, tensor other, char * rounding_mode);
-void atg_div_scalar(tensor *, tensor self, scalar other);
-void atg_div_scalar_(tensor *, tensor self, scalar other);
-void atg_div_scalar_mode(tensor *, tensor self, scalar other, char * rounding_mode);
-void atg_div_scalar_mode_(tensor *, tensor self, scalar other, char * rounding_mode);
-void atg_div_scalar_mode_out(tensor *, tensor out, tensor self, scalar other, char * rounding_mode);
-void atg_div_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_div_tensor_mode(tensor *, tensor self, tensor other, char * rounding_mode);
-void atg_div_tensor_mode_(tensor *, tensor self, tensor other, char * rounding_mode);
-void atg_divide(tensor *, tensor self, tensor other);
-void atg_divide_(tensor *, tensor self, tensor other);
-void atg_divide_out(tensor *, tensor out, tensor self, tensor other);
-void atg_divide_out_mode(tensor *, tensor out, tensor self, tensor other, char * rounding_mode);
-void atg_divide_scalar(tensor *, tensor self, scalar other);
-void atg_divide_scalar_(tensor *, tensor self, scalar other);
-void atg_divide_scalar_mode(tensor *, tensor self, scalar other, char * rounding_mode);
-void atg_divide_scalar_mode_(tensor *, tensor self, scalar other, char * rounding_mode);
-void atg_divide_tensor_mode(tensor *, tensor self, tensor other, char * rounding_mode);
-void atg_divide_tensor_mode_(tensor *, tensor self, tensor other, char * rounding_mode);
-void atg_dot(tensor *, tensor self, tensor tensor);
-void atg_dot_out(tensor *, tensor out, tensor self, tensor tensor);
-void atg_dropout(tensor *, tensor input, double p, int train);
-void atg_dropout_(tensor *, tensor self, double p, int train);
-tensor *atg_dsplit(tensor self, int64_t sections);
-tensor *atg_dsplit_array(tensor self, int64_t *indices_data, int indices_len);
-void atg_dstack(tensor *, tensor *tensors_data, int tensors_len);
-void atg_dstack_out(tensor *, tensor out, tensor *tensors_data, int tensors_len);
-void atg_einsum(tensor *, char * equation, tensor *tensors_data, int tensors_len, int64_t *path_data, int path_len);
-void atg_elu(tensor *, tensor self);
-void atg_elu_(tensor *, tensor self);
-void atg_elu_backward(tensor *, tensor grad_output, scalar alpha, scalar scale, scalar input_scale, int is_result, tensor self_or_result);
-void atg_elu_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, scalar alpha, scalar scale, scalar input_scale, int is_result, tensor self_or_result);
-void atg_elu_out(tensor *, tensor out, tensor self);
-void atg_embedding(tensor *, tensor weight, tensor indices, int64_t padding_idx, int scale_grad_by_freq, int sparse);
-void atg_embedding_backward(tensor *, tensor grad, tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq, int sparse);
-void atg_embedding_bag(tensor *, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset);
-void atg_embedding_bag_padding_idx(tensor *, tensor weight, tensor indices, tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, tensor per_sample_weights, int include_last_offset, int64_t padding_idx_v, int padding_idx_null);
-void atg_embedding_dense_backward(tensor *, tensor grad_output, tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq);
-void atg_embedding_dense_backward_out(tensor *, tensor out, tensor grad_output, tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq);
-void atg_embedding_out(tensor *, tensor out, tensor weight, tensor indices, int64_t padding_idx, int scale_grad_by_freq, int sparse);
-void atg_embedding_renorm(tensor *, tensor self, tensor indices, double max_norm, double norm_type);
-void atg_embedding_renorm_(tensor *, tensor self, tensor indices, double max_norm, double norm_type);
-void atg_embedding_renorm_out(tensor *, tensor out, tensor self, tensor indices, double max_norm, double norm_type);
-void atg_embedding_sparse_backward(tensor *, tensor grad, tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq);
-void atg_empty(tensor *, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_empty_like(tensor *, tensor self);
-void atg_empty_like_out(tensor *, tensor out, tensor self);
-void atg_empty_out(tensor *, tensor out, int64_t *size_data, int size_len);
-void atg_empty_permuted(tensor *, int64_t *size_data, int size_len, int64_t *physical_layout_data, int physical_layout_len, int options_kind, int options_device);
-void atg_empty_permuted_out(tensor *, tensor out, int64_t *size_data, int size_len, int64_t *physical_layout_data, int physical_layout_len);
-void atg_empty_quantized(tensor *, int64_t *size_data, int size_len, tensor qtensor, int options_kind, int options_device);
-void atg_empty_quantized_out(tensor *, tensor out, int64_t *size_data, int size_len, tensor qtensor);
-void atg_empty_strided(tensor *, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int options_kind, int options_device);
-void atg_empty_strided_out(tensor *, tensor out, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len);
-void atg_eq(tensor *, tensor self, scalar other);
-void atg_eq_(tensor *, tensor self, scalar other);
-void atg_eq_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_eq_tensor(tensor *, tensor self, tensor other);
-void atg_eq_tensor_(tensor *, tensor self, tensor other);
-void atg_eq_tensor_out(tensor *, tensor out, tensor self, tensor other);
-int atg_equal(tensor self, tensor other);
-void atg_erf(tensor *, tensor self);
-void atg_erf_(tensor *, tensor self);
-void atg_erf_out(tensor *, tensor out, tensor self);
-void atg_erfc(tensor *, tensor self);
-void atg_erfc_(tensor *, tensor self);
-void atg_erfc_out(tensor *, tensor out, tensor self);
-void atg_erfinv(tensor *, tensor self);
-void atg_erfinv_(tensor *, tensor self);
-void atg_erfinv_out(tensor *, tensor out, tensor self);
-void atg_exp(tensor *, tensor self);
-void atg_exp2(tensor *, tensor self);
-void atg_exp2_(tensor *, tensor self);
-void atg_exp2_out(tensor *, tensor out, tensor self);
-void atg_exp_(tensor *, tensor self);
-void atg_exp_out(tensor *, tensor out, tensor self);
-void atg_expand(tensor *, tensor self, int64_t *size_data, int size_len, int implicit);
-void atg_expand_as(tensor *, tensor self, tensor other);
-void atg_expand_copy(tensor *, tensor self, int64_t *size_data, int size_len, int implicit);
-void atg_expand_copy_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len, int implicit);
-void atg_expm1(tensor *, tensor self);
-void atg_expm1_(tensor *, tensor self);
-void atg_expm1_out(tensor *, tensor out, tensor self);
-void atg_exponential(tensor *, tensor self, double lambd);
-void atg_exponential_(tensor *, tensor self, double lambd);
-void atg_exponential_out(tensor *, tensor out, tensor self, double lambd);
-void atg_eye(tensor *, int64_t n, int options_kind, int options_device);
-void atg_eye_m(tensor *, int64_t n, int64_t m, int options_kind, int options_device);
-void atg_eye_m_out(tensor *, tensor out, int64_t n, int64_t m);
-void atg_eye_out(tensor *, tensor out, int64_t n);
-void atg_fake_quantize_per_channel_affine(tensor *, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max);
-void atg_fake_quantize_per_channel_affine_cachemask(tensor *, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max);
-void atg_fake_quantize_per_channel_affine_cachemask_backward(tensor *, tensor grad, tensor mask);
-void atg_fake_quantize_per_channel_affine_cachemask_out(tensor *, tensor out0, tensor out1, tensor self, tensor scale, tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max);
-void atg_fake_quantize_per_tensor_affine(tensor *, tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max);
-void atg_fake_quantize_per_tensor_affine_cachemask(tensor *, tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max);
-void atg_fake_quantize_per_tensor_affine_cachemask_backward(tensor *, tensor grad, tensor mask);
-void atg_fake_quantize_per_tensor_affine_cachemask_out(tensor *, tensor out0, tensor out1, tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max);
-void atg_fake_quantize_per_tensor_affine_tensor_qparams(tensor *, tensor self, tensor scale, tensor zero_point, int64_t quant_min, int64_t quant_max);
-void atg_fbgemm_linear_fp16_weight(tensor *, tensor input, tensor packed_weight, tensor bias);
-void atg_fbgemm_linear_fp16_weight_fp32_activation(tensor *, tensor input, tensor packed_weight, tensor bias);
-void atg_fbgemm_linear_int8_weight(tensor *, tensor input, tensor weight, tensor packed, tensor col_offsets, scalar weight_scale, scalar weight_zero_point, tensor bias);
-void atg_fbgemm_linear_int8_weight_fp32_activation(tensor *, tensor input, tensor weight, tensor packed, tensor col_offsets, scalar weight_scale, scalar weight_zero_point, tensor bias);
-void atg_fbgemm_pack_gemm_matrix_fp16(tensor *, tensor input);
-void atg_fbgemm_pack_quantized_matrix(tensor *, tensor input);
-void atg_fbgemm_pack_quantized_matrix_kn(tensor *, tensor input, int64_t K, int64_t n);
-void atg_feature_alpha_dropout(tensor *, tensor input, double p, int train);
-void atg_feature_alpha_dropout_(tensor *, tensor self, double p, int train);
-void atg_feature_dropout(tensor *, tensor input, double p, int train);
-void atg_feature_dropout_(tensor *, tensor self, double p, int train);
-void atg_fft_fft(tensor *, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_fft2(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_fft2_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_fft_out(tensor *, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_fftfreq(tensor *, int64_t n, double d, int options_kind, int options_device);
-void atg_fft_fftfreq_out(tensor *, tensor out, int64_t n, double d);
-void atg_fft_fftn(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_fftn_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_fftshift(tensor *, tensor self, int64_t *dim_data, int dim_len);
-void atg_fft_hfft(tensor *, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_hfft2(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_hfft2_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_hfft_out(tensor *, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_hfftn(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_hfftn_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_ifft(tensor *, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_ifft2(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_ifft2_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_ifft_out(tensor *, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_ifftn(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_ifftn_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_ifftshift(tensor *, tensor self, int64_t *dim_data, int dim_len);
-void atg_fft_ihfft(tensor *, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_ihfft2(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_ihfft2_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_ihfft_out(tensor *, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_ihfftn(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_ihfftn_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_irfft(tensor *, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_irfft2(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_irfft2_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_irfft_out(tensor *, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_irfftn(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_irfftn_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_rfft(tensor *, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_rfft2(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_rfft2_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_rfft_out(tensor *, tensor out, tensor self, int64_t n_v, int n_null, int64_t dim, char * norm);
-void atg_fft_rfftfreq(tensor *, int64_t n, double d, int options_kind, int options_device);
-void atg_fft_rfftfreq_out(tensor *, tensor out, int64_t n, double d);
-void atg_fft_rfftn(tensor *, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fft_rfftn_out(tensor *, tensor out, tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm);
-void atg_fill(tensor *, tensor self, scalar value);
-void atg_fill_(tensor *, tensor self, scalar value);
-void atg_fill_diagonal_(tensor *, tensor self, scalar fill_value, int wrap);
-void atg_fill_scalar_out(tensor *, tensor out, tensor self, scalar value);
-void atg_fill_tensor(tensor *, tensor self, tensor value);
-void atg_fill_tensor_(tensor *, tensor self, tensor value);
-void atg_fill_tensor_out(tensor *, tensor out, tensor self, tensor value);
-void atg_fix(tensor *, tensor self);
-void atg_fix_(tensor *, tensor self);
-void atg_fix_out(tensor *, tensor out, tensor self);
-void atg_flatten(tensor *, tensor self, int64_t start_dim, int64_t end_dim);
-void atg_flatten_dense_tensors(tensor *, tensor *tensors_data, int tensors_len);
-void atg_flip(tensor *, tensor self, int64_t *dims_data, int dims_len);
-void atg_flip_out(tensor *, tensor out, tensor self, int64_t *dims_data, int dims_len);
-void atg_fliplr(tensor *, tensor self);
-void atg_flipud(tensor *, tensor self);
-void atg_float_power(tensor *, tensor self, tensor exponent);
-void atg_float_power_(tensor *, tensor self, scalar exponent);
-void atg_float_power_scalar(tensor *, scalar self, tensor exponent);
-void atg_float_power_scalar_out(tensor *, tensor out, scalar self, tensor exponent);
-void atg_float_power_tensor_(tensor *, tensor self, tensor exponent);
-void atg_float_power_tensor_scalar(tensor *, tensor self, scalar exponent);
-void atg_float_power_tensor_scalar_out(tensor *, tensor out, tensor self, scalar exponent);
-void atg_float_power_tensor_tensor_out(tensor *, tensor out, tensor self, tensor exponent);
-void atg_floor(tensor *, tensor self);
-void atg_floor_(tensor *, tensor self);
-void atg_floor_divide(tensor *, tensor self, tensor other);
-void atg_floor_divide_(tensor *, tensor self, tensor other);
-void atg_floor_divide_out(tensor *, tensor out, tensor self, tensor other);
-void atg_floor_divide_scalar(tensor *, tensor self, scalar other);
-void atg_floor_divide_scalar_(tensor *, tensor self, scalar other);
-void atg_floor_out(tensor *, tensor out, tensor self);
-void atg_fmax(tensor *, tensor self, tensor other);
-void atg_fmax_out(tensor *, tensor out, tensor self, tensor other);
-void atg_fmin(tensor *, tensor self, tensor other);
-void atg_fmin_out(tensor *, tensor out, tensor self, tensor other);
-void atg_fmod(tensor *, tensor self, scalar other);
-void atg_fmod_(tensor *, tensor self, scalar other);
-void atg_fmod_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_fmod_tensor(tensor *, tensor self, tensor other);
-void atg_fmod_tensor_(tensor *, tensor self, tensor other);
-void atg_fmod_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_frac(tensor *, tensor self);
-void atg_frac_(tensor *, tensor self);
-void atg_frac_out(tensor *, tensor out, tensor self);
-void atg_fractional_max_pool2d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor random_samples);
-void atg_fractional_max_pool2d_backward(tensor *, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor indices);
-void atg_fractional_max_pool2d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor indices);
-void atg_fractional_max_pool2d_output(tensor *, tensor output, tensor indices, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor random_samples);
-void atg_fractional_max_pool3d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor random_samples);
-void atg_fractional_max_pool3d_backward(tensor *, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor indices);
-void atg_fractional_max_pool3d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor indices);
-void atg_fractional_max_pool3d_output(tensor *, tensor output, tensor indices, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, tensor random_samples);
-void atg_frexp(tensor *, tensor self);
-void atg_frexp_tensor_out(tensor *, tensor mantissa, tensor exponent, tensor self);
-void atg_frobenius_norm(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim);
-void atg_frobenius_norm_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim);
-void atg_from_file(tensor *, char * filename, int shared, int64_t size_v, int size_null, int options_kind, int options_device);
-void atg_from_file_out(tensor *, tensor out, char * filename, int shared, int64_t size_v, int size_null);
-void atg_full(tensor *, int64_t *size_data, int size_len, scalar fill_value, int options_kind, int options_device);
-void atg_full_like(tensor *, tensor self, scalar fill_value);
-void atg_full_like_out(tensor *, tensor out, tensor self, scalar fill_value);
-void atg_full_out(tensor *, tensor out, int64_t *size_data, int size_len, scalar fill_value);
-void atg_fused_moving_avg_obs_fake_quant(tensor *, tensor self, tensor observer_on, tensor fake_quant_on, tensor running_min, tensor running_max, tensor scale, tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant);
-void atg_gather(tensor *, tensor self, int64_t dim, tensor index, int sparse_grad);
-void atg_gather_backward(tensor *, tensor grad, tensor self, int64_t dim, tensor index, int sparse_grad);
-void atg_gather_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, int sparse_grad);
-void atg_gcd(tensor *, tensor self, tensor other);
-void atg_gcd_(tensor *, tensor self, tensor other);
-void atg_gcd_out(tensor *, tensor out, tensor self, tensor other);
-void atg_ge(tensor *, tensor self, scalar other);
-void atg_ge_(tensor *, tensor self, scalar other);
-void atg_ge_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_ge_tensor(tensor *, tensor self, tensor other);
-void atg_ge_tensor_(tensor *, tensor self, tensor other);
-void atg_ge_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_gelu(tensor *, tensor self, char * approximate);
-void atg_gelu_(tensor *, tensor self, char * approximate);
-void atg_gelu_backward(tensor *, tensor grad_output, tensor self, char * approximate);
-void atg_gelu_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, char * approximate);
-void atg_gelu_out(tensor *, tensor out, tensor self, char * approximate);
-void atg_geometric(tensor *, tensor self, double p);
-void atg_geometric_(tensor *, tensor self, double p);
-void atg_geometric_out(tensor *, tensor out, tensor self, double p);
-void atg_geqrf(tensor *, tensor self);
-void atg_geqrf_a(tensor *, tensor a, tensor tau, tensor self);
-void atg_ger(tensor *, tensor self, tensor vec2);
-void atg_ger_out(tensor *, tensor out, tensor self, tensor vec2);
-void atg_glu(tensor *, tensor self, int64_t dim);
-void atg_glu_backward(tensor *, tensor grad_output, tensor self, int64_t dim);
-void atg_glu_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t dim);
-void atg_glu_backward_jvp(tensor *, tensor grad_x, tensor grad_glu, tensor x, tensor dgrad_glu, tensor dx, int64_t dim);
-void atg_glu_backward_jvp_out(tensor *, tensor out, tensor grad_x, tensor grad_glu, tensor x, tensor dgrad_glu, tensor dx, int64_t dim);
-void atg_glu_jvp(tensor *, tensor glu, tensor x, tensor dx, int64_t dim);
-void atg_glu_jvp_out(tensor *, tensor out, tensor glu, tensor x, tensor dx, int64_t dim);
-void atg_glu_out(tensor *, tensor out, tensor self, int64_t dim);
-void atg_grad(tensor *, tensor self);
-void atg_greater(tensor *, tensor self, scalar other);
-void atg_greater_(tensor *, tensor self, scalar other);
-void atg_greater_equal(tensor *, tensor self, scalar other);
-void atg_greater_equal_(tensor *, tensor self, scalar other);
-void atg_greater_equal_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_greater_equal_tensor(tensor *, tensor self, tensor other);
-void atg_greater_equal_tensor_(tensor *, tensor self, tensor other);
-void atg_greater_equal_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_greater_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_greater_tensor(tensor *, tensor self, tensor other);
-void atg_greater_tensor_(tensor *, tensor self, tensor other);
-void atg_greater_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_grid_sampler(tensor *, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners);
-void atg_grid_sampler_2d(tensor *, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners);
-void atg_grid_sampler_2d_out(tensor *, tensor out, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners);
-void atg_grid_sampler_3d(tensor *, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners);
-void atg_grid_sampler_3d_out(tensor *, tensor out, tensor input, tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners);
-void atg_group_norm(tensor *, tensor input, int64_t num_groups, tensor weight, tensor bias, double eps, int cudnn_enabled);
-void atg_gru(tensor *, tensor input, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first);
-void atg_gru_cell(tensor *, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh);
-void atg_gru_data(tensor *, tensor data, tensor batch_sizes, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional);
-void atg_gt(tensor *, tensor self, scalar other);
-void atg_gt_(tensor *, tensor self, scalar other);
-void atg_gt_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_gt_tensor(tensor *, tensor self, tensor other);
-void atg_gt_tensor_(tensor *, tensor self, tensor other);
-void atg_gt_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_hamming_window(tensor *, int64_t window_length, int options_kind, int options_device);
-void atg_hamming_window_out(tensor *, tensor out, int64_t window_length);
-void atg_hamming_window_periodic(tensor *, int64_t window_length, int periodic, int options_kind, int options_device);
-void atg_hamming_window_periodic_alpha(tensor *, int64_t window_length, int periodic, double alpha, int options_kind, int options_device);
-void atg_hamming_window_periodic_alpha_beta(tensor *, int64_t window_length, int periodic, double alpha, double beta, int options_kind, int options_device);
-void atg_hamming_window_periodic_alpha_beta_out(tensor *, tensor out, int64_t window_length, int periodic, double alpha, double beta);
-void atg_hamming_window_periodic_alpha_out(tensor *, tensor out, int64_t window_length, int periodic, double alpha);
-void atg_hamming_window_periodic_out(tensor *, tensor out, int64_t window_length, int periodic);
-void atg_hann_window(tensor *, int64_t window_length, int options_kind, int options_device);
-void atg_hann_window_out(tensor *, tensor out, int64_t window_length);
-void atg_hann_window_periodic(tensor *, int64_t window_length, int periodic, int options_kind, int options_device);
-void atg_hann_window_periodic_out(tensor *, tensor out, int64_t window_length, int periodic);
-void atg_hardshrink(tensor *, tensor self);
-void atg_hardshrink_backward(tensor *, tensor grad_out, tensor self, scalar lambd);
-void atg_hardshrink_backward_grad_input(tensor *, tensor grad_input, tensor grad_out, tensor self, scalar lambd);
-void atg_hardshrink_out(tensor *, tensor out, tensor self);
-void atg_hardsigmoid(tensor *, tensor self);
-void atg_hardsigmoid_(tensor *, tensor self);
-void atg_hardsigmoid_backward(tensor *, tensor grad_output, tensor self);
-void atg_hardsigmoid_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self);
-void atg_hardsigmoid_out(tensor *, tensor out, tensor self);
-void atg_hardswish(tensor *, tensor self);
-void atg_hardswish_(tensor *, tensor self);
-void atg_hardswish_backward(tensor *, tensor grad_output, tensor self);
-void atg_hardswish_backward_out(tensor *, tensor out, tensor grad_output, tensor self);
-void atg_hardswish_out(tensor *, tensor out, tensor self);
-void atg_hardtanh(tensor *, tensor self);
-void atg_hardtanh_(tensor *, tensor self);
-void atg_hardtanh_backward(tensor *, tensor grad_output, tensor self, scalar min_val, scalar max_val);
-void atg_hardtanh_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, scalar min_val, scalar max_val);
-void atg_hardtanh_out(tensor *, tensor out, tensor self);
-void atg_heaviside(tensor *, tensor self, tensor values);
-void atg_heaviside_(tensor *, tensor self, tensor values);
-void atg_heaviside_out(tensor *, tensor out, tensor self, tensor values);
-void atg_hinge_embedding_loss(tensor *, tensor self, tensor target, double margin, int64_t reduction);
-void atg_histc(tensor *, tensor self, int64_t bins);
-void atg_histc_out(tensor *, tensor out, tensor self, int64_t bins);
-tensor *atg_hsplit(tensor self, int64_t sections);
-tensor *atg_hsplit_array(tensor self, int64_t *indices_data, int indices_len);
-void atg_hspmm(tensor *, tensor mat1, tensor mat2);
-void atg_hspmm_out(tensor *, tensor out, tensor mat1, tensor mat2);
-void atg_hstack(tensor *, tensor *tensors_data, int tensors_len);
-void atg_hstack_out(tensor *, tensor out, tensor *tensors_data, int tensors_len);
-void atg_huber_loss(tensor *, tensor self, tensor target, int64_t reduction, double delta);
-void atg_huber_loss_backward(tensor *, tensor grad_output, tensor self, tensor target, int64_t reduction, double delta);
-void atg_huber_loss_backward_out(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction, double delta);
-void atg_huber_loss_out(tensor *, tensor out, tensor self, tensor target, int64_t reduction, double delta);
-void atg_hypot(tensor *, tensor self, tensor other);
-void atg_hypot_(tensor *, tensor self, tensor other);
-void atg_hypot_out(tensor *, tensor out, tensor self, tensor other);
-void atg_i0(tensor *, tensor self);
-void atg_i0_(tensor *, tensor self);
-void atg_i0_out(tensor *, tensor out, tensor self);
-void atg_igamma(tensor *, tensor self, tensor other);
-void atg_igamma_(tensor *, tensor self, tensor other);
-void atg_igamma_out(tensor *, tensor out, tensor self, tensor other);
-void atg_igammac(tensor *, tensor self, tensor other);
-void atg_igammac_(tensor *, tensor self, tensor other);
-void atg_igammac_out(tensor *, tensor out, tensor self, tensor other);
-void atg_im2col(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len);
-void atg_im2col_out(tensor *, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len);
-void atg_imag(tensor *, tensor self);
-void atg_index(tensor *, tensor self, tensor *indices_data, int indices_len);
-void atg_index_add(tensor *, tensor self, int64_t dim, tensor index, tensor source);
-void atg_index_add_(tensor *, tensor self, int64_t dim, tensor index, tensor source);
-void atg_index_add_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, tensor source);
-void atg_index_copy(tensor *, tensor self, int64_t dim, tensor index, tensor source);
-void atg_index_copy_(tensor *, tensor self, int64_t dim, tensor index, tensor source);
-void atg_index_copy_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, tensor source);
-void atg_index_fill(tensor *, tensor self, int64_t dim, tensor index, scalar value);
-void atg_index_fill_(tensor *, tensor self, int64_t dim, tensor index, scalar value);
-void atg_index_fill_int_scalar_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, scalar value);
-void atg_index_fill_int_tensor(tensor *, tensor self, int64_t dim, tensor index, tensor value);
-void atg_index_fill_int_tensor_(tensor *, tensor self, int64_t dim, tensor index, tensor value);
-void atg_index_fill_int_tensor_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, tensor value);
-void atg_index_put(tensor *, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate);
-void atg_index_put_(tensor *, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate);
-void atg_index_put_out(tensor *, tensor out, tensor self, tensor *indices_data, int indices_len, tensor values, int accumulate);
-void atg_index_reduce(tensor *, tensor self, int64_t dim, tensor index, tensor source, char * reduce, int include_self);
-void atg_index_reduce_(tensor *, tensor self, int64_t dim, tensor index, tensor source, char * reduce, int include_self);
-void atg_index_reduce_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, tensor source, char * reduce, int include_self);
-void atg_index_select(tensor *, tensor self, int64_t dim, tensor index);
-void atg_index_select_backward(tensor *, tensor grad, int64_t *self_sizes_data, int self_sizes_len, int64_t dim, tensor index);
-void atg_index_select_out(tensor *, tensor out, tensor self, int64_t dim, tensor index);
-void atg_index_tensor_out(tensor *, tensor out, tensor self, tensor *indices_data, int indices_len);
-void atg_indices(tensor *, tensor self);
-void atg_indices_copy(tensor *, tensor self);
-void atg_indices_copy_out(tensor *, tensor out, tensor self);
-void atg_infinitely_differentiable_gelu_backward(tensor *, tensor grad, tensor self);
-void atg_inner(tensor *, tensor self, tensor other);
-void atg_inner_out(tensor *, tensor out, tensor self, tensor other);
-void atg_instance_norm(tensor *, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int use_input_stats, double momentum, double eps, int cudnn_enabled);
-void atg_int_repr(tensor *, tensor self);
-void atg_int_repr_out(tensor *, tensor out, tensor self);
-void atg_inverse(tensor *, tensor self);
-void atg_inverse_out(tensor *, tensor out, tensor self);
-int atg_is_coalesced(tensor self);
-int atg_is_complex(tensor self);
-int atg_is_conj(tensor self);
-int atg_is_distributed(tensor self);
-int atg_is_floating_point(tensor self);
-int atg_is_inference(tensor self);
-int atg_is_leaf(tensor self);
-int atg_is_neg(tensor self);
-int atg_is_nonzero(tensor self);
-int atg_is_pinned(tensor self, int device);
-int atg_is_same_size(tensor self, tensor other);
-int atg_is_set_to(tensor self, tensor tensor);
-int atg_is_signed(tensor self);
+raw_tensor atg_cartesian_prod(gc_tensor *tensors_data, int tensors_len);
+raw_tensor atg_cat(gc_tensor *tensors_data, int tensors_len, int64_t dim);
+raw_tensor atg_cat_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim);
+raw_tensor atg_cauchy(gc_tensor self, double median, double sigma);
+raw_tensor atg_cauchy_(gc_tensor self, double median, double sigma);
+raw_tensor atg_cauchy_out(gc_tensor out, gc_tensor self, double median, double sigma);
+raw_tensor atg_ccol_indices(gc_tensor self);
+raw_tensor atg_ccol_indices_copy(gc_tensor self);
+raw_tensor atg_ccol_indices_copy_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_cdist(gc_tensor x1, gc_tensor x2, double p, int64_t compute_mode_v, int compute_mode_null);
+raw_tensor atg_ceil(gc_tensor self);
+raw_tensor atg_ceil_(gc_tensor self);
+raw_tensor atg_ceil_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_celu(gc_tensor self);
+raw_tensor atg_celu_(gc_tensor self);
+raw_tensor atg_celu_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_chain_matmul(gc_tensor *matrices_data, int matrices_len);
+raw_tensor atg_chain_matmul_out(gc_tensor out, gc_tensor *matrices_data, int matrices_len);
+raw_tensor atg_chalf(gc_tensor self);
+raw_tensor atg_channel_shuffle(gc_tensor self, int64_t groups);
+raw_tensor atg_channel_shuffle_out(gc_tensor out, gc_tensor self, int64_t groups);
+raw_tensor atg_cholesky(gc_tensor self, int upper);
+raw_tensor atg_cholesky_inverse(gc_tensor self, int upper);
+raw_tensor atg_cholesky_inverse_out(gc_tensor out, gc_tensor self, int upper);
+raw_tensor atg_cholesky_out(gc_tensor out, gc_tensor self, int upper);
+raw_tensor atg_cholesky_solve(gc_tensor self, gc_tensor input2, int upper);
+raw_tensor atg_cholesky_solve_out(gc_tensor out, gc_tensor self, gc_tensor input2, int upper);
+void atg_choose_qparams_optimized(raw_tensor *, gc_tensor input, int64_t numel, int64_t n_bins, double ratio, int64_t bit_width);
+raw_tensor *atg_chunk(gc_tensor self, int64_t chunks, int64_t dim);
+raw_tensor atg_clamp(gc_tensor self, scalar min, scalar max);
self, scalar min, scalar max); +raw_tensor atg_clamp_(gc_tensor self, scalar min, scalar max); +raw_tensor atg_clamp_max(gc_tensor self, scalar max); +raw_tensor atg_clamp_max_(gc_tensor self, scalar max); +raw_tensor atg_clamp_max_out(gc_tensor out, gc_tensor self, scalar max); +raw_tensor atg_clamp_max_tensor(gc_tensor self, gc_tensor max); +raw_tensor atg_clamp_max_tensor_(gc_tensor self, gc_tensor max); +raw_tensor atg_clamp_max_tensor_out(gc_tensor out, gc_tensor self, gc_tensor max); +raw_tensor atg_clamp_min(gc_tensor self, scalar min); +raw_tensor atg_clamp_min_(gc_tensor self, scalar min); +raw_tensor atg_clamp_min_out(gc_tensor out, gc_tensor self, scalar min); +raw_tensor atg_clamp_min_tensor(gc_tensor self, gc_tensor min); +raw_tensor atg_clamp_min_tensor_(gc_tensor self, gc_tensor min); +raw_tensor atg_clamp_min_tensor_out(gc_tensor out, gc_tensor self, gc_tensor min); +raw_tensor atg_clamp_out(gc_tensor out, gc_tensor self, scalar min, scalar max); +raw_tensor atg_clamp_tensor(gc_tensor self, gc_tensor min, gc_tensor max); +raw_tensor atg_clamp_tensor_(gc_tensor self, gc_tensor min, gc_tensor max); +raw_tensor atg_clamp_tensor_out(gc_tensor out, gc_tensor self, gc_tensor min, gc_tensor max); +raw_tensor atg_clip(gc_tensor self, scalar min, scalar max); +raw_tensor atg_clip_(gc_tensor self, scalar min, scalar max); +raw_tensor atg_clip_out(gc_tensor out, gc_tensor self, scalar min, scalar max); +raw_tensor atg_clip_tensor(gc_tensor self, gc_tensor min, gc_tensor max); +raw_tensor atg_clip_tensor_(gc_tensor self, gc_tensor min, gc_tensor max); +raw_tensor atg_clip_tensor_out(gc_tensor out, gc_tensor self, gc_tensor min, gc_tensor max); +raw_tensor atg_clone(gc_tensor self); +raw_tensor atg_clone_out(gc_tensor out, gc_tensor self); +raw_tensor atg_coalesce(gc_tensor self); +raw_tensor atg_col2im(gc_tensor self, int64_t *output_size_data, int output_size_len, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len); +raw_tensor atg_col2im_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len); +raw_tensor atg_col_indices(gc_tensor self); +raw_tensor atg_col_indices_copy(gc_tensor self); +raw_tensor atg_col_indices_copy_out(gc_tensor out, gc_tensor self); +raw_tensor atg_column_stack(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_column_stack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_combinations(gc_tensor self, int64_t r, int with_replacement); +raw_tensor atg_complex(gc_tensor real, gc_tensor imag); +raw_tensor atg_complex_out(gc_tensor out, gc_tensor real, gc_tensor imag); +raw_tensor atg_concat(gc_tensor *tensors_data, int tensors_len, int64_t dim); +raw_tensor atg_concat_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim); +raw_tensor atg_concatenate(gc_tensor *tensors_data, int tensors_len, int64_t dim); +raw_tensor atg_concatenate_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim); +raw_tensor atg_conj(gc_tensor self); +raw_tensor atg_conj_physical(gc_tensor self); +raw_tensor atg_conj_physical_(gc_tensor self); +raw_tensor atg_conj_physical_out(gc_tensor out, gc_tensor self); +raw_tensor atg_constant_pad_nd(gc_tensor self, int64_t *pad_data, int pad_len); +raw_tensor 
atg_constant_pad_nd_out(gc_tensor out, gc_tensor self, int64_t *pad_data, int pad_len); +raw_tensor atg_contiguous(gc_tensor self); +raw_tensor atg_conv1d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_conv1d_padding(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_conv2d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_conv2d_padding(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_conv3d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_conv3d_padding(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, char * padding, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_conv_depthwise3d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_conv_depthwise3d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_conv_tbc(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t pad); +void atg_conv_tbc_backward(raw_tensor *, gc_tensor self, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t pad); +raw_tensor atg_conv_tbc_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t pad); +raw_tensor atg_conv_transpose1d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len); +raw_tensor atg_conv_transpose2d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len); +raw_tensor atg_conv_transpose3d(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t groups, int64_t *dilation_data, int dilation_len); +raw_tensor atg_convolution(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups); +raw_tensor atg_convolution_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t 
*output_padding_data, int output_padding_len, int64_t groups); +raw_tensor atg_convolution_overrideable(gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups); +raw_tensor atg_convolution_overrideable_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int transposed, int64_t *output_padding_data, int output_padding_len, int64_t groups); +raw_tensor atg_copy(gc_tensor self, gc_tensor src, int non_blocking); +raw_tensor atg_copy_out(gc_tensor out, gc_tensor self, gc_tensor src, int non_blocking); +raw_tensor atg_copy_sparse_to_sparse(gc_tensor self, gc_tensor src, int non_blocking); +raw_tensor atg_copy_sparse_to_sparse_(gc_tensor self, gc_tensor src, int non_blocking); +raw_tensor atg_copy_sparse_to_sparse_out(gc_tensor out, gc_tensor self, gc_tensor src, int non_blocking); +raw_tensor atg_copysign(gc_tensor self, gc_tensor other); +raw_tensor atg_copysign_(gc_tensor self, gc_tensor other); +raw_tensor atg_copysign_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_copysign_scalar(gc_tensor self, scalar other); +raw_tensor atg_copysign_scalar_(gc_tensor self, scalar other); +raw_tensor atg_copysign_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_corrcoef(gc_tensor self); +raw_tensor atg_cos(gc_tensor self); +raw_tensor atg_cos_(gc_tensor self); +raw_tensor atg_cos_out(gc_tensor out, gc_tensor self); +raw_tensor atg_cosh(gc_tensor self); +raw_tensor atg_cosh_(gc_tensor self); +raw_tensor atg_cosh_out(gc_tensor out, gc_tensor self); +raw_tensor atg_cosine_embedding_loss(gc_tensor input1, gc_tensor input2, gc_tensor target, double margin, int64_t reduction); +raw_tensor atg_cosine_similarity(gc_tensor x1, gc_tensor x2, int64_t dim, double eps); +raw_tensor atg_count_nonzero(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len); +raw_tensor atg_count_nonzero_out(gc_tensor out, gc_tensor self, int64_t dim_v, int dim_null); +raw_tensor atg_cov(gc_tensor self, int64_t correction, gc_tensor fweights, gc_tensor aweights); +raw_tensor atg_cross(gc_tensor self, gc_tensor other, int64_t dim_v, int dim_null); +raw_tensor atg_cross_entropy_loss(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, double label_smoothing); +raw_tensor atg_cross_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t dim_v, int dim_null); +raw_tensor atg_crow_indices(gc_tensor self); +raw_tensor atg_crow_indices_copy(gc_tensor self); +raw_tensor atg_crow_indices_copy_out(gc_tensor out, gc_tensor self); +raw_tensor atg_ctc_loss(gc_tensor log_probs, gc_tensor targets, int64_t *input_lengths_data, int input_lengths_len, int64_t *target_lengths_data, int target_lengths_len, int64_t blank, int64_t reduction, int zero_infinity); +raw_tensor atg_ctc_loss_tensor(gc_tensor log_probs, gc_tensor targets, gc_tensor input_lengths, gc_tensor target_lengths, int64_t blank, int64_t reduction, int zero_infinity); +raw_tensor atg_cudnn_affine_grid_generator(gc_tensor theta, int64_t n, int64_t C, int64_t H, int64_t W); +raw_tensor atg_cudnn_affine_grid_generator_backward(gc_tensor grad, int64_t n, int64_t C, int64_t H, int64_t W); +raw_tensor atg_cudnn_affine_grid_generator_backward_out(gc_tensor out, gc_tensor 
grad, int64_t n, int64_t C, int64_t H, int64_t W); +raw_tensor atg_cudnn_affine_grid_generator_out(gc_tensor out, gc_tensor theta, int64_t n, int64_t C, int64_t H, int64_t W); +void atg_cudnn_batch_norm(raw_tensor *, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double exponential_average_factor, double epsilon); +void atg_cudnn_batch_norm_backward(raw_tensor *, gc_tensor input, gc_tensor grad_output, gc_tensor weight, gc_tensor running_mean, gc_tensor running_var, gc_tensor save_mean, gc_tensor save_var, double epsilon, gc_tensor reserveSpace); +void atg_cudnn_batch_norm_backward_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor grad_output, gc_tensor weight, gc_tensor running_mean, gc_tensor running_var, gc_tensor save_mean, gc_tensor save_var, double epsilon, gc_tensor reserveSpace); +void atg_cudnn_batch_norm_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double exponential_average_factor, double epsilon); +raw_tensor atg_cudnn_convolution(gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32); +raw_tensor atg_cudnn_convolution_add_relu(gc_tensor self, gc_tensor weight, gc_tensor z, scalar alpha, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_cudnn_convolution_add_relu_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor z, scalar alpha, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_cudnn_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32); +raw_tensor atg_cudnn_convolution_relu(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_cudnn_convolution_relu_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups); +raw_tensor atg_cudnn_convolution_transpose(gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32); +raw_tensor atg_cudnn_convolution_transpose_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic, int allow_tf32); +raw_tensor atg_cudnn_grid_sampler(gc_tensor self, gc_tensor grid); +void atg_cudnn_grid_sampler_backward(raw_tensor *, gc_tensor self, gc_tensor grid, gc_tensor grad_output); +void 
atg_cudnn_grid_sampler_backward_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, gc_tensor grid, gc_tensor grad_output); +raw_tensor atg_cudnn_grid_sampler_out(gc_tensor out, gc_tensor self, gc_tensor grid); +int atg_cudnn_is_acceptable(gc_tensor self); +void atg_cummax(raw_tensor *, gc_tensor self, int64_t dim); +void atg_cummax_out(raw_tensor *, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim); +raw_tensor atg_cummaxmin_backward(gc_tensor grad, gc_tensor input, gc_tensor indices, int64_t dim); +void atg_cummin(raw_tensor *, gc_tensor self, int64_t dim); +void atg_cummin_out(raw_tensor *, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim); +raw_tensor atg_cumprod(gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_cumprod_(gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_cumprod_backward(gc_tensor grad, gc_tensor input, int64_t dim, gc_tensor output); +raw_tensor atg_cumprod_out(gc_tensor out, gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_cumsum(gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_cumsum_(gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_cumsum_out(gc_tensor out, gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_cumulative_trapezoid(gc_tensor y, int64_t dim); +raw_tensor atg_cumulative_trapezoid_x(gc_tensor y, gc_tensor x, int64_t dim); +raw_tensor atg_data(gc_tensor self); +raw_tensor atg_deg2rad(gc_tensor self); +raw_tensor atg_deg2rad_(gc_tensor self); +raw_tensor atg_deg2rad_out(gc_tensor out, gc_tensor self); +int64_t atg_dense_dim(gc_tensor self); +raw_tensor atg_dequantize(gc_tensor self); +raw_tensor atg_dequantize_self_out(gc_tensor out, gc_tensor self); +raw_tensor *atg_dequantize_tensors(gc_tensor *tensors_data, int tensors_len); +void atg_dequantize_tensors_out(gc_tensor *out_data, int out_len, gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_det(gc_tensor self); +raw_tensor atg_detach(gc_tensor self); +raw_tensor atg_detach_(gc_tensor self); +raw_tensor atg_detach_copy(gc_tensor self); +raw_tensor atg_detach_copy_out(gc_tensor out, gc_tensor self); +raw_tensor atg_diag(gc_tensor self, int64_t diagonal); +raw_tensor atg_diag_embed(gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2); +raw_tensor atg_diag_embed_out(gc_tensor out, gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2); +raw_tensor atg_diag_out(gc_tensor out, gc_tensor self, int64_t diagonal); +raw_tensor atg_diagflat(gc_tensor self, int64_t offset); +raw_tensor atg_diagonal(gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2); +raw_tensor atg_diagonal_backward(gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t offset, int64_t dim1, int64_t dim2); +raw_tensor atg_diagonal_backward_out(gc_tensor out, gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t offset, int64_t dim1, int64_t dim2); +raw_tensor atg_diagonal_copy(gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2); +raw_tensor atg_diagonal_copy_out(gc_tensor out, gc_tensor self, int64_t offset, int64_t dim1, int64_t dim2); +raw_tensor atg_diagonal_scatter(gc_tensor self, gc_tensor src, int64_t offset, int64_t dim1, int64_t dim2); +raw_tensor atg_diagonal_scatter_out(gc_tensor out, gc_tensor self, gc_tensor src, int64_t offset, int64_t dim1, int64_t dim2); +raw_tensor atg_diff(gc_tensor self, int64_t n, int64_t dim, gc_tensor prepend, gc_tensor append); +raw_tensor atg_diff_out(gc_tensor out, gc_tensor self, int64_t n, int64_t dim, gc_tensor prepend, gc_tensor 
append); +raw_tensor atg_digamma(gc_tensor self); +raw_tensor atg_digamma_(gc_tensor self); +raw_tensor atg_digamma_out(gc_tensor out, gc_tensor self); +raw_tensor atg_dist(gc_tensor self, gc_tensor other); +raw_tensor atg_dist_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_div(gc_tensor self, gc_tensor other); +raw_tensor atg_div_(gc_tensor self, gc_tensor other); +raw_tensor atg_div_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_div_out_mode(gc_tensor out, gc_tensor self, gc_tensor other, char * rounding_mode); +raw_tensor atg_div_scalar(gc_tensor self, scalar other); +raw_tensor atg_div_scalar_(gc_tensor self, scalar other); +raw_tensor atg_div_scalar_mode(gc_tensor self, scalar other, char * rounding_mode); +raw_tensor atg_div_scalar_mode_(gc_tensor self, scalar other, char * rounding_mode); +raw_tensor atg_div_scalar_mode_out(gc_tensor out, gc_tensor self, scalar other, char * rounding_mode); +raw_tensor atg_div_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_div_tensor_mode(gc_tensor self, gc_tensor other, char * rounding_mode); +raw_tensor atg_div_tensor_mode_(gc_tensor self, gc_tensor other, char * rounding_mode); +raw_tensor atg_divide(gc_tensor self, gc_tensor other); +raw_tensor atg_divide_(gc_tensor self, gc_tensor other); +raw_tensor atg_divide_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_divide_out_mode(gc_tensor out, gc_tensor self, gc_tensor other, char * rounding_mode); +raw_tensor atg_divide_scalar(gc_tensor self, scalar other); +raw_tensor atg_divide_scalar_(gc_tensor self, scalar other); +raw_tensor atg_divide_scalar_mode(gc_tensor self, scalar other, char * rounding_mode); +raw_tensor atg_divide_scalar_mode_(gc_tensor self, scalar other, char * rounding_mode); +raw_tensor atg_divide_tensor_mode(gc_tensor self, gc_tensor other, char * rounding_mode); +raw_tensor atg_divide_tensor_mode_(gc_tensor self, gc_tensor other, char * rounding_mode); +raw_tensor atg_dot(gc_tensor self, gc_tensor tensor); +raw_tensor atg_dot_out(gc_tensor out, gc_tensor self, gc_tensor tensor); +raw_tensor atg_dropout(gc_tensor input, double p, int train); +raw_tensor atg_dropout_(gc_tensor self, double p, int train); +raw_tensor *atg_dsplit(gc_tensor self, int64_t sections); +raw_tensor *atg_dsplit_array(gc_tensor self, int64_t *indices_data, int indices_len); +raw_tensor atg_dstack(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_dstack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_einsum(char * equation, gc_tensor *tensors_data, int tensors_len, int64_t *path_data, int path_len); +raw_tensor atg_elu(gc_tensor self); +raw_tensor atg_elu_(gc_tensor self); +raw_tensor atg_elu_backward(gc_tensor grad_output, scalar alpha, scalar scale, scalar input_scale, int is_result, gc_tensor self_or_result); +raw_tensor atg_elu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, scalar alpha, scalar scale, scalar input_scale, int is_result, gc_tensor self_or_result); +raw_tensor atg_elu_out(gc_tensor out, gc_tensor self); +raw_tensor atg_embedding(gc_tensor weight, gc_tensor indices, int64_t padding_idx, int scale_grad_by_freq, int sparse); +raw_tensor atg_embedding_backward(gc_tensor grad, gc_tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq, int sparse); +void atg_embedding_bag(raw_tensor *, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int 
include_last_offset); +void atg_embedding_bag_padding_idx(raw_tensor *, gc_tensor weight, gc_tensor indices, gc_tensor offsets, int scale_grad_by_freq, int64_t mode, int sparse, gc_tensor per_sample_weights, int include_last_offset, int64_t padding_idx_v, int padding_idx_null); +raw_tensor atg_embedding_dense_backward(gc_tensor grad_output, gc_tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq); +raw_tensor atg_embedding_dense_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq); +raw_tensor atg_embedding_out(gc_tensor out, gc_tensor weight, gc_tensor indices, int64_t padding_idx, int scale_grad_by_freq, int sparse); +raw_tensor atg_embedding_renorm(gc_tensor self, gc_tensor indices, double max_norm, double norm_type); +raw_tensor atg_embedding_renorm_(gc_tensor self, gc_tensor indices, double max_norm, double norm_type); +raw_tensor atg_embedding_renorm_out(gc_tensor out, gc_tensor self, gc_tensor indices, double max_norm, double norm_type); +raw_tensor atg_embedding_sparse_backward(gc_tensor grad, gc_tensor indices, int64_t num_weights, int64_t padding_idx, int scale_grad_by_freq); +raw_tensor atg_empty(int64_t *size_data, int size_len, int options_kind, int options_device); +raw_tensor atg_empty_like(gc_tensor self); +raw_tensor atg_empty_like_out(gc_tensor out, gc_tensor self); +raw_tensor atg_empty_out(gc_tensor out, int64_t *size_data, int size_len); +raw_tensor atg_empty_permuted(int64_t *size_data, int size_len, int64_t *physical_layout_data, int physical_layout_len, int options_kind, int options_device); +raw_tensor atg_empty_permuted_out(gc_tensor out, int64_t *size_data, int size_len, int64_t *physical_layout_data, int physical_layout_len); +raw_tensor atg_empty_quantized(int64_t *size_data, int size_len, gc_tensor qtensor, int options_kind, int options_device); +raw_tensor atg_empty_quantized_out(gc_tensor out, int64_t *size_data, int size_len, gc_tensor qtensor); +raw_tensor atg_empty_strided(int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int options_kind, int options_device); +raw_tensor atg_empty_strided_out(gc_tensor out, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len); +raw_tensor atg_eq(gc_tensor self, scalar other); +raw_tensor atg_eq_(gc_tensor self, scalar other); +raw_tensor atg_eq_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_eq_tensor(gc_tensor self, gc_tensor other); +raw_tensor atg_eq_tensor_(gc_tensor self, gc_tensor other); +raw_tensor atg_eq_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other); +int atg_equal(gc_tensor self, gc_tensor other); +raw_tensor atg_erf(gc_tensor self); +raw_tensor atg_erf_(gc_tensor self); +raw_tensor atg_erf_out(gc_tensor out, gc_tensor self); +raw_tensor atg_erfc(gc_tensor self); +raw_tensor atg_erfc_(gc_tensor self); +raw_tensor atg_erfc_out(gc_tensor out, gc_tensor self); +raw_tensor atg_erfinv(gc_tensor self); +raw_tensor atg_erfinv_(gc_tensor self); +raw_tensor atg_erfinv_out(gc_tensor out, gc_tensor self); +raw_tensor atg_exp(gc_tensor self); +raw_tensor atg_exp2(gc_tensor self); +raw_tensor atg_exp2_(gc_tensor self); +raw_tensor atg_exp2_out(gc_tensor out, gc_tensor self); +raw_tensor atg_exp_(gc_tensor self); +raw_tensor atg_exp_out(gc_tensor out, gc_tensor self); +raw_tensor atg_expand(gc_tensor self, int64_t *size_data, int size_len, int implicit); +raw_tensor atg_expand_as(gc_tensor self, gc_tensor other); +raw_tensor 
atg_expand_copy(gc_tensor self, int64_t *size_data, int size_len, int implicit); +raw_tensor atg_expand_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int implicit); +raw_tensor atg_expm1(gc_tensor self); +raw_tensor atg_expm1_(gc_tensor self); +raw_tensor atg_expm1_out(gc_tensor out, gc_tensor self); +raw_tensor atg_exponential(gc_tensor self, double lambd); +raw_tensor atg_exponential_(gc_tensor self, double lambd); +raw_tensor atg_exponential_out(gc_tensor out, gc_tensor self, double lambd); +raw_tensor atg_eye(int64_t n, int options_kind, int options_device); +raw_tensor atg_eye_m(int64_t n, int64_t m, int options_kind, int options_device); +raw_tensor atg_eye_m_out(gc_tensor out, int64_t n, int64_t m); +raw_tensor atg_eye_out(gc_tensor out, int64_t n); +raw_tensor atg_fake_quantize_per_channel_affine(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max); +void atg_fake_quantize_per_channel_affine_cachemask(raw_tensor *, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max); +raw_tensor atg_fake_quantize_per_channel_affine_cachemask_backward(gc_tensor grad, gc_tensor mask); +void atg_fake_quantize_per_channel_affine_cachemask_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t axis, int64_t quant_min, int64_t quant_max); +raw_tensor atg_fake_quantize_per_tensor_affine(gc_tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max); +void atg_fake_quantize_per_tensor_affine_cachemask(raw_tensor *, gc_tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max); +raw_tensor atg_fake_quantize_per_tensor_affine_cachemask_backward(gc_tensor grad, gc_tensor mask); +void atg_fake_quantize_per_tensor_affine_cachemask_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, double scale, int64_t zero_point, int64_t quant_min, int64_t quant_max); +raw_tensor atg_fake_quantize_per_tensor_affine_tensor_qparams(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int64_t quant_min, int64_t quant_max); +raw_tensor atg_fbgemm_linear_fp16_weight(gc_tensor input, gc_tensor packed_weight, gc_tensor bias); +raw_tensor atg_fbgemm_linear_fp16_weight_fp32_activation(gc_tensor input, gc_tensor packed_weight, gc_tensor bias); +raw_tensor atg_fbgemm_linear_int8_weight(gc_tensor input, gc_tensor weight, gc_tensor packed, gc_tensor col_offsets, scalar weight_scale, scalar weight_zero_point, gc_tensor bias); +raw_tensor atg_fbgemm_linear_int8_weight_fp32_activation(gc_tensor input, gc_tensor weight, gc_tensor packed, gc_tensor col_offsets, scalar weight_scale, scalar weight_zero_point, gc_tensor bias); +raw_tensor atg_fbgemm_pack_gemm_matrix_fp16(gc_tensor input); +raw_tensor atg_fbgemm_pack_quantized_matrix(gc_tensor input); +raw_tensor atg_fbgemm_pack_quantized_matrix_kn(gc_tensor input, int64_t K, int64_t n); +raw_tensor atg_feature_alpha_dropout(gc_tensor input, double p, int train); +raw_tensor atg_feature_alpha_dropout_(gc_tensor self, double p, int train); +raw_tensor atg_feature_dropout(gc_tensor input, double p, int train); +raw_tensor atg_feature_dropout_(gc_tensor self, double p, int train); +raw_tensor atg_fft_fft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_fft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_fft2_out(gc_tensor out, gc_tensor self, 
int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_fft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_fftfreq(int64_t n, double d, int options_kind, int options_device); +raw_tensor atg_fft_fftfreq_out(gc_tensor out, int64_t n, double d); +raw_tensor atg_fft_fftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_fftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_fftshift(gc_tensor self, int64_t *dim_data, int dim_len); +raw_tensor atg_fft_hfft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_hfft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_hfft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_hfft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_hfftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_hfftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_ifft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_ifft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_ifft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_ifft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_ifftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_ifftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_ifftshift(gc_tensor self, int64_t *dim_data, int dim_len); +raw_tensor atg_fft_ihfft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_ihfft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_ihfft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_ihfft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_ihfftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_ihfftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_irfft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_irfft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_irfft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_irfft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_irfftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_irfftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t 
*dim_data, int dim_len, char * norm); +raw_tensor atg_fft_rfft(gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_rfft2(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_rfft2_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_rfft_out(gc_tensor out, gc_tensor self, int64_t n_v, int n_null, int64_t dim, char * norm); +raw_tensor atg_fft_rfftfreq(int64_t n, double d, int options_kind, int options_device); +raw_tensor atg_fft_rfftfreq_out(gc_tensor out, int64_t n, double d); +raw_tensor atg_fft_rfftn(gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fft_rfftn_out(gc_tensor out, gc_tensor self, int64_t *s_data, int s_len, int64_t *dim_data, int dim_len, char * norm); +raw_tensor atg_fill(gc_tensor self, scalar value); +raw_tensor atg_fill_(gc_tensor self, scalar value); +raw_tensor atg_fill_diagonal_(gc_tensor self, scalar fill_value, int wrap); +raw_tensor atg_fill_scalar_out(gc_tensor out, gc_tensor self, scalar value); +raw_tensor atg_fill_tensor(gc_tensor self, gc_tensor value); +raw_tensor atg_fill_tensor_(gc_tensor self, gc_tensor value); +raw_tensor atg_fill_tensor_out(gc_tensor out, gc_tensor self, gc_tensor value); +raw_tensor atg_fix(gc_tensor self); +raw_tensor atg_fix_(gc_tensor self); +raw_tensor atg_fix_out(gc_tensor out, gc_tensor self); +raw_tensor atg_flatten(gc_tensor self, int64_t start_dim, int64_t end_dim); +raw_tensor atg_flatten_dense_tensors(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_flip(gc_tensor self, int64_t *dims_data, int dims_len); +raw_tensor atg_flip_out(gc_tensor out, gc_tensor self, int64_t *dims_data, int dims_len); +raw_tensor atg_fliplr(gc_tensor self); +raw_tensor atg_flipud(gc_tensor self); +raw_tensor atg_float_power(gc_tensor self, gc_tensor exponent); +raw_tensor atg_float_power_(gc_tensor self, scalar exponent); +raw_tensor atg_float_power_scalar(scalar self, gc_tensor exponent); +raw_tensor atg_float_power_scalar_out(gc_tensor out, scalar self, gc_tensor exponent); +raw_tensor atg_float_power_tensor_(gc_tensor self, gc_tensor exponent); +raw_tensor atg_float_power_tensor_scalar(gc_tensor self, scalar exponent); +raw_tensor atg_float_power_tensor_scalar_out(gc_tensor out, gc_tensor self, scalar exponent); +raw_tensor atg_float_power_tensor_tensor_out(gc_tensor out, gc_tensor self, gc_tensor exponent); +raw_tensor atg_floor(gc_tensor self); +raw_tensor atg_floor_(gc_tensor self); +raw_tensor atg_floor_divide(gc_tensor self, gc_tensor other); +raw_tensor atg_floor_divide_(gc_tensor self, gc_tensor other); +raw_tensor atg_floor_divide_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_floor_divide_scalar(gc_tensor self, scalar other); +raw_tensor atg_floor_divide_scalar_(gc_tensor self, scalar other); +raw_tensor atg_floor_out(gc_tensor out, gc_tensor self); +raw_tensor atg_fmax(gc_tensor self, gc_tensor other); +raw_tensor atg_fmax_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_fmin(gc_tensor self, gc_tensor other); +raw_tensor atg_fmin_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_fmod(gc_tensor self, scalar other); +raw_tensor atg_fmod_(gc_tensor self, scalar other); +raw_tensor atg_fmod_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_fmod_tensor(gc_tensor self, gc_tensor other); +raw_tensor atg_fmod_tensor_(gc_tensor 
self, gc_tensor other); +raw_tensor atg_fmod_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_frac(gc_tensor self); +raw_tensor atg_frac_(gc_tensor self); +raw_tensor atg_frac_out(gc_tensor out, gc_tensor self); +void atg_fractional_max_pool2d(raw_tensor *, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor random_samples); +raw_tensor atg_fractional_max_pool2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor indices); +raw_tensor atg_fractional_max_pool2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor indices); +void atg_fractional_max_pool2d_output(raw_tensor *, gc_tensor output, gc_tensor indices, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor random_samples); +void atg_fractional_max_pool3d(raw_tensor *, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor random_samples); +raw_tensor atg_fractional_max_pool3d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor indices); +raw_tensor atg_fractional_max_pool3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor indices); +void atg_fractional_max_pool3d_output(raw_tensor *, gc_tensor output, gc_tensor indices, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *output_size_data, int output_size_len, gc_tensor random_samples); +void atg_frexp(raw_tensor *, gc_tensor self); +void atg_frexp_tensor_out(raw_tensor *, gc_tensor mantissa, gc_tensor exponent, gc_tensor self); +raw_tensor atg_frobenius_norm(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +raw_tensor atg_frobenius_norm_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +raw_tensor atg_from_file(char * filename, int shared, int64_t size_v, int size_null, int options_kind, int options_device); +raw_tensor atg_from_file_out(gc_tensor out, char * filename, int shared, int64_t size_v, int size_null); +raw_tensor atg_full(int64_t *size_data, int size_len, scalar fill_value, int options_kind, int options_device); +raw_tensor atg_full_like(gc_tensor self, scalar fill_value); +raw_tensor atg_full_like_out(gc_tensor out, gc_tensor self, scalar fill_value); +raw_tensor atg_full_out(gc_tensor out, int64_t *size_data, int size_len, scalar fill_value); +raw_tensor atg_fused_moving_avg_obs_fake_quant(gc_tensor self, gc_tensor observer_on, gc_tensor fake_quant_on, gc_tensor running_min, gc_tensor running_max, gc_tensor scale, gc_tensor zero_point, double averaging_const, int64_t quant_min, int64_t quant_max, int64_t ch_axis, int per_row_fake_quant, int symmetric_quant); +raw_tensor atg_gather(gc_tensor self, int64_t dim, gc_tensor index, int sparse_grad); +raw_tensor atg_gather_backward(gc_tensor grad, gc_tensor self, int64_t dim, gc_tensor index, int sparse_grad); +raw_tensor atg_gather_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, int sparse_grad); +raw_tensor atg_gcd(gc_tensor self, gc_tensor other); 
+raw_tensor atg_gcd_(gc_tensor self, gc_tensor other); +raw_tensor atg_gcd_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_ge(gc_tensor self, scalar other); +raw_tensor atg_ge_(gc_tensor self, scalar other); +raw_tensor atg_ge_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_ge_tensor(gc_tensor self, gc_tensor other); +raw_tensor atg_ge_tensor_(gc_tensor self, gc_tensor other); +raw_tensor atg_ge_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_gelu(gc_tensor self, char * approximate); +raw_tensor atg_gelu_(gc_tensor self, char * approximate); +raw_tensor atg_gelu_backward(gc_tensor grad_output, gc_tensor self, char * approximate); +raw_tensor atg_gelu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, char * approximate); +raw_tensor atg_gelu_out(gc_tensor out, gc_tensor self, char * approximate); +raw_tensor atg_geometric(gc_tensor self, double p); +raw_tensor atg_geometric_(gc_tensor self, double p); +raw_tensor atg_geometric_out(gc_tensor out, gc_tensor self, double p); +void atg_geqrf(raw_tensor *, gc_tensor self); +void atg_geqrf_a(raw_tensor *, gc_tensor a, gc_tensor tau, gc_tensor self); +raw_tensor atg_ger(gc_tensor self, gc_tensor vec2); +raw_tensor atg_ger_out(gc_tensor out, gc_tensor self, gc_tensor vec2); +raw_tensor atg_glu(gc_tensor self, int64_t dim); +raw_tensor atg_glu_backward(gc_tensor grad_output, gc_tensor self, int64_t dim); +raw_tensor atg_glu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t dim); +raw_tensor atg_glu_backward_jvp(gc_tensor grad_x, gc_tensor grad_glu, gc_tensor x, gc_tensor dgrad_glu, gc_tensor dx, int64_t dim); +raw_tensor atg_glu_backward_jvp_out(gc_tensor out, gc_tensor grad_x, gc_tensor grad_glu, gc_tensor x, gc_tensor dgrad_glu, gc_tensor dx, int64_t dim); +raw_tensor atg_glu_jvp(gc_tensor glu, gc_tensor x, gc_tensor dx, int64_t dim); +raw_tensor atg_glu_jvp_out(gc_tensor out, gc_tensor glu, gc_tensor x, gc_tensor dx, int64_t dim); +raw_tensor atg_glu_out(gc_tensor out, gc_tensor self, int64_t dim); +raw_tensor atg_grad(gc_tensor self); +raw_tensor atg_greater(gc_tensor self, scalar other); +raw_tensor atg_greater_(gc_tensor self, scalar other); +raw_tensor atg_greater_equal(gc_tensor self, scalar other); +raw_tensor atg_greater_equal_(gc_tensor self, scalar other); +raw_tensor atg_greater_equal_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_greater_equal_tensor(gc_tensor self, gc_tensor other); +raw_tensor atg_greater_equal_tensor_(gc_tensor self, gc_tensor other); +raw_tensor atg_greater_equal_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_greater_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_greater_tensor(gc_tensor self, gc_tensor other); +raw_tensor atg_greater_tensor_(gc_tensor self, gc_tensor other); +raw_tensor atg_greater_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_grid_sampler(gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners); +raw_tensor atg_grid_sampler_2d(gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners); +raw_tensor atg_grid_sampler_2d_out(gc_tensor out, gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners); +raw_tensor atg_grid_sampler_3d(gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int 
align_corners); +raw_tensor atg_grid_sampler_3d_out(gc_tensor out, gc_tensor input, gc_tensor grid, int64_t interpolation_mode, int64_t padding_mode, int align_corners); +raw_tensor atg_group_norm(gc_tensor input, int64_t num_groups, gc_tensor weight, gc_tensor bias, double eps, int cudnn_enabled); +void atg_gru(raw_tensor *, gc_tensor input, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first); +raw_tensor atg_gru_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh); +void atg_gru_data(raw_tensor *, gc_tensor data, gc_tensor batch_sizes, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional); +raw_tensor atg_gt(gc_tensor self, scalar other); +raw_tensor atg_gt_(gc_tensor self, scalar other); +raw_tensor atg_gt_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_gt_tensor(gc_tensor self, gc_tensor other); +raw_tensor atg_gt_tensor_(gc_tensor self, gc_tensor other); +raw_tensor atg_gt_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_hamming_window(int64_t window_length, int options_kind, int options_device); +raw_tensor atg_hamming_window_out(gc_tensor out, int64_t window_length); +raw_tensor atg_hamming_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device); +raw_tensor atg_hamming_window_periodic_alpha(int64_t window_length, int periodic, double alpha, int options_kind, int options_device); +raw_tensor atg_hamming_window_periodic_alpha_beta(int64_t window_length, int periodic, double alpha, double beta, int options_kind, int options_device); +raw_tensor atg_hamming_window_periodic_alpha_beta_out(gc_tensor out, int64_t window_length, int periodic, double alpha, double beta); +raw_tensor atg_hamming_window_periodic_alpha_out(gc_tensor out, int64_t window_length, int periodic, double alpha); +raw_tensor atg_hamming_window_periodic_out(gc_tensor out, int64_t window_length, int periodic); +raw_tensor atg_hann_window(int64_t window_length, int options_kind, int options_device); +raw_tensor atg_hann_window_out(gc_tensor out, int64_t window_length); +raw_tensor atg_hann_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device); +raw_tensor atg_hann_window_periodic_out(gc_tensor out, int64_t window_length, int periodic); +raw_tensor atg_hardshrink(gc_tensor self); +raw_tensor atg_hardshrink_backward(gc_tensor grad_out, gc_tensor self, scalar lambd); +raw_tensor atg_hardshrink_backward_grad_input(gc_tensor grad_input, gc_tensor grad_out, gc_tensor self, scalar lambd); +raw_tensor atg_hardshrink_out(gc_tensor out, gc_tensor self); +raw_tensor atg_hardsigmoid(gc_tensor self); +raw_tensor atg_hardsigmoid_(gc_tensor self); +raw_tensor atg_hardsigmoid_backward(gc_tensor grad_output, gc_tensor self); +raw_tensor atg_hardsigmoid_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self); +raw_tensor atg_hardsigmoid_out(gc_tensor out, gc_tensor self); +raw_tensor atg_hardswish(gc_tensor self); +raw_tensor atg_hardswish_(gc_tensor self); +raw_tensor atg_hardswish_backward(gc_tensor grad_output, gc_tensor self); +raw_tensor atg_hardswish_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self); +raw_tensor atg_hardswish_out(gc_tensor out, gc_tensor self); +raw_tensor atg_hardtanh(gc_tensor self); +raw_tensor atg_hardtanh_(gc_tensor self); 
+raw_tensor atg_hardtanh_backward(gc_tensor grad_output, gc_tensor self, scalar min_val, scalar max_val); +raw_tensor atg_hardtanh_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, scalar min_val, scalar max_val); +raw_tensor atg_hardtanh_out(gc_tensor out, gc_tensor self); +raw_tensor atg_heaviside(gc_tensor self, gc_tensor values); +raw_tensor atg_heaviside_(gc_tensor self, gc_tensor values); +raw_tensor atg_heaviside_out(gc_tensor out, gc_tensor self, gc_tensor values); +raw_tensor atg_hinge_embedding_loss(gc_tensor self, gc_tensor target, double margin, int64_t reduction); +raw_tensor atg_histc(gc_tensor self, int64_t bins); +raw_tensor atg_histc_out(gc_tensor out, gc_tensor self, int64_t bins); +raw_tensor *atg_hsplit(gc_tensor self, int64_t sections); +raw_tensor *atg_hsplit_array(gc_tensor self, int64_t *indices_data, int indices_len); +raw_tensor atg_hspmm(gc_tensor mat1, gc_tensor mat2); +raw_tensor atg_hspmm_out(gc_tensor out, gc_tensor mat1, gc_tensor mat2); +raw_tensor atg_hstack(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_hstack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_huber_loss(gc_tensor self, gc_tensor target, int64_t reduction, double delta); +raw_tensor atg_huber_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, double delta); +raw_tensor atg_huber_loss_backward_out(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, double delta); +raw_tensor atg_huber_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction, double delta); +raw_tensor atg_hypot(gc_tensor self, gc_tensor other); +raw_tensor atg_hypot_(gc_tensor self, gc_tensor other); +raw_tensor atg_hypot_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_i0(gc_tensor self); +raw_tensor atg_i0_(gc_tensor self); +raw_tensor atg_i0_out(gc_tensor out, gc_tensor self); +raw_tensor atg_igamma(gc_tensor self, gc_tensor other); +raw_tensor atg_igamma_(gc_tensor self, gc_tensor other); +raw_tensor atg_igamma_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_igammac(gc_tensor self, gc_tensor other); +raw_tensor atg_igammac_(gc_tensor self, gc_tensor other); +raw_tensor atg_igammac_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_im2col(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len); +raw_tensor atg_im2col_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *dilation_data, int dilation_len, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len); +raw_tensor atg_imag(gc_tensor self); +raw_tensor atg_index(gc_tensor self, gc_tensor *indices_data, int indices_len); +raw_tensor atg_index_add(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source); +raw_tensor atg_index_add_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source); +raw_tensor atg_index_add_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source); +raw_tensor atg_index_copy(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source); +raw_tensor atg_index_copy_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source); +raw_tensor atg_index_copy_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source); +raw_tensor atg_index_fill(gc_tensor self, int64_t dim, 
gc_tensor index, scalar value); +raw_tensor atg_index_fill_(gc_tensor self, int64_t dim, gc_tensor index, scalar value); +raw_tensor atg_index_fill_int_scalar_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, scalar value); +raw_tensor atg_index_fill_int_tensor(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor value); +raw_tensor atg_index_fill_int_tensor_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor value); +raw_tensor atg_index_fill_int_tensor_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor value); +raw_tensor atg_index_put(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate); +raw_tensor atg_index_put_(gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate); +raw_tensor atg_index_put_out(gc_tensor out, gc_tensor self, gc_tensor *indices_data, int indices_len, gc_tensor values, int accumulate); +raw_tensor atg_index_reduce(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source, char * reduce, int include_self); +raw_tensor atg_index_reduce_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source, char * reduce, int include_self); +raw_tensor atg_index_reduce_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor source, char * reduce, int include_self); +raw_tensor atg_index_select(gc_tensor self, int64_t dim, gc_tensor index); +raw_tensor atg_index_select_backward(gc_tensor grad, int64_t *self_sizes_data, int self_sizes_len, int64_t dim, gc_tensor index); +raw_tensor atg_index_select_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index); +raw_tensor atg_index_tensor_out(gc_tensor out, gc_tensor self, gc_tensor *indices_data, int indices_len); +raw_tensor atg_indices(gc_tensor self); +raw_tensor atg_indices_copy(gc_tensor self); +raw_tensor atg_indices_copy_out(gc_tensor out, gc_tensor self); +raw_tensor atg_infinitely_differentiable_gelu_backward(gc_tensor grad, gc_tensor self); +raw_tensor atg_inner(gc_tensor self, gc_tensor other); +raw_tensor atg_inner_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_instance_norm(gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int use_input_stats, double momentum, double eps, int cudnn_enabled); +raw_tensor atg_int_repr(gc_tensor self); +raw_tensor atg_int_repr_out(gc_tensor out, gc_tensor self); +raw_tensor atg_inverse(gc_tensor self); +raw_tensor atg_inverse_out(gc_tensor out, gc_tensor self); +int atg_is_coalesced(gc_tensor self); +int atg_is_complex(gc_tensor self); +int atg_is_conj(gc_tensor self); +int atg_is_distributed(gc_tensor self); +int atg_is_floating_point(gc_tensor self); +int atg_is_inference(gc_tensor self); +int atg_is_leaf(gc_tensor self); +int atg_is_neg(gc_tensor self); +int atg_is_nonzero(gc_tensor self); +int atg_is_pinned(gc_tensor self, int device); +int atg_is_same_size(gc_tensor self, gc_tensor other); +int atg_is_set_to(gc_tensor self, gc_tensor tensor); +int atg_is_signed(gc_tensor self); int atg_is_vulkan_available(); -void atg_isclose(tensor *, tensor self, tensor other, double rtol, double atol, int equal_nan); -void atg_isfinite(tensor *, tensor self); -void atg_isin(tensor *, tensor elements, tensor test_elements, int assume_unique, int invert); -void atg_isin_scalar_tensor(tensor *, scalar element, tensor test_elements, int assume_unique, int invert); -void atg_isin_scalar_tensor_out(tensor *, tensor out, scalar element, tensor test_elements, int assume_unique, int 
invert);
-void atg_isin_tensor_scalar(tensor *, tensor elements, scalar test_element, int assume_unique, int invert);
-void atg_isin_tensor_scalar_out(tensor *, tensor out, tensor elements, scalar test_element, int assume_unique, int invert);
-void atg_isin_tensor_tensor_out(tensor *, tensor out, tensor elements, tensor test_elements, int assume_unique, int invert);
-void atg_isinf(tensor *, tensor self);
-void atg_isinf_out(tensor *, tensor out, tensor self);
-void atg_isnan(tensor *, tensor self);
-void atg_isnan_out(tensor *, tensor out, tensor self);
-void atg_isneginf(tensor *, tensor self);
-void atg_isneginf_out(tensor *, tensor out, tensor self);
-void atg_isposinf(tensor *, tensor self);
-void atg_isposinf_out(tensor *, tensor out, tensor self);
-void atg_isreal(tensor *, tensor self);
-void atg_istft(tensor *, tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, tensor window, int center, int normalized, int onesided, int64_t length_v, int length_null, int return_complex);
-void atg_kaiser_window(tensor *, int64_t window_length, int options_kind, int options_device);
-void atg_kaiser_window_beta(tensor *, int64_t window_length, int periodic, double beta, int options_kind, int options_device);
-void atg_kaiser_window_beta_out(tensor *, tensor out, int64_t window_length, int periodic, double beta);
-void atg_kaiser_window_out(tensor *, tensor out, int64_t window_length);
-void atg_kaiser_window_periodic(tensor *, int64_t window_length, int periodic, int options_kind, int options_device);
-void atg_kaiser_window_periodic_out(tensor *, tensor out, int64_t window_length, int periodic);
-void atg_kl_div(tensor *, tensor self, tensor target, int64_t reduction, int log_target);
-void atg_kron(tensor *, tensor self, tensor other);
-void atg_kron_out(tensor *, tensor out, tensor self, tensor other);
-void atg_kthvalue(tensor *, tensor self, int64_t k, int64_t dim, int keepdim);
-void atg_kthvalue_values(tensor *, tensor values, tensor indices, tensor self, int64_t k, int64_t dim, int keepdim);
-void atg_l1_loss(tensor *, tensor self, tensor target, int64_t reduction);
-void atg_layer_norm(tensor *, tensor input, int64_t *normalized_shape_data, int normalized_shape_len, tensor weight, tensor bias, double eps, int cudnn_enable);
-void atg_lcm(tensor *, tensor self, tensor other);
-void atg_lcm_(tensor *, tensor self, tensor other);
-void atg_lcm_out(tensor *, tensor out, tensor self, tensor other);
-void atg_ldexp(tensor *, tensor self, tensor other);
-void atg_ldexp_(tensor *, tensor self, tensor other);
-void atg_ldexp_out(tensor *, tensor out, tensor self, tensor other);
-void atg_le(tensor *, tensor self, scalar other);
-void atg_le_(tensor *, tensor self, scalar other);
-void atg_le_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_le_tensor(tensor *, tensor self, tensor other);
-void atg_le_tensor_(tensor *, tensor self, tensor other);
-void atg_le_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_leaky_relu(tensor *, tensor self);
-void atg_leaky_relu_(tensor *, tensor self);
-void atg_leaky_relu_backward(tensor *, tensor grad_output, tensor self, scalar negative_slope, int self_is_result);
-void atg_leaky_relu_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, scalar negative_slope, int self_is_result);
-void atg_leaky_relu_out(tensor *, tensor out, tensor self);
-void atg_lerp(tensor *, tensor self, tensor end, scalar weight);
-void atg_lerp_(tensor *, tensor self, tensor end, scalar weight);
-void atg_lerp_scalar_out(tensor *, tensor out, tensor self, tensor end, scalar weight);
-void atg_lerp_tensor(tensor *, tensor self, tensor end, tensor weight);
-void atg_lerp_tensor_(tensor *, tensor self, tensor end, tensor weight);
-void atg_lerp_tensor_out(tensor *, tensor out, tensor self, tensor end, tensor weight);
-void atg_less(tensor *, tensor self, scalar other);
-void atg_less_(tensor *, tensor self, scalar other);
-void atg_less_equal(tensor *, tensor self, scalar other);
-void atg_less_equal_(tensor *, tensor self, scalar other);
-void atg_less_equal_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_less_equal_tensor(tensor *, tensor self, tensor other);
-void atg_less_equal_tensor_(tensor *, tensor self, tensor other);
-void atg_less_equal_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_less_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_less_tensor(tensor *, tensor self, tensor other);
-void atg_less_tensor_(tensor *, tensor self, tensor other);
-void atg_less_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_lgamma(tensor *, tensor self);
-void atg_lgamma_(tensor *, tensor self);
-void atg_lgamma_out(tensor *, tensor out, tensor self);
-void atg_lift(tensor *, tensor self);
-void atg_lift_fresh(tensor *, tensor self);
-void atg_lift_fresh_copy(tensor *, tensor self);
-void atg_lift_fresh_copy_out(tensor *, tensor out, tensor self);
-void atg_lift_out(tensor *, tensor out, tensor self);
-void atg_linalg_cholesky(tensor *, tensor self, int upper);
-void atg_linalg_cholesky_ex(tensor *, tensor self, int upper, int check_errors);
-void atg_linalg_cholesky_ex_l(tensor *, tensor L, tensor info, tensor self, int upper, int check_errors);
-void atg_linalg_cholesky_out(tensor *, tensor out, tensor self, int upper);
-void atg_linalg_cond(tensor *, tensor self, scalar p);
-void atg_linalg_cond_out(tensor *, tensor out, tensor self, scalar p);
-void atg_linalg_cond_p_str(tensor *, tensor self, char * p);
-void atg_linalg_cond_p_str_out(tensor *, tensor out, tensor self, char * p);
-void atg_linalg_cross(tensor *, tensor self, tensor other, int64_t dim);
-void atg_linalg_cross_out(tensor *, tensor out, tensor self, tensor other, int64_t dim);
-void atg_linalg_det(tensor *, tensor A);
-void atg_linalg_det_out(tensor *, tensor out, tensor A);
-void atg_linalg_diagonal(tensor *, tensor A, int64_t offset, int64_t dim1, int64_t dim2);
-void atg_linalg_eig(tensor *, tensor self);
-void atg_linalg_eig_out(tensor *, tensor eigenvalues, tensor eigenvectors, tensor self);
-void atg_linalg_eigh(tensor *, tensor self, char * UPLO);
-void atg_linalg_eigh_eigvals(tensor *, tensor eigvals, tensor eigvecs, tensor self, char * UPLO);
-void atg_linalg_eigvals(tensor *, tensor self);
-void atg_linalg_eigvals_out(tensor *, tensor out, tensor self);
-void atg_linalg_eigvalsh(tensor *, tensor self, char * UPLO);
-void atg_linalg_eigvalsh_out(tensor *, tensor out, tensor self, char * UPLO);
-void atg_linalg_householder_product(tensor *, tensor input, tensor tau);
-void atg_linalg_householder_product_out(tensor *, tensor out, tensor input, tensor tau);
-void atg_linalg_inv(tensor *, tensor A);
-void atg_linalg_inv_ex(tensor *, tensor A, int check_errors);
-void atg_linalg_inv_ex_inverse(tensor *, tensor inverse, tensor info, tensor A, int check_errors);
-void atg_linalg_inv_out(tensor *, tensor out, tensor A);
-void atg_linalg_ldl_factor(tensor *, tensor self, int hermitian);
-void atg_linalg_ldl_factor_ex(tensor *, tensor self, int hermitian, int check_errors);
-void atg_linalg_ldl_factor_ex_out(tensor *, tensor LD, tensor pivots, tensor info, tensor self, int hermitian, int check_errors);
-void atg_linalg_ldl_factor_out(tensor *, tensor LD, tensor pivots, tensor self, int hermitian);
-void atg_linalg_ldl_solve(tensor *, tensor LD, tensor pivots, tensor B, int hermitian);
-void atg_linalg_ldl_solve_out(tensor *, tensor out, tensor LD, tensor pivots, tensor B, int hermitian);
-void atg_linalg_lstsq(tensor *, tensor self, tensor b, double rcond_v, int rcond_null, char * driver);
-void atg_linalg_lstsq_out(tensor *, tensor solution, tensor residuals, tensor rank, tensor singular_values, tensor self, tensor b, double rcond_v, int rcond_null, char * driver);
-void atg_linalg_lu(tensor *, tensor A, int pivot);
-void atg_linalg_lu_factor(tensor *, tensor A, int pivot);
-void atg_linalg_lu_factor_ex(tensor *, tensor A, int pivot, int check_errors);
-void atg_linalg_lu_factor_ex_out(tensor *, tensor LU, tensor pivots, tensor info, tensor A, int pivot, int check_errors);
-void atg_linalg_lu_factor_out(tensor *, tensor LU, tensor pivots, tensor A, int pivot);
-void atg_linalg_lu_out(tensor *, tensor P, tensor L, tensor U, tensor A, int pivot);
-void atg_linalg_lu_solve(tensor *, tensor LU, tensor pivots, tensor B, int left, int adjoint);
-void atg_linalg_lu_solve_out(tensor *, tensor out, tensor LU, tensor pivots, tensor B, int left, int adjoint);
-void atg_linalg_matmul(tensor *, tensor self, tensor other);
-void atg_linalg_matmul_out(tensor *, tensor out, tensor self, tensor other);
-void atg_linalg_matrix_exp(tensor *, tensor self);
-void atg_linalg_matrix_exp_out(tensor *, tensor out, tensor self);
-void atg_linalg_matrix_power(tensor *, tensor self, int64_t n);
-void atg_linalg_matrix_power_out(tensor *, tensor out, tensor self, int64_t n);
-void atg_linalg_matrix_rank(tensor *, tensor self, double tol, int hermitian);
-void atg_linalg_matrix_rank_atol_rtol_float(tensor *, tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian);
-void atg_linalg_matrix_rank_atol_rtol_float_out(tensor *, tensor out, tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian);
-void atg_linalg_matrix_rank_atol_rtol_tensor(tensor *, tensor input, tensor atol, tensor rtol, int hermitian);
-void atg_linalg_matrix_rank_atol_rtol_tensor_out(tensor *, tensor out, tensor input, tensor atol, tensor rtol, int hermitian);
-void atg_linalg_matrix_rank_out(tensor *, tensor out, tensor self, double tol, int hermitian);
-void atg_linalg_matrix_rank_out_tol_tensor(tensor *, tensor out, tensor input, tensor tol, int hermitian);
-void atg_linalg_matrix_rank_tol_tensor(tensor *, tensor input, tensor tol, int hermitian);
-void atg_linalg_multi_dot(tensor *, tensor *tensors_data, int tensors_len);
-void atg_linalg_multi_dot_out(tensor *, tensor out, tensor *tensors_data, int tensors_len);
-void atg_linalg_pinv(tensor *, tensor self, double rcond, int hermitian);
-void atg_linalg_pinv_atol_rtol_float(tensor *, tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian);
-void atg_linalg_pinv_atol_rtol_float_out(tensor *, tensor out, tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian);
-void atg_linalg_pinv_atol_rtol_tensor(tensor *, tensor self, tensor atol, tensor rtol, int hermitian);
-void atg_linalg_pinv_atol_rtol_tensor_out(tensor *, tensor out, tensor self, tensor atol, tensor rtol, int hermitian);
-void atg_linalg_pinv_out(tensor *, tensor out, tensor self, double rcond, int hermitian);
-void atg_linalg_pinv_out_rcond_tensor(tensor *, tensor out, tensor self, tensor rcond, int hermitian);
-void atg_linalg_pinv_rcond_tensor(tensor *, tensor self, tensor rcond, int hermitian);
-void atg_linalg_qr(tensor *, tensor A, char * mode);
-void atg_linalg_qr_out(tensor *, tensor Q, tensor R, tensor A, char * mode);
-void atg_linalg_slogdet(tensor *, tensor A);
-void atg_linalg_slogdet_out(tensor *, tensor sign, tensor logabsdet, tensor A);
-void atg_linalg_solve(tensor *, tensor A, tensor B, int left);
-void atg_linalg_solve_ex(tensor *, tensor A, tensor B, int left, int check_errors);
-void atg_linalg_solve_ex_out(tensor *, tensor result, tensor info, tensor A, tensor B, int left, int check_errors);
-void atg_linalg_solve_out(tensor *, tensor out, tensor A, tensor B, int left);
-void atg_linalg_solve_triangular(tensor *, tensor self, tensor B, int upper, int left, int unitriangular);
-void atg_linalg_solve_triangular_out(tensor *, tensor out, tensor self, tensor B, int upper, int left, int unitriangular);
-void atg_linalg_svd(tensor *, tensor A, int full_matrices, char * driver);
-void atg_linalg_svd_u(tensor *, tensor U, tensor S, tensor Vh, tensor A, int full_matrices, char * driver);
-void atg_linalg_svdvals(tensor *, tensor A, char * driver);
-void atg_linalg_svdvals_out(tensor *, tensor out, tensor A, char * driver);
-void atg_linalg_tensorinv(tensor *, tensor self, int64_t ind);
-void atg_linalg_tensorinv_out(tensor *, tensor out, tensor self, int64_t ind);
-void atg_linalg_tensorsolve(tensor *, tensor self, tensor other, int64_t *dims_data, int dims_len);
-void atg_linalg_tensorsolve_out(tensor *, tensor out, tensor self, tensor other, int64_t *dims_data, int dims_len);
-void atg_linalg_vander(tensor *, tensor x, int64_t n_v, int n_null);
-void atg_linalg_vecdot(tensor *, tensor x, tensor y, int64_t dim);
-void atg_linalg_vecdot_out(tensor *, tensor out, tensor x, tensor y, int64_t dim);
-void atg_linear(tensor *, tensor input, tensor weight, tensor bias);
-void atg_linear_out(tensor *, tensor out, tensor input, tensor weight, tensor bias);
-void atg_linspace(tensor *, scalar start, scalar end, int64_t steps, int options_kind, int options_device);
-void atg_linspace_out(tensor *, tensor out, scalar start, scalar end, int64_t steps);
-void atg_log(tensor *, tensor self);
-void atg_log10(tensor *, tensor self);
-void atg_log10_(tensor *, tensor self);
-void atg_log10_out(tensor *, tensor out, tensor self);
-void atg_log1p(tensor *, tensor self);
-void atg_log1p_(tensor *, tensor self);
-void atg_log1p_out(tensor *, tensor out, tensor self);
-void atg_log2(tensor *, tensor self);
-void atg_log2_(tensor *, tensor self);
-void atg_log2_out(tensor *, tensor out, tensor self);
-void atg_log_(tensor *, tensor self);
-void atg_log_normal(tensor *, tensor self, double mean, double std);
-void atg_log_normal_(tensor *, tensor self, double mean, double std);
-void atg_log_normal_out(tensor *, tensor out, tensor self, double mean, double std);
-void atg_log_out(tensor *, tensor out, tensor self);
-void atg_log_sigmoid(tensor *, tensor self);
-void atg_log_sigmoid_backward(tensor *, tensor grad_output, tensor self, tensor buffer);
-void atg_log_sigmoid_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor buffer);
-void atg_log_sigmoid_out(tensor *, tensor out, tensor self);
-void atg_log_softmax(tensor *, tensor self, int64_t dim, int dtype);
-void atg_log_softmax_int_out(tensor *, tensor out, tensor self, int64_t dim, int dtype);
-void atg_logaddexp(tensor *, tensor self, tensor other);
-void atg_logaddexp2(tensor *, tensor self, tensor other);
-void atg_logaddexp2_out(tensor *, tensor out, tensor self, tensor other);
-void atg_logaddexp_out(tensor *, tensor out, tensor self, tensor other);
-void atg_logcumsumexp(tensor *, tensor self, int64_t dim);
-void atg_logcumsumexp_out(tensor *, tensor out, tensor self, int64_t dim);
-void atg_logdet(tensor *, tensor self);
-void atg_logical_and(tensor *, tensor self, tensor other);
-void atg_logical_and_(tensor *, tensor self, tensor other);
-void atg_logical_and_out(tensor *, tensor out, tensor self, tensor other);
-void atg_logical_not(tensor *, tensor self);
-void atg_logical_not_(tensor *, tensor self);
-void atg_logical_not_out(tensor *, tensor out, tensor self);
-void atg_logical_or(tensor *, tensor self, tensor other);
-void atg_logical_or_(tensor *, tensor self, tensor other);
-void atg_logical_or_out(tensor *, tensor out, tensor self, tensor other);
-void atg_logical_xor(tensor *, tensor self, tensor other);
-void atg_logical_xor_(tensor *, tensor self, tensor other);
-void atg_logical_xor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_logit(tensor *, tensor self, double eps_v, int eps_null);
-void atg_logit_(tensor *, tensor self, double eps_v, int eps_null);
-void atg_logit_backward(tensor *, tensor grad_output, tensor self, double eps_v, int eps_null);
-void atg_logit_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, double eps_v, int eps_null);
-void atg_logit_out(tensor *, tensor out, tensor self, double eps_v, int eps_null);
-void atg_logspace(tensor *, scalar start, scalar end, int64_t steps, double base, int options_kind, int options_device);
-void atg_logspace_out(tensor *, tensor out, scalar start, scalar end, int64_t steps, double base);
-void atg_logsumexp(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim);
-void atg_logsumexp_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim);
-void atg_lstm(tensor *, tensor input, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first);
-void atg_lstm_cell(tensor *, tensor input, tensor *hx_data, int hx_len, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh);
-void atg_lstm_data(tensor *, tensor data, tensor batch_sizes, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional);
-void atg_lstm_mps_backward(tensor out0, tensor *out1_data, int out1_len, tensor *out2_data, int out2_len, tensor grad_y, tensor grad_hy, tensor grad_cy, tensor z_state, tensor cell_state_fwd, tensor input, tensor layersOutputs, tensor *hx_data, int hx_len, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first);
-void atg_lt(tensor *, tensor self, scalar other);
-void atg_lt_(tensor *, tensor self, scalar other);
-void atg_lt_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_lt_tensor(tensor *, tensor self, tensor other);
-void atg_lt_tensor_(tensor *, tensor self, tensor other);
-void atg_lt_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_lu_solve(tensor *, tensor self, tensor LU_data, tensor LU_pivots);
-void atg_lu_solve_out(tensor *, tensor out, tensor self, tensor LU_data, tensor LU_pivots);
-void atg_lu_unpack(tensor *, tensor LU_data, tensor LU_pivots, int unpack_data, int unpack_pivots);
-void atg_lu_unpack_out(tensor *, tensor P, tensor L, tensor U, tensor LU_data, tensor LU_pivots, int unpack_data, int unpack_pivots);
-void atg_margin_ranking_loss(tensor *, tensor input1, tensor input2, tensor target, double margin, int64_t reduction);
-void atg_masked_fill(tensor *, tensor self, tensor mask, scalar value);
-void atg_masked_fill_(tensor *, tensor self, tensor mask, scalar value);
-void atg_masked_fill_scalar_out(tensor *, tensor out, tensor self, tensor mask, scalar value);
-void atg_masked_fill_tensor(tensor *, tensor self, tensor mask, tensor value);
-void atg_masked_fill_tensor_(tensor *, tensor self, tensor mask, tensor value);
-void atg_masked_fill_tensor_out(tensor *, tensor out, tensor self, tensor mask, tensor value);
-void atg_masked_scatter(tensor *, tensor self, tensor mask, tensor source);
-void atg_masked_scatter_(tensor *, tensor self, tensor mask, tensor source);
-void atg_masked_scatter_out(tensor *, tensor out, tensor self, tensor mask, tensor source);
-void atg_masked_select(tensor *, tensor self, tensor mask);
-void atg_masked_select_backward(tensor *, tensor grad, tensor input, tensor mask);
-void atg_masked_select_out(tensor *, tensor out, tensor self, tensor mask);
-void atg_matmul(tensor *, tensor self, tensor other);
-void atg_matmul_out(tensor *, tensor out, tensor self, tensor other);
-void atg_matrix_exp(tensor *, tensor self);
-void atg_matrix_exp_backward(tensor *, tensor self, tensor grad);
-void atg_matrix_h(tensor *, tensor self);
-void atg_matrix_power(tensor *, tensor self, int64_t n);
-void atg_matrix_power_out(tensor *, tensor out, tensor self, int64_t n);
-void atg_max(tensor *, tensor self);
-void atg_max_dim(tensor *, tensor self, int64_t dim, int keepdim);
-void atg_max_dim_max(tensor *, tensor max, tensor max_values, tensor self, int64_t dim, int keepdim);
-void atg_max_other(tensor *, tensor self, tensor other);
-void atg_max_out(tensor *, tensor out, tensor self, tensor other);
-void atg_max_pool1d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_pool1d_with_indices(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_pool2d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_pool2d_backward(tensor *, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_pool2d_backward_out(tensor *, tensor out, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_pool2d_with_indices(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_pool2d_with_indices_backward(tensor *, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, tensor indices);
-void atg_max_pool2d_with_indices_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, tensor indices);
-void atg_max_pool2d_with_indices_out(tensor *, tensor out, tensor indices, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_pool3d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_pool3d_with_indices(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_pool3d_with_indices_backward(tensor *, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, tensor indices);
-void atg_max_pool3d_with_indices_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, tensor indices);
-void atg_max_pool3d_with_indices_out(tensor *, tensor out, tensor indices, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_max_unary_out(tensor *, tensor out, tensor self);
-void atg_max_unpool2d(tensor *, tensor self, tensor indices, int64_t *output_size_data, int output_size_len);
-void atg_max_unpool2d_out(tensor *, tensor out, tensor self, tensor indices, int64_t *output_size_data, int output_size_len);
-void atg_max_unpool3d(tensor *, tensor self, tensor indices, int64_t *output_size_data, int output_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len);
-void atg_max_unpool3d_out(tensor *, tensor out, tensor self, tensor indices, int64_t *output_size_data, int output_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len);
-void atg_maximum(tensor *, tensor self, tensor other);
-void atg_maximum_out(tensor *, tensor out, tensor self, tensor other);
-void atg_mean(tensor *, tensor self, int dtype);
-void atg_mean_dim(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_mean_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_median(tensor *, tensor self);
-void atg_median_dim(tensor *, tensor self, int64_t dim, int keepdim);
-void atg_median_dim_values(tensor *, tensor values, tensor indices, tensor self, int64_t dim, int keepdim);
-void atg_median_out(tensor *, tensor out, tensor self);
-tensor *atg_meshgrid(tensor *tensors_data, int tensors_len);
-tensor *atg_meshgrid_indexing(tensor *tensors_data, int tensors_len, char * indexing);
-void atg_mh(tensor *, tensor self);
-void atg_min(tensor *, tensor self);
-void atg_min_dim(tensor *, tensor self, int64_t dim, int keepdim);
-void atg_min_dim_min(tensor *, tensor min, tensor min_indices, tensor self, int64_t dim, int keepdim);
-void atg_min_other(tensor *, tensor self, tensor other);
-void atg_min_out(tensor *, tensor out, tensor self, tensor other);
-void atg_min_unary_out(tensor *, tensor out, tensor self);
-void atg_minimum(tensor *, tensor self, tensor other);
-void atg_minimum_out(tensor *, tensor out, tensor self, tensor other);
-void atg_miopen_batch_norm(tensor *, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double exponential_average_factor, double epsilon);
-void atg_miopen_batch_norm_backward(tensor *, tensor input, tensor grad_output, tensor weight, tensor running_mean, tensor running_var, tensor save_mean, tensor save_var, double epsilon);
-void atg_miopen_batch_norm_backward_out(tensor *, tensor out0, tensor out1, tensor out2, tensor input, tensor grad_output, tensor weight, tensor running_mean, tensor running_var, tensor save_mean, tensor save_var, double epsilon);
-void atg_miopen_batch_norm_out(tensor *, tensor out0, tensor out1, tensor out2, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double exponential_average_factor, double epsilon);
-void atg_miopen_convolution(tensor *, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
-void atg_miopen_convolution_add_relu(tensor *, tensor self, tensor weight, tensor z, scalar alpha, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_miopen_convolution_out(tensor *, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
-void atg_miopen_convolution_relu(tensor *, tensor self, tensor weight, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_miopen_convolution_transpose(tensor *, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
-void atg_miopen_convolution_transpose_out(tensor *, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
-void atg_miopen_depthwise_convolution(tensor *, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
-void atg_miopen_depthwise_convolution_out(tensor *, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
-void atg_miopen_rnn(tensor *, tensor input, tensor *weight_data, int weight_len, int64_t weight_stride0, tensor hx, tensor cx, int64_t mode, int64_t hidden_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, tensor dropout_state);
-void atg_miopen_rnn_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor input, tensor *weight_data, int weight_len, int64_t weight_stride0, tensor hx, tensor cx, int64_t mode, int64_t hidden_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, tensor dropout_state);
-void atg_mish(tensor *, tensor self);
-void atg_mish_(tensor *, tensor self);
-void atg_mish_backward(tensor *, tensor grad_output, tensor self);
-void atg_mish_out(tensor *, tensor out, tensor self);
-void atg_mkldnn_adaptive_avg_pool2d(tensor *, tensor self, int64_t *output_size_data, int output_size_len);
-void atg_mkldnn_adaptive_avg_pool2d_backward(tensor *, tensor grad_output, tensor self);
-void atg_mkldnn_adaptive_avg_pool2d_backward_out(tensor *, tensor out, tensor grad_output, tensor self);
-void atg_mkldnn_adaptive_avg_pool2d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len);
-void atg_mkldnn_convolution(tensor *, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_mkldnn_convolution_out(tensor *, tensor out, tensor self, tensor weight, tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_mkldnn_linear(tensor *, tensor self, tensor weight, tensor bias);
-void atg_mkldnn_linear_backward_input(tensor *, int64_t *input_size_data, int input_size_len, tensor grad_output, tensor weight);
-void atg_mkldnn_linear_backward_input_out(tensor *, tensor out, int64_t *input_size_data, int input_size_len, tensor grad_output, tensor weight);
-void atg_mkldnn_linear_backward_weights(tensor *, tensor grad_output, tensor input, tensor weight, int bias_defined);
-void atg_mkldnn_linear_backward_weights_out(tensor *, tensor out0, tensor out1, tensor grad_output, tensor input, tensor weight, int bias_defined);
-void atg_mkldnn_linear_out(tensor *, tensor out, tensor self, tensor weight, tensor bias);
-void atg_mkldnn_max_pool2d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_mkldnn_max_pool2d_backward(tensor *, tensor grad_output, tensor output, tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_mkldnn_max_pool2d_backward_out(tensor *, tensor out, tensor grad_output, tensor output, tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_mkldnn_max_pool2d_out(tensor *, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_mkldnn_max_pool3d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_mkldnn_max_pool3d_backward(tensor *, tensor grad_output, tensor output, tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_mkldnn_max_pool3d_backward_out(tensor *, tensor out, tensor grad_output, tensor output, tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_mkldnn_max_pool3d_out(tensor *, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_mkldnn_reorder_conv2d_weight(tensor *, tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int64_t *input_size_data, int input_size_len);
-void atg_mkldnn_reorder_conv2d_weight_out(tensor *, tensor out, tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int64_t *input_size_data, int input_size_len);
-void atg_mkldnn_reorder_conv3d_weight(tensor *, tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_mkldnn_reorder_conv3d_weight_out(tensor *, tensor out, tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
-void atg_mkldnn_rnn_layer(tensor *, tensor input, tensor weight0, tensor weight1, tensor weight2, tensor weight3, tensor hx_, tensor cx_, int reverse, int64_t *batch_sizes_data, int batch_sizes_len, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int bidirectional, int batch_first, int train);
-void atg_mkldnn_rnn_layer_backward(tensor *, tensor input, tensor weight1, tensor weight2, tensor weight3, tensor weight4, tensor hx_, tensor cx_tmp, tensor output, tensor hy_, tensor cy_, tensor grad_output, tensor grad_hy, tensor grad_cy, int reverse, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, int batch_first, tensor workspace);
-void atg_mkldnn_rnn_layer_backward_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor out4, tensor out5, tensor out6, tensor input, tensor weight1, tensor weight2, tensor weight3, tensor weight4, tensor hx_, tensor cx_tmp, tensor output, tensor hy_, tensor cy_, tensor grad_output, tensor grad_hy, tensor grad_cy, int reverse, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, int batch_first, tensor workspace);
-void atg_mkldnn_rnn_layer_out(tensor *, tensor out0, tensor out1, tensor out2, tensor out3, tensor input, tensor weight0, tensor weight1, tensor weight2, tensor weight3, tensor hx_, tensor cx_, int reverse, int64_t *batch_sizes_data, int batch_sizes_len, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int bidirectional, int batch_first, int train);
-void atg_mm(tensor *, tensor self, tensor mat2);
-void atg_mm_out(tensor *, tensor out, tensor self, tensor mat2);
-void atg_mode(tensor *, tensor self, int64_t dim, int keepdim);
-void atg_mode_values(tensor *, tensor values, tensor indices, tensor self, int64_t dim, int keepdim);
-void atg_moveaxis(tensor *, tensor self, int64_t *source_data, int source_len, int64_t *destination_data, int destination_len);
-void atg_moveaxis_int(tensor *, tensor self, int64_t source, int64_t destination);
-void atg_movedim(tensor *, tensor self, int64_t *source_data, int source_len, int64_t *destination_data, int destination_len);
-void atg_movedim_int(tensor *, tensor self, int64_t source, int64_t destination);
-void atg_mse_loss(tensor *, tensor self, tensor target, int64_t reduction);
-void atg_mse_loss_backward(tensor *, tensor grad_output, tensor self, tensor target, int64_t reduction);
-void atg_mse_loss_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction);
-void atg_mse_loss_out(tensor *, tensor out, tensor self, tensor target, int64_t reduction);
-void atg_msort(tensor *, tensor self);
-void atg_msort_out(tensor *, tensor out, tensor self);
-void atg_mt(tensor *, tensor self);
-void atg_mul(tensor *, tensor self, tensor other);
-void atg_mul_(tensor *, tensor self, tensor other);
-void atg_mul_out(tensor *, tensor out, tensor self, tensor other);
-void atg_mul_scalar(tensor *, tensor self, scalar other);
-void atg_mul_scalar_(tensor *, tensor self, scalar other);
-void atg_mul_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_multi_margin_loss_backward(tensor *, tensor grad_output, tensor self, tensor target, scalar p, scalar margin, tensor weight, int64_t reduction);
-void atg_multi_margin_loss_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor target, scalar p, scalar margin, tensor weight, int64_t reduction);
-void atg_multilabel_margin_loss(tensor *, tensor self, tensor target, int64_t reduction);
-void atg_multilabel_margin_loss_backward(tensor *, tensor grad_output, tensor self, tensor target, int64_t reduction, tensor is_target);
-void atg_multilabel_margin_loss_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction, tensor is_target);
-void atg_multilabel_margin_loss_out(tensor *, tensor out, tensor self, tensor target, int64_t reduction);
-void atg_multinomial(tensor *, tensor self, int64_t num_samples, int replacement);
-void atg_multinomial_out(tensor *, tensor out, tensor self, int64_t num_samples, int replacement);
-void atg_multiply(tensor *, tensor self, tensor other);
-void atg_multiply_(tensor *, tensor self, tensor other);
-void atg_multiply_out(tensor *, tensor out, tensor self, tensor other);
-void atg_multiply_scalar(tensor *, tensor self, scalar other);
-void atg_multiply_scalar_(tensor *, tensor self, scalar other);
-void atg_mv(tensor *, tensor self, tensor vec);
-void atg_mv_out(tensor *, tensor out, tensor self, tensor vec);
-void atg_mvlgamma(tensor *, tensor self, int64_t p);
-void atg_mvlgamma_(tensor *, tensor self, int64_t p);
-void atg_mvlgamma_out(tensor *, tensor out, tensor self, int64_t p);
-void atg_nan_to_num(tensor *, tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null);
-void atg_nan_to_num_(tensor *, tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null);
-void atg_nan_to_num_out(tensor *, tensor out, tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null);
-void atg_nanmean(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_nanmean_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_nanmedian(tensor *, tensor self);
-void atg_nanmedian_dim(tensor *, tensor self, int64_t dim, int keepdim);
-void atg_nanmedian_dim_values(tensor *, tensor values, tensor indices, tensor self, int64_t dim, int keepdim);
-void atg_nanmedian_out(tensor *, tensor out, tensor self);
-void atg_nanquantile(tensor *, tensor self, tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
-void atg_nanquantile_out(tensor *, tensor out, tensor self, tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
-void atg_nanquantile_scalar(tensor *, tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
-void atg_nanquantile_scalar_out(tensor *, tensor out, tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
-void atg_nansum(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_nansum_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_narrow(tensor *, tensor self, int64_t dim, int64_t start, int64_t length);
-void atg_narrow_copy(tensor *, tensor self, int64_t dim, int64_t start, int64_t length);
-void atg_narrow_copy_out(tensor *, tensor out, tensor self, int64_t dim, int64_t start, int64_t length);
-void atg_narrow_tensor(tensor *, tensor self, int64_t dim, tensor start, int64_t length);
-void atg_native_batch_norm(tensor *, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps);
-void atg_native_batch_norm_out(tensor *, tensor out, tensor save_mean, tensor save_invstd, tensor input, tensor weight, tensor bias, tensor running_mean, tensor running_var, int training, double momentum, double eps);
-void atg_native_channel_shuffle(tensor *, tensor self, int64_t groups);
-void atg_native_dropout(tensor *, tensor input, double p, int train);
-void atg_native_dropout_backward(tensor *, tensor grad_output, tensor mask, double scale);
-void atg_native_dropout_backward_out(tensor *, tensor out, tensor grad_output, tensor mask, double scale);
-void atg_native_dropout_out(tensor *, tensor out0, tensor out1, tensor input, double p, int train);
-void atg_native_group_norm(tensor *, tensor input, tensor weight, tensor bias, int64_t n, int64_t C, int64_t HxW, int64_t group, double eps);
-void atg_native_group_norm_out(tensor *, tensor out0, tensor out1, tensor out2, tensor input, tensor weight, tensor bias, int64_t n, int64_t C, int64_t HxW, int64_t group, double eps);
-void atg_native_layer_norm(tensor *, tensor input, int64_t *normalized_shape_data, int normalized_shape_len, tensor weight, tensor bias, double eps);
-void atg_native_layer_norm_out(tensor *, tensor out0, tensor out1, tensor out2, tensor input, int64_t *normalized_shape_data, int normalized_shape_len, tensor weight, tensor bias, double eps);
-void atg_native_norm(tensor *, tensor self);
-void atg_native_norm_out(tensor *, tensor out, tensor self);
-void atg_native_norm_scalaropt_dim_dtype(tensor *, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_native_norm_scalaropt_dim_dtype_out(tensor *, tensor out, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_ne(tensor *, tensor self, scalar other);
-void atg_ne_(tensor *, tensor self, scalar other);
-void atg_ne_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_ne_tensor(tensor *, tensor self, tensor other);
-void atg_ne_tensor_(tensor *, tensor self, tensor other);
-void atg_ne_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_neg(tensor *, tensor self);
-void atg_neg_(tensor *, tensor self);
-void atg_neg_out(tensor *, tensor out, tensor self);
-void atg_negative(tensor *, tensor self);
-void atg_negative_(tensor *, tensor self);
-void atg_negative_out(tensor *, tensor out, tensor self);
-void atg_nested_to_padded_tensor(tensor *, tensor self, double padding, int64_t *output_size_data, int output_size_len);
-void atg_new_empty(tensor *, tensor self, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_new_empty_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len);
-void atg_new_empty_strided(tensor *, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int options_kind, int options_device);
-void atg_new_empty_strided_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len);
-void atg_new_full(tensor *, tensor self, int64_t *size_data, int size_len, scalar fill_value, int options_kind, int options_device);
-void atg_new_full_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len, scalar fill_value);
-void atg_new_ones(tensor *, tensor self, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_new_ones_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len);
-void atg_new_zeros(tensor *, tensor self, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_new_zeros_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len);
-void atg_nextafter(tensor *, tensor self, tensor other);
-void atg_nextafter_(tensor *, tensor self, tensor other);
-void atg_nextafter_out(tensor *, tensor out, tensor self, tensor other);
-void atg_nll_loss(tensor *, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index);
-void atg_nll_loss2d(tensor *, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index);
-void atg_nll_loss2d_backward(tensor *, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, tensor total_weight);
-void atg_nll_loss2d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, tensor total_weight);
-void atg_nll_loss2d_out(tensor *, tensor out, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index);
-void atg_nll_loss_backward(tensor *, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, tensor total_weight);
-void atg_nll_loss_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index, tensor total_weight);
-void atg_nll_loss_nd(tensor *, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index);
-void atg_nll_loss_out(tensor *, tensor out, tensor self, tensor target, tensor weight, int64_t reduction, int64_t ignore_index);
-void atg_nonzero(tensor *, tensor self);
-tensor *atg_nonzero_numpy(tensor self);
-void atg_nonzero_out(tensor *, tensor out, tensor self);
-void atg_nonzero_static(tensor *, tensor self, int64_t size, int64_t fill_value);
-void atg_nonzero_static_out(tensor *, tensor out, tensor self, int64_t size, int64_t fill_value);
-void atg_norm(tensor *, tensor self);
-void atg_norm_dtype_out(tensor *, tensor out, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_norm_except_dim(tensor *, tensor v, int64_t pow, int64_t dim);
-void atg_norm_out(tensor *, tensor out, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim);
-void atg_norm_scalar_out(tensor *, tensor out, tensor self);
-void atg_norm_scalaropt_dim(tensor *, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim);
-void atg_norm_scalaropt_dim_dtype(tensor *, tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_norm_scalaropt_dtype(tensor *, tensor self, scalar p, int dtype);
-void atg_norm_scalaropt_dtype_out(tensor *, tensor out, tensor self, scalar p, int dtype);
-void atg_normal_(tensor *, tensor self, double mean, double std);
-void atg_normal_functional(tensor *, tensor self, double mean, double std);
-void atg_not_equal(tensor *, tensor self, scalar other);
-void atg_not_equal_(tensor *, tensor self, scalar other);
-void atg_not_equal_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_not_equal_tensor(tensor *, tensor self, tensor other);
-void atg_not_equal_tensor_(tensor *, tensor self, tensor other);
-void atg_not_equal_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_nuclear_norm(tensor *, tensor self, int keepdim);
-void atg_nuclear_norm_dim(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim);
-void atg_nuclear_norm_dim_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim);
-void atg_nuclear_norm_out(tensor *, tensor out, tensor self, int keepdim);
-void atg_numpy_t(tensor *, tensor self);
-void atg_one_hot(tensor *, tensor self, int64_t num_classes);
-void atg_ones(tensor *, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_ones_like(tensor *, tensor self);
-void atg_ones_like_out(tensor *, tensor out, tensor self);
-void atg_ones_out(tensor *, tensor out, int64_t *size_data, int size_len);
-void atg_orgqr(tensor *, tensor self, tensor input2);
-void atg_orgqr_out(tensor *, tensor out, tensor self, tensor input2);
-void atg_ormqr(tensor *, tensor self, tensor input2, tensor input3, int left, int transpose);
-void atg_ormqr_out(tensor *, tensor out, tensor self, tensor input2, tensor input3, int left, int transpose);
-void atg_outer(tensor *, tensor self, tensor vec2);
-void atg_outer_out(tensor *, tensor out, tensor self, tensor vec2);
-int64_t atg_output_nr(tensor self);
-void atg_pad(tensor *, tensor self, int64_t *pad_data, int pad_len, char * mode, double value_v, int value_null);
-void atg_pad_sequence(tensor *, tensor *sequences_data, int sequences_len, int batch_first, double padding_value);
-void atg_pairwise_distance(tensor *, tensor x1, tensor x2, double p, double eps, int keepdim);
-void atg_pdist(tensor *, tensor self, double p);
-void atg_permute(tensor *, tensor self, int64_t *dims_data, int dims_len);
-void atg_permute_copy(tensor *, tensor self, int64_t *dims_data, int dims_len);
-void atg_permute_copy_out(tensor *, tensor out, tensor self, int64_t *dims_data, int dims_len);
-void atg_pin_memory(tensor *, tensor self, int device);
-void atg_pinverse(tensor *, tensor self, double rcond);
-void atg_pixel_shuffle(tensor *, tensor self, int64_t upscale_factor);
-void atg_pixel_shuffle_out(tensor *, tensor out, tensor self, int64_t upscale_factor);
-void atg_pixel_unshuffle(tensor *, tensor self, int64_t downscale_factor);
-void atg_pixel_unshuffle_out(tensor *, tensor out, tensor self, int64_t downscale_factor);
-void atg_poisson(tensor *, tensor self);
-void atg_poisson_nll_loss(tensor *, tensor input, tensor target, int log_input, int full, double eps, int64_t reduction);
-void atg_poisson_out(tensor *, tensor out, tensor self);
-void atg_polar(tensor *, tensor abs, tensor angle);
-void atg_polar_out(tensor *, tensor out, tensor abs, tensor angle);
-void atg_polygamma(tensor *, int64_t n, tensor self);
-void atg_polygamma_(tensor *, tensor self, int64_t n);
-void atg_polygamma_out(tensor *, tensor out, int64_t n, tensor self);
-void atg_positive(tensor *, tensor self);
-void atg_pow(tensor *, tensor self, tensor exponent);
-void atg_pow_(tensor *, tensor self, scalar exponent);
-void atg_pow_scalar(tensor *, scalar self, tensor exponent);
-void atg_pow_scalar_out(tensor *, tensor out, scalar self, tensor exponent);
-void atg_pow_tensor_(tensor *, tensor self, tensor exponent);
-void atg_pow_tensor_scalar(tensor *, tensor self, scalar exponent);
-void atg_pow_tensor_scalar_out(tensor *, tensor out, tensor self, scalar exponent);
-void atg_pow_tensor_tensor_out(tensor *, tensor out, tensor self, tensor exponent);
-void atg_prelu(tensor *, tensor self, tensor weight);
-void atg_prod(tensor *, tensor self, int dtype);
-void atg_prod_dim_int(tensor *, tensor self, int64_t dim, int keepdim, int dtype);
-void atg_prod_int_out(tensor *, tensor out, tensor self, int64_t dim, int keepdim, int dtype);
-void atg_prod_out(tensor *, tensor out, tensor self, int dtype);
-void atg_put(tensor *, tensor self, tensor index, tensor source, int accumulate);
-void atg_put_(tensor *, tensor self, tensor index, tensor source, int accumulate);
-void atg_put_out(tensor *, tensor out, tensor self, tensor index, tensor source, int accumulate);
-int64_t atg_q_per_channel_axis(tensor self);
-void atg_q_per_channel_scales(tensor *, tensor self);
-void atg_q_per_channel_scales_out(tensor *, tensor out, tensor self);
-void atg_q_per_channel_zero_points(tensor *, tensor self);
-void atg_q_per_channel_zero_points_out(tensor *, tensor out, tensor self);
-double atg_q_scale(tensor self);
-int64_t atg_q_zero_point(tensor self);
-void atg_qr(tensor *, tensor self, int some);
-void atg_qr_q(tensor *, tensor Q, tensor R, tensor self, int some);
-void atg_quantile(tensor *, tensor self, tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
-void atg_quantile_out(tensor *, tensor out, tensor self, tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
-void atg_quantile_scalar(tensor *, tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
-void atg_quantile_scalar_out(tensor *, tensor out, tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
-void atg_quantize_per_channel(tensor *, tensor self, tensor scales, tensor zero_points, int64_t axis, int dtype);
-void atg_quantize_per_channel_out(tensor *, tensor out, tensor self, tensor scales, tensor zero_points, int64_t axis, int dtype);
-void atg_quantize_per_tensor(tensor *, tensor self, double scale, int64_t zero_point, int dtype);
-void atg_quantize_per_tensor_dynamic(tensor *, tensor self, int dtype, int reduce_range);
-void atg_quantize_per_tensor_dynamic_out(tensor *, tensor out, tensor self, int dtype, int reduce_range);
-void atg_quantize_per_tensor_out(tensor *, tensor out, tensor self, double scale, int64_t zero_point, int dtype);
-void atg_quantize_per_tensor_tensor_qparams(tensor *, tensor self, tensor scale, tensor zero_point, int dtype);
-void atg_quantize_per_tensor_tensor_qparams_out(tensor *, tensor out, tensor self, tensor scale, tensor zero_point, int dtype);
-tensor *atg_quantize_per_tensor_tensors(tensor *tensors_data, int tensors_len, tensor scales, tensor zero_points, int dtype);
-void atg_quantize_per_tensor_tensors_out(tensor *out_data, int out_len, tensor *tensors_data, int tensors_len, tensor scales, tensor zero_points, int dtype);
-void atg_quantized_batch_norm(tensor *, tensor input, tensor weight, tensor bias, tensor mean, tensor var, double eps, double output_scale, int64_t output_zero_point);
-void atg_quantized_batch_norm_out(tensor *, tensor out, tensor input, tensor weight, tensor bias, tensor mean, tensor var, double eps, double output_scale, int64_t output_zero_point);
-void atg_quantized_gru_cell(tensor *, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh, tensor packed_ih, tensor packed_hh, tensor col_offsets_ih, tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh);
-void atg_quantized_lstm_cell(tensor *, tensor input, tensor *hx_data, int hx_len, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh, tensor packed_ih, tensor packed_hh, tensor col_offsets_ih, tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh);
-void atg_quantized_max_pool1d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_quantized_max_pool1d_out(tensor *, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_quantized_max_pool2d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_quantized_max_pool2d_out(tensor *, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_quantized_max_pool3d(tensor *, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_quantized_max_pool3d_out(tensor *, tensor out, tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
-void atg_quantized_rnn_relu_cell(tensor *, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh, tensor packed_ih, tensor packed_hh, tensor col_offsets_ih, tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh);
-void atg_quantized_rnn_tanh_cell(tensor *, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh, tensor packed_ih, tensor packed_hh, tensor col_offsets_ih, tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh);
-void atg_rad2deg(tensor *, tensor self);
-void atg_rad2deg_(tensor *, tensor self);
-void atg_rad2deg_out(tensor *, tensor out, tensor self);
-void atg_rand(tensor *, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_rand_like(tensor *, tensor self);
-void atg_rand_like_out(tensor *, tensor out, tensor self);
-void atg_rand_out(tensor *, tensor out, int64_t *size_data, int size_len);
-void atg_randint(tensor *, int64_t high, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_randint_like(tensor *, tensor self, int64_t high);
-void atg_randint_like_low_dtype(tensor *, tensor self, int64_t low, int64_t high);
-void atg_randint_like_low_dtype_out(tensor *, tensor out, tensor self, int64_t low, int64_t high);
-void atg_randint_like_out(tensor *, tensor out, tensor self, int64_t high);
-void atg_randint_low(tensor *, int64_t low, int64_t high, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_randint_low_out(tensor *, tensor out, int64_t low, int64_t high, int64_t *size_data, int size_len);
-void atg_randint_out(tensor *, tensor out, int64_t high, int64_t *size_data, int size_len);
-void atg_randn(tensor *, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_randn_like(tensor *, tensor self);
-void atg_randn_like_out(tensor *, tensor out, tensor self);
-void atg_randn_out(tensor *, tensor out, int64_t *size_data, int size_len);
-void atg_random(tensor *, tensor self);
-void atg_random_(tensor *, tensor self);
-void atg_random_from(tensor *, tensor self, int64_t from, int64_t to_v, int to_null);
-void atg_random_from_(tensor *, tensor self, int64_t from, int64_t to_v, int to_null);
-void atg_random_from_out(tensor *, tensor out, tensor self, int64_t from, int64_t to_v, int to_null);
-void atg_random_out(tensor *, tensor out, tensor self);
-void atg_random_to(tensor *, tensor self, int64_t to);
-void atg_random_to_(tensor *, tensor self, int64_t to);
-void atg_random_to_out(tensor *, tensor out, tensor self, int64_t to);
-void atg_randperm(tensor *, int64_t n, int options_kind, int options_device);
-void atg_randperm_out(tensor *, tensor out, int64_t n);
-void atg_range(tensor *, scalar start, scalar end, int options_kind, int options_device);
-void atg_range_out(tensor *, tensor out, scalar start, scalar end);
-void atg_range_out_(tensor *, tensor out, scalar start, scalar end);
-void atg_range_step(tensor *, scalar start, scalar end, int options_kind, int options_device);
-void atg_ravel(tensor *, tensor self);
-void atg_real(tensor *, tensor self);
-void atg_reciprocal(tensor *, tensor self);
-void atg_reciprocal_(tensor *, tensor self);
-void atg_reciprocal_out(tensor *, tensor out, tensor self);
-void atg_reflection_pad1d(tensor *, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad1d_backward(tensor *, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad1d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad1d_out(tensor *, tensor out, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad2d(tensor *, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad2d_backward(tensor *, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad2d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad2d_out(tensor *, tensor out, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad3d(tensor *, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad3d_backward(tensor *, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad3d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_reflection_pad3d_out(tensor *, tensor out, tensor self, int64_t *padding_data, int padding_len);
-void atg_relu(tensor *, tensor self);
-void atg_relu6(tensor *, tensor self);
-void atg_relu6_(tensor *, tensor self);
-void atg_relu_(tensor *, tensor self);
-void atg_relu_out(tensor *, tensor out, tensor self);
-void atg_remainder(tensor *, tensor self, scalar other);
-void atg_remainder_(tensor *, tensor self, scalar other);
-void atg_remainder_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_remainder_scalar_tensor(tensor *, scalar self, tensor other);
-void atg_remainder_scalar_tensor_out(tensor *, tensor out, scalar self, tensor other);
-void atg_remainder_tensor(tensor *, tensor self, tensor other);
-void atg_remainder_tensor_(tensor *, tensor self, tensor other);
-void atg_remainder_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_renorm(tensor *, tensor self, scalar p, int64_t dim, scalar maxnorm);
-void atg_renorm_(tensor *, tensor self, scalar p, int64_t dim, scalar maxnorm);
-void atg_renorm_out(tensor *, tensor out, tensor self, scalar p, int64_t dim, scalar maxnorm);
-void atg_repeat(tensor *, tensor self, int64_t *repeats_data, int repeats_len);
-void atg_repeat_interleave(tensor *, tensor repeats, int64_t output_size_v, int output_size_null);
-void atg_repeat_interleave_self_int(tensor *, tensor self, int64_t repeats, int64_t dim_v, int dim_null, int64_t output_size_v, int output_size_null);
-void atg_repeat_interleave_self_tensor(tensor *, tensor self, tensor repeats, int64_t dim_v, int dim_null, int64_t output_size_v, int output_size_null);
-void atg_repeat_interleave_tensor_out(tensor *, tensor out, tensor repeats, int64_t output_size_v, int output_size_null);
-void atg_repeat_out(tensor *, tensor out, tensor self, int64_t *repeats_data, int repeats_len);
-void atg_replication_pad1d(tensor *, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad1d_backward(tensor *, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad1d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad1d_out(tensor *, tensor out, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad2d(tensor *, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad2d_backward(tensor *, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad2d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad2d_out(tensor *, tensor out, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad3d(tensor *, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad3d_backward(tensor *, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad3d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, int64_t *padding_data, int padding_len);
-void atg_replication_pad3d_out(tensor *, tensor out, tensor self, int64_t *padding_data, int padding_len);
-void atg_requires_grad_(tensor *, tensor self, int requires_grad);
-void atg_reshape(tensor *, tensor self, int64_t *shape_data, int shape_len);
-void atg_reshape_as(tensor *, tensor self, tensor other);
-void atg_resize(tensor *, tensor self, int64_t *size_data, int size_len);
-void atg_resize_(tensor *, tensor self, int64_t *size_data, int size_len);
-void atg_resize_as(tensor *, tensor self, tensor the_template);
-void atg_resize_as_(tensor *, tensor self, tensor the_template);
-void atg_resize_as_out(tensor *, tensor out, tensor self, tensor the_template);
-void atg_resize_as_sparse(tensor *, tensor self, tensor the_template);
-void atg_resize_as_sparse_(tensor *, tensor self, tensor the_template);
-void atg_resize_as_sparse_out(tensor *, tensor out, tensor self, tensor the_template);
-void atg_resize_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len);
-void atg_resolve_conj(tensor *, tensor self);
-void atg_resolve_neg(tensor *, tensor self);
-int atg_retains_grad(tensor self);
-void atg_rnn_relu(tensor *, tensor input, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first);
-void atg_rnn_relu_cell(tensor *, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh);
-void atg_rnn_relu_data(tensor *, tensor data, tensor batch_sizes, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional);
-void atg_rnn_tanh(tensor *, tensor input, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first);
-void atg_rnn_tanh_cell(tensor *, tensor input, tensor hx, tensor w_ih, tensor w_hh, tensor b_ih, tensor b_hh);
-void atg_rnn_tanh_data(tensor *, tensor data, tensor batch_sizes, tensor hx, tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional);
-void atg_roll(tensor *, tensor self, int64_t *shifts_data, int shifts_len, int64_t *dims_data, int dims_len);
-void atg_roll_out(tensor *, tensor out, tensor self, int64_t *shifts_data, int shifts_len, int64_t *dims_data, int dims_len);
-void atg_rot90(tensor *, tensor self, int64_t k, int64_t *dims_data, int dims_len);
-void atg_rot90_out(tensor *, tensor out, tensor self, int64_t k, int64_t *dims_data, int dims_len);
-void atg_round(tensor *, tensor self);
-void atg_round_(tensor *, tensor self);
-void atg_round_decimals(tensor *, tensor self, int64_t decimals);
-void atg_round_decimals_(tensor *, tensor self, int64_t decimals);
-void atg_round_decimals_out(tensor *, tensor out, tensor self, int64_t decimals);
-void atg_round_out(tensor *, tensor out, tensor self);
-void atg_row_indices(tensor *, tensor self);
-void atg_row_indices_copy(tensor *, tensor self);
-void atg_row_indices_copy_out(tensor *, tensor out, tensor self);
-void atg_row_stack(tensor *, tensor *tensors_data, int tensors_len);
-void atg_row_stack_out(tensor *, tensor out, tensor *tensors_data, int tensors_len);
-void atg_rrelu(tensor *, tensor self, int training);
-void atg_rrelu_(tensor *, tensor self, int training);
-void atg_rrelu_with_noise(tensor *, tensor self, tensor noise, int training);
-void atg_rrelu_with_noise_(tensor *, tensor self, tensor noise, int training);
-void atg_rrelu_with_noise_backward(tensor *, tensor grad_output, tensor self, tensor noise, scalar lower, scalar upper, int training, int self_is_result);
-void atg_rrelu_with_noise_backward_out(tensor *, tensor out, tensor grad_output, tensor self, tensor noise, scalar lower, scalar upper, int training, int self_is_result);
-void atg_rrelu_with_noise_out(tensor *, tensor out, tensor self, tensor noise, int training);
-void atg_rsqrt(tensor *, tensor self);
-void atg_rsqrt_(tensor *, tensor self);
-void atg_rsqrt_out(tensor *, tensor out, tensor self);
-void atg_rsub(tensor *, tensor self, tensor other);
-void atg_rsub_scalar(tensor *, tensor self, scalar other);
-void atg_rsub_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_rsub_tensor_out(tensor *, tensor out, tensor self, tensor other);
-void atg_scalar_tensor(tensor *, scalar s, int options_kind, int options_device);
-void atg_scalar_tensor_out(tensor *, tensor out, scalar s);
-void atg_scaled_dot_product_attention(tensor *, tensor query, tensor key, tensor value, tensor attn_mask, double dropout_p, int is_causal, double scale_v, int scale_null);
-void atg_scatter(tensor *, tensor self, int64_t dim, tensor index, tensor src);
-void atg_scatter_(tensor *, tensor self, int64_t dim, tensor index, tensor src);
-void atg_scatter_add(tensor *, tensor self, int64_t dim, tensor index, tensor src);
-void atg_scatter_add_(tensor *, tensor self, int64_t dim, tensor index, tensor src);
-void atg_scatter_add_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, tensor src);
-void atg_scatter_reduce(tensor *, tensor self, int64_t dim, tensor index, tensor src, char * reduce);
-void atg_scatter_reduce_(tensor *, tensor self, int64_t dim, tensor index, tensor src, char * reduce);
-void atg_scatter_reduce_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, tensor src, char * reduce);
-void atg_scatter_src_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, tensor src);
-void atg_scatter_value(tensor *, tensor self, int64_t dim, tensor index, scalar value);
-void atg_scatter_value_(tensor *, tensor self, int64_t dim, tensor index, scalar value);
-void atg_scatter_value_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, scalar value);
-void atg_scatter_value_reduce(tensor *, tensor self, int64_t dim, tensor index, scalar value, char * reduce);
-void atg_scatter_value_reduce_(tensor *, tensor self, int64_t dim, tensor index, scalar value, char * reduce);
-void atg_scatter_value_reduce_out(tensor *, tensor out, tensor self, int64_t dim, tensor index, scalar value, char * reduce);
-void atg_searchsorted(tensor *, tensor sorted_sequence, tensor self, int out_int32, int right, char * side, tensor sorter);
-void atg_searchsorted_scalar(tensor *, tensor sorted_sequence, scalar
self, int out_int32, int right, char * side, tensor sorter); -void atg_searchsorted_scalar_out(tensor *, tensor out, tensor sorted_sequence, scalar self, int out_int32, int right, char * side, tensor sorter); -void atg_searchsorted_tensor_out(tensor *, tensor out, tensor sorted_sequence, tensor self, int out_int32, int right, char * side, tensor sorter); -void atg_segment_reduce(tensor *, tensor data, char * reduce, tensor lengths, tensor indices, tensor offsets, int64_t axis, int unsafe, scalar initial); -void atg_segment_reduce_out(tensor *, tensor out, tensor data, char * reduce, tensor lengths, tensor indices, tensor offsets, int64_t axis, int unsafe, scalar initial); -void atg_select(tensor *, tensor self, int64_t dim, int64_t index); -void atg_select_backward(tensor *, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t index); -void atg_select_backward_out(tensor *, tensor out, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t index); -void atg_select_copy(tensor *, tensor self, int64_t dim, int64_t index); -void atg_select_copy_int_out(tensor *, tensor out, tensor self, int64_t dim, int64_t index); -void atg_select_scatter(tensor *, tensor self, tensor src, int64_t dim, int64_t index); -void atg_select_scatter_out(tensor *, tensor out, tensor self, tensor src, int64_t dim, int64_t index); -void atg_selu(tensor *, tensor self); -void atg_selu_(tensor *, tensor self); -void atg_set(tensor *, tensor self); -void atg_set_(tensor *, tensor self); -void atg_set_out(tensor *, tensor out, tensor self); -void atg_set_requires_grad(tensor *, tensor self, int r); -void atg_set_source_tensor(tensor *, tensor self, tensor source); -void atg_set_source_tensor_(tensor *, tensor self, tensor source); -void atg_set_source_tensor_out(tensor *, tensor out, tensor self, tensor source); -void atg_set_source_tensor_storage_offset_(tensor *, tensor self, tensor source, int64_t storage_offset, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len); -void atg_sgn(tensor *, tensor self); -void atg_sgn_(tensor *, tensor self); -void atg_sgn_out(tensor *, tensor out, tensor self); -void atg_sigmoid(tensor *, tensor self); -void atg_sigmoid_(tensor *, tensor self); -void atg_sigmoid_backward(tensor *, tensor grad_output, tensor output); -void atg_sigmoid_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor output); -void atg_sigmoid_out(tensor *, tensor out, tensor self); -void atg_sign(tensor *, tensor self); -void atg_sign_(tensor *, tensor self); -void atg_sign_out(tensor *, tensor out, tensor self); -void atg_signbit(tensor *, tensor self); -void atg_signbit_out(tensor *, tensor out, tensor self); -void atg_silu(tensor *, tensor self); -void atg_silu_(tensor *, tensor self); -void atg_silu_backward(tensor *, tensor grad_output, tensor self); -void atg_silu_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self); -void atg_silu_out(tensor *, tensor out, tensor self); -void atg_sin(tensor *, tensor self); -void atg_sin_(tensor *, tensor self); -void atg_sin_out(tensor *, tensor out, tensor self); -void atg_sinc(tensor *, tensor self); -void atg_sinc_(tensor *, tensor self); -void atg_sinc_out(tensor *, tensor out, tensor self); -void atg_sinh(tensor *, tensor self); -void atg_sinh_(tensor *, tensor self); -void atg_sinh_out(tensor *, tensor out, tensor self); -int64_t atg_size(tensor self, int64_t dim); -void atg_slice(tensor *, tensor self, int64_t dim, int64_t 
start_v, int start_null, int64_t end_v, int end_null, int64_t step); -void atg_slice_backward(tensor *, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t start, int64_t end, int64_t step); -void atg_slice_backward_out(tensor *, tensor out, tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t start, int64_t end, int64_t step); -void atg_slice_copy(tensor *, tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step); -void atg_slice_copy_tensor_out(tensor *, tensor out, tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step); -void atg_slice_scatter(tensor *, tensor self, tensor src, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step); -void atg_slice_scatter_out(tensor *, tensor out, tensor self, tensor src, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step); -void atg_slogdet(tensor *, tensor self); -void atg_slogdet_out(tensor *, tensor sign, tensor logabsdet, tensor self); -void atg_slow_conv3d(tensor *, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len); -void atg_slow_conv3d_out(tensor *, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len); -void atg_slow_conv_dilated2d(tensor *, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); -void atg_slow_conv_dilated2d_out(tensor *, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); -void atg_slow_conv_dilated3d(tensor *, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); -void atg_slow_conv_dilated3d_out(tensor *, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); -void atg_slow_conv_transpose2d(tensor *, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len); -void atg_slow_conv_transpose2d_out(tensor *, tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len); -void atg_slow_conv_transpose3d(tensor *, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len); -void atg_slow_conv_transpose3d_out(tensor *, 
tensor out, tensor self, tensor weight, int64_t *kernel_size_data, int kernel_size_len, tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len); -void atg_smm(tensor *, tensor self, tensor mat2); -void atg_smooth_l1_loss(tensor *, tensor self, tensor target, int64_t reduction, double beta); -void atg_smooth_l1_loss_backward(tensor *, tensor grad_output, tensor self, tensor target, int64_t reduction, double beta); -void atg_smooth_l1_loss_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction, double beta); -void atg_smooth_l1_loss_out(tensor *, tensor out, tensor self, tensor target, int64_t reduction, double beta); -void atg_soft_margin_loss(tensor *, tensor self, tensor target, int64_t reduction); -void atg_soft_margin_loss_backward(tensor *, tensor grad_output, tensor self, tensor target, int64_t reduction); -void atg_soft_margin_loss_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, tensor target, int64_t reduction); -void atg_soft_margin_loss_out(tensor *, tensor out, tensor self, tensor target, int64_t reduction); -void atg_softmax(tensor *, tensor self, int64_t dim, int dtype); -void atg_softmax_int_out(tensor *, tensor out, tensor self, int64_t dim, int dtype); -void atg_softplus(tensor *, tensor self); -void atg_softplus_backward(tensor *, tensor grad_output, tensor self, scalar beta, scalar threshold); -void atg_softplus_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, scalar beta, scalar threshold); -void atg_softplus_out(tensor *, tensor out, tensor self); -void atg_softshrink(tensor *, tensor self); -void atg_softshrink_backward(tensor *, tensor grad_output, tensor self, scalar lambd); -void atg_softshrink_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, scalar lambd); -void atg_softshrink_out(tensor *, tensor out, tensor self); -void atg_sort(tensor *, tensor self, int64_t dim, int descending); -void atg_sort_stable(tensor *, tensor self, int stable, int64_t dim, int descending); -void atg_sort_values(tensor *, tensor values, tensor indices, tensor self, int64_t dim, int descending); -void atg_sort_values_stable(tensor *, tensor values, tensor indices, tensor self, int stable, int64_t dim, int descending); -void atg_sparse_bsc_tensor(tensor *, tensor ccol_indices, tensor row_indices, tensor values, int options_kind, int options_device); -void atg_sparse_bsc_tensor_ccol_row_value_size(tensor *, tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device); -void atg_sparse_bsr_tensor(tensor *, tensor crow_indices, tensor col_indices, tensor values, int options_kind, int options_device); -void atg_sparse_bsr_tensor_crow_col_value_size(tensor *, tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device); -void atg_sparse_compressed_tensor(tensor *, tensor compressed_indices, tensor plain_indices, tensor values, int options_kind, int options_device); -void atg_sparse_compressed_tensor_comp_plain_value_size(tensor *, tensor compressed_indices, tensor plain_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device); -void atg_sparse_coo_tensor(tensor *, int64_t *size_data, int size_len, int options_kind, int options_device); 
-void atg_sparse_coo_tensor_indices(tensor *, tensor indices, tensor values, int options_kind, int options_device, int is_coalesced);
-void atg_sparse_coo_tensor_indices_size(tensor *, tensor indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device, int is_coalesced);
-void atg_sparse_coo_tensor_size_out(tensor *, tensor out, int64_t *size_data, int size_len);
-void atg_sparse_csc_tensor(tensor *, tensor ccol_indices, tensor row_indices, tensor values, int options_kind, int options_device);
-void atg_sparse_csc_tensor_ccol_row_value_size(tensor *, tensor ccol_indices, tensor row_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_sparse_csr_tensor(tensor *, tensor crow_indices, tensor col_indices, tensor values, int options_kind, int options_device);
-void atg_sparse_csr_tensor_crow_col_value_size(tensor *, tensor crow_indices, tensor col_indices, tensor values, int64_t *size_data, int size_len, int options_kind, int options_device);
-int64_t atg_sparse_dim(tensor self);
-void atg_sparse_mask(tensor *, tensor self, tensor mask);
-void atg_sparse_mask_out(tensor *, tensor out, tensor self, tensor mask);
-void atg_sparse_resize(tensor *, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim);
-void atg_sparse_resize_(tensor *, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim);
-void atg_sparse_resize_and_clear(tensor *, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim);
-void atg_sparse_resize_and_clear_(tensor *, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim);
-void atg_sparse_resize_and_clear_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim);
-void atg_sparse_resize_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim);
-void atg_sparse_sampled_addmm(tensor *, tensor self, tensor mat1, tensor mat2);
-void atg_sparse_sampled_addmm_out(tensor *, tensor out, tensor self, tensor mat1, tensor mat2);
-void atg_special_airy_ai(tensor *, tensor x);
-void atg_special_airy_ai_out(tensor *, tensor out, tensor x);
-void atg_special_bessel_j0(tensor *, tensor self);
-void atg_special_bessel_j0_out(tensor *, tensor out, tensor self);
-void atg_special_bessel_j1(tensor *, tensor self);
-void atg_special_bessel_j1_out(tensor *, tensor out, tensor self);
-void atg_special_bessel_y0(tensor *, tensor self);
-void atg_special_bessel_y0_out(tensor *, tensor out, tensor self);
-void atg_special_bessel_y1(tensor *, tensor self);
-void atg_special_bessel_y1_out(tensor *, tensor out, tensor self);
-void atg_special_chebyshev_polynomial_t(tensor *, tensor x, tensor n);
-void atg_special_chebyshev_polynomial_t_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_chebyshev_polynomial_t_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_chebyshev_polynomial_t_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_chebyshev_polynomial_t_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_chebyshev_polynomial_t_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_chebyshev_polynomial_u(tensor *, tensor x, tensor n);
-void atg_special_chebyshev_polynomial_u_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_chebyshev_polynomial_u_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_chebyshev_polynomial_u_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_chebyshev_polynomial_u_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_chebyshev_polynomial_u_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_chebyshev_polynomial_v(tensor *, tensor x, tensor n);
-void atg_special_chebyshev_polynomial_v_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_chebyshev_polynomial_v_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_chebyshev_polynomial_v_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_chebyshev_polynomial_v_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_chebyshev_polynomial_v_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_chebyshev_polynomial_w(tensor *, tensor x, tensor n);
-void atg_special_chebyshev_polynomial_w_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_chebyshev_polynomial_w_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_chebyshev_polynomial_w_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_chebyshev_polynomial_w_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_chebyshev_polynomial_w_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_digamma(tensor *, tensor self);
-void atg_special_digamma_out(tensor *, tensor out, tensor self);
-void atg_special_entr(tensor *, tensor self);
-void atg_special_entr_out(tensor *, tensor out, tensor self);
-void atg_special_erf(tensor *, tensor self);
-void atg_special_erf_out(tensor *, tensor out, tensor self);
-void atg_special_erfc(tensor *, tensor self);
-void atg_special_erfc_out(tensor *, tensor out, tensor self);
-void atg_special_erfcx(tensor *, tensor self);
-void atg_special_erfcx_out(tensor *, tensor out, tensor self);
-void atg_special_erfinv(tensor *, tensor self);
-void atg_special_erfinv_out(tensor *, tensor out, tensor self);
-void atg_special_exp2(tensor *, tensor self);
-void atg_special_exp2_out(tensor *, tensor out, tensor self);
-void atg_special_expit(tensor *, tensor self);
-void atg_special_expit_out(tensor *, tensor out, tensor self);
-void atg_special_expm1(tensor *, tensor self);
-void atg_special_expm1_out(tensor *, tensor out, tensor self);
-void atg_special_gammainc(tensor *, tensor self, tensor other);
-void atg_special_gammainc_out(tensor *, tensor out, tensor self, tensor other);
-void atg_special_gammaincc(tensor *, tensor self, tensor other);
-void atg_special_gammaincc_out(tensor *, tensor out, tensor self, tensor other);
-void atg_special_gammaln(tensor *, tensor self);
-void atg_special_gammaln_out(tensor *, tensor out, tensor self);
-void atg_special_hermite_polynomial_h(tensor *, tensor x, tensor n);
-void atg_special_hermite_polynomial_h_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_hermite_polynomial_h_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_hermite_polynomial_h_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_hermite_polynomial_h_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_hermite_polynomial_h_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_hermite_polynomial_he(tensor *, tensor x, tensor n);
-void atg_special_hermite_polynomial_he_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_hermite_polynomial_he_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_hermite_polynomial_he_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_hermite_polynomial_he_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_hermite_polynomial_he_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_i0(tensor *, tensor self);
-void atg_special_i0_out(tensor *, tensor out, tensor self);
-void atg_special_i0e(tensor *, tensor self);
-void atg_special_i0e_out(tensor *, tensor out, tensor self);
-void atg_special_i1(tensor *, tensor self);
-void atg_special_i1_out(tensor *, tensor out, tensor self);
-void atg_special_i1e(tensor *, tensor self);
-void atg_special_i1e_out(tensor *, tensor out, tensor self);
-void atg_special_laguerre_polynomial_l(tensor *, tensor x, tensor n);
-void atg_special_laguerre_polynomial_l_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_laguerre_polynomial_l_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_laguerre_polynomial_l_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_laguerre_polynomial_l_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_laguerre_polynomial_l_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_legendre_polynomial_p(tensor *, tensor x, tensor n);
-void atg_special_legendre_polynomial_p_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_legendre_polynomial_p_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_legendre_polynomial_p_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_legendre_polynomial_p_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_legendre_polynomial_p_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_log1p(tensor *, tensor self);
-void atg_special_log1p_out(tensor *, tensor out, tensor self);
-void atg_special_log_ndtr(tensor *, tensor self);
-void atg_special_log_ndtr_out(tensor *, tensor out, tensor self);
-void atg_special_log_softmax(tensor *, tensor self, int64_t dim, int dtype);
-void atg_special_logit(tensor *, tensor self, double eps_v, int eps_null);
-void atg_special_logit_out(tensor *, tensor out, tensor self, double eps_v, int eps_null);
-void atg_special_logsumexp(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim);
-void atg_special_logsumexp_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim);
-void atg_special_modified_bessel_i0(tensor *, tensor self);
-void atg_special_modified_bessel_i0_out(tensor *, tensor out, tensor self);
-void atg_special_modified_bessel_i1(tensor *, tensor self);
-void atg_special_modified_bessel_i1_out(tensor *, tensor out, tensor self);
-void atg_special_modified_bessel_k0(tensor *, tensor self);
-void atg_special_modified_bessel_k0_out(tensor *, tensor out, tensor self);
-void atg_special_modified_bessel_k1(tensor *, tensor self);
-void atg_special_modified_bessel_k1_out(tensor *, tensor out, tensor self);
-void atg_special_multigammaln(tensor *, tensor self, int64_t p);
-void atg_special_multigammaln_out(tensor *, tensor out, tensor self, int64_t p);
-void atg_special_ndtr(tensor *, tensor self);
-void atg_special_ndtr_out(tensor *, tensor out, tensor self);
-void atg_special_ndtri(tensor *, tensor self);
-void atg_special_ndtri_out(tensor *, tensor out, tensor self);
-void atg_special_polygamma(tensor *, int64_t n, tensor self);
-void atg_special_polygamma_out(tensor *, tensor out, int64_t n, tensor self);
-void atg_special_psi(tensor *, tensor self);
-void atg_special_psi_out(tensor *, tensor out, tensor self);
-void atg_special_round(tensor *, tensor self, int64_t decimals);
-void atg_special_round_out(tensor *, tensor out, tensor self, int64_t decimals);
-void atg_special_scaled_modified_bessel_k0(tensor *, tensor x);
-void atg_special_scaled_modified_bessel_k0_out(tensor *, tensor out, tensor x);
-void atg_special_scaled_modified_bessel_k1(tensor *, tensor x);
-void atg_special_scaled_modified_bessel_k1_out(tensor *, tensor out, tensor x);
-void atg_special_shifted_chebyshev_polynomial_t(tensor *, tensor x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_t_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_shifted_chebyshev_polynomial_t_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_shifted_chebyshev_polynomial_t_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_t_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_t_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_u(tensor *, tensor x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_u_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_shifted_chebyshev_polynomial_u_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_shifted_chebyshev_polynomial_u_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_u_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_u_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_v(tensor *, tensor x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_v_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_shifted_chebyshev_polynomial_v_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_shifted_chebyshev_polynomial_v_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_v_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_v_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_w(tensor *, tensor x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_w_n_scalar(tensor *, tensor x, scalar n);
-void atg_special_shifted_chebyshev_polynomial_w_n_scalar_out(tensor *, tensor out, tensor x, scalar n);
-void atg_special_shifted_chebyshev_polynomial_w_out(tensor *, tensor out, tensor x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_w_x_scalar(tensor *, scalar x, tensor n);
-void atg_special_shifted_chebyshev_polynomial_w_x_scalar_out(tensor *, tensor out, scalar x, tensor n);
-void atg_special_sinc(tensor *, tensor self);
-void atg_special_sinc_out(tensor *, tensor out, tensor self);
-void atg_special_softmax(tensor *, tensor self, int64_t dim, int dtype);
-void atg_special_spherical_bessel_j0(tensor *, tensor x);
-void atg_special_spherical_bessel_j0_out(tensor *, tensor out, tensor x);
-void atg_special_xlog1py(tensor *, tensor self, tensor other);
-void atg_special_xlog1py_other_scalar(tensor *, tensor self, scalar other);
-void atg_special_xlog1py_other_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_special_xlog1py_out(tensor *, tensor out, tensor self, tensor other);
-void atg_special_xlog1py_self_scalar(tensor *, scalar self, tensor other);
-void atg_special_xlog1py_self_scalar_out(tensor *, tensor out, scalar self, tensor other);
-void atg_special_xlogy(tensor *, tensor self, tensor other);
-void atg_special_xlogy_other_scalar(tensor *, tensor self, scalar other);
-void atg_special_xlogy_other_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_special_xlogy_out(tensor *, tensor out, tensor self, tensor other);
-void atg_special_xlogy_self_scalar(tensor *, scalar self, tensor other);
-void atg_special_xlogy_self_scalar_out(tensor *, tensor out, scalar self, tensor other);
-void atg_special_zeta(tensor *, tensor self, tensor other);
-void atg_special_zeta_other_scalar(tensor *, tensor self, scalar other);
-void atg_special_zeta_other_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_special_zeta_out(tensor *, tensor out, tensor self, tensor other);
-void atg_special_zeta_self_scalar(tensor *, scalar self, tensor other);
-void atg_special_zeta_self_scalar_out(tensor *, tensor out, scalar self, tensor other);
-tensor *atg_split(tensor self, int64_t split_size, int64_t dim);
-tensor *atg_split_copy(tensor self, int64_t split_size, int64_t dim);
-void atg_split_copy_tensor_out(tensor *out_data, int out_len, tensor self, int64_t split_size, int64_t dim);
-tensor *atg_split_sizes(tensor self, int64_t *split_size_data, int split_size_len, int64_t dim);
-tensor *atg_split_with_sizes(tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim);
-tensor *atg_split_with_sizes_copy(tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim);
-void atg_split_with_sizes_copy_out(tensor *out_data, int out_len, tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim);
-void atg_sqrt(tensor *, tensor self);
-void atg_sqrt_(tensor *, tensor self);
-void atg_sqrt_out(tensor *, tensor out, tensor self);
-void atg_square(tensor *, tensor self);
-void atg_square_(tensor *, tensor self);
-void atg_square_out(tensor *, tensor out, tensor self);
-void atg_squeeze(tensor *, tensor self);
-void atg_squeeze_(tensor *, tensor self);
-void atg_squeeze_copy(tensor *, tensor self);
-void atg_squeeze_copy_dim(tensor *, tensor self, int64_t dim);
-void atg_squeeze_copy_dim_out(tensor *, tensor out, tensor self, int64_t dim);
-void atg_squeeze_copy_dims(tensor *, tensor self, int64_t *dim_data, int dim_len);
-void atg_squeeze_copy_dims_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len);
-void atg_squeeze_copy_out(tensor *, tensor out, tensor self);
-void atg_squeeze_dim(tensor *, tensor self, int64_t dim);
-void atg_squeeze_dim_(tensor *, tensor self, int64_t dim);
-void atg_squeeze_dims(tensor *, tensor self, int64_t *dim_data, int dim_len);
-void atg_squeeze_dims_(tensor *, tensor self, int64_t *dim_data, int dim_len);
-void atg_sspaddmm(tensor *, tensor self, tensor mat1, tensor mat2);
-void atg_sspaddmm_out(tensor *, tensor out, tensor self, tensor mat1, tensor mat2);
-void atg_stack(tensor *, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg_stack_out(tensor *, tensor out, tensor *tensors_data, int tensors_len, int64_t dim);
-void atg_std(tensor *, tensor self, int unbiased);
-void atg_std_correction(tensor *, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim);
-void atg_std_correction_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim);
-void atg_std_dim(tensor *, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim);
-void atg_std_mean(tensor *, tensor self, int unbiased);
-void atg_std_mean_correction(tensor *, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim);
-void atg_std_mean_correction_out(tensor *, tensor out0, tensor out1, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim);
-void atg_std_mean_dim(tensor *, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim);
-void atg_std_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim);
-void atg_stft(tensor *, tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, tensor window, int normalized, int onesided, int return_complex);
-void atg_stft_center(tensor *, tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, tensor window, int center, char * pad_mode, int normalized, int onesided, int return_complex);
-int64_t atg_stride(tensor self, int64_t dim);
-void atg_sub(tensor *, tensor self, tensor other);
-void atg_sub_(tensor *, tensor self, tensor other);
-void atg_sub_out(tensor *, tensor out, tensor self, tensor other);
-void atg_sub_scalar(tensor *, tensor self, scalar other);
-void atg_sub_scalar_(tensor *, tensor self, scalar other);
-void atg_sub_scalar_out(tensor *, tensor out, tensor self, scalar other);
-void atg_subtract(tensor *, tensor self, tensor other);
-void atg_subtract_(tensor *, tensor self, tensor other);
-void atg_subtract_out(tensor *, tensor out, tensor self, tensor other);
-void atg_subtract_scalar(tensor *, tensor self, scalar other);
-void atg_subtract_scalar_(tensor *, tensor self, scalar other);
-void atg_sum(tensor *, tensor self, int dtype);
-void atg_sum_dim_intlist(tensor *, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_sum_intlist_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
-void atg_sum_out(tensor *, tensor out, tensor self, int dtype);
-void atg_sum_to_size(tensor *, tensor self, int64_t *size_data, int size_len);
-void atg_svd(tensor *, tensor self, int some, int compute_uv);
-void atg_svd_u(tensor *, tensor U, tensor S, tensor V, tensor self, int some, int compute_uv);
-void atg_swapaxes(tensor *, tensor self, int64_t axis0, int64_t axis1);
-void atg_swapaxes_(tensor *, tensor self, int64_t axis0, int64_t axis1);
-void atg_swapdims(tensor *, tensor self, int64_t dim0, int64_t dim1);
-void atg_swapdims_(tensor *, tensor self, int64_t dim0, int64_t dim1);
-void atg_t(tensor *, tensor self);
-void atg_t_(tensor *, tensor self);
-void atg_t_copy(tensor *, tensor self);
-void atg_t_copy_out(tensor *, tensor out, tensor self);
-void atg_take(tensor *, tensor self, tensor index);
-void atg_take_along_dim(tensor *, tensor self, tensor indices, int64_t dim_v, int dim_null);
-void atg_take_along_dim_out(tensor *, tensor out, tensor self, tensor indices, int64_t dim_v, int dim_null);
-void atg_take_out(tensor *, tensor out, tensor self, tensor index);
-void atg_tan(tensor *, tensor self);
-void atg_tan_(tensor *, tensor self);
-void atg_tan_out(tensor *, tensor out, tensor self);
-void atg_tanh(tensor *, tensor self);
-void atg_tanh_(tensor *, tensor self);
-void atg_tanh_backward(tensor *, tensor grad_output, tensor output);
-void atg_tanh_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor output);
-void atg_tanh_out(tensor *, tensor out, tensor self);
-tensor *atg_tensor_split(tensor self, int64_t sections, int64_t dim);
-tensor *atg_tensor_split_indices(tensor self, int64_t *indices_data, int indices_len, int64_t dim);
-tensor *atg_tensor_split_tensor_indices_or_sections(tensor self, tensor tensor_indices_or_sections, int64_t dim);
-void atg_tensordot(tensor *, tensor self, tensor other, int64_t *dims_self_data, int dims_self_len, int64_t *dims_other_data, int dims_other_len);
-void atg_tensordot_out(tensor *, tensor out, tensor self, tensor other, int64_t *dims_self_data, int dims_self_len, int64_t *dims_other_data, int dims_other_len);
-void atg_threshold(tensor *, tensor self, scalar threshold, scalar value);
-void atg_threshold_(tensor *, tensor self, scalar threshold, scalar value);
-void atg_threshold_backward(tensor *, tensor grad_output, tensor self, scalar threshold);
-void atg_threshold_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, tensor self, scalar threshold);
-void atg_threshold_out(tensor *, tensor out, tensor self, scalar threshold, scalar value);
-void atg_tile(tensor *, tensor self, int64_t *dims_data, int dims_len);
-void atg_to(tensor *, tensor self, int device);
-void atg_to_dense(tensor *, tensor self, int dtype, int masked_grad);
-void atg_to_dense_backward(tensor *, tensor grad, tensor input, int masked_grad);
-void atg_to_device(tensor *, tensor self, int device, int dtype, int non_blocking, int copy);
-void atg_to_dtype(tensor *, tensor self, int dtype, int non_blocking, int copy);
-void atg_to_dtype_layout(tensor *, tensor self, int options_kind, int options_device, int non_blocking, int copy);
-void atg_to_mkldnn(tensor *, tensor self, int dtype);
-void atg_to_mkldnn_backward(tensor *, tensor grad, tensor input);
-void atg_to_mkldnn_out(tensor *, tensor out, tensor self, int dtype);
-void atg_to_other(tensor *, tensor self, tensor other, int non_blocking, int copy);
-void atg_to_padded_tensor(tensor *, tensor self, double padding, int64_t *output_size_data, int output_size_len);
-void atg_to_padded_tensor_out(tensor *, tensor out, tensor self, double padding, int64_t *output_size_data, int output_size_len);
-void atg_topk(tensor *, tensor self, int64_t k, int64_t dim, int largest, int sorted);
-void atg_topk_values(tensor *, tensor values, tensor indices, tensor self, int64_t k, int64_t dim, int largest, int sorted);
-void atg_totype(tensor *, tensor self, int scalar_type);
-void atg_trace(tensor *, tensor self);
-void atg_trace_backward(tensor *, tensor grad, int64_t *sizes_data, int sizes_len);
-void atg_trace_out(tensor *, tensor out, tensor self);
-void atg_transpose(tensor *, tensor self, int64_t dim0, int64_t dim1);
-void atg_transpose_(tensor *, tensor self, int64_t dim0, int64_t dim1);
-void atg_transpose_copy(tensor *, tensor self, int64_t dim0, int64_t dim1);
-void atg_transpose_copy_int_out(tensor *, tensor out, tensor self, int64_t dim0, int64_t dim1);
-void atg_trapezoid(tensor *, tensor y, int64_t dim);
-void atg_trapezoid_x(tensor *, tensor y, tensor x, int64_t dim);
-void atg_trapz(tensor *, tensor y, tensor x, int64_t dim);
-void atg_trapz_dx(tensor *, tensor y, double dx, int64_t dim);
-void atg_triangular_solve(tensor *, tensor self, tensor A, int upper, int transpose, int unitriangular);
-void atg_triangular_solve_x(tensor *, tensor X, tensor M, tensor self, tensor A, int upper, int transpose, int unitriangular);
-void atg_tril(tensor *, tensor self, int64_t diagonal);
-void atg_tril_(tensor *, tensor self, int64_t diagonal);
-void atg_tril_indices(tensor *, int64_t row, int64_t col, int64_t offset, int options_kind, int options_device);
-void atg_tril_indices_out(tensor *, tensor out, int64_t row, int64_t col, int64_t offset);
-void atg_tril_out(tensor *, tensor out, tensor self, int64_t diagonal);
-void atg_triplet_margin_loss(tensor *, tensor anchor, tensor positive, tensor negative, double margin, double p, double eps, int swap, int64_t reduction);
-void atg_triu(tensor *, tensor self, int64_t diagonal);
-void atg_triu_(tensor *, tensor self, int64_t diagonal);
-void atg_triu_indices(tensor *, int64_t row, int64_t col, int64_t offset, int options_kind, int options_device);
-void atg_triu_indices_out(tensor *, tensor out, int64_t row, int64_t col, int64_t offset);
-void atg_triu_out(tensor *, tensor out, tensor self, int64_t diagonal);
-void atg_true_divide(tensor *, tensor self, tensor other);
-void atg_true_divide_(tensor *, tensor self, tensor other);
-void atg_true_divide_out(tensor *, tensor out, tensor self, tensor other);
-void atg_true_divide_scalar(tensor *, tensor self, scalar other);
-void atg_true_divide_scalar_(tensor *, tensor self, scalar other);
-void atg_trunc(tensor *, tensor self);
-void atg_trunc_(tensor *, tensor self);
-void atg_trunc_out(tensor *, tensor out, tensor self);
-void atg_type_as(tensor *, tensor self, tensor other);
-tensor *atg_unbind(tensor self, int64_t dim);
-tensor *atg_unbind_copy(tensor self, int64_t dim);
-void atg_unbind_copy_int_out(tensor *out_data, int out_len, tensor self, int64_t dim);
-void atg_unflatten(tensor *, tensor self, int64_t dim, int64_t *sizes_data, int sizes_len);
-tensor *atg_unflatten_dense_tensors(tensor flat, tensor *tensors_data, int tensors_len);
-void atg_unfold(tensor *, tensor self, int64_t dimension, int64_t size, int64_t step);
-void atg_unfold_backward(tensor *, tensor grad_in, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t size, int64_t step);
-void atg_unfold_backward_out(tensor *, tensor out, tensor grad_in, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t size, int64_t step);
-void atg_unfold_copy(tensor *, tensor self, int64_t dimension, int64_t size, int64_t step);
-void atg_unfold_copy_out(tensor *, tensor out, tensor self, int64_t dimension, int64_t size, int64_t step);
-void atg_uniform(tensor *, tensor self, double from, double to);
-void atg_uniform_(tensor *, tensor self, double from, double to);
-void atg_uniform_out(tensor *, tensor out, tensor self, double from, double to);
-void atg_unique_consecutive(tensor *, tensor self, int return_inverse, int return_counts, int64_t dim_v, int dim_null);
-void atg_unique_consecutive_out(tensor *, tensor out0, tensor out1, tensor out2, tensor self, int return_inverse, int return_counts, int64_t dim_v, int dim_null);
-void atg_unique_dim(tensor *, tensor self, int64_t dim, int sorted, int return_inverse, int return_counts);
-void atg_unique_dim_consecutive(tensor *, tensor self, int64_t dim, int return_inverse, int return_counts);
-void atg_unique_dim_consecutive_out(tensor *, tensor out0, tensor out1, tensor out2, tensor self, int64_t dim, int return_inverse, int return_counts);
-void atg_unique_dim_out(tensor *, tensor out0, tensor out1, tensor out2, tensor self, int64_t dim, int sorted, int return_inverse, int return_counts);
-tensor *atg_unsafe_chunk(tensor self, int64_t chunks, int64_t dim);
-tensor *atg_unsafe_split(tensor self, int64_t split_size, int64_t dim);
-void atg_unsafe_split_tensor_out(tensor *out_data, int out_len, tensor self, int64_t split_size, int64_t dim);
-tensor *atg_unsafe_split_with_sizes(tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim);
-void atg_unsafe_split_with_sizes_out(tensor *out_data, int out_len, tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim);
-void atg_unsqueeze(tensor *, tensor self, int64_t dim);
-void atg_unsqueeze_(tensor *, tensor self, int64_t dim);
-void atg_unsqueeze_copy(tensor *, tensor self, int64_t dim);
-void atg_unsqueeze_copy_out(tensor *, tensor out, tensor self, int64_t dim);
-void atg_upsample_bicubic2d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_bicubic2d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_bicubic2d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_bicubic2d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_bicubic2d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len);
-void atg_upsample_bilinear2d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_bilinear2d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_bilinear2d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_bilinear2d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_bilinear2d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len);
-void atg_upsample_linear1d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_v, int scales_null);
-void atg_upsample_linear1d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_v, int scales_null);
-void atg_upsample_linear1d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_v, int scales_null);
-void atg_upsample_linear1d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_v, int scales_null);
-void atg_upsample_linear1d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len);
-void atg_upsample_nearest1d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null);
-void atg_upsample_nearest1d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null);
-void atg_upsample_nearest1d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null);
-void atg_upsample_nearest1d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null);
-void atg_upsample_nearest1d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len);
-void atg_upsample_nearest2d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_nearest2d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_nearest2d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_nearest2d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_nearest2d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len);
-void atg_upsample_nearest3d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_nearest3d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_nearest3d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_nearest3d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_nearest3d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len);
-void atg_upsample_trilinear3d(tensor *, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_trilinear3d_backward(tensor *, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_trilinear3d_backward_grad_input(tensor *, tensor grad_input, tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_trilinear3d_out(tensor *, tensor out, tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null);
-void atg_upsample_trilinear3d_vec(tensor *, tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len);
-void atg_value_selecting_reduction_backward(tensor *, tensor grad, int64_t dim, tensor indices, int64_t *sizes_data, int sizes_len, int keepdim);
-void atg_values(tensor *, tensor self);
-void atg_values_copy(tensor *, tensor self);
-void atg_values_copy_out(tensor *, tensor out, tensor self);
-void atg_vander(tensor *, tensor x, int64_t n_v, int n_null, int increasing);
-void atg_var(tensor *, tensor self, int unbiased);
-void atg_var_correction(tensor *, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim);
-void atg_var_correction_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim);
-void atg_var_dim(tensor *, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim);
-void atg_var_mean(tensor *, tensor self, int unbiased);
-void atg_var_mean_correction(tensor *, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim);
-void atg_var_mean_correction_out(tensor *, tensor out0, tensor out1, tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim);
-void atg_var_mean_dim(tensor *, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim);
-void atg_var_out(tensor *, tensor out, tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim);
-void atg_vdot(tensor *, tensor self, tensor other);
-void atg_vdot_out(tensor *, tensor out, tensor self, tensor other);
-void atg_view(tensor *, tensor self, int64_t *size_data, int size_len);
-void atg_view_as(tensor *, tensor self, tensor other);
-void atg_view_as_complex(tensor *, tensor self);
-void atg_view_as_complex_copy(tensor *, tensor self);
-void atg_view_as_complex_copy_out(tensor *, tensor out, tensor self);
-void atg_view_as_real(tensor *, tensor self);
-void atg_view_as_real_copy(tensor *, tensor self);
-void atg_view_as_real_copy_out(tensor *, tensor out, tensor self);
-void atg_view_copy(tensor *, tensor self, int64_t *size_data, int size_len);
-void atg_view_copy_dtype(tensor *, tensor self, int dtype);
-void atg_view_copy_dtype_out(tensor *, tensor out, tensor self, int dtype);
-void atg_view_copy_out(tensor *, tensor out, tensor self, int64_t *size_data, int size_len);
-void atg_view_dtype(tensor *, tensor self, int dtype);
-tensor *atg_vsplit(tensor self, int64_t sections);
-tensor *atg_vsplit_array(tensor self, int64_t *indices_data, int indices_len);
-void atg_vstack(tensor *, tensor *tensors_data, int tensors_len);
-void atg_vstack_out(tensor *, tensor out, tensor *tensors_data, int tensors_len);
-tensor *atg_where(tensor condition);
-void atg_where_scalar(tensor *, tensor condition, scalar self, scalar other);
-void atg_where_scalarother(tensor *, tensor condition, tensor self, scalar other);
-void atg_where_scalarself(tensor *, tensor condition, scalar self, tensor other);
-void atg_where_self(tensor *, tensor condition, tensor self, tensor other);
-void atg_where_self_out(tensor *, tensor out, tensor condition, tensor self, tensor other);
-void atg_xlogy(tensor *, tensor self, tensor other);
-void atg_xlogy_(tensor *, tensor self, tensor other);
-void atg_xlogy_outscalar_other(tensor *, tensor out, tensor self, scalar other);
-void atg_xlogy_outscalar_self(tensor *, tensor out, scalar self, tensor other);
-void atg_xlogy_outtensor(tensor *, tensor out, tensor self, tensor other);
-void atg_xlogy_scalar_other(tensor *, tensor self, scalar other);
-void atg_xlogy_scalar_other_(tensor *, tensor self, scalar other);
-void atg_xlogy_scalar_self(tensor *, scalar self, tensor other);
-void atg_zero(tensor *, tensor self);
-void atg_zero_(tensor *, tensor self);
-void atg_zero_out(tensor *, tensor out, tensor self);
-void atg_zeros(tensor *, int64_t *size_data, int size_len, int options_kind, int options_device);
-void atg_zeros_like(tensor *, tensor self);
-void atg_zeros_like_out(tensor *, tensor out, tensor self);
-void atg_zeros_out(tensor *, tensor out, int64_t *size_data, int size_len);
+raw_tensor atg_isclose(gc_tensor self, gc_tensor other, double rtol, double atol, int equal_nan);
+raw_tensor atg_isfinite(gc_tensor self);
+raw_tensor atg_isin(gc_tensor elements, gc_tensor test_elements, int assume_unique, int invert);
+raw_tensor atg_isin_scalar_tensor(scalar element, gc_tensor test_elements, int assume_unique, int invert);
+raw_tensor atg_isin_scalar_tensor_out(gc_tensor out, scalar element, gc_tensor test_elements, int assume_unique, int invert);
+raw_tensor atg_isin_tensor_scalar(gc_tensor elements, scalar test_element, int assume_unique, int invert);
+raw_tensor atg_isin_tensor_scalar_out(gc_tensor out, gc_tensor elements, scalar test_element, int assume_unique, int invert);
+raw_tensor atg_isin_tensor_tensor_out(gc_tensor out, gc_tensor elements, gc_tensor test_elements, int assume_unique, int invert);
+raw_tensor atg_isinf(gc_tensor self);
+raw_tensor atg_isinf_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_isnan(gc_tensor self);
+raw_tensor atg_isnan_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_isneginf(gc_tensor self);
+raw_tensor atg_isneginf_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_isposinf(gc_tensor self);
+raw_tensor atg_isposinf_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_isreal(gc_tensor self);
+raw_tensor atg_istft(gc_tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, gc_tensor window, int center, int normalized, int onesided, int64_t length_v, int length_null, int return_complex);
+raw_tensor atg_kaiser_window(int64_t window_length, int options_kind, int options_device);
+raw_tensor atg_kaiser_window_beta(int64_t window_length, int periodic, double beta, int options_kind, int options_device);
+raw_tensor atg_kaiser_window_beta_out(gc_tensor out, int64_t window_length, int periodic, double beta);
+raw_tensor atg_kaiser_window_out(gc_tensor out, int64_t window_length);
+raw_tensor atg_kaiser_window_periodic(int64_t window_length, int periodic, int options_kind, int options_device);
+raw_tensor atg_kaiser_window_periodic_out(gc_tensor out, int64_t window_length, int periodic);
+raw_tensor atg_kl_div(gc_tensor self, gc_tensor target, int64_t reduction, int log_target);
+raw_tensor atg_kron(gc_tensor self, gc_tensor other);
+raw_tensor atg_kron_out(gc_tensor out, gc_tensor self, gc_tensor other);
+void atg_kthvalue(raw_tensor *, gc_tensor self, int64_t k, int64_t dim, int keepdim);
+void atg_kthvalue_values(raw_tensor *, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t k, int64_t dim, int keepdim);
+raw_tensor atg_l1_loss(gc_tensor self, gc_tensor target, int64_t reduction);
+raw_tensor atg_layer_norm(gc_tensor input, int64_t *normalized_shape_data, int normalized_shape_len, gc_tensor weight, gc_tensor bias, double eps, int cudnn_enable);
+raw_tensor atg_lcm(gc_tensor self, gc_tensor other);
+raw_tensor atg_lcm_(gc_tensor self, gc_tensor other);
+raw_tensor atg_lcm_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_ldexp(gc_tensor self, gc_tensor other);
+raw_tensor atg_ldexp_(gc_tensor self, gc_tensor other);
+raw_tensor atg_ldexp_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_le(gc_tensor self, scalar other);
+raw_tensor atg_le_(gc_tensor self, scalar other);
+raw_tensor atg_le_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_le_tensor(gc_tensor self, gc_tensor other);
+raw_tensor atg_le_tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg_le_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_leaky_relu(gc_tensor self);
+raw_tensor atg_leaky_relu_(gc_tensor self);
+raw_tensor atg_leaky_relu_backward(gc_tensor grad_output, gc_tensor self, scalar negative_slope, int self_is_result);
+raw_tensor atg_leaky_relu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, scalar negative_slope, int self_is_result);
+raw_tensor atg_leaky_relu_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_lerp(gc_tensor self, gc_tensor end, scalar weight);
+raw_tensor atg_lerp_(gc_tensor self, gc_tensor end, scalar weight);
+raw_tensor atg_lerp_scalar_out(gc_tensor out, gc_tensor self, gc_tensor end, scalar weight);
+raw_tensor atg_lerp_tensor(gc_tensor self, gc_tensor end, gc_tensor weight);
+raw_tensor atg_lerp_tensor_(gc_tensor self, gc_tensor end, gc_tensor weight);
+raw_tensor atg_lerp_tensor_out(gc_tensor out, gc_tensor self, gc_tensor end, gc_tensor weight);
+raw_tensor atg_less(gc_tensor self, scalar other);
+raw_tensor atg_less_(gc_tensor self, scalar other);
+raw_tensor atg_less_equal(gc_tensor self, scalar other);
+raw_tensor atg_less_equal_(gc_tensor self, scalar other);
+raw_tensor atg_less_equal_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_less_equal_tensor(gc_tensor self, gc_tensor other);
+raw_tensor atg_less_equal_tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg_less_equal_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_less_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_less_tensor(gc_tensor self, gc_tensor other);
+raw_tensor atg_less_tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg_less_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_lgamma(gc_tensor self);
+raw_tensor atg_lgamma_(gc_tensor self);
+raw_tensor atg_lgamma_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_lift(gc_tensor self);
+raw_tensor atg_lift_fresh(gc_tensor self);
+raw_tensor atg_lift_fresh_copy(gc_tensor self);
+raw_tensor atg_lift_fresh_copy_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_lift_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_linalg_cholesky(gc_tensor self, int upper);
+void atg_linalg_cholesky_ex(raw_tensor *, gc_tensor self, int upper, int check_errors);
+void atg_linalg_cholesky_ex_l(raw_tensor *, gc_tensor L, gc_tensor info, gc_tensor self, int upper, int check_errors);
+raw_tensor atg_linalg_cholesky_out(gc_tensor out, gc_tensor self, int upper);
+raw_tensor atg_linalg_cond(gc_tensor self, scalar p);
+raw_tensor atg_linalg_cond_out(gc_tensor out, gc_tensor self, scalar p);
+raw_tensor atg_linalg_cond_p_str(gc_tensor self, char * p);
+raw_tensor atg_linalg_cond_p_str_out(gc_tensor out, gc_tensor self, char * p);
+raw_tensor atg_linalg_cross(gc_tensor self, gc_tensor other, int64_t dim);
+raw_tensor atg_linalg_cross_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t dim);
+raw_tensor atg_linalg_det(gc_tensor A);
+raw_tensor atg_linalg_det_out(gc_tensor out, gc_tensor A);
+raw_tensor atg_linalg_diagonal(gc_tensor A, int64_t offset, int64_t dim1, int64_t dim2);
+void atg_linalg_eig(raw_tensor *, gc_tensor self);
+void atg_linalg_eig_out(raw_tensor *, gc_tensor eigenvalues, gc_tensor eigenvectors, gc_tensor self);
+void atg_linalg_eigh(raw_tensor *, gc_tensor self, char * UPLO);
+void atg_linalg_eigh_eigvals(raw_tensor *, gc_tensor eigvals, gc_tensor eigvecs, gc_tensor self, char * UPLO);
+raw_tensor atg_linalg_eigvals(gc_tensor self);
+raw_tensor atg_linalg_eigvals_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_linalg_eigvalsh(gc_tensor self, char * UPLO);
+raw_tensor atg_linalg_eigvalsh_out(gc_tensor out, gc_tensor self, char * UPLO);
+raw_tensor atg_linalg_householder_product(gc_tensor input, gc_tensor tau);
+raw_tensor atg_linalg_householder_product_out(gc_tensor out, gc_tensor input, gc_tensor tau);
+raw_tensor atg_linalg_inv(gc_tensor A);
+void atg_linalg_inv_ex(raw_tensor *, gc_tensor A, int check_errors);
+void atg_linalg_inv_ex_inverse(raw_tensor *, gc_tensor inverse, gc_tensor info, gc_tensor A, int check_errors);
+raw_tensor atg_linalg_inv_out(gc_tensor out, gc_tensor A);
+void atg_linalg_ldl_factor(raw_tensor *, gc_tensor self, int hermitian);
+void atg_linalg_ldl_factor_ex(raw_tensor *, gc_tensor self, int hermitian, int check_errors);
+void atg_linalg_ldl_factor_ex_out(raw_tensor *, gc_tensor LD, gc_tensor pivots, gc_tensor info, gc_tensor self, int hermitian, int check_errors);
+void atg_linalg_ldl_factor_out(raw_tensor *, gc_tensor LD, gc_tensor pivots, gc_tensor self, int hermitian);
+raw_tensor atg_linalg_ldl_solve(gc_tensor LD, gc_tensor pivots, gc_tensor B, int hermitian);
+raw_tensor atg_linalg_ldl_solve_out(gc_tensor out, gc_tensor LD, gc_tensor pivots, gc_tensor B, int hermitian);
+void atg_linalg_lstsq(raw_tensor *, gc_tensor self, gc_tensor b, double rcond_v, int rcond_null, char * driver);
+void atg_linalg_lstsq_out(raw_tensor *, gc_tensor solution, gc_tensor residuals, gc_tensor rank, gc_tensor singular_values, gc_tensor self, gc_tensor b, double rcond_v, int rcond_null, char * driver);
+void atg_linalg_lu(raw_tensor *, gc_tensor A, int pivot);
+void atg_linalg_lu_factor(raw_tensor *, gc_tensor A, int pivot);
+void atg_linalg_lu_factor_ex(raw_tensor *, gc_tensor A, int pivot, int check_errors);
+void atg_linalg_lu_factor_ex_out(raw_tensor *, gc_tensor LU, gc_tensor pivots, gc_tensor info, gc_tensor A, int pivot, int check_errors);
+void atg_linalg_lu_factor_out(raw_tensor *, gc_tensor LU, gc_tensor pivots, gc_tensor A, int pivot);
+void atg_linalg_lu_out(raw_tensor *, gc_tensor P, gc_tensor L, gc_tensor U, gc_tensor A, int pivot);
+raw_tensor atg_linalg_lu_solve(gc_tensor LU, gc_tensor pivots, gc_tensor B, int left, int adjoint);
+raw_tensor atg_linalg_lu_solve_out(gc_tensor out, gc_tensor LU, gc_tensor pivots, gc_tensor B, int left, int adjoint);
+raw_tensor atg_linalg_matmul(gc_tensor self, gc_tensor other);
+raw_tensor atg_linalg_matmul_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_linalg_matrix_exp(gc_tensor self);
+raw_tensor atg_linalg_matrix_exp_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_linalg_matrix_power(gc_tensor self, int64_t n);
+raw_tensor atg_linalg_matrix_power_out(gc_tensor out, gc_tensor self, int64_t n);
+raw_tensor atg_linalg_matrix_rank(gc_tensor self, double tol, int hermitian);
+raw_tensor atg_linalg_matrix_rank_atol_rtol_float(gc_tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian);
+raw_tensor atg_linalg_matrix_rank_atol_rtol_float_out(gc_tensor out, gc_tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian);
+raw_tensor atg_linalg_matrix_rank_atol_rtol_tensor(gc_tensor input, gc_tensor atol, gc_tensor rtol, int hermitian);
+raw_tensor atg_linalg_matrix_rank_atol_rtol_tensor_out(gc_tensor out, gc_tensor input, gc_tensor atol, gc_tensor rtol, int hermitian);
+raw_tensor atg_linalg_matrix_rank_out(gc_tensor out, gc_tensor self, double tol, int hermitian);
+raw_tensor atg_linalg_matrix_rank_out_tol_tensor(gc_tensor out, gc_tensor input, gc_tensor tol, int hermitian);
+raw_tensor atg_linalg_matrix_rank_tol_tensor(gc_tensor input, gc_tensor tol, int hermitian);
+raw_tensor atg_linalg_multi_dot(gc_tensor *tensors_data, int tensors_len);
+raw_tensor atg_linalg_multi_dot_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len);
+raw_tensor atg_linalg_pinv(gc_tensor self, double rcond, int hermitian);
+raw_tensor atg_linalg_pinv_atol_rtol_float(gc_tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian);
+raw_tensor atg_linalg_pinv_atol_rtol_float_out(gc_tensor out, gc_tensor self, double atol_v, int atol_null, double rtol_v, int rtol_null, int hermitian);
+raw_tensor atg_linalg_pinv_atol_rtol_tensor(gc_tensor self, gc_tensor atol, gc_tensor rtol, int hermitian);
+raw_tensor atg_linalg_pinv_atol_rtol_tensor_out(gc_tensor out, gc_tensor self, gc_tensor atol, gc_tensor rtol, int hermitian);
+raw_tensor atg_linalg_pinv_out(gc_tensor out, gc_tensor self, double rcond, int hermitian);
+raw_tensor atg_linalg_pinv_out_rcond_tensor(gc_tensor out, gc_tensor self, gc_tensor rcond, int hermitian);
+raw_tensor atg_linalg_pinv_rcond_tensor(gc_tensor self, gc_tensor rcond, int hermitian);
+void atg_linalg_qr(raw_tensor *, gc_tensor A, char * mode);
+void atg_linalg_qr_out(raw_tensor *, gc_tensor Q, gc_tensor R, gc_tensor A, char * mode);
+void atg_linalg_slogdet(raw_tensor *, gc_tensor A);
+void atg_linalg_slogdet_out(raw_tensor *, gc_tensor sign, gc_tensor logabsdet, gc_tensor A);
+raw_tensor atg_linalg_solve(gc_tensor A, gc_tensor B, int left);
+void atg_linalg_solve_ex(raw_tensor *, gc_tensor A, gc_tensor B, int left, int check_errors);
+void atg_linalg_solve_ex_out(raw_tensor *, gc_tensor result, gc_tensor info, gc_tensor A, gc_tensor B, int left, int check_errors);
+raw_tensor atg_linalg_solve_out(gc_tensor out, gc_tensor A, gc_tensor B, int left);
+raw_tensor atg_linalg_solve_triangular(gc_tensor self, gc_tensor B, int upper, int left, int unitriangular);
+raw_tensor atg_linalg_solve_triangular_out(gc_tensor out, gc_tensor self, gc_tensor B, int upper, int left, int unitriangular);
+void atg_linalg_svd(raw_tensor *, gc_tensor A, int full_matrices, char * driver);
+void atg_linalg_svd_u(raw_tensor *, gc_tensor U, gc_tensor S, gc_tensor Vh, gc_tensor A, int full_matrices, char * driver);
+raw_tensor
atg_linalg_svdvals(gc_tensor A, char * driver); +raw_tensor atg_linalg_svdvals_out(gc_tensor out, gc_tensor A, char * driver); +raw_tensor atg_linalg_tensorinv(gc_tensor self, int64_t ind); +raw_tensor atg_linalg_tensorinv_out(gc_tensor out, gc_tensor self, int64_t ind); +raw_tensor atg_linalg_tensorsolve(gc_tensor self, gc_tensor other, int64_t *dims_data, int dims_len); +raw_tensor atg_linalg_tensorsolve_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t *dims_data, int dims_len); +raw_tensor atg_linalg_vander(gc_tensor x, int64_t n_v, int n_null); +raw_tensor atg_linalg_vecdot(gc_tensor x, gc_tensor y, int64_t dim); +raw_tensor atg_linalg_vecdot_out(gc_tensor out, gc_tensor x, gc_tensor y, int64_t dim); +raw_tensor atg_linear(gc_tensor input, gc_tensor weight, gc_tensor bias); +raw_tensor atg_linear_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias); +raw_tensor atg_linspace(scalar start, scalar end, int64_t steps, int options_kind, int options_device); +raw_tensor atg_linspace_out(gc_tensor out, scalar start, scalar end, int64_t steps); +raw_tensor atg_log(gc_tensor self); +raw_tensor atg_log10(gc_tensor self); +raw_tensor atg_log10_(gc_tensor self); +raw_tensor atg_log10_out(gc_tensor out, gc_tensor self); +raw_tensor atg_log1p(gc_tensor self); +raw_tensor atg_log1p_(gc_tensor self); +raw_tensor atg_log1p_out(gc_tensor out, gc_tensor self); +raw_tensor atg_log2(gc_tensor self); +raw_tensor atg_log2_(gc_tensor self); +raw_tensor atg_log2_out(gc_tensor out, gc_tensor self); +raw_tensor atg_log_(gc_tensor self); +raw_tensor atg_log_normal(gc_tensor self, double mean, double std); +raw_tensor atg_log_normal_(gc_tensor self, double mean, double std); +raw_tensor atg_log_normal_out(gc_tensor out, gc_tensor self, double mean, double std); +raw_tensor atg_log_out(gc_tensor out, gc_tensor self); +raw_tensor atg_log_sigmoid(gc_tensor self); +raw_tensor atg_log_sigmoid_backward(gc_tensor grad_output, gc_tensor self, gc_tensor buffer); +raw_tensor atg_log_sigmoid_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor buffer); +raw_tensor atg_log_sigmoid_out(gc_tensor out, gc_tensor self); +raw_tensor atg_log_softmax(gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_log_softmax_int_out(gc_tensor out, gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_logaddexp(gc_tensor self, gc_tensor other); +raw_tensor atg_logaddexp2(gc_tensor self, gc_tensor other); +raw_tensor atg_logaddexp2_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_logaddexp_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_logcumsumexp(gc_tensor self, int64_t dim); +raw_tensor atg_logcumsumexp_out(gc_tensor out, gc_tensor self, int64_t dim); +raw_tensor atg_logdet(gc_tensor self); +raw_tensor atg_logical_and(gc_tensor self, gc_tensor other); +raw_tensor atg_logical_and_(gc_tensor self, gc_tensor other); +raw_tensor atg_logical_and_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_logical_not(gc_tensor self); +raw_tensor atg_logical_not_(gc_tensor self); +raw_tensor atg_logical_not_out(gc_tensor out, gc_tensor self); +raw_tensor atg_logical_or(gc_tensor self, gc_tensor other); +raw_tensor atg_logical_or_(gc_tensor self, gc_tensor other); +raw_tensor atg_logical_or_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_logical_xor(gc_tensor self, gc_tensor other); +raw_tensor atg_logical_xor_(gc_tensor self, gc_tensor other); +raw_tensor atg_logical_xor_out(gc_tensor out, gc_tensor self, 
gc_tensor other); +raw_tensor atg_logit(gc_tensor self, double eps_v, int eps_null); +raw_tensor atg_logit_(gc_tensor self, double eps_v, int eps_null); +raw_tensor atg_logit_backward(gc_tensor grad_output, gc_tensor self, double eps_v, int eps_null); +raw_tensor atg_logit_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, double eps_v, int eps_null); +raw_tensor atg_logit_out(gc_tensor out, gc_tensor self, double eps_v, int eps_null); +raw_tensor atg_logspace(scalar start, scalar end, int64_t steps, double base, int options_kind, int options_device); +raw_tensor atg_logspace_out(gc_tensor out, scalar start, scalar end, int64_t steps, double base); +raw_tensor atg_logsumexp(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +raw_tensor atg_logsumexp_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +void atg_lstm(raw_tensor *, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first); +void atg_lstm_cell(raw_tensor *, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh); +void atg_lstm_data(raw_tensor *, gc_tensor data, gc_tensor batch_sizes, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional); +void atg_lstm_mps_backward(gc_tensor out0, gc_tensor *out1_data, int out1_len, gc_tensor *out2_data, int out2_len, gc_tensor grad_y, gc_tensor grad_hy, gc_tensor grad_cy, gc_tensor z_state, gc_tensor cell_state_fwd, gc_tensor input, gc_tensor layersOutputs, gc_tensor *hx_data, int hx_len, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first); +raw_tensor atg_lt(gc_tensor self, scalar other); +raw_tensor atg_lt_(gc_tensor self, scalar other); +raw_tensor atg_lt_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_lt_tensor(gc_tensor self, gc_tensor other); +raw_tensor atg_lt_tensor_(gc_tensor self, gc_tensor other); +raw_tensor atg_lt_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_lu_solve(gc_tensor self, gc_tensor LU_data, gc_tensor LU_pivots); +raw_tensor atg_lu_solve_out(gc_tensor out, gc_tensor self, gc_tensor LU_data, gc_tensor LU_pivots); +void atg_lu_unpack(raw_tensor *, gc_tensor LU_data, gc_tensor LU_pivots, int unpack_data, int unpack_pivots); +void atg_lu_unpack_out(raw_tensor *, gc_tensor P, gc_tensor L, gc_tensor U, gc_tensor LU_data, gc_tensor LU_pivots, int unpack_data, int unpack_pivots); +raw_tensor atg_margin_ranking_loss(gc_tensor input1, gc_tensor input2, gc_tensor target, double margin, int64_t reduction); +raw_tensor atg_masked_fill(gc_tensor self, gc_tensor mask, scalar value); +raw_tensor atg_masked_fill_(gc_tensor self, gc_tensor mask, scalar value); +raw_tensor atg_masked_fill_scalar_out(gc_tensor out, gc_tensor self, gc_tensor mask, scalar value); +raw_tensor atg_masked_fill_tensor(gc_tensor self, gc_tensor mask, gc_tensor value); +raw_tensor atg_masked_fill_tensor_(gc_tensor self, gc_tensor mask, gc_tensor value); +raw_tensor atg_masked_fill_tensor_out(gc_tensor out, gc_tensor self, gc_tensor mask, gc_tensor value); +raw_tensor atg_masked_scatter(gc_tensor self, gc_tensor mask, gc_tensor source); +raw_tensor atg_masked_scatter_(gc_tensor self, gc_tensor mask, gc_tensor source); 
+raw_tensor atg_masked_scatter_out(gc_tensor out, gc_tensor self, gc_tensor mask, gc_tensor source);
+raw_tensor atg_masked_select(gc_tensor self, gc_tensor mask);
+raw_tensor atg_masked_select_backward(gc_tensor grad, gc_tensor input, gc_tensor mask);
+raw_tensor atg_masked_select_out(gc_tensor out, gc_tensor self, gc_tensor mask);
+raw_tensor atg_matmul(gc_tensor self, gc_tensor other);
+raw_tensor atg_matmul_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_matrix_exp(gc_tensor self);
+raw_tensor atg_matrix_exp_backward(gc_tensor self, gc_tensor grad);
+raw_tensor atg_matrix_h(gc_tensor self);
+raw_tensor atg_matrix_power(gc_tensor self, int64_t n);
+raw_tensor atg_matrix_power_out(gc_tensor out, gc_tensor self, int64_t n);
+raw_tensor atg_max(gc_tensor self);
+void atg_max_dim(raw_tensor *, gc_tensor self, int64_t dim, int keepdim);
+void atg_max_dim_max(raw_tensor *, gc_tensor max, gc_tensor max_values, gc_tensor self, int64_t dim, int keepdim);
+raw_tensor atg_max_other(gc_tensor self, gc_tensor other);
+raw_tensor atg_max_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_max_pool1d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+void atg_max_pool1d_with_indices(raw_tensor *, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_max_pool2d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_max_pool2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_max_pool2d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+void atg_max_pool2d_with_indices(raw_tensor *, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_max_pool2d_with_indices_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, gc_tensor indices);
+raw_tensor atg_max_pool2d_with_indices_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, gc_tensor indices);
+void atg_max_pool2d_with_indices_out(raw_tensor *, gc_tensor out, gc_tensor indices, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_max_pool3d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+void atg_max_pool3d_with_indices(raw_tensor *, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_max_pool3d_with_indices_backward(gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, gc_tensor indices);
+raw_tensor atg_max_pool3d_with_indices_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode, gc_tensor indices);
+void atg_max_pool3d_with_indices_out(raw_tensor *, gc_tensor out, gc_tensor indices, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_max_unary_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_max_unpool2d(gc_tensor self, gc_tensor indices, int64_t *output_size_data, int output_size_len);
+raw_tensor atg_max_unpool2d_out(gc_tensor out, gc_tensor self, gc_tensor indices, int64_t *output_size_data, int output_size_len);
+raw_tensor atg_max_unpool3d(gc_tensor self, gc_tensor indices, int64_t *output_size_data, int output_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len);
+raw_tensor atg_max_unpool3d_out(gc_tensor out, gc_tensor self, gc_tensor indices, int64_t *output_size_data, int output_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len);
+raw_tensor atg_maximum(gc_tensor self, gc_tensor other);
+raw_tensor atg_maximum_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_mean(gc_tensor self, int dtype);
+raw_tensor atg_mean_dim(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_mean_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_median(gc_tensor self);
+void atg_median_dim(raw_tensor *, gc_tensor self, int64_t dim, int keepdim);
+void atg_median_dim_values(raw_tensor *, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim, int keepdim);
+raw_tensor atg_median_out(gc_tensor out, gc_tensor self);
+raw_tensor *atg_meshgrid(gc_tensor *tensors_data, int tensors_len);
+raw_tensor *atg_meshgrid_indexing(gc_tensor *tensors_data, int tensors_len, char * indexing);
+raw_tensor atg_mh(gc_tensor self);
+raw_tensor atg_min(gc_tensor self);
+void atg_min_dim(raw_tensor *, gc_tensor self, int64_t dim, int keepdim);
+void atg_min_dim_min(raw_tensor *, gc_tensor min, gc_tensor min_indices, gc_tensor self, int64_t dim, int keepdim);
+raw_tensor atg_min_other(gc_tensor self, gc_tensor other);
+raw_tensor atg_min_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_min_unary_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_minimum(gc_tensor self, gc_tensor other);
+raw_tensor atg_minimum_out(gc_tensor out, gc_tensor self, gc_tensor other);
+void atg_miopen_batch_norm(raw_tensor *, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double exponential_average_factor, double epsilon);
+void atg_miopen_batch_norm_backward(raw_tensor *, gc_tensor input, gc_tensor grad_output, gc_tensor weight, gc_tensor running_mean, gc_tensor running_var, gc_tensor save_mean, gc_tensor save_var, double epsilon);
+void atg_miopen_batch_norm_backward_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor grad_output, gc_tensor weight, gc_tensor running_mean, gc_tensor running_var, gc_tensor save_mean, gc_tensor save_var, double epsilon);
+void atg_miopen_batch_norm_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double exponential_average_factor, double epsilon);
+raw_tensor atg_miopen_convolution(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
+raw_tensor atg_miopen_convolution_add_relu(gc_tensor self, gc_tensor weight, gc_tensor z, scalar alpha, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+raw_tensor atg_miopen_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
+raw_tensor atg_miopen_convolution_relu(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+raw_tensor atg_miopen_convolution_transpose(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
+raw_tensor atg_miopen_convolution_transpose_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
+raw_tensor atg_miopen_depthwise_convolution(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
+raw_tensor atg_miopen_depthwise_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int benchmark, int deterministic);
+void atg_miopen_rnn(raw_tensor *, gc_tensor input, gc_tensor *weight_data, int weight_len, int64_t weight_stride0, gc_tensor hx, gc_tensor cx, int64_t mode, int64_t hidden_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, gc_tensor dropout_state);
+void atg_miopen_rnn_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor input, gc_tensor *weight_data, int weight_len, int64_t weight_stride0, gc_tensor hx, gc_tensor cx, int64_t mode, int64_t hidden_size, int64_t num_layers, int batch_first, double dropout, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, gc_tensor dropout_state);
+raw_tensor atg_mish(gc_tensor self);
+raw_tensor atg_mish_(gc_tensor self);
+raw_tensor atg_mish_backward(gc_tensor grad_output, gc_tensor self);
+raw_tensor atg_mish_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_mkldnn_adaptive_avg_pool2d(gc_tensor self, int64_t *output_size_data, int output_size_len);
+raw_tensor atg_mkldnn_adaptive_avg_pool2d_backward(gc_tensor grad_output, gc_tensor self);
+raw_tensor atg_mkldnn_adaptive_avg_pool2d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self);
+raw_tensor atg_mkldnn_adaptive_avg_pool2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len);
+raw_tensor atg_mkldnn_convolution(gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+raw_tensor atg_mkldnn_convolution_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+raw_tensor atg_mkldnn_linear(gc_tensor self, gc_tensor weight, gc_tensor bias);
+raw_tensor atg_mkldnn_linear_backward_input(int64_t *input_size_data, int input_size_len, gc_tensor grad_output, gc_tensor weight);
+raw_tensor atg_mkldnn_linear_backward_input_out(gc_tensor out, int64_t *input_size_data, int input_size_len, gc_tensor grad_output, gc_tensor weight);
+void atg_mkldnn_linear_backward_weights(raw_tensor *, gc_tensor grad_output, gc_tensor input, gc_tensor weight, int bias_defined);
+void atg_mkldnn_linear_backward_weights_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor grad_output, gc_tensor input, gc_tensor weight, int bias_defined);
+raw_tensor atg_mkldnn_linear_out(gc_tensor out, gc_tensor self, gc_tensor weight, gc_tensor bias);
+raw_tensor atg_mkldnn_max_pool2d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_mkldnn_max_pool2d_backward(gc_tensor grad_output, gc_tensor output, gc_tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_mkldnn_max_pool2d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, gc_tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_mkldnn_max_pool2d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_mkldnn_max_pool3d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_mkldnn_max_pool3d_backward(gc_tensor grad_output, gc_tensor output, gc_tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_mkldnn_max_pool3d_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor output, gc_tensor input, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_mkldnn_max_pool3d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_mkldnn_reorder_conv2d_weight(gc_tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int64_t *input_size_data, int input_size_len);
+raw_tensor atg_mkldnn_reorder_conv2d_weight_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups, int64_t *input_size_data, int input_size_len);
+raw_tensor atg_mkldnn_reorder_conv3d_weight(gc_tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+raw_tensor atg_mkldnn_reorder_conv3d_weight_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len, int64_t *stride_data, int stride_len, int64_t *dilation_data, int dilation_len, int64_t groups);
+void atg_mkldnn_rnn_layer(raw_tensor *, gc_tensor input, gc_tensor weight0, gc_tensor weight1, gc_tensor weight2, gc_tensor weight3, gc_tensor hx_, gc_tensor cx_, int reverse, int64_t *batch_sizes_data, int batch_sizes_len, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int bidirectional, int batch_first, int train);
+void atg_mkldnn_rnn_layer_backward(raw_tensor *, gc_tensor input, gc_tensor weight1, gc_tensor weight2, gc_tensor weight3, gc_tensor weight4, gc_tensor hx_, gc_tensor cx_tmp, gc_tensor output, gc_tensor hy_, gc_tensor cy_, gc_tensor grad_output, gc_tensor grad_hy, gc_tensor grad_cy, int reverse, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, int batch_first, gc_tensor workspace);
+void atg_mkldnn_rnn_layer_backward_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor out4, gc_tensor out5, gc_tensor out6, gc_tensor input, gc_tensor weight1, gc_tensor weight2, gc_tensor weight3, gc_tensor weight4, gc_tensor hx_, gc_tensor cx_tmp, gc_tensor output, gc_tensor hy_, gc_tensor cy_, gc_tensor grad_output, gc_tensor grad_hy, gc_tensor grad_cy, int reverse, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int train, int bidirectional, int64_t *batch_sizes_data, int batch_sizes_len, int batch_first, gc_tensor workspace);
+void atg_mkldnn_rnn_layer_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor out3, gc_tensor input, gc_tensor weight0, gc_tensor weight1, gc_tensor weight2, gc_tensor weight3, gc_tensor hx_, gc_tensor cx_, int reverse, int64_t *batch_sizes_data, int batch_sizes_len, int64_t mode, int64_t hidden_size, int64_t num_layers, int has_biases, int bidirectional, int batch_first, int train);
+raw_tensor atg_mm(gc_tensor self, gc_tensor mat2);
+raw_tensor atg_mm_out(gc_tensor out, gc_tensor self, gc_tensor mat2);
+void atg_mode(raw_tensor *, gc_tensor self, int64_t dim, int keepdim);
+void atg_mode_values(raw_tensor *, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim, int keepdim);
+raw_tensor atg_moveaxis(gc_tensor self, int64_t *source_data, int source_len, int64_t *destination_data, int destination_len);
+raw_tensor atg_moveaxis_int(gc_tensor self, int64_t source, int64_t destination);
+raw_tensor atg_movedim(gc_tensor self, int64_t *source_data, int source_len, int64_t *destination_data, int destination_len);
+raw_tensor atg_movedim_int(gc_tensor self, int64_t source, int64_t destination);
+raw_tensor atg_mse_loss(gc_tensor self, gc_tensor target, int64_t reduction);
+raw_tensor atg_mse_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction);
+raw_tensor atg_mse_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction);
+raw_tensor atg_mse_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction);
+raw_tensor atg_msort(gc_tensor self);
+raw_tensor atg_msort_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_mt(gc_tensor self);
+raw_tensor atg_mul(gc_tensor self, gc_tensor other);
+raw_tensor atg_mul_(gc_tensor self, gc_tensor other);
+raw_tensor atg_mul_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_mul_scalar(gc_tensor self, scalar other);
+raw_tensor atg_mul_scalar_(gc_tensor self, scalar other);
+raw_tensor atg_mul_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_multi_margin_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, scalar p, scalar margin, gc_tensor weight, int64_t reduction);
+raw_tensor atg_multi_margin_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, scalar p, scalar margin, gc_tensor weight, int64_t reduction);
+raw_tensor atg_multilabel_margin_loss(gc_tensor self, gc_tensor target, int64_t reduction);
+raw_tensor atg_multilabel_margin_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, gc_tensor is_target);
+raw_tensor atg_multilabel_margin_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, gc_tensor is_target);
+raw_tensor atg_multilabel_margin_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction);
+raw_tensor atg_multinomial(gc_tensor self, int64_t num_samples, int replacement);
+raw_tensor atg_multinomial_out(gc_tensor out, gc_tensor self, int64_t num_samples, int replacement);
+raw_tensor atg_multiply(gc_tensor self, gc_tensor other);
+raw_tensor atg_multiply_(gc_tensor self, gc_tensor other);
+raw_tensor atg_multiply_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_multiply_scalar(gc_tensor self, scalar other);
+raw_tensor atg_multiply_scalar_(gc_tensor self, scalar other);
+raw_tensor atg_mv(gc_tensor self, gc_tensor vec);
+raw_tensor atg_mv_out(gc_tensor out, gc_tensor self, gc_tensor vec);
+raw_tensor atg_mvlgamma(gc_tensor self, int64_t p);
+raw_tensor atg_mvlgamma_(gc_tensor self, int64_t p);
+raw_tensor atg_mvlgamma_out(gc_tensor out, gc_tensor self, int64_t p);
+raw_tensor atg_nan_to_num(gc_tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null);
+raw_tensor atg_nan_to_num_(gc_tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null);
+raw_tensor atg_nan_to_num_out(gc_tensor out, gc_tensor self, double nan_v, int nan_null, double posinf_v, int posinf_null, double neginf_v, int neginf_null);
+raw_tensor atg_nanmean(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_nanmean_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_nanmedian(gc_tensor self);
+void atg_nanmedian_dim(raw_tensor *, gc_tensor self, int64_t dim, int keepdim);
+void atg_nanmedian_dim_values(raw_tensor *, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim, int keepdim);
+raw_tensor atg_nanmedian_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_nanquantile(gc_tensor self, gc_tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
+raw_tensor atg_nanquantile_out(gc_tensor out, gc_tensor self, gc_tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
+raw_tensor atg_nanquantile_scalar(gc_tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
+raw_tensor atg_nanquantile_scalar_out(gc_tensor out, gc_tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
+raw_tensor atg_nansum(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_nansum_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_narrow(gc_tensor self, int64_t dim, int64_t start, int64_t length);
+raw_tensor atg_narrow_copy(gc_tensor self, int64_t dim, int64_t start, int64_t length);
+raw_tensor atg_narrow_copy_out(gc_tensor out, gc_tensor self, int64_t dim, int64_t start, int64_t length);
+raw_tensor atg_narrow_tensor(gc_tensor self, int64_t dim, gc_tensor start, int64_t length);
+void atg_native_batch_norm(raw_tensor *, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps);
+void atg_native_batch_norm_out(raw_tensor *, gc_tensor out, gc_tensor save_mean, gc_tensor save_invstd, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor running_mean, gc_tensor running_var, int training, double momentum, double eps);
+raw_tensor atg_native_channel_shuffle(gc_tensor self, int64_t groups);
+void atg_native_dropout(raw_tensor *, gc_tensor input, double p, int train);
+raw_tensor atg_native_dropout_backward(gc_tensor grad_output, gc_tensor mask, double scale);
+raw_tensor atg_native_dropout_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor mask, double scale);
+void atg_native_dropout_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor input, double p, int train);
+void atg_native_group_norm(raw_tensor *, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t n, int64_t C, int64_t HxW, int64_t group, double eps);
+void atg_native_group_norm_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, gc_tensor weight, gc_tensor bias, int64_t n, int64_t C, int64_t HxW, int64_t group, double eps);
+void atg_native_layer_norm(raw_tensor *, gc_tensor input, int64_t *normalized_shape_data, int normalized_shape_len, gc_tensor weight, gc_tensor bias, double eps);
+void atg_native_layer_norm_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor input, int64_t *normalized_shape_data, int normalized_shape_len, gc_tensor weight, gc_tensor bias, double eps);
+raw_tensor atg_native_norm(gc_tensor self);
+raw_tensor atg_native_norm_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_native_norm_scalaropt_dim_dtype(gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_native_norm_scalaropt_dim_dtype_out(gc_tensor out, gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_ne(gc_tensor self, scalar other);
+raw_tensor atg_ne_(gc_tensor self, scalar other);
+raw_tensor atg_ne_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_ne_tensor(gc_tensor self, gc_tensor other);
+raw_tensor atg_ne_tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg_ne_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_neg(gc_tensor self);
+raw_tensor atg_neg_(gc_tensor self);
+raw_tensor atg_neg_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_negative(gc_tensor self);
+raw_tensor atg_negative_(gc_tensor self);
+raw_tensor atg_negative_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_nested_to_padded_tensor(gc_tensor self, double padding, int64_t *output_size_data, int output_size_len);
+raw_tensor atg_new_empty(gc_tensor self, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg_new_empty_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg_new_empty_strided(gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len, int options_kind, int options_device);
+raw_tensor atg_new_empty_strided_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len);
+raw_tensor atg_new_full(gc_tensor self, int64_t *size_data, int size_len, scalar fill_value, int options_kind, int options_device);
+raw_tensor atg_new_full_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, scalar fill_value);
+raw_tensor atg_new_ones(gc_tensor self, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg_new_ones_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg_new_zeros(gc_tensor self, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg_new_zeros_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg_nextafter(gc_tensor self, gc_tensor other);
+raw_tensor atg_nextafter_(gc_tensor self, gc_tensor other);
+raw_tensor atg_nextafter_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_nll_loss(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index);
+raw_tensor atg_nll_loss2d(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index);
+raw_tensor atg_nll_loss2d_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, gc_tensor total_weight);
+raw_tensor atg_nll_loss2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, gc_tensor total_weight);
+raw_tensor atg_nll_loss2d_out(gc_tensor out, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index);
+raw_tensor atg_nll_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, gc_tensor total_weight);
+raw_tensor atg_nll_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index, gc_tensor total_weight);
+raw_tensor atg_nll_loss_nd(gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index);
+raw_tensor atg_nll_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, gc_tensor weight, int64_t reduction, int64_t ignore_index);
+raw_tensor atg_nonzero(gc_tensor self);
+raw_tensor *atg_nonzero_numpy(gc_tensor self);
+raw_tensor atg_nonzero_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_nonzero_static(gc_tensor self, int64_t size, int64_t fill_value);
+raw_tensor atg_nonzero_static_out(gc_tensor out, gc_tensor self, int64_t size, int64_t fill_value);
+raw_tensor atg_norm(gc_tensor self);
+raw_tensor atg_norm_dtype_out(gc_tensor out, gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_norm_except_dim(gc_tensor v, int64_t pow, int64_t dim);
+raw_tensor atg_norm_out(gc_tensor out, gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim);
+raw_tensor atg_norm_scalar_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_norm_scalaropt_dim(gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim);
+raw_tensor atg_norm_scalaropt_dim_dtype(gc_tensor self, scalar p, int64_t *dim_data, int dim_len, int keepdim, int dtype);
+raw_tensor atg_norm_scalaropt_dtype(gc_tensor self, scalar p, int dtype);
+raw_tensor atg_norm_scalaropt_dtype_out(gc_tensor out, gc_tensor self, scalar p, int dtype);
+raw_tensor atg_normal_(gc_tensor self, double mean, double std);
+raw_tensor atg_normal_functional(gc_tensor self, double mean, double std);
+raw_tensor atg_not_equal(gc_tensor self, scalar other);
+raw_tensor atg_not_equal_(gc_tensor self, scalar other);
+raw_tensor atg_not_equal_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_not_equal_tensor(gc_tensor self, gc_tensor other);
+raw_tensor atg_not_equal_tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg_not_equal_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_nuclear_norm(gc_tensor self, int keepdim);
+raw_tensor atg_nuclear_norm_dim(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim);
+raw_tensor atg_nuclear_norm_dim_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim);
+raw_tensor atg_nuclear_norm_out(gc_tensor out, gc_tensor self, int keepdim);
+raw_tensor atg_numpy_t(gc_tensor self);
+raw_tensor atg_one_hot(gc_tensor self, int64_t num_classes);
+raw_tensor atg_ones(int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg_ones_like(gc_tensor self);
+raw_tensor atg_ones_like_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_ones_out(gc_tensor out, int64_t *size_data, int size_len);
+raw_tensor atg_orgqr(gc_tensor self, gc_tensor input2);
+raw_tensor atg_orgqr_out(gc_tensor out, gc_tensor self, gc_tensor input2);
+raw_tensor atg_ormqr(gc_tensor self, gc_tensor input2, gc_tensor input3, int left, int transpose);
+raw_tensor atg_ormqr_out(gc_tensor out, gc_tensor self, gc_tensor input2, gc_tensor input3, int left, int transpose);
+raw_tensor atg_outer(gc_tensor self, gc_tensor vec2);
+raw_tensor atg_outer_out(gc_tensor out, gc_tensor self, gc_tensor vec2);
+int64_t atg_output_nr(gc_tensor self);
+raw_tensor atg_pad(gc_tensor self, int64_t *pad_data, int pad_len, char * mode, double value_v, int value_null);
+raw_tensor atg_pad_sequence(gc_tensor *sequences_data, int sequences_len, int batch_first, double padding_value);
+raw_tensor atg_pairwise_distance(gc_tensor x1, gc_tensor x2, double p, double eps, int keepdim);
+raw_tensor atg_pdist(gc_tensor self, double p);
+raw_tensor atg_permute(gc_tensor self, int64_t *dims_data, int dims_len);
+raw_tensor atg_permute_copy(gc_tensor self, int64_t *dims_data, int dims_len);
+raw_tensor atg_permute_copy_out(gc_tensor out, gc_tensor self, int64_t *dims_data, int dims_len);
+raw_tensor atg_pin_memory(gc_tensor self, int device);
+raw_tensor atg_pinverse(gc_tensor self, double rcond);
+raw_tensor atg_pixel_shuffle(gc_tensor self, int64_t upscale_factor);
+raw_tensor atg_pixel_shuffle_out(gc_tensor out, gc_tensor self, int64_t upscale_factor);
+raw_tensor atg_pixel_unshuffle(gc_tensor self, int64_t downscale_factor);
+raw_tensor atg_pixel_unshuffle_out(gc_tensor out, gc_tensor self, int64_t downscale_factor);
+raw_tensor atg_poisson(gc_tensor self);
+raw_tensor atg_poisson_nll_loss(gc_tensor input, gc_tensor target, int log_input, int full, double eps, int64_t reduction);
+raw_tensor atg_poisson_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_polar(gc_tensor abs, gc_tensor angle);
+raw_tensor atg_polar_out(gc_tensor out, gc_tensor abs, gc_tensor angle);
+raw_tensor atg_polygamma(int64_t n, gc_tensor self);
+raw_tensor atg_polygamma_(gc_tensor self, int64_t n);
+raw_tensor atg_polygamma_out(gc_tensor out, int64_t n, gc_tensor self);
+raw_tensor atg_positive(gc_tensor self);
+raw_tensor atg_pow(gc_tensor self, gc_tensor exponent);
+raw_tensor atg_pow_(gc_tensor self, scalar exponent);
+raw_tensor atg_pow_scalar(scalar self, gc_tensor exponent);
+raw_tensor atg_pow_scalar_out(gc_tensor out, scalar self, gc_tensor exponent);
+raw_tensor atg_pow_tensor_(gc_tensor self, gc_tensor exponent);
+raw_tensor atg_pow_tensor_scalar(gc_tensor self, scalar exponent);
+raw_tensor atg_pow_tensor_scalar_out(gc_tensor out, gc_tensor self, scalar exponent);
+raw_tensor atg_pow_tensor_tensor_out(gc_tensor out, gc_tensor self, gc_tensor exponent);
+raw_tensor atg_prelu(gc_tensor self, gc_tensor weight);
+raw_tensor atg_prod(gc_tensor self, int dtype);
+raw_tensor atg_prod_dim_int(gc_tensor self, int64_t dim, int keepdim, int dtype);
+raw_tensor atg_prod_int_out(gc_tensor out, gc_tensor self, int64_t dim, int keepdim, int dtype);
+raw_tensor atg_prod_out(gc_tensor out, gc_tensor self, int dtype);
+raw_tensor atg_put(gc_tensor self, gc_tensor index, gc_tensor source, int accumulate);
+raw_tensor atg_put_(gc_tensor self, gc_tensor index, gc_tensor source, int accumulate);
+raw_tensor atg_put_out(gc_tensor out, gc_tensor self, gc_tensor index, gc_tensor source, int accumulate);
+int64_t atg_q_per_channel_axis(gc_tensor self);
+raw_tensor atg_q_per_channel_scales(gc_tensor self);
+raw_tensor atg_q_per_channel_scales_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_q_per_channel_zero_points(gc_tensor self);
+raw_tensor atg_q_per_channel_zero_points_out(gc_tensor out, gc_tensor self);
+double atg_q_scale(gc_tensor self);
+int64_t atg_q_zero_point(gc_tensor self);
+void atg_qr(raw_tensor *, gc_tensor self, int some);
+void atg_qr_q(raw_tensor *, gc_tensor Q, gc_tensor R, gc_tensor self, int some);
+raw_tensor atg_quantile(gc_tensor self, gc_tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
+raw_tensor atg_quantile_out(gc_tensor out, gc_tensor self, gc_tensor q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
+raw_tensor atg_quantile_scalar(gc_tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
+raw_tensor atg_quantile_scalar_out(gc_tensor out, gc_tensor self, double q, int64_t dim_v, int dim_null, int keepdim, char * interpolation);
+raw_tensor atg_quantize_per_channel(gc_tensor self, gc_tensor scales, gc_tensor zero_points, int64_t axis, int dtype);
+raw_tensor atg_quantize_per_channel_out(gc_tensor out, gc_tensor self, gc_tensor scales, gc_tensor zero_points, int64_t axis, int dtype);
+raw_tensor atg_quantize_per_tensor(gc_tensor self, double scale, int64_t zero_point, int dtype);
+raw_tensor atg_quantize_per_tensor_dynamic(gc_tensor self, int dtype, int reduce_range);
+raw_tensor atg_quantize_per_tensor_dynamic_out(gc_tensor out, gc_tensor self, int dtype, int reduce_range);
+raw_tensor atg_quantize_per_tensor_out(gc_tensor out, gc_tensor self, double scale, int64_t zero_point, int dtype);
+raw_tensor atg_quantize_per_tensor_tensor_qparams(gc_tensor self, gc_tensor scale, gc_tensor zero_point, int dtype);
+raw_tensor atg_quantize_per_tensor_tensor_qparams_out(gc_tensor out, gc_tensor self, gc_tensor scale, gc_tensor zero_point, int dtype);
+raw_tensor *atg_quantize_per_tensor_tensors(gc_tensor *tensors_data, int tensors_len, gc_tensor scales, gc_tensor zero_points, int dtype);
+void atg_quantize_per_tensor_tensors_out(gc_tensor *out_data, int out_len, gc_tensor *tensors_data, int tensors_len, gc_tensor scales, gc_tensor zero_points, int dtype);
+raw_tensor atg_quantized_batch_norm(gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor mean, gc_tensor var, double eps, double output_scale, int64_t output_zero_point);
+raw_tensor atg_quantized_batch_norm_out(gc_tensor out, gc_tensor input, gc_tensor weight, gc_tensor bias, gc_tensor mean, gc_tensor var, double eps, double output_scale, int64_t output_zero_point);
+raw_tensor atg_quantized_gru_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh, gc_tensor packed_ih, gc_tensor packed_hh, gc_tensor col_offsets_ih, gc_tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh);
+void atg_quantized_lstm_cell(raw_tensor *, gc_tensor input, gc_tensor *hx_data, int hx_len, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh, gc_tensor packed_ih, gc_tensor packed_hh, gc_tensor col_offsets_ih, gc_tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh);
+raw_tensor atg_quantized_max_pool1d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_quantized_max_pool1d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_quantized_max_pool2d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_quantized_max_pool2d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_quantized_max_pool3d(gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_quantized_max_pool3d_out(gc_tensor out, gc_tensor self, int64_t *kernel_size_data, int kernel_size_len, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len, int ceil_mode);
+raw_tensor atg_quantized_rnn_relu_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh, gc_tensor packed_ih, gc_tensor packed_hh, gc_tensor col_offsets_ih, gc_tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh);
+raw_tensor atg_quantized_rnn_tanh_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh, gc_tensor packed_ih, gc_tensor packed_hh, gc_tensor col_offsets_ih, gc_tensor col_offsets_hh, scalar scale_ih, scalar scale_hh, scalar zero_point_ih, scalar zero_point_hh);
+raw_tensor atg_rad2deg(gc_tensor self);
+raw_tensor atg_rad2deg_(gc_tensor self);
+raw_tensor atg_rad2deg_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_rand(int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg_rand_like(gc_tensor self);
+raw_tensor atg_rand_like_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_rand_out(gc_tensor out, int64_t *size_data, int size_len);
+raw_tensor atg_randint(int64_t high, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg_randint_like(gc_tensor self, int64_t high);
+raw_tensor atg_randint_like_low_dtype(gc_tensor self, int64_t low, int64_t high);
+raw_tensor atg_randint_like_low_dtype_out(gc_tensor out, gc_tensor self, int64_t low, int64_t high);
+raw_tensor atg_randint_like_out(gc_tensor out, gc_tensor self, int64_t high);
+raw_tensor atg_randint_low(int64_t low, int64_t high, int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg_randint_low_out(gc_tensor out, int64_t low, int64_t high, int64_t *size_data, int size_len);
+raw_tensor atg_randint_out(gc_tensor out, int64_t high, int64_t *size_data, int size_len);
+raw_tensor atg_randn(int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg_randn_like(gc_tensor self);
+raw_tensor atg_randn_like_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_randn_out(gc_tensor out, int64_t *size_data, int size_len);
+raw_tensor atg_random(gc_tensor self);
+raw_tensor atg_random_(gc_tensor self);
+raw_tensor atg_random_from(gc_tensor self, int64_t from, int64_t to_v, int to_null);
+raw_tensor atg_random_from_(gc_tensor self, int64_t from, int64_t to_v, int to_null);
+raw_tensor atg_random_from_out(gc_tensor out, gc_tensor self, int64_t from, int64_t to_v, int to_null);
+raw_tensor atg_random_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_random_to(gc_tensor self, int64_t to);
+raw_tensor atg_random_to_(gc_tensor self, int64_t to);
+raw_tensor atg_random_to_out(gc_tensor out, gc_tensor self, int64_t to);
+raw_tensor atg_randperm(int64_t n, int options_kind, int options_device);
+raw_tensor atg_randperm_out(gc_tensor out, int64_t n);
+raw_tensor atg_range(scalar start, scalar end, int options_kind, int options_device);
+raw_tensor atg_range_out(gc_tensor out, scalar start, scalar end);
+raw_tensor atg_range_out_(gc_tensor out, scalar start, scalar end);
+raw_tensor atg_range_step(scalar start, scalar end, int options_kind, int options_device);
+raw_tensor atg_ravel(gc_tensor self);
+raw_tensor atg_real(gc_tensor self);
+raw_tensor atg_reciprocal(gc_tensor self);
+raw_tensor atg_reciprocal_(gc_tensor self);
+raw_tensor atg_reciprocal_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_reflection_pad1d(gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad1d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad1d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad2d(gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad2d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad3d(gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad3d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_reflection_pad3d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_relu(gc_tensor self);
+raw_tensor atg_relu6(gc_tensor self);
+raw_tensor atg_relu6_(gc_tensor self);
+raw_tensor atg_relu_(gc_tensor self);
+raw_tensor atg_relu_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_remainder(gc_tensor self, scalar other);
+raw_tensor atg_remainder_(gc_tensor self, scalar other);
+raw_tensor atg_remainder_scalar_out(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_remainder_scalar_tensor(scalar self, gc_tensor other);
+raw_tensor atg_remainder_scalar_tensor_out(gc_tensor out, scalar self, gc_tensor other);
+raw_tensor atg_remainder_tensor(gc_tensor self, gc_tensor other);
+raw_tensor atg_remainder_tensor_(gc_tensor self, gc_tensor other);
+raw_tensor atg_remainder_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_renorm(gc_tensor self, scalar p, int64_t dim, scalar maxnorm);
+raw_tensor atg_renorm_(gc_tensor self, scalar p, int64_t dim, scalar maxnorm);
+raw_tensor atg_renorm_out(gc_tensor out, gc_tensor self, scalar p, int64_t dim, scalar maxnorm);
+raw_tensor atg_repeat(gc_tensor self, int64_t *repeats_data, int repeats_len);
+raw_tensor atg_repeat_interleave(gc_tensor repeats, int64_t output_size_v, int output_size_null);
+raw_tensor atg_repeat_interleave_self_int(gc_tensor self, int64_t repeats, int64_t dim_v, int dim_null, int64_t output_size_v, int output_size_null);
+raw_tensor atg_repeat_interleave_self_tensor(gc_tensor self, gc_tensor repeats, int64_t dim_v, int dim_null, int64_t output_size_v, int output_size_null);
+raw_tensor atg_repeat_interleave_tensor_out(gc_tensor out, gc_tensor repeats, int64_t output_size_v, int output_size_null);
+raw_tensor atg_repeat_out(gc_tensor out, gc_tensor self, int64_t *repeats_data, int repeats_len);
+raw_tensor atg_replication_pad1d(gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_replication_pad1d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len);
+raw_tensor atg_replication_pad1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_replication_pad1d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_replication_pad2d(gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_replication_pad2d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_replication_pad2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_replication_pad2d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_replication_pad3d(gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_replication_pad3d_backward(gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_replication_pad3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_replication_pad3d_out(gc_tensor out, gc_tensor self, int64_t *padding_data, int padding_len); +raw_tensor atg_requires_grad_(gc_tensor self, int requires_grad); +raw_tensor atg_reshape(gc_tensor self, int64_t *shape_data, int shape_len); +raw_tensor atg_reshape_as(gc_tensor self, gc_tensor other); +raw_tensor atg_resize(gc_tensor self, int64_t *size_data, int size_len); +raw_tensor atg_resize_(gc_tensor self, int64_t *size_data, int size_len); +raw_tensor atg_resize_as(gc_tensor self, gc_tensor the_template); +raw_tensor atg_resize_as_(gc_tensor self, gc_tensor the_template); +raw_tensor atg_resize_as_out(gc_tensor out, gc_tensor self, gc_tensor the_template); +raw_tensor atg_resize_as_sparse(gc_tensor self, gc_tensor the_template); +raw_tensor atg_resize_as_sparse_(gc_tensor self, gc_tensor the_template); +raw_tensor atg_resize_as_sparse_out(gc_tensor out, gc_tensor self, gc_tensor the_template); +raw_tensor atg_resize_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len); +raw_tensor atg_resolve_conj(gc_tensor self); +raw_tensor atg_resolve_neg(gc_tensor self); +int atg_retains_grad(gc_tensor self); +void atg_rnn_relu(raw_tensor *, gc_tensor input, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first); +raw_tensor atg_rnn_relu_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh); +void atg_rnn_relu_data(raw_tensor *, gc_tensor data, gc_tensor batch_sizes, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional); +void atg_rnn_tanh(raw_tensor *, gc_tensor input, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional, int batch_first); +raw_tensor atg_rnn_tanh_cell(gc_tensor input, gc_tensor hx, gc_tensor w_ih, gc_tensor w_hh, gc_tensor b_ih, gc_tensor b_hh); +void atg_rnn_tanh_data(raw_tensor *, gc_tensor data, gc_tensor batch_sizes, gc_tensor hx, gc_tensor *params_data, int params_len, int has_biases, int64_t num_layers, double dropout, int train, int bidirectional); +raw_tensor atg_roll(gc_tensor self, int64_t *shifts_data, int shifts_len, int64_t *dims_data, int dims_len); +raw_tensor atg_roll_out(gc_tensor out, gc_tensor self, int64_t *shifts_data, int 
shifts_len, int64_t *dims_data, int dims_len); +raw_tensor atg_rot90(gc_tensor self, int64_t k, int64_t *dims_data, int dims_len); +raw_tensor atg_rot90_out(gc_tensor out, gc_tensor self, int64_t k, int64_t *dims_data, int dims_len); +raw_tensor atg_round(gc_tensor self); +raw_tensor atg_round_(gc_tensor self); +raw_tensor atg_round_decimals(gc_tensor self, int64_t decimals); +raw_tensor atg_round_decimals_(gc_tensor self, int64_t decimals); +raw_tensor atg_round_decimals_out(gc_tensor out, gc_tensor self, int64_t decimals); +raw_tensor atg_round_out(gc_tensor out, gc_tensor self); +raw_tensor atg_row_indices(gc_tensor self); +raw_tensor atg_row_indices_copy(gc_tensor self); +raw_tensor atg_row_indices_copy_out(gc_tensor out, gc_tensor self); +raw_tensor atg_row_stack(gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_row_stack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_rrelu(gc_tensor self, int training); +raw_tensor atg_rrelu_(gc_tensor self, int training); +raw_tensor atg_rrelu_with_noise(gc_tensor self, gc_tensor noise, int training); +raw_tensor atg_rrelu_with_noise_(gc_tensor self, gc_tensor noise, int training); +raw_tensor atg_rrelu_with_noise_backward(gc_tensor grad_output, gc_tensor self, gc_tensor noise, scalar lower, scalar upper, int training, int self_is_result); +raw_tensor atg_rrelu_with_noise_backward_out(gc_tensor out, gc_tensor grad_output, gc_tensor self, gc_tensor noise, scalar lower, scalar upper, int training, int self_is_result); +raw_tensor atg_rrelu_with_noise_out(gc_tensor out, gc_tensor self, gc_tensor noise, int training); +raw_tensor atg_rsqrt(gc_tensor self); +raw_tensor atg_rsqrt_(gc_tensor self); +raw_tensor atg_rsqrt_out(gc_tensor out, gc_tensor self); +raw_tensor atg_rsub(gc_tensor self, gc_tensor other); +raw_tensor atg_rsub_scalar(gc_tensor self, scalar other); +raw_tensor atg_rsub_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_rsub_tensor_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_scalar_tensor(scalar s, int options_kind, int options_device); +raw_tensor atg_scalar_tensor_out(gc_tensor out, scalar s); +raw_tensor atg_scaled_dot_product_attention(gc_tensor query, gc_tensor key, gc_tensor value, gc_tensor attn_mask, double dropout_p, int is_causal, double scale_v, int scale_null); +raw_tensor atg_scatter(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src); +raw_tensor atg_scatter_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src); +raw_tensor atg_scatter_add(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src); +raw_tensor atg_scatter_add_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src); +raw_tensor atg_scatter_add_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src); +raw_tensor atg_scatter_reduce(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce); +raw_tensor atg_scatter_reduce_(gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce); +raw_tensor atg_scatter_reduce_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src, char * reduce); +raw_tensor atg_scatter_src_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, gc_tensor src); +raw_tensor atg_scatter_value(gc_tensor self, int64_t dim, gc_tensor index, scalar value); +raw_tensor atg_scatter_value_(gc_tensor self, int64_t dim, gc_tensor index, scalar value); +raw_tensor atg_scatter_value_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, scalar 
value); +raw_tensor atg_scatter_value_reduce(gc_tensor self, int64_t dim, gc_tensor index, scalar value, char * reduce); +raw_tensor atg_scatter_value_reduce_(gc_tensor self, int64_t dim, gc_tensor index, scalar value, char * reduce); +raw_tensor atg_scatter_value_reduce_out(gc_tensor out, gc_tensor self, int64_t dim, gc_tensor index, scalar value, char * reduce); +raw_tensor atg_searchsorted(gc_tensor sorted_sequence, gc_tensor self, int out_int32, int right, char * side, gc_tensor sorter); +raw_tensor atg_searchsorted_scalar(gc_tensor sorted_sequence, scalar self, int out_int32, int right, char * side, gc_tensor sorter); +raw_tensor atg_searchsorted_scalar_out(gc_tensor out, gc_tensor sorted_sequence, scalar self, int out_int32, int right, char * side, gc_tensor sorter); +raw_tensor atg_searchsorted_tensor_out(gc_tensor out, gc_tensor sorted_sequence, gc_tensor self, int out_int32, int right, char * side, gc_tensor sorter); +raw_tensor atg_segment_reduce(gc_tensor data, char * reduce, gc_tensor lengths, gc_tensor indices, gc_tensor offsets, int64_t axis, int unsafe, scalar initial); +raw_tensor atg_segment_reduce_out(gc_tensor out, gc_tensor data, char * reduce, gc_tensor lengths, gc_tensor indices, gc_tensor offsets, int64_t axis, int unsafe, scalar initial); +raw_tensor atg_select(gc_tensor self, int64_t dim, int64_t index); +raw_tensor atg_select_backward(gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t index); +raw_tensor atg_select_backward_out(gc_tensor out, gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t index); +raw_tensor atg_select_copy(gc_tensor self, int64_t dim, int64_t index); +raw_tensor atg_select_copy_int_out(gc_tensor out, gc_tensor self, int64_t dim, int64_t index); +raw_tensor atg_select_scatter(gc_tensor self, gc_tensor src, int64_t dim, int64_t index); +raw_tensor atg_select_scatter_out(gc_tensor out, gc_tensor self, gc_tensor src, int64_t dim, int64_t index); +raw_tensor atg_selu(gc_tensor self); +raw_tensor atg_selu_(gc_tensor self); +raw_tensor atg_set(gc_tensor self); +raw_tensor atg_set_(gc_tensor self); +raw_tensor atg_set_out(gc_tensor out, gc_tensor self); +raw_tensor atg_set_requires_grad(gc_tensor self, int r); +raw_tensor atg_set_source_tensor(gc_tensor self, gc_tensor source); +raw_tensor atg_set_source_tensor_(gc_tensor self, gc_tensor source); +raw_tensor atg_set_source_tensor_out(gc_tensor out, gc_tensor self, gc_tensor source); +raw_tensor atg_set_source_tensor_storage_offset_(gc_tensor self, gc_tensor source, int64_t storage_offset, int64_t *size_data, int size_len, int64_t *stride_data, int stride_len); +raw_tensor atg_sgn(gc_tensor self); +raw_tensor atg_sgn_(gc_tensor self); +raw_tensor atg_sgn_out(gc_tensor out, gc_tensor self); +raw_tensor atg_sigmoid(gc_tensor self); +raw_tensor atg_sigmoid_(gc_tensor self); +raw_tensor atg_sigmoid_backward(gc_tensor grad_output, gc_tensor output); +raw_tensor atg_sigmoid_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor output); +raw_tensor atg_sigmoid_out(gc_tensor out, gc_tensor self); +raw_tensor atg_sign(gc_tensor self); +raw_tensor atg_sign_(gc_tensor self); +raw_tensor atg_sign_out(gc_tensor out, gc_tensor self); +raw_tensor atg_signbit(gc_tensor self); +raw_tensor atg_signbit_out(gc_tensor out, gc_tensor self); +raw_tensor atg_silu(gc_tensor self); +raw_tensor atg_silu_(gc_tensor self); +raw_tensor atg_silu_backward(gc_tensor grad_output, gc_tensor self); +raw_tensor 
atg_silu_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self); +raw_tensor atg_silu_out(gc_tensor out, gc_tensor self); +raw_tensor atg_sin(gc_tensor self); +raw_tensor atg_sin_(gc_tensor self); +raw_tensor atg_sin_out(gc_tensor out, gc_tensor self); +raw_tensor atg_sinc(gc_tensor self); +raw_tensor atg_sinc_(gc_tensor self); +raw_tensor atg_sinc_out(gc_tensor out, gc_tensor self); +raw_tensor atg_sinh(gc_tensor self); +raw_tensor atg_sinh_(gc_tensor self); +raw_tensor atg_sinh_out(gc_tensor out, gc_tensor self); +int64_t atg_size(gc_tensor self, int64_t dim); +raw_tensor atg_slice(gc_tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step); +raw_tensor atg_slice_backward(gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t start, int64_t end, int64_t step); +raw_tensor atg_slice_backward_out(gc_tensor out, gc_tensor grad_output, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t start, int64_t end, int64_t step); +raw_tensor atg_slice_copy(gc_tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step); +raw_tensor atg_slice_copy_tensor_out(gc_tensor out, gc_tensor self, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step); +raw_tensor atg_slice_scatter(gc_tensor self, gc_tensor src, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step); +raw_tensor atg_slice_scatter_out(gc_tensor out, gc_tensor self, gc_tensor src, int64_t dim, int64_t start_v, int start_null, int64_t end_v, int end_null, int64_t step); +void atg_slogdet(raw_tensor *, gc_tensor self); +void atg_slogdet_out(raw_tensor *, gc_tensor sign, gc_tensor logabsdet, gc_tensor self); +raw_tensor atg_slow_conv3d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len); +raw_tensor atg_slow_conv3d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len); +raw_tensor atg_slow_conv_dilated2d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_slow_conv_dilated2d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_slow_conv_dilated3d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_slow_conv_dilated3d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_slow_conv_transpose2d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int 
output_padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_slow_conv_transpose2d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_slow_conv_transpose3d(gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_slow_conv_transpose3d_out(gc_tensor out, gc_tensor self, gc_tensor weight, int64_t *kernel_size_data, int kernel_size_len, gc_tensor bias, int64_t *stride_data, int stride_len, int64_t *padding_data, int padding_len, int64_t *output_padding_data, int output_padding_len, int64_t *dilation_data, int dilation_len); +raw_tensor atg_smm(gc_tensor self, gc_tensor mat2); +raw_tensor atg_smooth_l1_loss(gc_tensor self, gc_tensor target, int64_t reduction, double beta); +raw_tensor atg_smooth_l1_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, double beta); +raw_tensor atg_smooth_l1_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction, double beta); +raw_tensor atg_smooth_l1_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction, double beta); +raw_tensor atg_soft_margin_loss(gc_tensor self, gc_tensor target, int64_t reduction); +raw_tensor atg_soft_margin_loss_backward(gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction); +raw_tensor atg_soft_margin_loss_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, gc_tensor target, int64_t reduction); +raw_tensor atg_soft_margin_loss_out(gc_tensor out, gc_tensor self, gc_tensor target, int64_t reduction); +raw_tensor atg_softmax(gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_softmax_int_out(gc_tensor out, gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_softplus(gc_tensor self); +raw_tensor atg_softplus_backward(gc_tensor grad_output, gc_tensor self, scalar beta, scalar threshold); +raw_tensor atg_softplus_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, scalar beta, scalar threshold); +raw_tensor atg_softplus_out(gc_tensor out, gc_tensor self); +raw_tensor atg_softshrink(gc_tensor self); +raw_tensor atg_softshrink_backward(gc_tensor grad_output, gc_tensor self, scalar lambd); +raw_tensor atg_softshrink_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, scalar lambd); +raw_tensor atg_softshrink_out(gc_tensor out, gc_tensor self); +void atg_sort(raw_tensor *, gc_tensor self, int64_t dim, int descending); +void atg_sort_stable(raw_tensor *, gc_tensor self, int stable, int64_t dim, int descending); +void atg_sort_values(raw_tensor *, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t dim, int descending); +void atg_sort_values_stable(raw_tensor *, gc_tensor values, gc_tensor indices, gc_tensor self, int stable, int64_t dim, int descending); +raw_tensor atg_sparse_bsc_tensor(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int options_kind, int options_device); +raw_tensor atg_sparse_bsc_tensor_ccol_row_value_size(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor 
values, int64_t *size_data, int size_len, int options_kind, int options_device); +raw_tensor atg_sparse_bsr_tensor(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int options_kind, int options_device); +raw_tensor atg_sparse_bsr_tensor_crow_col_value_size(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device); +raw_tensor atg_sparse_compressed_tensor(gc_tensor compressed_indices, gc_tensor plain_indices, gc_tensor values, int options_kind, int options_device); +raw_tensor atg_sparse_compressed_tensor_comp_plain_value_size(gc_tensor compressed_indices, gc_tensor plain_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device); +raw_tensor atg_sparse_coo_tensor(int64_t *size_data, int size_len, int options_kind, int options_device); +raw_tensor atg_sparse_coo_tensor_indices(gc_tensor indices, gc_tensor values, int options_kind, int options_device, int is_coalesced); +raw_tensor atg_sparse_coo_tensor_indices_size(gc_tensor indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device, int is_coalesced); +raw_tensor atg_sparse_coo_tensor_size_out(gc_tensor out, int64_t *size_data, int size_len); +raw_tensor atg_sparse_csc_tensor(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int options_kind, int options_device); +raw_tensor atg_sparse_csc_tensor_ccol_row_value_size(gc_tensor ccol_indices, gc_tensor row_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device); +raw_tensor atg_sparse_csr_tensor(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int options_kind, int options_device); +raw_tensor atg_sparse_csr_tensor_crow_col_value_size(gc_tensor crow_indices, gc_tensor col_indices, gc_tensor values, int64_t *size_data, int size_len, int options_kind, int options_device); +int64_t atg_sparse_dim(gc_tensor self); +raw_tensor atg_sparse_mask(gc_tensor self, gc_tensor mask); +raw_tensor atg_sparse_mask_out(gc_tensor out, gc_tensor self, gc_tensor mask); +raw_tensor atg_sparse_resize(gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim); +raw_tensor atg_sparse_resize_(gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim); +raw_tensor atg_sparse_resize_and_clear(gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim); +raw_tensor atg_sparse_resize_and_clear_(gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim); +raw_tensor atg_sparse_resize_and_clear_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim); +raw_tensor atg_sparse_resize_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len, int64_t sparse_dim, int64_t dense_dim); +raw_tensor atg_sparse_sampled_addmm(gc_tensor self, gc_tensor mat1, gc_tensor mat2); +raw_tensor atg_sparse_sampled_addmm_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2); +raw_tensor atg_special_airy_ai(gc_tensor x); +raw_tensor atg_special_airy_ai_out(gc_tensor out, gc_tensor x); +raw_tensor atg_special_bessel_j0(gc_tensor self); +raw_tensor atg_special_bessel_j0_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_bessel_j1(gc_tensor self); +raw_tensor atg_special_bessel_j1_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_bessel_y0(gc_tensor self); +raw_tensor 
atg_special_bessel_y0_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_bessel_y1(gc_tensor self); +raw_tensor atg_special_bessel_y1_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_chebyshev_polynomial_t(gc_tensor x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_t_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_chebyshev_polynomial_t_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_chebyshev_polynomial_t_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_t_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_t_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_u(gc_tensor x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_u_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_chebyshev_polynomial_u_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_chebyshev_polynomial_u_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_u_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_u_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_v(gc_tensor x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_v_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_chebyshev_polynomial_v_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_chebyshev_polynomial_v_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_v_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_v_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_w(gc_tensor x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_w_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_chebyshev_polynomial_w_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_chebyshev_polynomial_w_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_w_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_chebyshev_polynomial_w_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_digamma(gc_tensor self); +raw_tensor atg_special_digamma_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_entr(gc_tensor self); +raw_tensor atg_special_entr_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_erf(gc_tensor self); +raw_tensor atg_special_erf_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_erfc(gc_tensor self); +raw_tensor atg_special_erfc_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_erfcx(gc_tensor self); +raw_tensor atg_special_erfcx_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_erfinv(gc_tensor self); +raw_tensor atg_special_erfinv_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_exp2(gc_tensor self); +raw_tensor atg_special_exp2_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_expit(gc_tensor self); +raw_tensor atg_special_expit_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_expm1(gc_tensor self); +raw_tensor atg_special_expm1_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_gammainc(gc_tensor self, gc_tensor other); +raw_tensor atg_special_gammainc_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_special_gammaincc(gc_tensor self, gc_tensor other); +raw_tensor 
atg_special_gammaincc_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_special_gammaln(gc_tensor self); +raw_tensor atg_special_gammaln_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_hermite_polynomial_h(gc_tensor x, gc_tensor n); +raw_tensor atg_special_hermite_polynomial_h_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_hermite_polynomial_h_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_hermite_polynomial_h_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_hermite_polynomial_h_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_hermite_polynomial_h_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_hermite_polynomial_he(gc_tensor x, gc_tensor n); +raw_tensor atg_special_hermite_polynomial_he_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_hermite_polynomial_he_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_hermite_polynomial_he_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_hermite_polynomial_he_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_hermite_polynomial_he_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_i0(gc_tensor self); +raw_tensor atg_special_i0_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_i0e(gc_tensor self); +raw_tensor atg_special_i0e_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_i1(gc_tensor self); +raw_tensor atg_special_i1_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_i1e(gc_tensor self); +raw_tensor atg_special_i1e_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_laguerre_polynomial_l(gc_tensor x, gc_tensor n); +raw_tensor atg_special_laguerre_polynomial_l_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_laguerre_polynomial_l_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_laguerre_polynomial_l_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_laguerre_polynomial_l_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_laguerre_polynomial_l_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_legendre_polynomial_p(gc_tensor x, gc_tensor n); +raw_tensor atg_special_legendre_polynomial_p_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_legendre_polynomial_p_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_legendre_polynomial_p_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_legendre_polynomial_p_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_legendre_polynomial_p_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_log1p(gc_tensor self); +raw_tensor atg_special_log1p_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_log_ndtr(gc_tensor self); +raw_tensor atg_special_log_ndtr_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_log_softmax(gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_special_logit(gc_tensor self, double eps_v, int eps_null); +raw_tensor atg_special_logit_out(gc_tensor out, gc_tensor self, double eps_v, int eps_null); +raw_tensor atg_special_logsumexp(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +raw_tensor atg_special_logsumexp_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim); +raw_tensor atg_special_modified_bessel_i0(gc_tensor self); +raw_tensor atg_special_modified_bessel_i0_out(gc_tensor out, gc_tensor self); +raw_tensor 
atg_special_modified_bessel_i1(gc_tensor self); +raw_tensor atg_special_modified_bessel_i1_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_modified_bessel_k0(gc_tensor self); +raw_tensor atg_special_modified_bessel_k0_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_modified_bessel_k1(gc_tensor self); +raw_tensor atg_special_modified_bessel_k1_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_multigammaln(gc_tensor self, int64_t p); +raw_tensor atg_special_multigammaln_out(gc_tensor out, gc_tensor self, int64_t p); +raw_tensor atg_special_ndtr(gc_tensor self); +raw_tensor atg_special_ndtr_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_ndtri(gc_tensor self); +raw_tensor atg_special_ndtri_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_polygamma(int64_t n, gc_tensor self); +raw_tensor atg_special_polygamma_out(gc_tensor out, int64_t n, gc_tensor self); +raw_tensor atg_special_psi(gc_tensor self); +raw_tensor atg_special_psi_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_round(gc_tensor self, int64_t decimals); +raw_tensor atg_special_round_out(gc_tensor out, gc_tensor self, int64_t decimals); +raw_tensor atg_special_scaled_modified_bessel_k0(gc_tensor x); +raw_tensor atg_special_scaled_modified_bessel_k0_out(gc_tensor out, gc_tensor x); +raw_tensor atg_special_scaled_modified_bessel_k1(gc_tensor x); +raw_tensor atg_special_scaled_modified_bessel_k1_out(gc_tensor out, gc_tensor x); +raw_tensor atg_special_shifted_chebyshev_polynomial_t(gc_tensor x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_t_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_shifted_chebyshev_polynomial_t_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_shifted_chebyshev_polynomial_t_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_t_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_t_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_u(gc_tensor x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_u_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_shifted_chebyshev_polynomial_u_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_shifted_chebyshev_polynomial_u_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_u_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_u_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_v(gc_tensor x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_v_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_shifted_chebyshev_polynomial_v_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_shifted_chebyshev_polynomial_v_out(gc_tensor out, gc_tensor x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_v_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_v_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_w(gc_tensor x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_w_n_scalar(gc_tensor x, scalar n); +raw_tensor atg_special_shifted_chebyshev_polynomial_w_n_scalar_out(gc_tensor out, gc_tensor x, scalar n); +raw_tensor atg_special_shifted_chebyshev_polynomial_w_out(gc_tensor out, gc_tensor 
x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_w_x_scalar(scalar x, gc_tensor n); +raw_tensor atg_special_shifted_chebyshev_polynomial_w_x_scalar_out(gc_tensor out, scalar x, gc_tensor n); +raw_tensor atg_special_sinc(gc_tensor self); +raw_tensor atg_special_sinc_out(gc_tensor out, gc_tensor self); +raw_tensor atg_special_softmax(gc_tensor self, int64_t dim, int dtype); +raw_tensor atg_special_spherical_bessel_j0(gc_tensor x); +raw_tensor atg_special_spherical_bessel_j0_out(gc_tensor out, gc_tensor x); +raw_tensor atg_special_xlog1py(gc_tensor self, gc_tensor other); +raw_tensor atg_special_xlog1py_other_scalar(gc_tensor self, scalar other); +raw_tensor atg_special_xlog1py_other_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_special_xlog1py_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_special_xlog1py_self_scalar(scalar self, gc_tensor other); +raw_tensor atg_special_xlog1py_self_scalar_out(gc_tensor out, scalar self, gc_tensor other); +raw_tensor atg_special_xlogy(gc_tensor self, gc_tensor other); +raw_tensor atg_special_xlogy_other_scalar(gc_tensor self, scalar other); +raw_tensor atg_special_xlogy_other_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_special_xlogy_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_special_xlogy_self_scalar(scalar self, gc_tensor other); +raw_tensor atg_special_xlogy_self_scalar_out(gc_tensor out, scalar self, gc_tensor other); +raw_tensor atg_special_zeta(gc_tensor self, gc_tensor other); +raw_tensor atg_special_zeta_other_scalar(gc_tensor self, scalar other); +raw_tensor atg_special_zeta_other_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_special_zeta_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_special_zeta_self_scalar(scalar self, gc_tensor other); +raw_tensor atg_special_zeta_self_scalar_out(gc_tensor out, scalar self, gc_tensor other); +raw_tensor *atg_split(gc_tensor self, int64_t split_size, int64_t dim); +raw_tensor *atg_split_copy(gc_tensor self, int64_t split_size, int64_t dim); +void atg_split_copy_tensor_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t split_size, int64_t dim); +raw_tensor *atg_split_sizes(gc_tensor self, int64_t *split_size_data, int split_size_len, int64_t dim); +raw_tensor *atg_split_with_sizes(gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim); +raw_tensor *atg_split_with_sizes_copy(gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim); +void atg_split_with_sizes_copy_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim); +raw_tensor atg_sqrt(gc_tensor self); +raw_tensor atg_sqrt_(gc_tensor self); +raw_tensor atg_sqrt_out(gc_tensor out, gc_tensor self); +raw_tensor atg_square(gc_tensor self); +raw_tensor atg_square_(gc_tensor self); +raw_tensor atg_square_out(gc_tensor out, gc_tensor self); +raw_tensor atg_squeeze(gc_tensor self); +raw_tensor atg_squeeze_(gc_tensor self); +raw_tensor atg_squeeze_copy(gc_tensor self); +raw_tensor atg_squeeze_copy_dim(gc_tensor self, int64_t dim); +raw_tensor atg_squeeze_copy_dim_out(gc_tensor out, gc_tensor self, int64_t dim); +raw_tensor atg_squeeze_copy_dims(gc_tensor self, int64_t *dim_data, int dim_len); +raw_tensor atg_squeeze_copy_dims_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len); +raw_tensor atg_squeeze_copy_out(gc_tensor out, gc_tensor self); +raw_tensor 
atg_squeeze_dim(gc_tensor self, int64_t dim); +raw_tensor atg_squeeze_dim_(gc_tensor self, int64_t dim); +raw_tensor atg_squeeze_dims(gc_tensor self, int64_t *dim_data, int dim_len); +raw_tensor atg_squeeze_dims_(gc_tensor self, int64_t *dim_data, int dim_len); +raw_tensor atg_sspaddmm(gc_tensor self, gc_tensor mat1, gc_tensor mat2); +raw_tensor atg_sspaddmm_out(gc_tensor out, gc_tensor self, gc_tensor mat1, gc_tensor mat2); +raw_tensor atg_stack(gc_tensor *tensors_data, int tensors_len, int64_t dim); +raw_tensor atg_stack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len, int64_t dim); +raw_tensor atg_std(gc_tensor self, int unbiased); +raw_tensor atg_std_correction(gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim); +raw_tensor atg_std_correction_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim); +raw_tensor atg_std_dim(gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim); +void atg_std_mean(raw_tensor *, gc_tensor self, int unbiased); +void atg_std_mean_correction(raw_tensor *, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim); +void atg_std_mean_correction_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim); +void atg_std_mean_dim(raw_tensor *, gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim); +raw_tensor atg_std_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim); +raw_tensor atg_stft(gc_tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, gc_tensor window, int normalized, int onesided, int return_complex); +raw_tensor atg_stft_center(gc_tensor self, int64_t n_fft, int64_t hop_length_v, int hop_length_null, int64_t win_length_v, int win_length_null, gc_tensor window, int center, char * pad_mode, int normalized, int onesided, int return_complex); +int64_t atg_stride(gc_tensor self, int64_t dim); +raw_tensor atg_sub(gc_tensor self, gc_tensor other); +raw_tensor atg_sub_(gc_tensor self, gc_tensor other); +raw_tensor atg_sub_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_sub_scalar(gc_tensor self, scalar other); +raw_tensor atg_sub_scalar_(gc_tensor self, scalar other); +raw_tensor atg_sub_scalar_out(gc_tensor out, gc_tensor self, scalar other); +raw_tensor atg_subtract(gc_tensor self, gc_tensor other); +raw_tensor atg_subtract_(gc_tensor self, gc_tensor other); +raw_tensor atg_subtract_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_subtract_scalar(gc_tensor self, scalar other); +raw_tensor atg_subtract_scalar_(gc_tensor self, scalar other); +raw_tensor atg_sum(gc_tensor self, int dtype); +raw_tensor atg_sum_dim_intlist(gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype); +raw_tensor atg_sum_intlist_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int keepdim, int dtype); +raw_tensor atg_sum_out(gc_tensor out, gc_tensor self, int dtype); +raw_tensor atg_sum_to_size(gc_tensor self, int64_t *size_data, int size_len); +void atg_svd(raw_tensor *, gc_tensor self, int some, int compute_uv); +void atg_svd_u(raw_tensor *, gc_tensor U, gc_tensor S, gc_tensor V, gc_tensor self, int some, int compute_uv); +raw_tensor atg_swapaxes(gc_tensor self, int64_t axis0, int64_t axis1); +raw_tensor atg_swapaxes_(gc_tensor self, int64_t axis0, int64_t axis1); +raw_tensor 
atg_swapdims(gc_tensor self, int64_t dim0, int64_t dim1); +raw_tensor atg_swapdims_(gc_tensor self, int64_t dim0, int64_t dim1); +raw_tensor atg_t(gc_tensor self); +raw_tensor atg_t_(gc_tensor self); +raw_tensor atg_t_copy(gc_tensor self); +raw_tensor atg_t_copy_out(gc_tensor out, gc_tensor self); +raw_tensor atg_take(gc_tensor self, gc_tensor index); +raw_tensor atg_take_along_dim(gc_tensor self, gc_tensor indices, int64_t dim_v, int dim_null); +raw_tensor atg_take_along_dim_out(gc_tensor out, gc_tensor self, gc_tensor indices, int64_t dim_v, int dim_null); +raw_tensor atg_take_out(gc_tensor out, gc_tensor self, gc_tensor index); +raw_tensor atg_tan(gc_tensor self); +raw_tensor atg_tan_(gc_tensor self); +raw_tensor atg_tan_out(gc_tensor out, gc_tensor self); +raw_tensor atg_tanh(gc_tensor self); +raw_tensor atg_tanh_(gc_tensor self); +raw_tensor atg_tanh_backward(gc_tensor grad_output, gc_tensor output); +raw_tensor atg_tanh_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor output); +raw_tensor atg_tanh_out(gc_tensor out, gc_tensor self); +raw_tensor *atg_tensor_split(gc_tensor self, int64_t sections, int64_t dim); +raw_tensor *atg_tensor_split_indices(gc_tensor self, int64_t *indices_data, int indices_len, int64_t dim); +raw_tensor *atg_tensor_split_tensor_indices_or_sections(gc_tensor self, gc_tensor tensor_indices_or_sections, int64_t dim); +raw_tensor atg_tensordot(gc_tensor self, gc_tensor other, int64_t *dims_self_data, int dims_self_len, int64_t *dims_other_data, int dims_other_len); +raw_tensor atg_tensordot_out(gc_tensor out, gc_tensor self, gc_tensor other, int64_t *dims_self_data, int dims_self_len, int64_t *dims_other_data, int dims_other_len); +raw_tensor atg_threshold(gc_tensor self, scalar threshold, scalar value); +raw_tensor atg_threshold_(gc_tensor self, scalar threshold, scalar value); +raw_tensor atg_threshold_backward(gc_tensor grad_output, gc_tensor self, scalar threshold); +raw_tensor atg_threshold_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, gc_tensor self, scalar threshold); +raw_tensor atg_threshold_out(gc_tensor out, gc_tensor self, scalar threshold, scalar value); +raw_tensor atg_tile(gc_tensor self, int64_t *dims_data, int dims_len); +raw_tensor atg_to(gc_tensor self, int device); +raw_tensor atg_to_dense(gc_tensor self, int dtype, int masked_grad); +raw_tensor atg_to_dense_backward(gc_tensor grad, gc_tensor input, int masked_grad); +raw_tensor atg_to_device(gc_tensor self, int device, int dtype, int non_blocking, int copy); +raw_tensor atg_to_dtype(gc_tensor self, int dtype, int non_blocking, int copy); +raw_tensor atg_to_dtype_layout(gc_tensor self, int options_kind, int options_device, int non_blocking, int copy); +raw_tensor atg_to_mkldnn(gc_tensor self, int dtype); +raw_tensor atg_to_mkldnn_backward(gc_tensor grad, gc_tensor input); +raw_tensor atg_to_mkldnn_out(gc_tensor out, gc_tensor self, int dtype); +raw_tensor atg_to_other(gc_tensor self, gc_tensor other, int non_blocking, int copy); +raw_tensor atg_to_padded_tensor(gc_tensor self, double padding, int64_t *output_size_data, int output_size_len); +raw_tensor atg_to_padded_tensor_out(gc_tensor out, gc_tensor self, double padding, int64_t *output_size_data, int output_size_len); +void atg_topk(raw_tensor *, gc_tensor self, int64_t k, int64_t dim, int largest, int sorted); +void atg_topk_values(raw_tensor *, gc_tensor values, gc_tensor indices, gc_tensor self, int64_t k, int64_t dim, int largest, int sorted); +raw_tensor atg_totype(gc_tensor self, int 
scalar_type); +raw_tensor atg_trace(gc_tensor self); +raw_tensor atg_trace_backward(gc_tensor grad, int64_t *sizes_data, int sizes_len); +raw_tensor atg_trace_out(gc_tensor out, gc_tensor self); +raw_tensor atg_transpose(gc_tensor self, int64_t dim0, int64_t dim1); +raw_tensor atg_transpose_(gc_tensor self, int64_t dim0, int64_t dim1); +raw_tensor atg_transpose_copy(gc_tensor self, int64_t dim0, int64_t dim1); +raw_tensor atg_transpose_copy_int_out(gc_tensor out, gc_tensor self, int64_t dim0, int64_t dim1); +raw_tensor atg_trapezoid(gc_tensor y, int64_t dim); +raw_tensor atg_trapezoid_x(gc_tensor y, gc_tensor x, int64_t dim); +raw_tensor atg_trapz(gc_tensor y, gc_tensor x, int64_t dim); +raw_tensor atg_trapz_dx(gc_tensor y, double dx, int64_t dim); +void atg_triangular_solve(raw_tensor *, gc_tensor self, gc_tensor A, int upper, int transpose, int unitriangular); +void atg_triangular_solve_x(raw_tensor *, gc_tensor X, gc_tensor M, gc_tensor self, gc_tensor A, int upper, int transpose, int unitriangular); +raw_tensor atg_tril(gc_tensor self, int64_t diagonal); +raw_tensor atg_tril_(gc_tensor self, int64_t diagonal); +raw_tensor atg_tril_indices(int64_t row, int64_t col, int64_t offset, int options_kind, int options_device); +raw_tensor atg_tril_indices_out(gc_tensor out, int64_t row, int64_t col, int64_t offset); +raw_tensor atg_tril_out(gc_tensor out, gc_tensor self, int64_t diagonal); +raw_tensor atg_triplet_margin_loss(gc_tensor anchor, gc_tensor positive, gc_tensor negative, double margin, double p, double eps, int swap, int64_t reduction); +raw_tensor atg_triu(gc_tensor self, int64_t diagonal); +raw_tensor atg_triu_(gc_tensor self, int64_t diagonal); +raw_tensor atg_triu_indices(int64_t row, int64_t col, int64_t offset, int options_kind, int options_device); +raw_tensor atg_triu_indices_out(gc_tensor out, int64_t row, int64_t col, int64_t offset); +raw_tensor atg_triu_out(gc_tensor out, gc_tensor self, int64_t diagonal); +raw_tensor atg_true_divide(gc_tensor self, gc_tensor other); +raw_tensor atg_true_divide_(gc_tensor self, gc_tensor other); +raw_tensor atg_true_divide_out(gc_tensor out, gc_tensor self, gc_tensor other); +raw_tensor atg_true_divide_scalar(gc_tensor self, scalar other); +raw_tensor atg_true_divide_scalar_(gc_tensor self, scalar other); +raw_tensor atg_trunc(gc_tensor self); +raw_tensor atg_trunc_(gc_tensor self); +raw_tensor atg_trunc_out(gc_tensor out, gc_tensor self); +raw_tensor atg_type_as(gc_tensor self, gc_tensor other); +raw_tensor *atg_unbind(gc_tensor self, int64_t dim); +raw_tensor *atg_unbind_copy(gc_tensor self, int64_t dim); +void atg_unbind_copy_int_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t dim); +raw_tensor atg_unflatten(gc_tensor self, int64_t dim, int64_t *sizes_data, int sizes_len); +raw_tensor *atg_unflatten_dense_tensors(gc_tensor flat, gc_tensor *tensors_data, int tensors_len); +raw_tensor atg_unfold(gc_tensor self, int64_t dimension, int64_t size, int64_t step); +raw_tensor atg_unfold_backward(gc_tensor grad_in, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t size, int64_t step); +raw_tensor atg_unfold_backward_out(gc_tensor out, gc_tensor grad_in, int64_t *input_sizes_data, int input_sizes_len, int64_t dim, int64_t size, int64_t step); +raw_tensor atg_unfold_copy(gc_tensor self, int64_t dimension, int64_t size, int64_t step); +raw_tensor atg_unfold_copy_out(gc_tensor out, gc_tensor self, int64_t dimension, int64_t size, int64_t step); +raw_tensor atg_uniform(gc_tensor self, double from, double to); 
+raw_tensor atg_uniform_(gc_tensor self, double from, double to); +raw_tensor atg_uniform_out(gc_tensor out, gc_tensor self, double from, double to); +void atg_unique_consecutive(raw_tensor *, gc_tensor self, int return_inverse, int return_counts, int64_t dim_v, int dim_null); +void atg_unique_consecutive_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor self, int return_inverse, int return_counts, int64_t dim_v, int dim_null); +void atg_unique_dim(raw_tensor *, gc_tensor self, int64_t dim, int sorted, int return_inverse, int return_counts); +void atg_unique_dim_consecutive(raw_tensor *, gc_tensor self, int64_t dim, int return_inverse, int return_counts); +void atg_unique_dim_consecutive_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor self, int64_t dim, int return_inverse, int return_counts); +void atg_unique_dim_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor out2, gc_tensor self, int64_t dim, int sorted, int return_inverse, int return_counts); +raw_tensor *atg_unsafe_chunk(gc_tensor self, int64_t chunks, int64_t dim); +raw_tensor *atg_unsafe_split(gc_tensor self, int64_t split_size, int64_t dim); +void atg_unsafe_split_tensor_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t split_size, int64_t dim); +raw_tensor *atg_unsafe_split_with_sizes(gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim); +void atg_unsafe_split_with_sizes_out(gc_tensor *out_data, int out_len, gc_tensor self, int64_t *split_sizes_data, int split_sizes_len, int64_t dim); +raw_tensor atg_unsqueeze(gc_tensor self, int64_t dim); +raw_tensor atg_unsqueeze_(gc_tensor self, int64_t dim); +raw_tensor atg_unsqueeze_copy(gc_tensor self, int64_t dim); +raw_tensor atg_unsqueeze_copy_out(gc_tensor out, gc_tensor self, int64_t dim); +raw_tensor atg_upsample_bicubic2d(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_bicubic2d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_bicubic2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_bicubic2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_bicubic2d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len); +raw_tensor atg_upsample_bilinear2d(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_bilinear2d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_bilinear2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int 
output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_bilinear2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_bilinear2d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len); +raw_tensor atg_upsample_linear1d(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_v, int scales_null); +raw_tensor atg_upsample_linear1d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_v, int scales_null); +raw_tensor atg_upsample_linear1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_v, int scales_null); +raw_tensor atg_upsample_linear1d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_v, int scales_null); +raw_tensor atg_upsample_linear1d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len); +raw_tensor atg_upsample_nearest1d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null); +raw_tensor atg_upsample_nearest1d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null); +raw_tensor atg_upsample_nearest1d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_v, int scales_null); +raw_tensor atg_upsample_nearest1d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_v, int scales_null); +raw_tensor atg_upsample_nearest1d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len); +raw_tensor atg_upsample_nearest2d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_nearest2d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_nearest2d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_nearest2d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_nearest2d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len); +raw_tensor atg_upsample_nearest3d(gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, 
double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_nearest3d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_nearest3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_nearest3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_nearest3d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, double *scale_factors_data, int scale_factors_len); +raw_tensor atg_upsample_trilinear3d(gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_trilinear3d_backward(gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_trilinear3d_backward_grad_input(gc_tensor grad_input, gc_tensor grad_output, int64_t *output_size_data, int output_size_len, int64_t *input_size_data, int input_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_trilinear3d_out(gc_tensor out, gc_tensor self, int64_t *output_size_data, int output_size_len, int align_corners, double scales_d_v, int scales_d_null, double scales_h_v, int scales_h_null, double scales_w_v, int scales_w_null); +raw_tensor atg_upsample_trilinear3d_vec(gc_tensor input, int64_t *output_size_data, int output_size_len, int align_corners, double *scale_factors_data, int scale_factors_len); +raw_tensor atg_value_selecting_reduction_backward(gc_tensor grad, int64_t dim, gc_tensor indices, int64_t *sizes_data, int sizes_len, int keepdim); +raw_tensor atg_values(gc_tensor self); +raw_tensor atg_values_copy(gc_tensor self); +raw_tensor atg_values_copy_out(gc_tensor out, gc_tensor self); +raw_tensor atg_vander(gc_tensor x, int64_t n_v, int n_null, int increasing); +raw_tensor atg_var(gc_tensor self, int unbiased); +raw_tensor atg_var_correction(gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim); +raw_tensor atg_var_correction_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim); +raw_tensor atg_var_dim(gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim); +void atg_var_mean(raw_tensor *, gc_tensor self, int unbiased); +void atg_var_mean_correction(raw_tensor *, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim); +void atg_var_mean_correction_out(raw_tensor *, gc_tensor out0, gc_tensor out1, gc_tensor self, int64_t *dim_data, int dim_len, scalar correction, int keepdim); +void atg_var_mean_dim(raw_tensor *, gc_tensor self, int64_t *dim_data, int dim_len, int 
unbiased, int keepdim);
+raw_tensor atg_var_out(gc_tensor out, gc_tensor self, int64_t *dim_data, int dim_len, int unbiased, int keepdim);
+raw_tensor atg_vdot(gc_tensor self, gc_tensor other);
+raw_tensor atg_vdot_out(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_view(gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg_view_as(gc_tensor self, gc_tensor other);
+raw_tensor atg_view_as_complex(gc_tensor self);
+raw_tensor atg_view_as_complex_copy(gc_tensor self);
+raw_tensor atg_view_as_complex_copy_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_view_as_real(gc_tensor self);
+raw_tensor atg_view_as_real_copy(gc_tensor self);
+raw_tensor atg_view_as_real_copy_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_view_copy(gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg_view_copy_dtype(gc_tensor self, int dtype);
+raw_tensor atg_view_copy_dtype_out(gc_tensor out, gc_tensor self, int dtype);
+raw_tensor atg_view_copy_out(gc_tensor out, gc_tensor self, int64_t *size_data, int size_len);
+raw_tensor atg_view_dtype(gc_tensor self, int dtype);
+raw_tensor *atg_vsplit(gc_tensor self, int64_t sections);
+raw_tensor *atg_vsplit_array(gc_tensor self, int64_t *indices_data, int indices_len);
+raw_tensor atg_vstack(gc_tensor *tensors_data, int tensors_len);
+raw_tensor atg_vstack_out(gc_tensor out, gc_tensor *tensors_data, int tensors_len);
+raw_tensor *atg_where(gc_tensor condition);
+raw_tensor atg_where_scalar(gc_tensor condition, scalar self, scalar other);
+raw_tensor atg_where_scalarother(gc_tensor condition, gc_tensor self, scalar other);
+raw_tensor atg_where_scalarself(gc_tensor condition, scalar self, gc_tensor other);
+raw_tensor atg_where_self(gc_tensor condition, gc_tensor self, gc_tensor other);
+raw_tensor atg_where_self_out(gc_tensor out, gc_tensor condition, gc_tensor self, gc_tensor other);
+raw_tensor atg_xlogy(gc_tensor self, gc_tensor other);
+raw_tensor atg_xlogy_(gc_tensor self, gc_tensor other);
+raw_tensor atg_xlogy_outscalar_other(gc_tensor out, gc_tensor self, scalar other);
+raw_tensor atg_xlogy_outscalar_self(gc_tensor out, scalar self, gc_tensor other);
+raw_tensor atg_xlogy_outtensor(gc_tensor out, gc_tensor self, gc_tensor other);
+raw_tensor atg_xlogy_scalar_other(gc_tensor self, scalar other);
+raw_tensor atg_xlogy_scalar_other_(gc_tensor self, scalar other);
+raw_tensor atg_xlogy_scalar_self(scalar self, gc_tensor other);
+raw_tensor atg_zero(gc_tensor self);
+raw_tensor atg_zero_(gc_tensor self);
+raw_tensor atg_zero_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_zeros(int64_t *size_data, int size_len, int options_kind, int options_device);
+raw_tensor atg_zeros_like(gc_tensor self);
+raw_tensor atg_zeros_like_out(gc_tensor out, gc_tensor self);
+raw_tensor atg_zeros_out(gc_tensor out, int64_t *size_data, int size_len);
diff --git a/src/wrapper/torch_stubs.c b/src/wrapper/torch_stubs.c new file mode 100644 index 0000000..d28461f --- /dev/null +++ b/src/wrapper/torch_stubs.c @@ -0,0 +1,10 @@
+#include "torch_api.h"
+#include "ctypes_cstubs_internals.h"
+#include <caml/memory.h>
+
+// returns a voidp (ocaml block storing a void pointer)
+CAMLprim value with_tensor_gc(value raw_tensor_value) {
+  CAMLparam1(raw_tensor_value);
+  raw_tensor raw = CTYPES_ADDR_OF_FATPTR(raw_tensor_value);
+  CAMLreturn(with_tensor_gc_internal(raw));
+}
diff --git a/src/wrapper/torch_stubs.ml b/src/wrapper/torch_stubs.ml new file mode 100644 index 0000000..9f27908 --- /dev/null +++ b/src/wrapper/torch_stubs.ml @@ -0,0 +1,21 @@
+open Ctypes
+open Torch_bindings.Type_defs
Torch_bindings.Type_defs +module C = Torch_bindings.C (Torch_stubs_generated) + +external with_tensor_gc : _ Cstubs_internals.fatptr -> Ctypes_ptr.voidp = "with_tensor_gc" + +let with_tensor_gc (raw : raw_tensor) : gc_tensor = + fatptr_of_raw_tensor raw |> with_tensor_gc |> gc_tensor_of_voidp +;; + +let to_tensor_list (ptr : raw_tensor ptr) = + let rec loop ptr acc = + let tensor : raw_tensor = !@ptr in + if is_none_raw_tensor tensor + then acc + else loop (ptr +@ 1) (with_tensor_gc tensor :: acc) + in + let result = loop ptr [] in + C.free (to_voidp ptr); + List.rev result +;; diff --git a/src/wrapper/wrapper.ml b/src/wrapper/wrapper.ml index 838e17f..c9d0248 100644 --- a/src/wrapper/wrapper.ml +++ b/src/wrapper/wrapper.ml @@ -1,4 +1,6 @@ open Ctypes +open Torch_stubs +open Torch_bindings.Type_defs let ptr_of_string str = let len = String.length str in @@ -19,7 +21,9 @@ module Tensor = struct include Wrapper_generated open! C.Tensor - type nonrec t = t + type t = gc_tensor + + let new_tensor () = new_tensor () |> with_tensor_gc let float_vec ?(kind = `float) values = let values_len = List.length values in @@ -31,8 +35,7 @@ module Tensor = struct | `half -> Kind.T Half in let t = float_vec values values_len (Kind.packed_to_int kind) in - Gc.finalise free t; - t + with_tensor_gc t ;; let int_vec ?(kind = `int) values = @@ -47,8 +50,7 @@ module Tensor = struct | `int64 -> Kind.T Int64 in let t = int_vec values values_len (Kind.packed_to_int kind) in - Gc.finalise free t; - t + with_tensor_gc t ;; let of_bigarray (type a b) (ga : (b, a, Bigarray.c_layout) Bigarray.Genarray.t) = @@ -78,8 +80,7 @@ module Tensor = struct (Bigarray.kind_size_in_bytes kind) (Kind.packed_to_int tensor_kind) in - Gc.finalise free t; - t + with_tensor_gc t ;; let copy_to_bigarray (type a b) t (ga : (b, a, Bigarray.c_layout) Bigarray.Genarray.t) = @@ -135,8 +136,7 @@ module Tensor = struct let get t index = let t = get t index in - Gc.finalise free t; - t + with_tensor_gc t ;; let float_value t = double_value t (from_voidp int null) 0 @@ -184,30 +184,22 @@ module Tensor = struct let defined = defined let device t = device t |> Device.of_int - let new_tensor () = - let t = new_tensor () in - Gc.finalise free t; - t - ;; - let run_backward ?keep_graph ?(create_graph = false) tensors inputs = let keep_graph = match keep_graph with | None -> create_graph | Some keep_graph -> keep_graph in - let out_ = CArray.make t (List.length inputs) in + let out_ = CArray.make raw_tensor (List.length inputs) in run_backward - (CArray.of_list t tensors |> CArray.start) + (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) - (CArray.of_list t inputs |> CArray.start) + (CArray.of_list gc_tensor inputs |> CArray.start) (List.length inputs) (CArray.start out_) (if keep_graph then 1 else 0) (if create_graph then 1 else 0); - let out_ = CArray.to_list out_ in - List.iter (Gc.finalise free) out_; - out_ + List.map with_tensor_gc (CArray.to_list out_) ;; let sum t = sum t ~dtype:(kind t) @@ -215,16 +207,10 @@ module Tensor = struct end module Scalar = struct - module S = Wrapper_generated.C.Scalar - - include ( - S : - module type of struct - include S - end - with type t := S.t) + module S = C.Scalar + include (S : module type of S) - type nonrec _ t = S.t + type nonrec _ t = scalar let int i = let t = int (Int64.of_int i) in @@ -240,7 +226,9 @@ module Scalar = struct end module Optimizer = struct - include Wrapper_generated.C.Optimizer + include C.Optimizer + + type t = optimizer let adam ~learning_rate ~beta1 ~beta2 
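
`Torch_bindings.Type_defs`, opened above, is where `raw_tensor`, `gc_tensor`, and the none/conversion helpers live; it is not part of this diff. A hypothetical minimal version, assuming both tensor flavours are plain void pointers at the FFI boundary (the real module also provides `fatptr_of_raw_tensor` and `gc_tensor_of_voidp`, and keeps both types abstract in its interface so the two cannot be mixed up):

```ocaml
open Ctypes

(* Fresh from C; no finalizer attached yet. *)
type raw_tensor = unit ptr

(* Has a finalizer; safe to hand to user code. *)
type gc_tensor = unit ptr

(* The ctypes descriptions: both are just void pointers on the C side. *)
let raw_tensor : raw_tensor typ = ptr void
let gc_tensor : gc_tensor typ = ptr void

(* The atg_* layer uses a null tensor both for absent optional arguments
   and as the terminator of returned tensor arrays. *)
let none_gc_tensor : gc_tensor = null
let is_none_raw_tensor (t : raw_tensor) = is_null t
```
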
~weight_decay ~eps = let t = adam learning_rate beta1 beta2 weight_decay eps in @@ -262,15 +250,12 @@ module Optimizer = struct ;; let add_parameters t tensors = - add_parameters - t - CArray.(of_list Wrapper_generated.C.Tensor.t tensors |> start) - (List.length tensors) + add_parameters t CArray.(of_list gc_tensor tensors |> start) (List.length tensors) ;; end module Serialize = struct - include Wrapper_generated.C.Serialize + include C.Serialize let save t ~filename = save t filename @@ -290,17 +275,13 @@ module Serialize = struct s ;; - let load ~filename = - let t = load filename in - Gc.finalise Wrapper_generated.C.Tensor.free t; - t - ;; + let load ~filename = load filename |> with_tensor_gc let save_multi ~named_tensors ~filename = let names, tensors = List.split named_tensors in let names = List.map escape names in save_multi - CArray.(of_list Wrapper_generated.C.Tensor.t tensors |> start) + CArray.(of_list gc_tensor tensors |> start) (ptr_of_strings names) (List.length named_tensors) filename @@ -309,10 +290,9 @@ module Serialize = struct let load_multi ~names ~filename = let names = List.map escape names in let ntensors = List.length names in - let tensors = CArray.make Wrapper_generated.C.Tensor.t ntensors in + let tensors = CArray.make raw_tensor ntensors in load_multi (CArray.start tensors) (ptr_of_strings names) ntensors filename; - let tensors = CArray.to_list tensors in - List.iter (Gc.finalise Wrapper_generated.C.Tensor.free) tensors; + let tensors = List.map with_tensor_gc (CArray.to_list tensors) in tensors ;; @@ -320,7 +300,7 @@ module Serialize = struct let names, tensors = List.split named_tensors in let names = List.map escape names in load_multi_ - CArray.(of_list Wrapper_generated.C.Tensor.t tensors |> start) + CArray.(of_list gc_tensor tensors |> start) (ptr_of_strings names) (List.length named_tensors) filename @@ -330,11 +310,10 @@ module Serialize = struct let all_tensors = ref [] in let callback = coerce - (Foreign.funptr (string @-> Wrapper_generated.C.Tensor.t @-> returning void)) - (static_funptr (string @-> Wrapper_generated.C.Tensor.t @-> returning void)) + (Foreign.funptr (string @-> raw_tensor @-> returning void)) + (static_funptr (string @-> raw_tensor @-> returning void)) (fun tensor_name tensor -> - Gc.finalise Wrapper_generated.C.Tensor.free tensor; - all_tensors := (unescape tensor_name, tensor) :: !all_tensors) + all_tensors := (unescape tensor_name, with_tensor_gc tensor) :: !all_tensors) [@alert "-deprecated"] in load_callback filename callback; @@ -343,7 +322,7 @@ module Serialize = struct end module Cuda = struct - include Wrapper_generated.C.Cuda + include C.Cuda let is_available () = is_available () <> 0 let cudnn_is_available () = cudnn_is_available () <> 0 @@ -368,7 +347,9 @@ module Ivalue = struct | GenericDict end - include Wrapper_generated.C.Ivalue + include C.Ivalue + + type t = ivalue let none () = let t = none () in @@ -401,17 +382,13 @@ module Ivalue = struct ;; let tuple ts = - let t = tuple CArray.(of_list t ts |> start) (List.length ts) in + let t = tuple CArray.(of_list ivalue ts |> start) (List.length ts) in Gc.finalise free t; t ;; let tensor_list ts = - let t = - tensor_list - CArray.(of_list Wrapper_generated.C.Tensor.t ts |> start) - (List.length ts) - in + let t = tensor_list CArray.(of_list gc_tensor ts |> start) (List.length ts) in Gc.finalise free t; t ;; @@ -441,16 +418,11 @@ module Ivalue = struct ;; let to_bool t = to_bool t <> 0 - - let to_tensor t = - let tensor = to_tensor t in - Gc.finalise 
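
Some atg_* functions return a whole list of tensors as a `raw_tensor *` (for example `atg_vsplit` in the header above). The C side hands back a C-allocated array terminated by a none tensor; `to_tensor_list` from `Torch_stubs` walks it, converts every element with `with_tensor_gc`, and frees the array, so the caller only ever holds GC tensors. A generated wrapper for such a function would presumably reduce to a single pipe through `to_tensor_list`; the stub name here is an assumption, not a hunk from this patch:

```ocaml
(* Illustrative sketch of a list-returning generated wrapper. *)
let vsplit self ~sections =
  stubs_vsplit self (Int64.of_int sections) |> to_tensor_list
```
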
Wrapper_generated.C.Tensor.free tensor; - tensor - ;; + let to_tensor t = to_tensor t |> with_tensor_gc let to_tuple t = let noutputs = tuple_length t in - let outputs = CArray.make Wrapper_generated.C.Tensor.t noutputs in + let outputs = CArray.make ivalue noutputs in to_tuple t (CArray.start outputs) noutputs; let outputs = CArray.to_list outputs in List.iter (Gc.finalise free) outputs; @@ -459,55 +431,43 @@ module Ivalue = struct let to_tensor_list t = let noutputs = list_length t in - let outputs = CArray.make Wrapper_generated.C.Tensor.t noutputs in - Wrapper_generated.C.Ivalue.to_tensor_list t (CArray.start outputs) noutputs; - let outputs = CArray.to_list outputs in - (* free calls ati_free which destructs an Ivalue. Here we need to destruct a Tensor. *) - List.iter (Gc.finalise Wrapper_generated.C.Tensor.free) outputs; - outputs + let outputs = CArray.make raw_tensor noutputs in + C.Ivalue.to_tensor_list t (CArray.start outputs) noutputs; + CArray.to_list outputs |> List.map with_tensor_gc ;; end module Module = struct - include Wrapper_generated.C.Module + include C.Module + + type t = module_ let forward t tensors = - let tensor = - forward - t - CArray.(of_list Wrapper_generated.C.Tensor.t tensors |> start) - (List.length tensors) - in - Gc.finalise Wrapper_generated.C.Tensor.free tensor; - tensor + forward t CArray.(of_list gc_tensor tensors |> start) (List.length tensors) + |> with_tensor_gc ;; let forward_ t ivalues = let ivalue = - forward_ - t - CArray.(of_list Wrapper_generated.C.Ivalue.t ivalues |> start) - (List.length ivalues) + forward_ t CArray.(of_list ivalue ivalues |> start) (List.length ivalues) in - Gc.finalise Wrapper_generated.C.Ivalue.free ivalue; + Gc.finalise C.Ivalue.free ivalue; ivalue ;; let named_buffers t = let wrapper_ivalue = named_buffers t in - let names_and_tensors = CArray.make Wrapper_generated.C.Ivalue.t 2 in - Wrapper_generated.C.Ivalue.to_tuple wrapper_ivalue (CArray.start names_and_tensors) 2; + let names_and_tensors = CArray.make ivalue 2 in + C.Ivalue.to_tuple wrapper_ivalue (CArray.start names_and_tensors) 2; let names_ivalue = CArray.get names_and_tensors 0 and tensors_ivalue = CArray.get names_and_tensors 1 in - let len = Wrapper_generated.C.Ivalue.list_length tensors_ivalue in - let names = CArray.make Wrapper_generated.C.Ivalue.t len - and tensors = CArray.make Wrapper_generated.C.Ivalue.t len in - Wrapper_generated.C.Ivalue.to_generic_list names_ivalue (CArray.start names) len; - Wrapper_generated.C.Ivalue.to_generic_list tensors_ivalue (CArray.start tensors) len; - let names = names |> CArray.to_list |> List.map Wrapper_generated.C.Ivalue.to_string - and tensors = - tensors |> CArray.to_list |> List.map Wrapper_generated.C.Ivalue.to_tensor - in + let len = C.Ivalue.list_length tensors_ivalue in + let names = CArray.make ivalue len + and tensors = CArray.make ivalue len in + C.Ivalue.to_generic_list names_ivalue (CArray.start names) len; + C.Ivalue.to_generic_list tensors_ivalue (CArray.start tensors) len; + let names = names |> CArray.to_list |> List.map C.Ivalue.to_string + and tensors = tensors |> CArray.to_list |> List.map Ivalue.to_tensor in List.combine names tensors |> Base.Map.of_alist_exn (module Base.String) ;; @@ -524,6 +484,6 @@ module Module = struct ;; end -let manual_seed seed = Wrapper_generated.C.manual_seed (Int64.of_int seed) -let set_num_threads = Wrapper_generated.C.set_num_threads -let get_num_threads = Wrapper_generated.C.get_num_threads +let manual_seed seed = C.manual_seed (Int64.of_int seed) +let 
set_num_threads = C.set_num_threads +let get_num_threads = C.get_num_threads diff --git a/src/wrapper/wrapper_generated.ml b/src/wrapper/wrapper_generated.ml index 5bb7c75..71915e4 100644 --- a/src/wrapper/wrapper_generated.ml +++ b/src/wrapper/wrapper_generated.ml @@ -1,405 +1,147 @@ (* THIS FILE IS AUTOMATICALLY GENERATED, DO NOT EDIT BY HAND! *) open Ctypes -module C = Torch_bindings.C (Torch_generated) -open C.TensorG - -let to_tensor_list ptr = - let rec loop ptr acc = - let tensor = !@ptr in - if is_null tensor - then acc - else ( - Gc.finalise C.Tensor.free tensor; - loop (ptr +@ 1) (tensor :: acc)) - in - let result = loop ptr [] in - C.free (to_voidp ptr); - List.rev result -;; - -let __and__ self other = - let out__ = CArray.make t 1 in - stubs___and__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __and__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___and__tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __iand__ self other = - let out__ = CArray.make t 1 in - stubs___iand__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __iand__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___iand__tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __ilshift__ self other = - let out__ = CArray.make t 1 in - stubs___ilshift__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __ilshift__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___ilshift__tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __ior__ self other = - let out__ = CArray.make t 1 in - stubs___ior__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __ior__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___ior__tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __irshift__ self other = - let out__ = CArray.make t 1 in - stubs___irshift__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __irshift__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___irshift__tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __ixor__ self other = - let out__ = CArray.make t 1 in - stubs___ixor__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __ixor__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___ixor__tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __lshift__ self other = - let out__ = CArray.make t 1 in - stubs___lshift__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +open Torch_bindings.Type_defs +open Torch_stubs +open C.Generated + +let __and__ self other = stubs___and__ self other |> with_tensor_gc +let __and__tensor_ self other = stubs___and__tensor_ self other |> with_tensor_gc +let __iand__ self other = stubs___iand__ self other |> with_tensor_gc +let __iand__tensor_ self other = 
stubs___iand__tensor_ self other |> with_tensor_gc +let __ilshift__ self other = stubs___ilshift__ self other |> with_tensor_gc +let __ilshift__tensor_ self other = stubs___ilshift__tensor_ self other |> with_tensor_gc +let __ior__ self other = stubs___ior__ self other |> with_tensor_gc +let __ior__tensor_ self other = stubs___ior__tensor_ self other |> with_tensor_gc +let __irshift__ self other = stubs___irshift__ self other |> with_tensor_gc +let __irshift__tensor_ self other = stubs___irshift__tensor_ self other |> with_tensor_gc +let __ixor__ self other = stubs___ixor__ self other |> with_tensor_gc +let __ixor__tensor_ self other = stubs___ixor__tensor_ self other |> with_tensor_gc +let __lshift__ self other = stubs___lshift__ self other |> with_tensor_gc let __lshift__scalar_out_ ~out self other = - let out__ = CArray.make t 1 in - stubs___lshift__scalar_out_ (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs___lshift__scalar_out_ out self other |> with_tensor_gc ;; -let __lshift__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___lshift__tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let __lshift__tensor_ self other = stubs___lshift__tensor_ self other |> with_tensor_gc let __lshift__tensor_out_ ~out self other = - let out__ = CArray.make t 1 in - stubs___lshift__tensor_out_ (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __or__ self other = - let out__ = CArray.make t 1 in - stubs___or__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __or__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___or__tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs___lshift__tensor_out_ out self other |> with_tensor_gc ;; -let __rshift__ self other = - let out__ = CArray.make t 1 in - stubs___rshift__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let __or__ self other = stubs___or__ self other |> with_tensor_gc +let __or__tensor_ self other = stubs___or__tensor_ self other |> with_tensor_gc +let __rshift__ self other = stubs___rshift__ self other |> with_tensor_gc let __rshift__scalar_out_ ~out self other = - let out__ = CArray.make t 1 in - stubs___rshift__scalar_out_ (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs___rshift__scalar_out_ out self other |> with_tensor_gc ;; -let __rshift__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___rshift__tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let __rshift__tensor_ self other = stubs___rshift__tensor_ self other |> with_tensor_gc let __rshift__tensor_out_ ~out self other = - let out__ = CArray.make t 1 in - stubs___rshift__tensor_out_ (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let __xor__ self other = - let out__ = CArray.make t 1 in - stubs___xor__ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs___rshift__tensor_out_ out self other |> with_tensor_gc ;; -let __xor__tensor_ self other = - let out__ = CArray.make t 1 in - stubs___xor__tensor_ (CArray.start out__) 
self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let __xor__ self other = stubs___xor__ self other |> with_tensor_gc +let __xor__tensor_ self other = stubs___xor__tensor_ self other |> with_tensor_gc let _adaptive_avg_pool2d self ~output_size = - let out__ = CArray.make t 1 in stubs__adaptive_avg_pool2d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let _adaptive_avg_pool2d_backward ~grad_output self = - let out__ = CArray.make t 1 in - stubs__adaptive_avg_pool2d_backward (CArray.start out__) grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__adaptive_avg_pool2d_backward grad_output self |> with_tensor_gc ;; let _adaptive_avg_pool2d_backward_out ~out ~grad_output self = - let out__ = CArray.make t 1 in - stubs__adaptive_avg_pool2d_backward_out (CArray.start out__) out grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__adaptive_avg_pool2d_backward_out out grad_output self |> with_tensor_gc ;; let _adaptive_avg_pool2d_out ~out self ~output_size = - let out__ = CArray.make t 1 in stubs__adaptive_avg_pool2d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let _adaptive_avg_pool3d self ~output_size = - let out__ = CArray.make t 1 in stubs__adaptive_avg_pool3d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let _adaptive_avg_pool3d_backward ~grad_output self = - let out__ = CArray.make t 1 in - stubs__adaptive_avg_pool3d_backward (CArray.start out__) grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__adaptive_avg_pool3d_backward grad_output self |> with_tensor_gc ;; let _adaptive_avg_pool3d_backward_out ~out ~grad_output self = - let out__ = CArray.make t 1 in - stubs__adaptive_avg_pool3d_backward_out (CArray.start out__) out grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__adaptive_avg_pool3d_backward_out out grad_output self |> with_tensor_gc ;; let _adaptive_avg_pool3d_out ~out self ~output_size = - let out__ = CArray.make t 1 in stubs__adaptive_avg_pool3d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let _add_batch_dim self ~batch_dim ~level = - let out__ = CArray.make t 1 in - stubs__add_batch_dim - (CArray.start out__) - self - (Int64.of_int batch_dim) - (Int64.of_int level); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _add_relu self other = - let out__ = CArray.make t 1 in - stubs__add_relu (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _add_relu_ self other = - let out__ = CArray.make t 1 in - stubs__add_relu_ (CArray.start out__) 
self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _add_relu_out ~out self other = - let out__ = CArray.make t 1 in - stubs__add_relu_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _add_relu_scalar self other = - let out__ = CArray.make t 1 in - stubs__add_relu_scalar (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__add_batch_dim self (Int64.of_int batch_dim) (Int64.of_int level) + |> with_tensor_gc ;; -let _add_relu_scalar_ self other = - let out__ = CArray.make t 1 in - stubs__add_relu_scalar_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _add_relu self other = stubs__add_relu self other |> with_tensor_gc +let _add_relu_ self other = stubs__add_relu_ self other |> with_tensor_gc +let _add_relu_out ~out self other = stubs__add_relu_out out self other |> with_tensor_gc +let _add_relu_scalar self other = stubs__add_relu_scalar self other |> with_tensor_gc +let _add_relu_scalar_ self other = stubs__add_relu_scalar_ self other |> with_tensor_gc let _add_relu_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs__add_relu_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__add_relu_scalar_out out self other |> with_tensor_gc ;; let _addmm_activation self ~mat1 ~mat2 ~use_gelu = - let out__ = CArray.make t 1 in - stubs__addmm_activation (CArray.start out__) self mat1 mat2 (if use_gelu then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__addmm_activation self mat1 mat2 (if use_gelu then 1 else 0) |> with_tensor_gc ;; let _addmm_activation_out ~out self ~mat1 ~mat2 ~use_gelu = - let out__ = CArray.make t 1 in - stubs__addmm_activation_out - (CArray.start out__) - out - self - mat1 - mat2 - (if use_gelu then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__addmm_activation_out out self mat1 mat2 (if use_gelu then 1 else 0) + |> with_tensor_gc ;; let _aminmax self = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__aminmax (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _aminmax_dim self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__aminmax_dim (CArray.start out__) self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _aminmax_dim_out ~out0 ~out1 self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__aminmax_dim_out (CArray.start out__) out0 @@ -407,20 +149,16 @@ let _aminmax_dim_out ~out0 ~out1 self ~dim ~keepdim = self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in 
t0, t1 ;; let _aminmax_out ~out0 ~out1 self = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__aminmax_out (CArray.start out__) out0 out1 self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -432,7 +170,7 @@ let _amp_update_scale ~scale_backoff_factor ~growth_interval = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__amp_update_scale (CArray.start out__) self @@ -441,10 +179,8 @@ let _amp_update_scale scale_growth_factor scale_backoff_factor (Int64.of_int growth_interval); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -456,18 +192,14 @@ let _amp_update_scale_ ~scale_backoff_factor ~growth_interval = - let out__ = CArray.make t 1 in stubs__amp_update_scale_ - (CArray.start out__) self growth_tracker found_inf scale_growth_factor scale_backoff_factor - (Int64.of_int growth_interval); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int growth_interval) + |> with_tensor_gc ;; let _amp_update_scale_out @@ -479,19 +211,15 @@ let _amp_update_scale_out ~scale_backoff_factor ~growth_interval = - let out__ = CArray.make t 1 in stubs__amp_update_scale_out - (CArray.start out__) out self growth_tracker found_inf scale_growth_factor scale_backoff_factor - (Int64.of_int growth_interval); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int growth_interval) + |> with_tensor_gc ;; let _assert_tensor_metadata ~a ~size ~stride ~dtype = @@ -513,254 +241,120 @@ let _assert_tensor_metadata ~a ~size ~stride ~dtype = ;; let _autocast_to_full_precision self ~cuda_enabled ~cpu_enabled = - let out__ = CArray.make t 1 in stubs__autocast_to_full_precision - (CArray.start out__) self (if cuda_enabled then 1 else 0) - (if cpu_enabled then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if cpu_enabled then 1 else 0) + |> with_tensor_gc ;; let _autocast_to_reduced_precision self ~cuda_enabled ~cpu_enabled ~cuda_dtype ~cpu_dtype = - let out__ = CArray.make t 1 in stubs__autocast_to_reduced_precision - (CArray.start out__) self (if cuda_enabled then 1 else 0) (if cpu_enabled then 1 else 0) (Kind.packed_to_int cuda_dtype) - (Kind.packed_to_int cpu_dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int cpu_dtype) + |> with_tensor_gc ;; let _cast_byte self ~non_blocking = - let out__ = CArray.make t 1 in - stubs__cast_byte (CArray.start out__) self (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cast_byte self (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _cast_char self ~non_blocking = - let out__ = CArray.make t 1 in - stubs__cast_char (CArray.start out__) self (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cast_char self (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _cast_double self ~non_blocking = - let out__ = CArray.make t 1 in - stubs__cast_double (CArray.start out__) self (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; 
- t0 + stubs__cast_double self (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _cast_float self ~non_blocking = - let out__ = CArray.make t 1 in - stubs__cast_float (CArray.start out__) self (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cast_float self (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _cast_half self ~non_blocking = - let out__ = CArray.make t 1 in - stubs__cast_half (CArray.start out__) self (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cast_half self (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _cast_int self ~non_blocking = - let out__ = CArray.make t 1 in - stubs__cast_int (CArray.start out__) self (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cast_int self (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _cast_long self ~non_blocking = - let out__ = CArray.make t 1 in - stubs__cast_long (CArray.start out__) self (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cast_long self (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _cast_short self ~non_blocking = - let out__ = CArray.make t 1 in - stubs__cast_short (CArray.start out__) self (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cast_short self (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _cdist_backward ~grad ~x1 ~x2 ~p ~cdist = - let out__ = CArray.make t 1 in - stubs__cdist_backward (CArray.start out__) grad x1 x2 p cdist; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cdist_backward grad x1 x2 p cdist |> with_tensor_gc ;; let _cdist_backward_out ~out ~grad ~x1 ~x2 ~p ~cdist = - let out__ = CArray.make t 1 in - stubs__cdist_backward_out (CArray.start out__) out grad x1 x2 p cdist; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cdist_backward_out out grad x1 x2 p cdist |> with_tensor_gc ;; let _cholesky_solve_helper self ~a ~upper = - let out__ = CArray.make t 1 in - stubs__cholesky_solve_helper (CArray.start out__) self a (if upper then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cholesky_solve_helper self a (if upper then 1 else 0) |> with_tensor_gc ;; let _cholesky_solve_helper_out ~out self ~a ~upper = - let out__ = CArray.make t 1 in - stubs__cholesky_solve_helper_out - (CArray.start out__) - out - self - a - (if upper then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _coalesce self = - let out__ = CArray.make t 1 in - stubs__coalesce (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__cholesky_solve_helper_out out self a (if upper then 1 else 0) |> with_tensor_gc ;; -let _coalesce_out ~out self = - let out__ = CArray.make t 1 in - stubs__coalesce_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _coalesce self = stubs__coalesce self |> with_tensor_gc +let _coalesce_out ~out self = stubs__coalesce_out out self |> with_tensor_gc let _coalesced self ~coalesced = - let out__ = CArray.make t 1 in - stubs__coalesced (CArray.start out__) self (if coalesced then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__coalesced self (if 
coalesced then 1 else 0) |> with_tensor_gc ;; let _coalesced_ self ~coalesced = - let out__ = CArray.make t 1 in - stubs__coalesced_ (CArray.start out__) self (if coalesced then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__coalesced_ self (if coalesced then 1 else 0) |> with_tensor_gc ;; let _coalesced_out ~out self ~coalesced = - let out__ = CArray.make t 1 in - stubs__coalesced_out (CArray.start out__) out self (if coalesced then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__coalesced_out out self (if coalesced then 1 else 0) |> with_tensor_gc ;; let _compute_linear_combination input ~coefficients = - let out__ = CArray.make t 1 in - stubs__compute_linear_combination (CArray.start out__) input coefficients; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__compute_linear_combination input coefficients |> with_tensor_gc ;; let _compute_linear_combination_out ~out input ~coefficients = - let out__ = CArray.make t 1 in - stubs__compute_linear_combination_out (CArray.start out__) out input coefficients; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _conj self = - let out__ = CArray.make t 1 in - stubs__conj (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__compute_linear_combination_out out input coefficients |> with_tensor_gc ;; -let _conj_copy self = - let out__ = CArray.make t 1 in - stubs__conj_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _conj_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs__conj_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _conj_physical self = - let out__ = CArray.make t 1 in - stubs__conj_physical (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _conj_physical_out ~out self = - let out__ = CArray.make t 1 in - stubs__conj_physical_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _conj self = stubs__conj self |> with_tensor_gc +let _conj_copy self = stubs__conj_copy self |> with_tensor_gc +let _conj_copy_out ~out self = stubs__conj_copy_out out self |> with_tensor_gc +let _conj_physical self = stubs__conj_physical self |> with_tensor_gc +let _conj_physical_out ~out self = stubs__conj_physical_out out self |> with_tensor_gc let _conv_depthwise2d self ~weight ~kernel_size ~bias ~stride ~padding ~dilation = - let out__ = CArray.make t 1 in stubs__conv_depthwise2d - (CArray.start out__) self weight (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; let _conv_depthwise2d_out ~out self ~weight ~kernel_size ~bias ~stride ~padding ~dilation = - let out__ = CArray.make t 1 in stubs__conv_depthwise2d_out - (CArray.start out__) out self weight @@ -768,54 +362,40 @@ let 
_conv_depthwise2d_out ~out self ~weight ~kernel_size ~bias ~stride ~padding (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; let _convert_indices_from_coo_to_csr self ~size ~out_int32 = - let out__ = CArray.make t 1 in stubs__convert_indices_from_coo_to_csr - (CArray.start out__) self (Int64.of_int size) - (if out_int32 then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if out_int32 then 1 else 0) + |> with_tensor_gc ;; let _convert_indices_from_coo_to_csr_out ~out self ~size ~out_int32 = - let out__ = CArray.make t 1 in stubs__convert_indices_from_coo_to_csr_out - (CArray.start out__) out self (Int64.of_int size) - (if out_int32 then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if out_int32 then 1 else 0) + |> with_tensor_gc ;; let _convert_indices_from_csr_to_coo ~crow_indices ~col_indices ~out_int32 ~transpose = - let out__ = CArray.make t 1 in stubs__convert_indices_from_csr_to_coo - (CArray.start out__) crow_indices col_indices (if out_int32 then 1 else 0) - (if transpose then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if transpose then 1 else 0) + |> with_tensor_gc ;; let _convert_indices_from_csr_to_coo_out @@ -825,17 +405,13 @@ let _convert_indices_from_csr_to_coo_out ~out_int32 ~transpose = - let out__ = CArray.make t 1 in stubs__convert_indices_from_csr_to_coo_out - (CArray.start out__) out crow_indices col_indices (if out_int32 then 1 else 0) - (if transpose then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if transpose then 1 else 0) + |> with_tensor_gc ;; let _convolution @@ -853,14 +429,12 @@ let _convolution ~cudnn_enabled ~allow_tf32 = - let out__ = CArray.make t 1 in stubs__convolution - (CArray.start out__) input weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -874,10 +448,8 @@ let _convolution (if benchmark then 1 else 0) (if deterministic then 1 else 0) (if cudnn_enabled then 1 else 0) - (if allow_tf32 then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if allow_tf32 then 1 else 0) + |> with_tensor_gc ;; let _convolution_deprecated @@ -894,14 +466,12 @@ let _convolution_deprecated ~deterministic ~cudnn_enabled = - let out__ = CArray.make t 1 in stubs__convolution_deprecated - (CArray.start out__) input weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -914,30 +484,24 @@ let _convolution_deprecated (Int64.of_int groups) (if benchmark then 1 else 0) (if deterministic then 1 else 0) - (if cudnn_enabled then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if cudnn_enabled then 1 else 0) + 
|> with_tensor_gc ;; let _convolution_mode input ~weight ~bias ~stride ~padding ~dilation ~groups = - let out__ = CArray.make t 1 in stubs__convolution_mode - (CArray.start out__) input weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) padding (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let _convolution_out @@ -956,15 +520,13 @@ let _convolution_out ~cudnn_enabled ~allow_tf32 = - let out__ = CArray.make t 1 in stubs__convolution_out - (CArray.start out__) out input weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -978,69 +540,41 @@ let _convolution_out (if benchmark then 1 else 0) (if deterministic then 1 else 0) (if cudnn_enabled then 1 else 0) - (if allow_tf32 then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if allow_tf32 then 1 else 0) + |> with_tensor_gc ;; let _copy_from self ~dst ~non_blocking = - let out__ = CArray.make t 1 in - stubs__copy_from (CArray.start out__) self dst (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__copy_from self dst (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _copy_from_and_resize self ~dst = - let out__ = CArray.make t 1 in - stubs__copy_from_and_resize (CArray.start out__) self dst; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__copy_from_and_resize self dst |> with_tensor_gc ;; let _copy_from_and_resize_out ~out self ~dst = - let out__ = CArray.make t 1 in - stubs__copy_from_and_resize_out (CArray.start out__) out self dst; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__copy_from_and_resize_out out self dst |> with_tensor_gc ;; let _copy_from_out ~out self ~dst ~non_blocking = - let out__ = CArray.make t 1 in - stubs__copy_from_out (CArray.start out__) out self dst (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__copy_from_out out self dst (if non_blocking then 1 else 0) |> with_tensor_gc ;; -let _cslt_compress input = - let out__ = CArray.make t 1 in - stubs__cslt_compress (CArray.start out__) input; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _cslt_compress input = stubs__cslt_compress input |> with_tensor_gc let _cslt_sparse_mm ~compressed_a ~dense_b ~bias ~transpose_result = - let out__ = CArray.make t 1 in stubs__cslt_sparse_mm - (CArray.start out__) compressed_a dense_b (match bias with | Some v -> v - | None -> null) - (if transpose_result then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (if transpose_result then 1 else 0) + |> with_tensor_gc ;; let _ctc_loss ~log_probs ~targets ~input_lengths ~target_lengths ~blank ~zero_infinity = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__ctc_loss (CArray.start out__) log_probs @@ -1051,10 +585,8 @@ let _ctc_loss ~log_probs ~targets ~input_lengths ~target_lengths ~blank ~zero_in (List.length target_lengths) 
(Int64.of_int blank) (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -1069,9 +601,7 @@ let _ctc_loss_backward ~blank ~zero_infinity = - let out__ = CArray.make t 1 in stubs__ctc_loss_backward - (CArray.start out__) grad log_probs targets @@ -1082,10 +612,8 @@ let _ctc_loss_backward neg_log_likelihood log_alpha (Int64.of_int blank) - (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if zero_infinity then 1 else 0) + |> with_tensor_gc ;; let _ctc_loss_backward_out @@ -1100,9 +628,7 @@ let _ctc_loss_backward_out ~blank ~zero_infinity = - let out__ = CArray.make t 1 in stubs__ctc_loss_backward_out - (CArray.start out__) out grad log_probs @@ -1114,10 +640,8 @@ let _ctc_loss_backward_out neg_log_likelihood log_alpha (Int64.of_int blank) - (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if zero_infinity then 1 else 0) + |> with_tensor_gc ;; let _ctc_loss_backward_tensor @@ -1131,9 +655,7 @@ let _ctc_loss_backward_tensor ~blank ~zero_infinity = - let out__ = CArray.make t 1 in stubs__ctc_loss_backward_tensor - (CArray.start out__) grad log_probs targets @@ -1142,10 +664,8 @@ let _ctc_loss_backward_tensor neg_log_likelihood log_alpha (Int64.of_int blank) - (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if zero_infinity then 1 else 0) + |> with_tensor_gc ;; let _ctc_loss_out @@ -1158,7 +678,7 @@ let _ctc_loss_out ~blank ~zero_infinity = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__ctc_loss_out (CArray.start out__) out0 @@ -1171,10 +691,8 @@ let _ctc_loss_out (List.length target_lengths) (Int64.of_int blank) (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -1186,7 +704,7 @@ let _ctc_loss_tensor ~blank ~zero_infinity = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__ctc_loss_tensor (CArray.start out__) log_probs @@ -1195,10 +713,8 @@ let _ctc_loss_tensor target_lengths (Int64.of_int blank) (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -1212,7 +728,7 @@ let _ctc_loss_tensor_out ~blank ~zero_infinity = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__ctc_loss_tensor_out (CArray.start out__) out0 @@ -1223,10 +739,8 @@ let _ctc_loss_tensor_out target_lengths (Int64.of_int blank) (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -1239,7 +753,7 @@ let _cudnn_ctc_loss ~deterministic ~zero_infinity = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__cudnn_ctc_loss (CArray.start out__) log_probs @@ 
-1251,10 +765,8 @@ let _cudnn_ctc_loss (Int64.of_int blank) (if deterministic then 1 else 0) (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -1269,7 +781,7 @@ let _cudnn_ctc_loss_out ~deterministic ~zero_infinity = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__cudnn_ctc_loss_out (CArray.start out__) out0 @@ -1283,10 +795,8 @@ let _cudnn_ctc_loss_out (Int64.of_int blank) (if deterministic then 1 else 0) (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -1299,7 +809,7 @@ let _cudnn_ctc_loss_tensor ~deterministic ~zero_infinity = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__cudnn_ctc_loss_tensor (CArray.start out__) log_probs @@ -1309,38 +819,28 @@ let _cudnn_ctc_loss_tensor (Int64.of_int blank) (if deterministic then 1 else 0) (if zero_infinity then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _cudnn_init_dropout_state ~dropout ~train ~dropout_seed ~options = - let out__ = CArray.make t 1 in stubs__cudnn_init_dropout_state - (CArray.start out__) dropout (if train then 1 else 0) (Int64.of_int dropout_seed) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let _cudnn_init_dropout_state_out ~out ~dropout ~train ~dropout_seed = - let out__ = CArray.make t 1 in stubs__cudnn_init_dropout_state_out - (CArray.start out__) out dropout (if train then 1 else 0) - (Int64.of_int dropout_seed); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dropout_seed) + |> with_tensor_gc ;; let _cudnn_rnn @@ -1361,20 +861,20 @@ let _cudnn_rnn ~batch_sizes ~dropout_state = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs__cudnn_rnn (CArray.start out__) input - (CArray.of_list t weight |> CArray.start) + (CArray.of_list gc_tensor weight |> CArray.start) (List.length weight) (Int64.of_int weight_stride0) (match weight_buf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) hx (match cx with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int mode) (Int64.of_int hidden_size) (Int64.of_int proj_size) @@ -1387,17 +887,12 @@ let _cudnn_rnn (List.length batch_sizes) (match dropout_state with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 
|> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; @@ -1412,10 +907,8 @@ let _cudnn_rnn_flatten_weight ~batch_first ~bidirectional = - let out__ = CArray.make t 1 in stubs__cudnn_rnn_flatten_weight - (CArray.start out__) - (CArray.of_list t weight_arr |> CArray.start) + (CArray.of_list gc_tensor weight_arr |> CArray.start) (List.length weight_arr) (Int64.of_int weight_stride0) (Int64.of_int input_size) @@ -1424,10 +917,8 @@ let _cudnn_rnn_flatten_weight (Int64.of_int proj_size) (Int64.of_int num_layers) (if batch_first then 1 else 0) - (if bidirectional then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if bidirectional then 1 else 0) + |> with_tensor_gc ;; let _cudnn_rnn_flatten_weight_out @@ -1442,11 +933,9 @@ let _cudnn_rnn_flatten_weight_out ~batch_first ~bidirectional = - let out__ = CArray.make t 1 in stubs__cudnn_rnn_flatten_weight_out - (CArray.start out__) out - (CArray.of_list t weight_arr |> CArray.start) + (CArray.of_list gc_tensor weight_arr |> CArray.start) (List.length weight_arr) (Int64.of_int weight_stride0) (Int64.of_int input_size) @@ -1455,10 +944,8 @@ let _cudnn_rnn_flatten_weight_out (Int64.of_int proj_size) (Int64.of_int num_layers) (if batch_first then 1 else 0) - (if bidirectional then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if bidirectional then 1 else 0) + |> with_tensor_gc ;; let _cudnn_rnn_out @@ -1484,7 +971,7 @@ let _cudnn_rnn_out ~batch_sizes ~dropout_state = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs__cudnn_rnn_out (CArray.start out__) out0 @@ -1493,16 +980,16 @@ let _cudnn_rnn_out out3 out4 input - (CArray.of_list t weight |> CArray.start) + (CArray.of_list gc_tensor weight |> CArray.start) (List.length weight) (Int64.of_int weight_stride0) (match weight_buf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) hx (match cx with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int mode) (Int64.of_int hidden_size) (Int64.of_int proj_size) @@ -1515,47 +1002,26 @@ let _cudnn_rnn_out (List.length batch_sizes) (match dropout_state with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; let _debug_has_internal_overlap self = stubs__debug_has_internal_overlap self - -let _dim_arange ~like ~dim = - let out__ = CArray.make t 1 in - stubs__dim_arange (CArray.start out__) like (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - +let _dim_arange ~like ~dim = stubs__dim_arange like (Int64.of_int dim) |> with_tensor_gc let _dimi self = stubs__dimi self let _dimv self = stubs__dimv self let _dirichlet_grad ~x ~alpha ~total = - let out__ = CArray.make t 1 in - stubs__dirichlet_grad (CArray.start out__) x alpha total; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__dirichlet_grad x alpha total |> 
with_tensor_gc ;; let _dirichlet_grad_out ~out ~x ~alpha ~total = - let out__ = CArray.make t 1 in - stubs__dirichlet_grad_out (CArray.start out__) out x alpha total; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__dirichlet_grad_out out x alpha total |> with_tensor_gc ;; let _efficient_attention_backward @@ -1578,7 +1044,7 @@ let _efficient_attention_backward ~scale ~num_splits_key = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__efficient_attention_backward (CArray.start out__) grad_out_ @@ -1587,14 +1053,14 @@ let _efficient_attention_backward value (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) out (match cu_seqlens_q with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match cu_seqlens_k with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int max_seqlen_k) (Int64.of_int max_seqlen_q) logsumexp @@ -1613,40 +1079,28 @@ let _efficient_attention_backward (match num_splits_key with | Some _ -> 0 | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; let _efficientzerotensor ~size ~options = - let out__ = CArray.make t 1 in stubs__efficientzerotensor - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let _efficientzerotensor_out ~out ~size = - let out__ = CArray.make t 1 in stubs__efficientzerotensor_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let _embedding_bag @@ -1660,7 +1114,7 @@ let _embedding_bag ~include_last_offset ~padding_idx = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__embedding_bag (CArray.start out__) weight @@ -1671,17 +1125,13 @@ let _embedding_bag (if sparse then 1 else 0) (match per_sample_weights with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if include_last_offset then 1 else 0) (Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; @@ -1699,9 +1149,7 @@ let _embedding_bag_backward ~per_sample_weights ~padding_idx = - let out__ = CArray.make t 1 in stubs__embedding_bag_backward - (CArray.start out__) grad indices offsets @@ -1714,11 +1162,9 @@ let _embedding_bag_backward (if sparse then 1 else 0) (match per_sample_weights with | Some v -> v - | None -> null) - 
(Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (Int64.of_int padding_idx) + |> with_tensor_gc ;; let _embedding_bag_dense_backward @@ -1733,9 +1179,7 @@ let _embedding_bag_dense_backward ~per_sample_weights ~padding_idx = - let out__ = CArray.make t 1 in stubs__embedding_bag_dense_backward - (CArray.start out__) grad indices offset2bag @@ -1746,11 +1190,9 @@ let _embedding_bag_dense_backward (Int64.of_int mode) (match per_sample_weights with | Some v -> v - | None -> null) - (Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (Int64.of_int padding_idx) + |> with_tensor_gc ;; let _embedding_bag_dense_backward_out @@ -1766,9 +1208,7 @@ let _embedding_bag_dense_backward_out ~per_sample_weights ~padding_idx = - let out__ = CArray.make t 1 in stubs__embedding_bag_dense_backward_out - (CArray.start out__) out grad indices @@ -1780,11 +1220,9 @@ let _embedding_bag_dense_backward_out (Int64.of_int mode) (match per_sample_weights with | Some v -> v - | None -> null) - (Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (Int64.of_int padding_idx) + |> with_tensor_gc ;; let _embedding_bag_forward_only @@ -1798,7 +1236,7 @@ let _embedding_bag_forward_only ~include_last_offset ~padding_idx = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__embedding_bag_forward_only (CArray.start out__) weight @@ -1809,17 +1247,13 @@ let _embedding_bag_forward_only (if sparse then 1 else 0) (match per_sample_weights with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if include_last_offset then 1 else 0) (Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; @@ -1838,7 +1272,7 @@ let _embedding_bag_forward_only_out ~include_last_offset ~padding_idx = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__embedding_bag_forward_only_out (CArray.start out__) out0 @@ -1853,17 +1287,13 @@ let _embedding_bag_forward_only_out (if sparse then 1 else 0) (match per_sample_weights with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if include_last_offset then 1 else 0) (Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; @@ -1882,7 +1312,7 @@ let _embedding_bag_out ~include_last_offset ~padding_idx = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__embedding_bag_out (CArray.start out__) out0 @@ -1897,17 +1327,13 @@ let _embedding_bag_out (if sparse then 1 else 0) (match per_sample_weights 
with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if include_last_offset then 1 else 0) (Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; @@ -1920,19 +1346,15 @@ let _embedding_bag_per_sample_weights_backward ~mode ~padding_idx = - let out__ = CArray.make t 1 in stubs__embedding_bag_per_sample_weights_backward - (CArray.start out__) grad weight indices offsets offset2bag (Int64.of_int mode) - (Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int padding_idx) + |> with_tensor_gc ;; let _embedding_bag_per_sample_weights_backward_out @@ -1945,9 +1367,7 @@ let _embedding_bag_per_sample_weights_backward_out ~mode ~padding_idx = - let out__ = CArray.make t 1 in stubs__embedding_bag_per_sample_weights_backward_out - (CArray.start out__) out grad weight @@ -1955,10 +1375,8 @@ let _embedding_bag_per_sample_weights_backward_out offsets offset2bag (Int64.of_int mode) - (Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int padding_idx) + |> with_tensor_gc ;; let _embedding_bag_sparse_backward @@ -1973,9 +1391,7 @@ let _embedding_bag_sparse_backward ~per_sample_weights ~padding_idx = - let out__ = CArray.make t 1 in stubs__embedding_bag_sparse_backward - (CArray.start out__) grad indices offsets @@ -1986,87 +1402,59 @@ let _embedding_bag_sparse_backward (Int64.of_int mode) (match per_sample_weights with | Some v -> v - | None -> null) - (Int64.of_int padding_idx); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (Int64.of_int padding_idx) + |> with_tensor_gc ;; let _empty_affine_quantized ~size ~options ~scale ~zero_point = - let out__ = CArray.make t 1 in stubs__empty_affine_quantized - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) (Device.to_int (snd options)) scale - (Int64.of_int zero_point); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int zero_point) + |> with_tensor_gc ;; let _empty_affine_quantized_out ~out ~size ~scale ~zero_point = - let out__ = CArray.make t 1 in stubs__empty_affine_quantized_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) scale - (Int64.of_int zero_point); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int zero_point) + |> with_tensor_gc ;; let _empty_per_channel_affine_quantized ~size ~scales ~zero_points ~axis ~options = - let out__ = CArray.make t 1 in stubs__empty_per_channel_affine_quantized - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) scales zero_points (Int64.of_int axis) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let _empty_per_channel_affine_quantized_out ~out ~size ~scales 
~zero_points ~axis = - let out__ = CArray.make t 1 in stubs__empty_per_channel_affine_quantized_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) scales zero_points - (Int64.of_int axis); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int axis) + |> with_tensor_gc ;; -let _euclidean_dist ~x1 ~x2 = - let out__ = CArray.make t 1 in - stubs__euclidean_dist (CArray.start out__) x1 x2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _euclidean_dist ~x1 ~x2 = stubs__euclidean_dist x1 x2 |> with_tensor_gc let _euclidean_dist_out ~out ~x1 ~x2 = - let out__ = CArray.make t 1 in - stubs__euclidean_dist_out (CArray.start out__) out x1 x2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__euclidean_dist_out out x1 x2 |> with_tensor_gc ;; let _fake_quantize_learnable_per_channel_affine @@ -2078,19 +1466,15 @@ let _fake_quantize_learnable_per_channel_affine ~quant_max ~grad_factor = - let out__ = CArray.make t 1 in stubs__fake_quantize_learnable_per_channel_affine - (CArray.start out__) self scale zero_point (Int64.of_int axis) (Int64.of_int quant_min) (Int64.of_int quant_max) - grad_factor; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + grad_factor + |> with_tensor_gc ;; let _fake_quantize_learnable_per_channel_affine_backward @@ -2103,7 +1487,7 @@ let _fake_quantize_learnable_per_channel_affine_backward ~quant_max ~grad_factor = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__fake_quantize_learnable_per_channel_affine_backward (CArray.start out__) grad @@ -2114,12 +1498,9 @@ let _fake_quantize_learnable_per_channel_affine_backward (Int64.of_int quant_min) (Int64.of_int quant_max) grad_factor; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -2133,9 +1514,7 @@ let _fake_quantize_learnable_per_channel_affine_out ~quant_max ~grad_factor = - let out__ = CArray.make t 1 in stubs__fake_quantize_learnable_per_channel_affine_out - (CArray.start out__) out self scale @@ -2143,10 +1522,8 @@ let _fake_quantize_learnable_per_channel_affine_out (Int64.of_int axis) (Int64.of_int quant_min) (Int64.of_int quant_max) - grad_factor; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + grad_factor + |> with_tensor_gc ;; let _fake_quantize_learnable_per_tensor_affine @@ -2157,18 +1534,14 @@ let _fake_quantize_learnable_per_tensor_affine ~quant_max ~grad_factor = - let out__ = CArray.make t 1 in stubs__fake_quantize_learnable_per_tensor_affine - (CArray.start out__) self scale zero_point (Int64.of_int quant_min) (Int64.of_int quant_max) - grad_factor; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + grad_factor + |> with_tensor_gc ;; let _fake_quantize_learnable_per_tensor_affine_backward @@ -2180,7 +1553,7 @@ let _fake_quantize_learnable_per_tensor_affine_backward ~quant_max ~grad_factor = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__fake_quantize_learnable_per_tensor_affine_backward (CArray.start out__) grad @@ -2190,12 +1563,9 @@ let _fake_quantize_learnable_per_tensor_affine_backward (Int64.of_int quant_min) 
(Int64.of_int quant_max) grad_factor; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -2208,19 +1578,15 @@ let _fake_quantize_learnable_per_tensor_affine_out ~quant_max ~grad_factor = - let out__ = CArray.make t 1 in stubs__fake_quantize_learnable_per_tensor_affine_out - (CArray.start out__) out self scale zero_point (Int64.of_int quant_min) (Int64.of_int quant_max) - grad_factor; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + grad_factor + |> with_tensor_gc ;; let _fake_quantize_per_tensor_affine_cachemask_tensor_qparams @@ -2231,7 +1597,7 @@ let _fake_quantize_per_tensor_affine_cachemask_tensor_qparams ~quant_min ~quant_max = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__fake_quantize_per_tensor_affine_cachemask_tensor_qparams (CArray.start out__) self @@ -2240,10 +1606,8 @@ let _fake_quantize_per_tensor_affine_cachemask_tensor_qparams fake_quant_enabled (Int64.of_int quant_min) (Int64.of_int quant_max); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -2257,7 +1621,7 @@ let _fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out ~quant_min ~quant_max = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out (CArray.start out__) out0 @@ -2268,111 +1632,81 @@ let _fake_quantize_per_tensor_affine_cachemask_tensor_qparams_out fake_quant_enabled (Int64.of_int quant_min) (Int64.of_int quant_max); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _fft_c2c self ~dim ~normalization ~forward = - let out__ = CArray.make t 1 in stubs__fft_c2c - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (Int64.of_int normalization) - (if forward then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if forward then 1 else 0) + |> with_tensor_gc ;; let _fft_c2c_out ~out self ~dim ~normalization ~forward = - let out__ = CArray.make t 1 in stubs__fft_c2c_out - (CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (Int64.of_int normalization) - (if forward then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if forward then 1 else 0) + |> with_tensor_gc ;; let _fft_c2r self ~dim ~normalization ~last_dim_size = - let out__ = CArray.make t 1 in stubs__fft_c2r - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (Int64.of_int normalization) - (Int64.of_int last_dim_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int last_dim_size) + |> with_tensor_gc ;; let _fft_c2r_out ~out self ~dim ~normalization ~last_dim_size = - let out__ = CArray.make t 1 in stubs__fft_c2r_out - 
(CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (Int64.of_int normalization) - (Int64.of_int last_dim_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int last_dim_size) + |> with_tensor_gc ;; let _fft_r2c self ~dim ~normalization ~onesided = - let out__ = CArray.make t 1 in stubs__fft_r2c - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (Int64.of_int normalization) - (if onesided then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if onesided then 1 else 0) + |> with_tensor_gc ;; let _fft_r2c_out ~out self ~dim ~normalization ~onesided = - let out__ = CArray.make t 1 in stubs__fft_r2c_out - (CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (Int64.of_int normalization) - (if onesided then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if onesided then 1 else 0) + |> with_tensor_gc ;; let _fill_mem_eff_dropout_mask_ self ~dropout_p ~seed ~offset = - let out__ = CArray.make t 1 in stubs__fill_mem_eff_dropout_mask_ - (CArray.start out__) self dropout_p (Int64.of_int seed) - (Int64.of_int offset); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int offset) + |> with_tensor_gc ;; let _flash_attention_backward @@ -2392,7 +1726,7 @@ let _flash_attention_backward ~philox_offset ~scale = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__flash_attention_backward (CArray.start out__) grad_out @@ -2413,54 +1747,37 @@ let _flash_attention_backward (match scale with | Some _ -> 0 | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _foobar self ~arg1 ~arg2 ~arg3 = - let out__ = CArray.make t 1 in stubs__foobar - (CArray.start out__) self (if arg1 then 1 else 0) (if arg2 then 1 else 0) - (if arg3 then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if arg3 then 1 else 0) + |> with_tensor_gc ;; let _foobar_out ~out self ~arg1 ~arg2 ~arg3 = - let out__ = CArray.make t 1 in stubs__foobar_out - (CArray.start out__) out self (if arg1 then 1 else 0) (if arg2 then 1 else 0) - (if arg3 then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if arg3 then 1 else 0) + |> with_tensor_gc ;; let _functional_assert_async self ~assert_msg ~dep_token = - let out__ = CArray.make t 1 in - stubs__functional_assert_async (CArray.start out__) self assert_msg dep_token; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__functional_assert_async self assert_msg dep_token |> with_tensor_gc ;; let _functional_sym_constrain_range ~size ~min ~max ~dep_token = - let out__ = CArray.make t 1 in stubs__functional_sym_constrain_range - (CArray.start out__) size (match min with | None -> Int64.zero @@ -2474,16 +1791,12 @@ let _functional_sym_constrain_range ~size ~min ~max ~dep_token = (match max with | Some _ -> 0 | None -> 1) - dep_token; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + dep_token + |> with_tensor_gc ;; 
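The hunks above and below this point repeat one mechanical rewrite across thousands of generated wrappers: single-output functions drop the `CArray.make t 1` / `CArray.start out__` / `Gc.finalise C.Tensor.free` boilerplate and instead pipe the stub's return value through `with_tensor_gc`, while multi-output functions keep the out-array but allocate it as `raw_tensor` and convert each slot on the way out. Below is a minimal, self-contained sketch of the two calling conventions. Everything in it is a hypothetical stand-in: `raw_tensor`/`gc_tensor` are modeled as plain integers rather than the real ctypes pointers, and `stubs_example`/`stubs_example2` are fake stubs, not functions from this codebase.

```ocaml
(* Stand-ins: the real raw_tensor is a void pointer returned by C, and the
   real gc_tensor is the same pointer wrapped in a finalized custom block. *)
type raw_tensor = int
type gc_tensor = Gc of int

(* Stand-in for the hand-written conversion; the real one crosses into C++
   and attaches a finalizer that frees the tensor's memory. *)
let with_tensor_gc (raw : raw_tensor) : gc_tensor = Gc raw

(* Single-output pattern: the stub returns a raw_tensor directly, so the
   wrapper becomes one pipeline instead of an out-array plus a manually
   attached finalizer. *)
let stubs_example (x : raw_tensor) : raw_tensor = x
let example self = stubs_example self |> with_tensor_gc

(* Multi-output pattern: the stub still fills an out-array (a plain OCaml
   array here, standing in for a ctypes CArray of raw_tensor), and each
   slot is converted with with_tensor_gc before being returned. *)
let stubs_example2 (out : raw_tensor array) (x : raw_tensor) : unit =
  out.(0) <- x;
  out.(1) <- x

let example2 self =
  let out__ = Array.make 2 0 in
  stubs_example2 out__ self;
  let t0 = out__.(0) |> with_tensor_gc in
  let t1 = out__.(1) |> with_tensor_gc in
  t0, t1
```

Both shapes appear throughout the surrounding hunks; the single-output case is why so many wrappers collapse from roughly ten lines to one.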
let _functional_sym_constrain_range_for_size ~size ~min ~max ~dep_token = - let out__ = CArray.make t 1 in stubs__functional_sym_constrain_range_for_size - (CArray.start out__) size (match min with | None -> Int64.zero @@ -2497,10 +1810,8 @@ let _functional_sym_constrain_range_for_size ~size ~min ~max ~dep_token = (match max with | Some _ -> 0 | None -> 1) - dep_token; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + dep_token + |> with_tensor_gc ;; let _fused_adam @@ -2522,19 +1833,19 @@ let _fused_adam ~found_inf = stubs__fused_adam - (CArray.of_list t out |> CArray.start) + (CArray.of_list gc_tensor out |> CArray.start) (List.length out) - (CArray.of_list t self |> CArray.start) + (CArray.of_list gc_tensor self |> CArray.start) (List.length self) - (CArray.of_list t grads |> CArray.start) + (CArray.of_list gc_tensor grads |> CArray.start) (List.length grads) - (CArray.of_list t exp_avgs |> CArray.start) + (CArray.of_list gc_tensor exp_avgs |> CArray.start) (List.length exp_avgs) - (CArray.of_list t exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor exp_avg_sqs |> CArray.start) (List.length exp_avg_sqs) - (CArray.of_list t max_exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor max_exp_avg_sqs |> CArray.start) (List.length max_exp_avg_sqs) - (CArray.of_list t state_steps |> CArray.start) + (CArray.of_list gc_tensor state_steps |> CArray.start) (List.length state_steps) lr beta1 @@ -2545,10 +1856,10 @@ let _fused_adam (if maximize then 1 else 0) (match grad_scale with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match found_inf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) ;; let _fused_adam_ @@ -2569,17 +1880,17 @@ let _fused_adam_ ~found_inf = stubs__fused_adam_ - (CArray.of_list t self |> CArray.start) + (CArray.of_list gc_tensor self |> CArray.start) (List.length self) - (CArray.of_list t grads |> CArray.start) + (CArray.of_list gc_tensor grads |> CArray.start) (List.length grads) - (CArray.of_list t exp_avgs |> CArray.start) + (CArray.of_list gc_tensor exp_avgs |> CArray.start) (List.length exp_avgs) - (CArray.of_list t exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor exp_avg_sqs |> CArray.start) (List.length exp_avg_sqs) - (CArray.of_list t max_exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor max_exp_avg_sqs |> CArray.start) (List.length max_exp_avg_sqs) - (CArray.of_list t state_steps |> CArray.start) + (CArray.of_list gc_tensor state_steps |> CArray.start) (List.length state_steps) lr beta1 @@ -2590,10 +1901,10 @@ let _fused_adam_ (if maximize then 1 else 0) (match grad_scale with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match found_inf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) ;; let _fused_adam_tensor_lr_ @@ -2614,17 +1925,17 @@ let _fused_adam_tensor_lr_ ~found_inf = stubs__fused_adam_tensor_lr_ - (CArray.of_list t self |> CArray.start) + (CArray.of_list gc_tensor self |> CArray.start) (List.length self) - (CArray.of_list t grads |> CArray.start) + (CArray.of_list gc_tensor grads |> CArray.start) (List.length grads) - (CArray.of_list t exp_avgs |> CArray.start) + (CArray.of_list gc_tensor exp_avgs |> CArray.start) (List.length exp_avgs) - (CArray.of_list t exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor exp_avg_sqs |> CArray.start) (List.length exp_avg_sqs) - (CArray.of_list t max_exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor max_exp_avg_sqs |> CArray.start) (List.length max_exp_avg_sqs) - (CArray.of_list t state_steps |> 
CArray.start) + (CArray.of_list gc_tensor state_steps |> CArray.start) (List.length state_steps) lr beta1 @@ -2635,10 +1946,10 @@ let _fused_adam_tensor_lr_ (if maximize then 1 else 0) (match grad_scale with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match found_inf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) ;; let _fused_adam_tensor_lr_out @@ -2660,19 +1971,19 @@ let _fused_adam_tensor_lr_out ~found_inf = stubs__fused_adam_tensor_lr_out - (CArray.of_list t out |> CArray.start) + (CArray.of_list gc_tensor out |> CArray.start) (List.length out) - (CArray.of_list t self |> CArray.start) + (CArray.of_list gc_tensor self |> CArray.start) (List.length self) - (CArray.of_list t grads |> CArray.start) + (CArray.of_list gc_tensor grads |> CArray.start) (List.length grads) - (CArray.of_list t exp_avgs |> CArray.start) + (CArray.of_list gc_tensor exp_avgs |> CArray.start) (List.length exp_avgs) - (CArray.of_list t exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor exp_avg_sqs |> CArray.start) (List.length exp_avg_sqs) - (CArray.of_list t max_exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor max_exp_avg_sqs |> CArray.start) (List.length max_exp_avg_sqs) - (CArray.of_list t state_steps |> CArray.start) + (CArray.of_list gc_tensor state_steps |> CArray.start) (List.length state_steps) lr beta1 @@ -2683,10 +1994,10 @@ let _fused_adam_tensor_lr_out (if maximize then 1 else 0) (match grad_scale with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match found_inf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) ;; let _fused_adamw @@ -2708,19 +2019,19 @@ let _fused_adamw ~found_inf = stubs__fused_adamw - (CArray.of_list t out |> CArray.start) + (CArray.of_list gc_tensor out |> CArray.start) (List.length out) - (CArray.of_list t self |> CArray.start) + (CArray.of_list gc_tensor self |> CArray.start) (List.length self) - (CArray.of_list t grads |> CArray.start) + (CArray.of_list gc_tensor grads |> CArray.start) (List.length grads) - (CArray.of_list t exp_avgs |> CArray.start) + (CArray.of_list gc_tensor exp_avgs |> CArray.start) (List.length exp_avgs) - (CArray.of_list t exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor exp_avg_sqs |> CArray.start) (List.length exp_avg_sqs) - (CArray.of_list t max_exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor max_exp_avg_sqs |> CArray.start) (List.length max_exp_avg_sqs) - (CArray.of_list t state_steps |> CArray.start) + (CArray.of_list gc_tensor state_steps |> CArray.start) (List.length state_steps) lr beta1 @@ -2731,10 +2042,10 @@ let _fused_adamw (if maximize then 1 else 0) (match grad_scale with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match found_inf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) ;; let _fused_adamw_ @@ -2755,17 +2066,17 @@ let _fused_adamw_ ~found_inf = stubs__fused_adamw_ - (CArray.of_list t self |> CArray.start) + (CArray.of_list gc_tensor self |> CArray.start) (List.length self) - (CArray.of_list t grads |> CArray.start) + (CArray.of_list gc_tensor grads |> CArray.start) (List.length grads) - (CArray.of_list t exp_avgs |> CArray.start) + (CArray.of_list gc_tensor exp_avgs |> CArray.start) (List.length exp_avgs) - (CArray.of_list t exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor exp_avg_sqs |> CArray.start) (List.length exp_avg_sqs) - (CArray.of_list t max_exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor max_exp_avg_sqs |> CArray.start) (List.length max_exp_avg_sqs) - (CArray.of_list t state_steps 
|> CArray.start) + (CArray.of_list gc_tensor state_steps |> CArray.start) (List.length state_steps) lr beta1 @@ -2776,10 +2087,10 @@ let _fused_adamw_ (if maximize then 1 else 0) (match grad_scale with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match found_inf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) ;; let _fused_adamw_tensor_lr_ @@ -2800,17 +2111,17 @@ let _fused_adamw_tensor_lr_ ~found_inf = stubs__fused_adamw_tensor_lr_ - (CArray.of_list t self |> CArray.start) + (CArray.of_list gc_tensor self |> CArray.start) (List.length self) - (CArray.of_list t grads |> CArray.start) + (CArray.of_list gc_tensor grads |> CArray.start) (List.length grads) - (CArray.of_list t exp_avgs |> CArray.start) + (CArray.of_list gc_tensor exp_avgs |> CArray.start) (List.length exp_avgs) - (CArray.of_list t exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor exp_avg_sqs |> CArray.start) (List.length exp_avg_sqs) - (CArray.of_list t max_exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor max_exp_avg_sqs |> CArray.start) (List.length max_exp_avg_sqs) - (CArray.of_list t state_steps |> CArray.start) + (CArray.of_list gc_tensor state_steps |> CArray.start) (List.length state_steps) lr beta1 @@ -2821,10 +2132,10 @@ let _fused_adamw_tensor_lr_ (if maximize then 1 else 0) (match grad_scale with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match found_inf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) ;; let _fused_adamw_tensor_lr_out @@ -2846,19 +2157,19 @@ let _fused_adamw_tensor_lr_out ~found_inf = stubs__fused_adamw_tensor_lr_out - (CArray.of_list t out |> CArray.start) + (CArray.of_list gc_tensor out |> CArray.start) (List.length out) - (CArray.of_list t self |> CArray.start) + (CArray.of_list gc_tensor self |> CArray.start) (List.length self) - (CArray.of_list t grads |> CArray.start) + (CArray.of_list gc_tensor grads |> CArray.start) (List.length grads) - (CArray.of_list t exp_avgs |> CArray.start) + (CArray.of_list gc_tensor exp_avgs |> CArray.start) (List.length exp_avgs) - (CArray.of_list t exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor exp_avg_sqs |> CArray.start) (List.length exp_avg_sqs) - (CArray.of_list t max_exp_avg_sqs |> CArray.start) + (CArray.of_list gc_tensor max_exp_avg_sqs |> CArray.start) (List.length max_exp_avg_sqs) - (CArray.of_list t state_steps |> CArray.start) + (CArray.of_list gc_tensor state_steps |> CArray.start) (List.length state_steps) lr beta1 @@ -2869,29 +2180,25 @@ let _fused_adamw_tensor_lr_out (if maximize then 1 else 0) (match grad_scale with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match found_inf with | Some v -> v - | None -> null) + | None -> none_gc_tensor) ;; let _fused_dropout self ~p = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__fused_dropout (CArray.start out__) self p; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _fused_dropout_out ~out0 ~out1 self ~p = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__fused_dropout_out (CArray.start out__) out0 out1 self p; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, 
t1 ;; @@ -2910,7 +2217,7 @@ let _fused_moving_avg_obs_fq_helper ~per_row_fake_quant ~symmetric_quant = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__fused_moving_avg_obs_fq_helper (CArray.start out__) self @@ -2926,10 +2233,8 @@ let _fused_moving_avg_obs_fq_helper (Int64.of_int ch_axis) (if per_row_fake_quant then 1 else 0) (if symmetric_quant then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -2948,7 +2253,7 @@ let _fused_moving_avg_obs_fq_helper_functional ~per_row_fake_quant ~symmetric_quant = - let out__ = CArray.make t 6 in + let out__ = CArray.make raw_tensor 6 in stubs__fused_moving_avg_obs_fq_helper_functional (CArray.start out__) self @@ -2964,18 +2269,12 @@ let _fused_moving_avg_obs_fq_helper_functional (Int64.of_int ch_axis) (if per_row_fake_quant then 1 else 0) (if symmetric_quant then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; - let t5 = CArray.get out__ 5 in - Gc.finalise C.Tensor.free t5; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in + let t5 = CArray.get out__ 5 |> with_tensor_gc in t0, t1, t2, t3, t4, t5 ;; @@ -2996,7 +2295,7 @@ let _fused_moving_avg_obs_fq_helper_out ~per_row_fake_quant ~symmetric_quant = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__fused_moving_avg_obs_fq_helper_out (CArray.start out__) out0 @@ -3014,10 +2313,8 @@ let _fused_moving_avg_obs_fq_helper_out (Int64.of_int ch_axis) (if per_row_fake_quant then 1 else 0) (if symmetric_quant then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -3028,7 +2325,7 @@ let _fused_sdp_choice ~query ~key ~value ~attn_mask ~dropout_p ~is_causal ~scale value (match attn_mask with | Some v -> v - | None -> null) + | None -> none_gc_tensor) dropout_p (if is_causal then 1 else 0) (Option.value scale ~default:0.0) @@ -3037,36 +2334,18 @@ let _fused_sdp_choice ~query ~key ~value ~attn_mask ~dropout_p ~is_causal ~scale | None -> 1) ;; -let _fw_primal self ~level = - let out__ = CArray.make t 1 in - stubs__fw_primal (CArray.start out__) self (Int64.of_int level); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _fw_primal self ~level = stubs__fw_primal self (Int64.of_int level) |> with_tensor_gc let _fw_primal_copy self ~level = - let out__ = CArray.make t 1 in - stubs__fw_primal_copy (CArray.start out__) self (Int64.of_int level); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__fw_primal_copy self (Int64.of_int level) |> with_tensor_gc ;; let _fw_primal_copy_out ~out self ~level = - let out__ = CArray.make t 1 in - stubs__fw_primal_copy_out (CArray.start out__) out self (Int64.of_int level); - 
let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__fw_primal_copy_out out self (Int64.of_int level) |> with_tensor_gc ;; let _gather_sparse_backward self ~dim ~index ~grad = - let out__ = CArray.make t 1 in - stubs__gather_sparse_backward (CArray.start out__) self (Int64.of_int dim) index grad; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__gather_sparse_backward self (Int64.of_int dim) index grad |> with_tensor_gc ;; let _grid_sampler_2d_cpu_fallback @@ -3076,17 +2355,13 @@ let _grid_sampler_2d_cpu_fallback ~padding_mode ~align_corners = - let out__ = CArray.make t 1 in stubs__grid_sampler_2d_cpu_fallback - (CArray.start out__) input grid (Int64.of_int interpolation_mode) (Int64.of_int padding_mode) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; let _grid_sampler_2d_cpu_fallback_backward @@ -3097,7 +2372,7 @@ let _grid_sampler_2d_cpu_fallback_backward ~padding_mode ~align_corners = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__grid_sampler_2d_cpu_fallback_backward (CArray.start out__) grad_output @@ -3106,10 +2381,8 @@ let _grid_sampler_2d_cpu_fallback_backward (Int64.of_int interpolation_mode) (Int64.of_int padding_mode) (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -3121,18 +2394,14 @@ let _grid_sampler_2d_cpu_fallback_out ~padding_mode ~align_corners = - let out__ = CArray.make t 1 in stubs__grid_sampler_2d_cpu_fallback_out - (CArray.start out__) out input grid (Int64.of_int interpolation_mode) (Int64.of_int padding_mode) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; let _has_compatible_shallow_copy_type self ~from = @@ -3150,14 +2419,14 @@ let _histogramdd_bin_edges self ~bins ~range ~weight ~density = (List.length range) (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if density then 1 else 0) |> to_tensor_list ;; let _histogramdd_bin_edges_out ~out self ~bins ~range ~weight ~density = stubs__histogramdd_bin_edges_out - (CArray.of_list t out |> CArray.start) + (CArray.of_list gc_tensor out |> CArray.start) (List.length out) self (List.map Int64.of_int bins |> CArray.of_list int64_t |> CArray.start) @@ -3166,14 +2435,12 @@ let _histogramdd_bin_edges_out ~out self ~bins ~range ~weight ~density = (List.length range) (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if density then 1 else 0) ;; let _histogramdd_from_bin_cts self ~bins ~range ~weight ~density = - let out__ = CArray.make t 1 in stubs__histogramdd_from_bin_cts - (CArray.start out__) self (List.map Int64.of_int bins |> CArray.of_list int64_t |> CArray.start) (List.length bins) @@ -3181,17 +2448,13 @@ let _histogramdd_from_bin_cts self ~bins ~range ~weight ~density = (List.length range) (match weight with | Some v -> v - | None -> null) - (if density then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (if density then 1 else 0) + |> with_tensor_gc ;; let _histogramdd_from_bin_cts_out ~out self ~bins ~range ~weight ~density = - let out__ = CArray.make 
t 1 in stubs__histogramdd_from_bin_cts_out - (CArray.start out__) out self (List.map Int64.of_int bins |> CArray.of_list int64_t |> CArray.start) @@ -3200,166 +2463,95 @@ let _histogramdd_from_bin_cts_out ~out self ~bins ~range ~weight ~density = (List.length range) (match weight with | Some v -> v - | None -> null) - (if density then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (if density then 1 else 0) + |> with_tensor_gc ;; let _histogramdd_from_bin_tensors self ~bins ~weight ~density = - let out__ = CArray.make t 1 in stubs__histogramdd_from_bin_tensors - (CArray.start out__) self - (CArray.of_list t bins |> CArray.start) + (CArray.of_list gc_tensor bins |> CArray.start) (List.length bins) (match weight with | Some v -> v - | None -> null) - (if density then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (if density then 1 else 0) + |> with_tensor_gc ;; let _histogramdd_from_bin_tensors_out ~out self ~bins ~weight ~density = - let out__ = CArray.make t 1 in stubs__histogramdd_from_bin_tensors_out - (CArray.start out__) out self - (CArray.of_list t bins |> CArray.start) + (CArray.of_list gc_tensor bins |> CArray.start) (List.length bins) (match weight with | Some v -> v - | None -> null) - (if density then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (if density then 1 else 0) + |> with_tensor_gc ;; let _index_put_impl self ~indices ~values ~accumulate ~unsafe = - let out__ = CArray.make t 1 in stubs__index_put_impl - (CArray.start out__) self (List.map (function | Some x -> x - | None -> null) + | None -> none_gc_tensor) indices - |> CArray.of_list t + |> CArray.of_list gc_tensor |> CArray.start) (List.length indices) values (if accumulate then 1 else 0) - (if unsafe then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if unsafe then 1 else 0) + |> with_tensor_gc ;; let _index_put_impl_ self ~indices ~values ~accumulate ~unsafe = - let out__ = CArray.make t 1 in stubs__index_put_impl_ - (CArray.start out__) self (List.map (function | Some x -> x - | None -> null) + | None -> none_gc_tensor) indices - |> CArray.of_list t + |> CArray.of_list gc_tensor |> CArray.start) (List.length indices) values (if accumulate then 1 else 0) - (if unsafe then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if unsafe then 1 else 0) + |> with_tensor_gc ;; let _index_put_impl_out ~out self ~indices ~values ~accumulate ~unsafe = - let out__ = CArray.make t 1 in stubs__index_put_impl_out - (CArray.start out__) out self (List.map (function | Some x -> x - | None -> null) + | None -> none_gc_tensor) indices - |> CArray.of_list t + |> CArray.of_list gc_tensor |> CArray.start) (List.length indices) values (if accumulate then 1 else 0) - (if unsafe then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _indices self = - let out__ = CArray.make t 1 in - stubs__indices (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _indices_copy self = - let out__ = CArray.make t 1 in - stubs__indices_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _indices_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs__indices_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - 
Gc.finalise C.Tensor.free t0; - t0 -;; - -let _int_mm self ~mat2 = - let out__ = CArray.make t 1 in - stubs__int_mm (CArray.start out__) self mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _int_mm_out ~out self ~mat2 = - let out__ = CArray.make t 1 in - stubs__int_mm_out (CArray.start out__) out self mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _is_all_true self = - let out__ = CArray.make t 1 in - stubs__is_all_true (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _is_any_true self = - let out__ = CArray.make t 1 in - stubs__is_any_true (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if unsafe then 1 else 0) + |> with_tensor_gc ;; +let _indices self = stubs__indices self |> with_tensor_gc +let _indices_copy self = stubs__indices_copy self |> with_tensor_gc +let _indices_copy_out ~out self = stubs__indices_copy_out out self |> with_tensor_gc +let _int_mm self ~mat2 = stubs__int_mm self mat2 |> with_tensor_gc +let _int_mm_out ~out self ~mat2 = stubs__int_mm_out out self mat2 |> with_tensor_gc +let _is_all_true self = stubs__is_all_true self |> with_tensor_gc +let _is_any_true self = stubs__is_any_true self |> with_tensor_gc let _is_zerotensor self = stubs__is_zerotensor self let _linalg_check_errors ~info ~api_name ~is_matrix = @@ -3367,41 +2559,33 @@ let _linalg_check_errors ~info ~api_name ~is_matrix = ;; let _linalg_det ~a = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__linalg_det (CArray.start out__) a; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _linalg_det_result result ~lu ~pivots ~a = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__linalg_det_result (CArray.start out__) result lu pivots a; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _linalg_eigh ~a ~uplo ~compute_v = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__linalg_eigh (CArray.start out__) a uplo (if compute_v then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _linalg_eigh_eigenvalues ~eigenvalues ~eigenvectors ~a ~uplo ~compute_v = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__linalg_eigh_eigenvalues (CArray.start out__) eigenvalues @@ -3409,62 +2593,48 @@ let _linalg_eigh_eigenvalues ~eigenvalues ~eigenvectors ~a ~uplo ~compute_v = a uplo (if compute_v then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = 
CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _linalg_slogdet ~a = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__linalg_slogdet (CArray.start out__) a; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; let _linalg_slogdet_sign ~sign ~logabsdet ~lu ~pivots ~a = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__linalg_slogdet_sign (CArray.start out__) sign logabsdet lu pivots a; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; let _linalg_solve_ex ~a ~b ~left ~check_errors = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__linalg_solve_ex (CArray.start out__) a b (if left then 1 else 0) (if check_errors then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; let _linalg_solve_ex_result result ~lu ~pivots ~info ~a ~b ~left ~check_errors = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__linalg_solve_ex_result (CArray.start out__) result @@ -3475,36 +2645,29 @@ let _linalg_solve_ex_result result ~lu ~pivots ~info ~a ~b ~left ~check_errors = b (if left then 1 else 0) (if check_errors then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; let _linalg_svd ~a ~full_matrices ~compute_uv ~driver = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__linalg_svd (CArray.start out__) a (if full_matrices then 1 else 0) (if compute_uv then 1 else 0) driver; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let 
_linalg_svd_u ~u ~s ~vh ~a ~full_matrices ~compute_uv ~driver = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__linalg_svd_u (CArray.start out__) u @@ -3514,81 +2677,47 @@ let _linalg_svd_u ~u ~s ~vh ~a ~full_matrices ~compute_uv ~driver = (if full_matrices then 1 else 0) (if compute_uv then 1 else 0) driver; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _log_softmax self ~dim ~half_to_float = - let out__ = CArray.make t 1 in - stubs__log_softmax - (CArray.start out__) - self - (Int64.of_int dim) - (if half_to_float then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__log_softmax self (Int64.of_int dim) (if half_to_float then 1 else 0) + |> with_tensor_gc ;; let _log_softmax_backward_data ~grad_output ~output ~dim ~input_dtype = - let out__ = CArray.make t 1 in stubs__log_softmax_backward_data - (CArray.start out__) grad_output output (Int64.of_int dim) - (Kind.packed_to_int input_dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int input_dtype) + |> with_tensor_gc ;; let _log_softmax_backward_data_out ~out ~grad_output ~output ~dim ~input_dtype = - let out__ = CArray.make t 1 in stubs__log_softmax_backward_data_out - (CArray.start out__) out grad_output output (Int64.of_int dim) - (Kind.packed_to_int input_dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int input_dtype) + |> with_tensor_gc ;; let _log_softmax_out ~out self ~dim ~half_to_float = - let out__ = CArray.make t 1 in - stubs__log_softmax_out - (CArray.start out__) - out - self - (Int64.of_int dim) - (if half_to_float then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__log_softmax_out out self (Int64.of_int dim) (if half_to_float then 1 else 0) + |> with_tensor_gc ;; let _logcumsumexp self ~dim = - let out__ = CArray.make t 1 in - stubs__logcumsumexp (CArray.start out__) self (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__logcumsumexp self (Int64.of_int dim) |> with_tensor_gc ;; let _logcumsumexp_out ~out self ~dim = - let out__ = CArray.make t 1 in - stubs__logcumsumexp_out (CArray.start out__) out self (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__logcumsumexp_out out self (Int64.of_int dim) |> with_tensor_gc ;; let _lstm_mps @@ -3602,13 +2731,13 @@ let _lstm_mps ~bidirectional ~batch_first = - let out__ = CArray.make t 6 in + let out__ = CArray.make raw_tensor 6 in stubs__lstm_mps (CArray.start out__) input - (CArray.of_list t hx |> CArray.start) + (CArray.of_list gc_tensor hx |> CArray.start) (List.length hx) - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) @@ -3616,18 +2745,12 @@ let _lstm_mps (if train then 1 else 0) (if bidirectional then 1 else 0) (if batch_first then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - 
let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; - let t5 = CArray.get out__ 5 in - Gc.finalise C.Tensor.free t5; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in + let t5 = CArray.get out__ 5 |> with_tensor_gc in t0, t1, t2, t3, t4, t5 ;; @@ -3648,7 +2771,7 @@ let _lstm_mps_out ~bidirectional ~batch_first = - let out__ = CArray.make t 6 in + let out__ = CArray.make raw_tensor 6 in stubs__lstm_mps_out (CArray.start out__) out0 @@ -3658,9 +2781,9 @@ let _lstm_mps_out out4 out5 input - (CArray.of_list t hx |> CArray.start) + (CArray.of_list gc_tensor hx |> CArray.start) (List.length hx) - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) @@ -3668,144 +2791,80 @@ let _lstm_mps_out (if train then 1 else 0) (if bidirectional then 1 else 0) (if batch_first then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; - let t5 = CArray.get out__ 5 in - Gc.finalise C.Tensor.free t5; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in + let t5 = CArray.get out__ 5 |> with_tensor_gc in t0, t1, t2, t3, t4, t5 ;; let _lu_with_info self ~pivot ~check_errors = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__lu_with_info (CArray.start out__) self (if pivot then 1 else 0) (if check_errors then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _make_dep_token ~options = - let out__ = CArray.make t 1 in - stubs__make_dep_token - (CArray.start out__) - (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__make_dep_token (Kind.packed_to_int (fst options)) (Device.to_int (snd options)) + |> with_tensor_gc ;; let _make_dual ~primal ~tangent ~level = - let out__ = CArray.make t 1 in - stubs__make_dual (CArray.start out__) primal tangent (Int64.of_int level); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__make_dual primal tangent (Int64.of_int level) |> with_tensor_gc ;; let _make_dual_copy ~primal ~tangent ~level = - let out__ = CArray.make t 1 in - stubs__make_dual_copy (CArray.start out__) primal tangent (Int64.of_int level); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__make_dual_copy primal tangent (Int64.of_int level) |> with_tensor_gc ;; let _make_dual_copy_out ~out ~primal ~tangent ~level = - let out__ = CArray.make t 1 in - 
stubs__make_dual_copy_out (CArray.start out__) out primal tangent (Int64.of_int level); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__make_dual_copy_out out primal tangent (Int64.of_int level) |> with_tensor_gc ;; let _make_per_channel_quantized_tensor self ~scale ~zero_point ~axis = - let out__ = CArray.make t 1 in - stubs__make_per_channel_quantized_tensor - (CArray.start out__) - self - scale - zero_point - (Int64.of_int axis); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__make_per_channel_quantized_tensor self scale zero_point (Int64.of_int axis) + |> with_tensor_gc ;; let _make_per_channel_quantized_tensor_out ~out self ~scale ~zero_point ~axis = - let out__ = CArray.make t 1 in stubs__make_per_channel_quantized_tensor_out - (CArray.start out__) out self scale zero_point - (Int64.of_int axis); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int axis) + |> with_tensor_gc ;; let _make_per_tensor_quantized_tensor self ~scale ~zero_point = - let out__ = CArray.make t 1 in - stubs__make_per_tensor_quantized_tensor - (CArray.start out__) - self - scale - (Int64.of_int zero_point); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__make_per_tensor_quantized_tensor self scale (Int64.of_int zero_point) + |> with_tensor_gc ;; let _make_per_tensor_quantized_tensor_out ~out self ~scale ~zero_point = - let out__ = CArray.make t 1 in - stubs__make_per_tensor_quantized_tensor_out - (CArray.start out__) - out - self - scale - (Int64.of_int zero_point); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__make_per_tensor_quantized_tensor_out out self scale (Int64.of_int zero_point) + |> with_tensor_gc ;; let _masked_scale self ~mask ~scale = - let out__ = CArray.make t 1 in - stubs__masked_scale (CArray.start out__) self mask scale; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__masked_scale self mask scale |> with_tensor_gc ;; let _masked_scale_out ~out self ~mask ~scale = - let out__ = CArray.make t 1 in - stubs__masked_scale_out (CArray.start out__) out self mask scale; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__masked_scale_out out self mask scale |> with_tensor_gc ;; let _masked_softmax self ~mask ~dim ~mask_type = - let out__ = CArray.make t 1 in stubs__masked_softmax - (CArray.start out__) self mask (match dim with @@ -3819,16 +2878,12 @@ let _masked_softmax self ~mask ~dim ~mask_type = | Some v -> Int64.of_int v) (match mask_type with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _masked_softmax_backward ~grad_output ~output ~mask ~dim = - let out__ = CArray.make t 1 in stubs__masked_softmax_backward - (CArray.start out__) grad_output output mask @@ -3837,16 +2892,12 @@ let _masked_softmax_backward ~grad_output ~output ~mask ~dim = | Some v -> Int64.of_int v) (match dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _masked_softmax_backward_out ~out ~grad_output ~output ~mask ~dim = - let out__ = CArray.make t 1 in stubs__masked_softmax_backward_out - (CArray.start out__) out grad_output output @@ -3856,16 +2907,12 @@ let _masked_softmax_backward_out ~out ~grad_output ~output ~mask ~dim = | Some v -> Int64.of_int v) (match dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get 
out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _masked_softmax_out ~out self ~mask ~dim ~mask_type = - let out__ = CArray.make t 1 in stubs__masked_softmax_out - (CArray.start out__) out self mask @@ -3880,115 +2927,73 @@ let _masked_softmax_out ~out self ~mask ~dim ~mask_type = | Some v -> Int64.of_int v) (match mask_type with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _mkldnn_reshape self ~shape = - let out__ = CArray.make t 1 in stubs__mkldnn_reshape - (CArray.start out__) self (List.map Int64.of_int shape |> CArray.of_list int64_t |> CArray.start) - (List.length shape); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length shape) + |> with_tensor_gc ;; let _mkldnn_reshape_out ~out self ~shape = - let out__ = CArray.make t 1 in stubs__mkldnn_reshape_out - (CArray.start out__) out self (List.map Int64.of_int shape |> CArray.of_list int64_t |> CArray.start) - (List.length shape); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length shape) + |> with_tensor_gc ;; let _mkldnn_transpose self ~dim0 ~dim1 = - let out__ = CArray.make t 1 in - stubs__mkldnn_transpose - (CArray.start out__) - self - (Int64.of_int dim0) - (Int64.of_int dim1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__mkldnn_transpose self (Int64.of_int dim0) (Int64.of_int dim1) |> with_tensor_gc ;; let _mkldnn_transpose_ self ~dim0 ~dim1 = - let out__ = CArray.make t 1 in - stubs__mkldnn_transpose_ - (CArray.start out__) - self - (Int64.of_int dim0) - (Int64.of_int dim1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__mkldnn_transpose_ self (Int64.of_int dim0) (Int64.of_int dim1) |> with_tensor_gc ;; let _mkldnn_transpose_out ~out self ~dim0 ~dim1 = - let out__ = CArray.make t 1 in - stubs__mkldnn_transpose_out - (CArray.start out__) - out - self - (Int64.of_int dim0) - (Int64.of_int dim1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__mkldnn_transpose_out out self (Int64.of_int dim0) (Int64.of_int dim1) + |> with_tensor_gc ;; let _mps_convolution self ~weight ~bias ~padding ~stride ~dilation ~groups = - let out__ = CArray.make t 1 in stubs__mps_convolution - (CArray.start out__) self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let _mps_convolution_out ~out self ~weight ~bias ~padding ~stride ~dilation ~groups = - let out__ = CArray.make t 1 in stubs__mps_convolution_out - (CArray.start out__) out self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + 
(Int64.of_int groups) + |> with_tensor_gc ;; let _mps_convolution_transpose @@ -4000,9 +3005,7 @@ let _mps_convolution_transpose ~dilation ~groups = - let out__ = CArray.make t 1 in stubs__mps_convolution_transpose - (CArray.start out__) self weight (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -4013,10 +3016,8 @@ let _mps_convolution_transpose (List.length stride) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let _mps_convolution_transpose_out @@ -4029,9 +3030,7 @@ let _mps_convolution_transpose_out ~dilation ~groups = - let out__ = CArray.make t 1 in stubs__mps_convolution_transpose_out - (CArray.start out__) out self weight @@ -4043,10 +3042,8 @@ let _mps_convolution_transpose_out (List.length stride) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let _native_batch_norm_legit @@ -4059,27 +3056,24 @@ let _native_batch_norm_legit ~momentum ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__native_batch_norm_legit (CArray.start out__) input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) running_mean running_var (if training then 1 else 0) momentum eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -4093,54 +3087,46 @@ let _native_batch_norm_legit_functional ~momentum ~eps = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs__native_batch_norm_legit_functional (CArray.start out__) input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) running_mean running_var (if training then 1 else 0) momentum eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; let _native_batch_norm_legit_no_stats input ~weight ~bias ~training ~momentum ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__native_batch_norm_legit_no_stats (CArray.start out__) input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if training then 1 else 0) momentum eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = 
CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -4155,7 +3141,7 @@ let _native_batch_norm_legit_no_stats_out ~momentum ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__native_batch_norm_legit_no_stats_out (CArray.start out__) out @@ -4164,19 +3150,16 @@ let _native_batch_norm_legit_no_stats_out input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if training then 1 else 0) momentum eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -4189,26 +3172,23 @@ let _native_batch_norm_legit_no_training ~momentum ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__native_batch_norm_legit_no_training (CArray.start out__) input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) running_mean running_var momentum eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -4224,7 +3204,7 @@ let _native_batch_norm_legit_no_training_out ~momentum ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__native_batch_norm_legit_no_training_out (CArray.start out__) out0 @@ -4233,20 +3213,17 @@ let _native_batch_norm_legit_no_training_out input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) running_mean running_var momentum eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -4263,7 +3240,7 @@ let _native_batch_norm_legit_out ~momentum ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__native_batch_norm_legit_out (CArray.start out__) out @@ -4272,21 +3249,18 @@ let _native_batch_norm_legit_out input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) running_mean running_var (if training then 1 else 0) momentum eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get 
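(* Editorial note, not part of the patch: multi-result wrappers keep the
   out-array, but its element type changes from [t] to [raw_tensor] and
   every element is converted as it is read back.  Sketch of the generated
   shape ([stubs_foo3] is a hypothetical three-result stub):

     let foo3 input =
       let out__ = CArray.make raw_tensor 3 in
       stubs_foo3 (CArray.start out__) input;
       let t0 = CArray.get out__ 0 |> with_tensor_gc in
       let t1 = CArray.get out__ 1 |> with_tensor_gc in
       let t2 = CArray.get out__ 2 |> with_tensor_gc in
       t0, t1, t2

   Converting each element immediately means only GC tensors escape the
   wrapper, which is the invariant internals.md asks for. *)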
out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -4305,7 +3279,7 @@ let _native_multi_head_attention ~average_attn_weights ~mask_type = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__native_multi_head_attention (CArray.start out__) query @@ -4319,7 +3293,7 @@ let _native_multi_head_attention proj_bias (match mask with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if need_weights then 1 else 0) (if average_attn_weights then 1 else 0) (match mask_type with @@ -4328,10 +3302,8 @@ let _native_multi_head_attention (match mask_type with | Some _ -> 0 | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -4352,7 +3324,7 @@ let _native_multi_head_attention_out ~average_attn_weights ~mask_type = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__native_multi_head_attention_out (CArray.start out__) out0 @@ -4368,7 +3340,7 @@ let _native_multi_head_attention_out proj_bias (match mask with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if need_weights then 1 else 0) (if average_attn_weights then 1 else 0) (match mask_type with @@ -4377,99 +3349,47 @@ let _native_multi_head_attention_out (match mask_type with | Some _ -> 0 | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let _neg_view self = - let out__ = CArray.make t 1 in - stubs__neg_view (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _neg_view_copy self = - let out__ = CArray.make t 1 in - stubs__neg_view_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _neg_view_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs__neg_view_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _neg_view self = stubs__neg_view self |> with_tensor_gc +let _neg_view_copy self = stubs__neg_view_copy self |> with_tensor_gc +let _neg_view_copy_out ~out self = stubs__neg_view_copy_out out self |> with_tensor_gc let _nested_from_padded ~padded ~cpu_nested_shape_example ~fuse_transform_0213 = - let out__ = CArray.make t 1 in stubs__nested_from_padded - (CArray.start out__) padded cpu_nested_shape_example - (if fuse_transform_0213 then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if fuse_transform_0213 then 1 else 0) + |> with_tensor_gc ;; let _nested_from_padded_and_nested_example ~padded ~nt_example = - let out__ = CArray.make t 1 in - stubs__nested_from_padded_and_nested_example (CArray.start out__) padded nt_example; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__nested_from_padded_and_nested_example padded nt_example |> with_tensor_gc ;; let _nested_from_padded_and_nested_example_out ~out ~padded ~nt_example = - let out__ = CArray.make t 1 in - stubs__nested_from_padded_and_nested_example_out - (CArray.start out__) - out - padded - nt_example; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + 
stubs__nested_from_padded_and_nested_example_out out padded nt_example |> with_tensor_gc ;; let _nested_from_padded_out ~out ~padded ~cpu_nested_shape_example ~fuse_transform_0213 = - let out__ = CArray.make t 1 in stubs__nested_from_padded_out - (CArray.start out__) out padded cpu_nested_shape_example - (if fuse_transform_0213 then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if fuse_transform_0213 then 1 else 0) + |> with_tensor_gc ;; let _nested_select_backward ~grad_output self ~dim ~index = - let out__ = CArray.make t 1 in - stubs__nested_select_backward - (CArray.start out__) - grad_output - self - (Int64.of_int dim) - (Int64.of_int index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__nested_select_backward grad_output self (Int64.of_int dim) (Int64.of_int index) + |> with_tensor_gc ;; let _nested_sum_backward ~grad self ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs__nested_sum_backward - (CArray.start out__) grad self (match dim with @@ -4478,148 +3398,95 @@ let _nested_sum_backward ~grad self ~dim ~keepdim = (match dim with | None -> -1 | Some v -> List.length v) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let _nested_view_from_buffer self ~nested_size ~nested_strides ~offsets = - let out__ = CArray.make t 1 in - stubs__nested_view_from_buffer - (CArray.start out__) - self - nested_size - nested_strides - offsets; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__nested_view_from_buffer self nested_size nested_strides offsets |> with_tensor_gc ;; let _nested_view_from_buffer_copy self ~nested_size ~nested_strides ~offsets = - let out__ = CArray.make t 1 in - stubs__nested_view_from_buffer_copy - (CArray.start out__) - self - nested_size - nested_strides - offsets; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__nested_view_from_buffer_copy self nested_size nested_strides offsets + |> with_tensor_gc ;; let _nested_view_from_buffer_copy_out ~out self ~nested_size ~nested_strides ~offsets = - let out__ = CArray.make t 1 in - stubs__nested_view_from_buffer_copy_out - (CArray.start out__) - out - self - nested_size - nested_strides - offsets; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__nested_view_from_buffer_copy_out out self nested_size nested_strides offsets + |> with_tensor_gc ;; let _new_zeros_with_same_feature_meta self other ~self_num_batch_dims = - let out__ = CArray.make t 1 in - stubs__new_zeros_with_same_feature_meta - (CArray.start out__) - self - other - (Int64.of_int self_num_batch_dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__new_zeros_with_same_feature_meta self other (Int64.of_int self_num_batch_dims) + |> with_tensor_gc ;; let _new_zeros_with_same_feature_meta_out ~out self other ~self_num_batch_dims = - let out__ = CArray.make t 1 in stubs__new_zeros_with_same_feature_meta_out - (CArray.start out__) out self other - (Int64.of_int self_num_batch_dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int self_num_batch_dims) + |> with_tensor_gc ;; let _nnpack_available = stubs__nnpack_available let _nnpack_spatial_convolution input ~weight ~bias ~padding ~stride = - let out__ = CArray.make t 1 in stubs__nnpack_spatial_convolution - (CArray.start out__) input weight (match bias with | Some v -> v - | None -> null) + | 
None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) - (List.length stride); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length stride) + |> with_tensor_gc ;; let _nnpack_spatial_convolution_out ~out input ~weight ~bias ~padding ~stride = - let out__ = CArray.make t 1 in stubs__nnpack_spatial_convolution_out - (CArray.start out__) out input weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) - (List.length stride); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length stride) + |> with_tensor_gc ;; let _nnz self = stubs__nnz self let _pack_padded_sequence input ~lengths ~batch_first = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__pack_padded_sequence (CArray.start out__) input lengths (if batch_first then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _pack_padded_sequence_backward ~grad ~input_size ~batch_sizes ~batch_first = - let out__ = CArray.make t 1 in stubs__pack_padded_sequence_backward - (CArray.start out__) grad (List.map Int64.of_int input_size |> CArray.of_list int64_t |> CArray.start) (List.length input_size) batch_sizes - (if batch_first then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if batch_first then 1 else 0) + |> with_tensor_gc ;; let _pack_padded_sequence_out ~out0 ~out1 input ~lengths ~batch_first = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__pack_padded_sequence_out (CArray.start out__) out0 @@ -4627,29 +3494,21 @@ let _pack_padded_sequence_out ~out0 ~out1 input ~lengths ~batch_first = input lengths (if batch_first then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _pad_circular self ~pad = - let out__ = CArray.make t 1 in stubs__pad_circular - (CArray.start out__) self (List.map Int64.of_int pad |> CArray.of_list int64_t |> CArray.start) - (List.length pad); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length pad) + |> with_tensor_gc ;; let _pad_enum self ~pad ~mode ~value = - let out__ = CArray.make t 1 in stubs__pad_enum - (CArray.start out__) self (List.map Int64.of_int pad |> CArray.of_list int64_t |> CArray.start) (List.length pad) @@ -4657,14 +3516,12 @@ let _pad_enum self ~pad ~mode ~value = (Option.value value ~default:0.0) (match value with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _pad_packed_sequence ~data ~batch_sizes ~batch_first ~padding_value ~total_length = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__pad_packed_sequence (CArray.start out__) data @@ -4672,217 +3529,139 @@ let _pad_packed_sequence ~data ~batch_sizes ~batch_first ~padding_value ~total_l (if 
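(* Editorial note, not part of the patch: optional tensor arguments now
   default to [none_gc_tensor] instead of the ctypes [null] pointer, as in
   the [bias] handling above:

     (match bias with
      | Some v -> v
      | None -> none_gc_tensor)

   Presumably [none_gc_tensor] is the same null sentinel re-typed as a
   [gc_tensor]: the stubs' tensor inputs are now [gc_tensor]s, so a bare
   [null : unit ptr] would no longer typecheck. *)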
batch_first then 1 else 0) padding_value (Int64.of_int total_length); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _pdist_backward ~grad self ~p ~pdist = - let out__ = CArray.make t 1 in - stubs__pdist_backward (CArray.start out__) grad self p pdist; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__pdist_backward grad self p pdist |> with_tensor_gc ;; let _pdist_backward_out ~out ~grad self ~p ~pdist = - let out__ = CArray.make t 1 in - stubs__pdist_backward_out (CArray.start out__) out grad self p pdist; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__pdist_backward_out out grad self p pdist |> with_tensor_gc ;; let _pin_memory self ~device = - let out__ = CArray.make t 1 in - stubs__pin_memory (CArray.start out__) self (Device.to_int device); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__pin_memory self (Device.to_int device) |> with_tensor_gc ;; let _pin_memory_out ~out self ~device = - let out__ = CArray.make t 1 in - stubs__pin_memory_out (CArray.start out__) out self (Device.to_int device); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__pin_memory_out out self (Device.to_int device) |> with_tensor_gc ;; -let _prelu_kernel self ~weight = - let out__ = CArray.make t 1 in - stubs__prelu_kernel (CArray.start out__) self weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _prelu_kernel self ~weight = stubs__prelu_kernel self weight |> with_tensor_gc let _prelu_kernel_backward ~grad_output self ~weight = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__prelu_kernel_backward (CArray.start out__) grad_output self weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _propagate_xla_data input ~output = stubs__propagate_xla_data input output let _remove_batch_dim self ~level ~batch_size ~out_dim = - let out__ = CArray.make t 1 in stubs__remove_batch_dim - (CArray.start out__) self (Int64.of_int level) (Int64.of_int batch_size) - (Int64.of_int out_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int out_dim) + |> with_tensor_gc ;; let _reshape_alias self ~size ~stride = - let out__ = CArray.make t 1 in stubs__reshape_alias - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) - (List.length stride); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length stride) + |> with_tensor_gc ;; let _reshape_alias_copy self ~size ~stride = - let out__ = CArray.make t 1 in stubs__reshape_alias_copy - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) - (List.length stride); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length stride) + |> with_tensor_gc ;; let _reshape_alias_copy_out ~out self ~size ~stride = - let out__ = CArray.make t 1 in 
stubs__reshape_alias_copy_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) - (List.length stride); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length stride) + |> with_tensor_gc ;; let _reshape_copy self ~size = - let out__ = CArray.make t 1 in stubs__reshape_copy - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let _reshape_from_tensor self ~shape = - let out__ = CArray.make t 1 in - stubs__reshape_from_tensor (CArray.start out__) self shape; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__reshape_from_tensor self shape |> with_tensor_gc ;; let _resize_output self ~size ~device = - let out__ = CArray.make t 1 in stubs__resize_output - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - (Device.to_int device); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int device) + |> with_tensor_gc ;; let _resize_output_ self ~size ~device = - let out__ = CArray.make t 1 in stubs__resize_output_ - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - (Device.to_int device); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int device) + |> with_tensor_gc ;; let _resize_output_out ~out self ~size ~device = - let out__ = CArray.make t 1 in stubs__resize_output_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - (Device.to_int device); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int device) + |> with_tensor_gc ;; let _rowwise_prune ~weight ~mask ~compressed_indices_dtype = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__rowwise_prune (CArray.start out__) weight mask (Kind.packed_to_int compressed_indices_dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let _sample_dirichlet self = - let out__ = CArray.make t 1 in - stubs__sample_dirichlet (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _sample_dirichlet self = stubs__sample_dirichlet self |> with_tensor_gc let _sample_dirichlet_out ~out self = - let out__ = CArray.make t 1 in - stubs__sample_dirichlet_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sample_dirichlet_out out self |> with_tensor_gc ;; let _saturate_weight_to_fp16 ~weight = - let out__ = CArray.make t 1 in - stubs__saturate_weight_to_fp16 (CArray.start out__) weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__saturate_weight_to_fp16 weight |> with_tensor_gc ;; let _scaled_dot_product_attention_math @@ -4895,7 +3674,7 @@ let _scaled_dot_product_attention_math ~dropout_mask ~scale = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__scaled_dot_product_attention_math 
(CArray.start out__) query @@ -4903,20 +3682,18 @@ let _scaled_dot_product_attention_math value (match attn_mask with | Some v -> v - | None -> null) + | None -> none_gc_tensor) dropout_p (if is_causal then 1 else 0) (match dropout_mask with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Option.value scale ~default:0.0) (match scale with | Some _ -> 0 | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -4930,7 +3707,7 @@ let _scaled_dot_product_efficient_attention ~is_causal ~scale = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs__scaled_dot_product_efficient_attention (CArray.start out__) query @@ -4938,7 +3715,7 @@ let _scaled_dot_product_efficient_attention value (match attn_bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if compute_log_sumexp then 1 else 0) dropout_p (if is_causal then 1 else 0) @@ -4946,14 +3723,10 @@ let _scaled_dot_product_efficient_attention (match scale with | Some _ -> 0 | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; @@ -4974,7 +3747,7 @@ let _scaled_dot_product_flash_attention_backward ~philox_offset ~scale = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__scaled_dot_product_flash_attention_backward (CArray.start out__) grad_out @@ -4995,38 +3768,33 @@ let _scaled_dot_product_flash_attention_backward (match scale with | Some _ -> 0 | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _scaled_mm self ~mat2 ~bias ~out_dtype ~scale_a ~scale_b ~scale_result = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__scaled_mm (CArray.start out__) self mat2 (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Kind.packed_to_int out_dtype) (match scale_a with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match scale_b with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match scale_result with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -5041,7 +3809,7 @@ let _scaled_mm_out ~scale_b ~scale_result = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__scaled_mm_out (CArray.start out__) out @@ -5050,89 +3818,71 @@ let _scaled_mm_out mat2 (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Kind.packed_to_int 
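(* Editorial note, not part of the patch: optional scalars cross the FFI as
   a value/flag pair, since C has no option type.  For [~scale] just above:

     (Option.value scale ~default:0.0)   (* the value, 0.0 when absent *)
     (match scale with
      | Some _ -> 0
      | None -> 1)                       (* null flag: 1 means None *)

   Optional ints (e.g. [~dim] and [~mask_type] earlier in this file) use
   the same convention. *)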
out_dtype) (match scale_a with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match scale_b with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match scale_result with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _scatter_reduce self ~dim ~index ~src ~reduce ~include_self = - let out__ = CArray.make t 1 in stubs__scatter_reduce - (CArray.start out__) self (Int64.of_int dim) index src reduce - (if include_self then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if include_self then 1 else 0) + |> with_tensor_gc ;; let _scatter_reduce_ self ~dim ~index ~src ~reduce ~include_self = - let out__ = CArray.make t 1 in stubs__scatter_reduce_ - (CArray.start out__) self (Int64.of_int dim) index src reduce - (if include_self then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if include_self then 1 else 0) + |> with_tensor_gc ;; let _scatter_reduce_two_out ~out self ~dim ~index ~src ~reduce ~include_self = - let out__ = CArray.make t 1 in stubs__scatter_reduce_two_out - (CArray.start out__) out self (Int64.of_int dim) index src reduce - (if include_self then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if include_self then 1 else 0) + |> with_tensor_gc ;; let _segment_reduce_backward ~grad ~output ~data ~reduce ~lengths ~offsets ~axis ~initial = - let out__ = CArray.make t 1 in stubs__segment_reduce_backward - (CArray.start out__) grad output data reduce (match lengths with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match offsets with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int axis) - initial; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + initial + |> with_tensor_gc ;; let _segment_reduce_backward_out @@ -5146,9 +3896,7 @@ let _segment_reduce_backward_out ~axis ~initial = - let out__ = CArray.make t 1 in stubs__segment_reduce_backward_out - (CArray.start out__) out grad output @@ -5156,24 +3904,16 @@ let _segment_reduce_backward_out reduce (match lengths with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match offsets with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int axis) - initial; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + initial + |> with_tensor_gc ;; -let _shape_as_tensor self = - let out__ = CArray.make t 1 in - stubs__shape_as_tensor (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _shape_as_tensor self = stubs__shape_as_tensor self |> with_tensor_gc let _slow_conv2d_backward ~grad_input @@ -5186,7 +3926,7 @@ let _slow_conv2d_backward ~stride ~padding = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__slow_conv2d_backward (CArray.start out__) grad_input @@ -5201,17 +3941,14 @@ let _slow_conv2d_backward (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + 
let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _sobol_engine_draw ~quasi ~n ~sobolstate ~dimension ~num_generated ~dtype = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__sobol_engine_draw (CArray.start out__) quasi @@ -5220,178 +3957,111 @@ let _sobol_engine_draw ~quasi ~n ~sobolstate ~dimension ~num_generated ~dtype = (Int64.of_int dimension) (Int64.of_int num_generated) (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _sobol_engine_ff_ self ~n ~sobolstate ~dimension ~num_generated = - let out__ = CArray.make t 1 in stubs__sobol_engine_ff_ - (CArray.start out__) self (Int64.of_int n) sobolstate (Int64.of_int dimension) - (Int64.of_int num_generated); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int num_generated) + |> with_tensor_gc ;; let _sobol_engine_initialize_state_ self ~dimension = - let out__ = CArray.make t 1 in - stubs__sobol_engine_initialize_state_ (CArray.start out__) self (Int64.of_int dimension); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sobol_engine_initialize_state_ self (Int64.of_int dimension) |> with_tensor_gc ;; let _sobol_engine_scramble_ self ~ltm ~dimension = - let out__ = CArray.make t 1 in - stubs__sobol_engine_scramble_ (CArray.start out__) self ltm (Int64.of_int dimension); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sobol_engine_scramble_ self ltm (Int64.of_int dimension) |> with_tensor_gc ;; let _softmax self ~dim ~half_to_float = - let out__ = CArray.make t 1 in - stubs__softmax - (CArray.start out__) - self - (Int64.of_int dim) - (if half_to_float then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__softmax self (Int64.of_int dim) (if half_to_float then 1 else 0) + |> with_tensor_gc ;; let _softmax_backward_data ~grad_output ~output ~dim ~input_dtype = - let out__ = CArray.make t 1 in stubs__softmax_backward_data - (CArray.start out__) grad_output output (Int64.of_int dim) - (Kind.packed_to_int input_dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int input_dtype) + |> with_tensor_gc ;; let _softmax_backward_data_out ~grad_input ~grad_output ~output ~dim ~input_dtype = - let out__ = CArray.make t 1 in stubs__softmax_backward_data_out - (CArray.start out__) grad_input grad_output output (Int64.of_int dim) - (Kind.packed_to_int input_dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int input_dtype) + |> with_tensor_gc ;; let _softmax_out ~out self ~dim ~half_to_float = - let out__ = CArray.make t 1 in - stubs__softmax_out - (CArray.start out__) - out - self - (Int64.of_int dim) - (if half_to_float then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__softmax_out out self (Int64.of_int dim) (if half_to_float then 1 else 0) + |> with_tensor_gc ;; -let _sparse_addmm self ~mat1 ~mat2 = - let out__ = CArray.make t 1 in - stubs__sparse_addmm (CArray.start out__) self mat1 mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _sparse_addmm self ~mat1 ~mat2 = stubs__sparse_addmm self mat1 mat2 |> with_tensor_gc let 
_sparse_addmm_out ~out self ~mat1 ~mat2 = - let out__ = CArray.make t 1 in - stubs__sparse_addmm_out (CArray.start out__) out self mat1 mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_addmm_out out self mat1 mat2 |> with_tensor_gc ;; let _sparse_broadcast_to self ~size = - let out__ = CArray.make t 1 in stubs__sparse_broadcast_to - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let _sparse_broadcast_to_copy self ~size = - let out__ = CArray.make t 1 in stubs__sparse_broadcast_to_copy - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let _sparse_broadcast_to_copy_out ~out self ~size = - let out__ = CArray.make t 1 in stubs__sparse_broadcast_to_copy_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let _sparse_bsc_tensor_unsafe ~ccol_indices ~row_indices ~values ~size ~options = - let out__ = CArray.make t 1 in stubs__sparse_bsc_tensor_unsafe - (CArray.start out__) ccol_indices row_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let _sparse_bsr_tensor_unsafe ~crow_indices ~col_indices ~values ~size ~options = - let out__ = CArray.make t 1 in stubs__sparse_bsr_tensor_unsafe - (CArray.start out__) crow_indices col_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let _sparse_compressed_tensor_unsafe @@ -5401,50 +4071,38 @@ let _sparse_compressed_tensor_unsafe ~size ~options = - let out__ = CArray.make t 1 in stubs__sparse_compressed_tensor_unsafe - (CArray.start out__) compressed_indices plain_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let _sparse_coo_tensor_unsafe ~indices ~values ~size ~options ~is_coalesced = - let out__ = CArray.make t 1 in stubs__sparse_coo_tensor_unsafe - (CArray.start out__) indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) (Device.to_int (snd options)) - (if is_coalesced then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if is_coalesced then 1 else 0) + |> with_tensor_gc ;; let _sparse_coo_tensor_with_dims ~sparse_dim ~dense_dim ~size ~options = - let out__ = CArray.make t 1 in stubs__sparse_coo_tensor_with_dims - (CArray.start out__) (Int64.of_int sparse_dim) (Int64.of_int dense_dim) (List.map Int64.of_int 
size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let _sparse_coo_tensor_with_dims_and_tensors @@ -5456,9 +4114,7 @@ let _sparse_coo_tensor_with_dims_and_tensors ~options ~is_coalesced = - let out__ = CArray.make t 1 in stubs__sparse_coo_tensor_with_dims_and_tensors - (CArray.start out__) (Int64.of_int sparse_dim) (Int64.of_int dense_dim) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) @@ -5467,10 +4123,8 @@ let _sparse_coo_tensor_with_dims_and_tensors values (Kind.packed_to_int (fst options)) (Device.to_int (snd options)) - (if is_coalesced then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if is_coalesced then 1 else 0) + |> with_tensor_gc ;; let _sparse_coo_tensor_with_dims_and_tensors_out @@ -5482,9 +4136,7 @@ let _sparse_coo_tensor_with_dims_and_tensors_out ~values ~is_coalesced = - let out__ = CArray.make t 1 in stubs__sparse_coo_tensor_with_dims_and_tensors_out - (CArray.start out__) out (Int64.of_int sparse_dim) (Int64.of_int dense_dim) @@ -5492,607 +4144,345 @@ let _sparse_coo_tensor_with_dims_and_tensors_out (List.length size) indices values - (if is_coalesced then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if is_coalesced then 1 else 0) + |> with_tensor_gc ;; let _sparse_coo_tensor_with_dims_out ~out ~sparse_dim ~dense_dim ~size = - let out__ = CArray.make t 1 in stubs__sparse_coo_tensor_with_dims_out - (CArray.start out__) out (Int64.of_int sparse_dim) (Int64.of_int dense_dim) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let _sparse_csc_tensor_unsafe ~ccol_indices ~row_indices ~values ~size ~options = - let out__ = CArray.make t 1 in stubs__sparse_csc_tensor_unsafe - (CArray.start out__) ccol_indices row_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let _sparse_csr_prod self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs__sparse_csr_prod - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let _sparse_csr_prod_dim_dtype_out ~out self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs__sparse_csr_prod_dim_dtype_out - (CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let _sparse_csr_sum self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs__sparse_csr_sum - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get 
out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let _sparse_csr_sum_dim_dtype_out ~out self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs__sparse_csr_sum_dim_dtype_out - (CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let _sparse_csr_tensor_unsafe ~crow_indices ~col_indices ~values ~size ~options = - let out__ = CArray.make t 1 in stubs__sparse_csr_tensor_unsafe - (CArray.start out__) crow_indices col_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let _sparse_log_softmax self ~dim ~half_to_float = - let out__ = CArray.make t 1 in - stubs__sparse_log_softmax - (CArray.start out__) - self - (Int64.of_int dim) - (if half_to_float then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_log_softmax self (Int64.of_int dim) (if half_to_float then 1 else 0) + |> with_tensor_gc ;; let _sparse_log_softmax_backward_data ~grad_output ~output ~dim self = - let out__ = CArray.make t 1 in - stubs__sparse_log_softmax_backward_data - (CArray.start out__) - grad_output - output - (Int64.of_int dim) - self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_log_softmax_backward_data grad_output output (Int64.of_int dim) self + |> with_tensor_gc ;; let _sparse_log_softmax_backward_data_out ~out ~grad_output ~output ~dim self = - let out__ = CArray.make t 1 in stubs__sparse_log_softmax_backward_data_out - (CArray.start out__) out grad_output output (Int64.of_int dim) - self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + self + |> with_tensor_gc ;; let _sparse_log_softmax_int self ~dim ~dtype = - let out__ = CArray.make t 1 in - stubs__sparse_log_softmax_int - (CArray.start out__) - self - (Int64.of_int dim) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_log_softmax_int self (Int64.of_int dim) (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let _sparse_log_softmax_out ~out self ~dim ~half_to_float = - let out__ = CArray.make t 1 in stubs__sparse_log_softmax_out - (CArray.start out__) out self (Int64.of_int dim) - (if half_to_float then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if half_to_float then 1 else 0) + |> with_tensor_gc ;; let _sparse_mask_projection self ~mask ~accumulate_matches = - let out__ = CArray.make t 1 in - stubs__sparse_mask_projection - (CArray.start out__) - self - mask - (if accumulate_matches then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_mask_projection self mask (if accumulate_matches then 1 else 0) + |> with_tensor_gc ;; let _sparse_mask_projection_out ~out self ~mask ~accumulate_matches = - let out__ = CArray.make t 1 in - stubs__sparse_mask_projection_out - (CArray.start out__) - out - self - mask - (if accumulate_matches then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_mask_projection_out out self mask 
(if accumulate_matches then 1 else 0) + |> with_tensor_gc ;; -let _sparse_mm ~sparse ~dense = - let out__ = CArray.make t 1 in - stubs__sparse_mm (CArray.start out__) sparse dense; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _sparse_mm ~sparse ~dense = stubs__sparse_mm sparse dense |> with_tensor_gc let _sparse_mm_reduce ~sparse ~dense ~reduce = - let out__ = CArray.make t 1 in - stubs__sparse_mm_reduce (CArray.start out__) sparse dense reduce; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_mm_reduce sparse dense reduce |> with_tensor_gc ;; let _sparse_mm_reduce_impl self other ~reduce = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__sparse_mm_reduce_impl (CArray.start out__) self other reduce; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _sparse_semi_structured_linear input ~weight ~meta ~bias ~activation = - let out__ = CArray.make t 1 in stubs__sparse_semi_structured_linear - (CArray.start out__) input weight meta (match bias with | Some v -> v - | None -> null) - activation; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + activation + |> with_tensor_gc ;; let _sparse_softmax self ~dim ~half_to_float = - let out__ = CArray.make t 1 in - stubs__sparse_softmax - (CArray.start out__) - self - (Int64.of_int dim) - (if half_to_float then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_softmax self (Int64.of_int dim) (if half_to_float then 1 else 0) + |> with_tensor_gc ;; let _sparse_softmax_backward_data ~grad_output ~output ~dim self = - let out__ = CArray.make t 1 in - stubs__sparse_softmax_backward_data - (CArray.start out__) - grad_output - output - (Int64.of_int dim) - self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_softmax_backward_data grad_output output (Int64.of_int dim) self + |> with_tensor_gc ;; let _sparse_softmax_backward_data_out ~out ~grad_output ~output ~dim self = - let out__ = CArray.make t 1 in - stubs__sparse_softmax_backward_data_out - (CArray.start out__) - out - grad_output - output - (Int64.of_int dim) - self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_softmax_backward_data_out out grad_output output (Int64.of_int dim) self + |> with_tensor_gc ;; let _sparse_softmax_int self ~dim ~dtype = - let out__ = CArray.make t 1 in - stubs__sparse_softmax_int - (CArray.start out__) - self - (Int64.of_int dim) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_softmax_int self (Int64.of_int dim) (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let _sparse_softmax_out ~out self ~dim ~half_to_float = - let out__ = CArray.make t 1 in - stubs__sparse_softmax_out - (CArray.start out__) - out - self - (Int64.of_int dim) - (if half_to_float then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_softmax_out out self (Int64.of_int dim) (if half_to_float then 1 else 0) + |> with_tensor_gc ;; let _sparse_sparse_matmul self other = - let out__ = CArray.make t 1 in - stubs__sparse_sparse_matmul (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - 
t0 + stubs__sparse_sparse_matmul self other |> with_tensor_gc ;; let _sparse_sparse_matmul_out ~out self other = - let out__ = CArray.make t 1 in - stubs__sparse_sparse_matmul_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_sparse_matmul_out out self other |> with_tensor_gc ;; -let _sparse_sum self = - let out__ = CArray.make t 1 in - stubs__sparse_sum (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _sparse_sum self = stubs__sparse_sum self |> with_tensor_gc let _sparse_sum_backward ~grad self ~dim = - let out__ = CArray.make t 1 in stubs__sparse_sum_backward - (CArray.start out__) grad self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) - (List.length dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dim) + |> with_tensor_gc ;; let _sparse_sum_backward_out ~out ~grad self ~dim = - let out__ = CArray.make t 1 in stubs__sparse_sum_backward_out - (CArray.start out__) out grad self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) - (List.length dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dim) + |> with_tensor_gc ;; let _sparse_sum_dim self ~dim = - let out__ = CArray.make t 1 in stubs__sparse_sum_dim - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) - (List.length dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dim) + |> with_tensor_gc ;; let _sparse_sum_dim_dtype self ~dim ~dtype = - let out__ = CArray.make t 1 in stubs__sparse_sum_dim_dtype - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let _sparse_sum_dim_out ~out self ~dim = - let out__ = CArray.make t 1 in stubs__sparse_sum_dim_out - (CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) - (List.length dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dim) + |> with_tensor_gc ;; let _sparse_sum_dtype self ~dtype = - let out__ = CArray.make t 1 in - stubs__sparse_sum_dtype (CArray.start out__) self (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__sparse_sum_dtype self (Kind.packed_to_int dtype) |> with_tensor_gc ;; let _spdiags ~diagonals ~offsets ~shape = - let out__ = CArray.make t 1 in stubs__spdiags - (CArray.start out__) diagonals offsets (List.map Int64.of_int shape |> CArray.of_list int64_t |> CArray.start) - (List.length shape); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length shape) + |> with_tensor_gc ;; let _spdiags_out ~out ~diagonals ~offsets ~shape = - let out__ = CArray.make t 1 in stubs__spdiags_out - (CArray.start out__) out diagonals offsets (List.map Int64.of_int shape |> CArray.of_list int64_t |> CArray.start) - (List.length shape); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length shape) + |> with_tensor_gc ;; let _stack tensors ~dim = - let out__ = CArray.make t 1 in stubs__stack - (CArray.start out__) - (CArray.of_list t tensors |> CArray.start) + (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) - (Int64.of_int 
dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dim) + |> with_tensor_gc ;; let _stack_out ~out tensors ~dim = - let out__ = CArray.make t 1 in stubs__stack_out - (CArray.start out__) out - (CArray.of_list t tensors |> CArray.start) + (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) - (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dim) + |> with_tensor_gc ;; -let _standard_gamma self = - let out__ = CArray.make t 1 in - stubs__standard_gamma (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _standard_gamma self = stubs__standard_gamma self |> with_tensor_gc let _standard_gamma_grad self ~output = - let out__ = CArray.make t 1 in - stubs__standard_gamma_grad (CArray.start out__) self output; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__standard_gamma_grad self output |> with_tensor_gc ;; let _standard_gamma_grad_out ~out self ~output = - let out__ = CArray.make t 1 in - stubs__standard_gamma_grad_out (CArray.start out__) out self output; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__standard_gamma_grad_out out self output |> with_tensor_gc ;; -let _standard_gamma_out ~out self = - let out__ = CArray.make t 1 in - stubs__standard_gamma_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _standard_gamma_out ~out self = stubs__standard_gamma_out out self |> with_tensor_gc let _test_ambiguous_defaults ~dummy ~a ~b = - let out__ = CArray.make t 1 in - stubs__test_ambiguous_defaults - (CArray.start out__) - dummy - (Int64.of_int a) - (Int64.of_int b); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_ambiguous_defaults dummy (Int64.of_int a) (Int64.of_int b) |> with_tensor_gc ;; let _test_ambiguous_defaults_b ~dummy ~a ~b = - let out__ = CArray.make t 1 in - stubs__test_ambiguous_defaults_b (CArray.start out__) dummy (Int64.of_int a) b; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_ambiguous_defaults_b dummy (Int64.of_int a) b |> with_tensor_gc ;; let _test_autograd_multiple_dispatch self = - let out__ = CArray.make t 1 in - stubs__test_autograd_multiple_dispatch (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_autograd_multiple_dispatch self |> with_tensor_gc ;; let _test_autograd_multiple_dispatch_fullcoverage_out ~out self = - let out__ = CArray.make t 1 in - stubs__test_autograd_multiple_dispatch_fullcoverage_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_autograd_multiple_dispatch_fullcoverage_out out self |> with_tensor_gc ;; let _test_autograd_multiple_dispatch_ntonly self ~b = - let out__ = CArray.make t 1 in - stubs__test_autograd_multiple_dispatch_ntonly - (CArray.start out__) - self - (if b then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_autograd_multiple_dispatch_ntonly self (if b then 1 else 0) + |> with_tensor_gc ;; let _test_autograd_multiple_dispatch_view self = - let out__ = CArray.make t 1 in - stubs__test_autograd_multiple_dispatch_view (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_autograd_multiple_dispatch_view self |> with_tensor_gc ;; let 
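(* Editorial note, not part of the patch: list arguments are passed as a
   pointer-plus-length pair.  For tensor lists such as [_stack]'s [tensors]
   above, the marshalled element type changes from [t] to [gc_tensor]:

     (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)

   Int lists follow the same shape after an Int64 conversion:

     (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start)
     (List.length size) *)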
_test_autograd_multiple_dispatch_view_copy self = - let out__ = CArray.make t 1 in - stubs__test_autograd_multiple_dispatch_view_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_autograd_multiple_dispatch_view_copy self |> with_tensor_gc ;; let _test_autograd_multiple_dispatch_view_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs__test_autograd_multiple_dispatch_view_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_autograd_multiple_dispatch_view_copy_out out self |> with_tensor_gc ;; -let _test_check_tensor self = - let out__ = CArray.make t 1 in - stubs__test_check_tensor (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _test_check_tensor self = stubs__test_check_tensor self |> with_tensor_gc let _test_functorch_fallback self other = - let out__ = CArray.make t 1 in - stubs__test_functorch_fallback (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_functorch_fallback self other |> with_tensor_gc ;; let _test_functorch_fallback_out ~out self other = - let out__ = CArray.make t 1 in - stubs__test_functorch_fallback_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_functorch_fallback_out out self other |> with_tensor_gc ;; let _test_optional_filled_intlist ~values ~addends = - let out__ = CArray.make t 1 in stubs__test_optional_filled_intlist - (CArray.start out__) values (match addends with | None -> from_voidp int64_t null | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match addends with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let _test_optional_filled_intlist_out ~out ~values ~addends = - let out__ = CArray.make t 1 in stubs__test_optional_filled_intlist_out - (CArray.start out__) out values (match addends with @@ -6100,57 +4490,41 @@ let _test_optional_filled_intlist_out ~out ~values ~addends = | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match addends with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let _test_optional_floatlist ~values ~addends = - let out__ = CArray.make t 1 in stubs__test_optional_floatlist - (CArray.start out__) values (addends |> CArray.of_list double |> CArray.start) - (List.length addends); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length addends) + |> with_tensor_gc ;; let _test_optional_floatlist_out ~out ~values ~addends = - let out__ = CArray.make t 1 in stubs__test_optional_floatlist_out - (CArray.start out__) out values (addends |> CArray.of_list double |> CArray.start) - (List.length addends); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length addends) + |> with_tensor_gc ;; let _test_optional_intlist ~values ~addends = - let out__ = CArray.make t 1 in stubs__test_optional_intlist - (CArray.start out__) values (match addends with | None -> from_voidp int64_t null | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match addends with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 
in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let _test_optional_intlist_out ~out ~values ~addends = - let out__ = CArray.make t 1 in stubs__test_optional_intlist_out - (CArray.start out__) out values (match addends with @@ -6158,42 +4532,22 @@ let _test_optional_intlist_out ~out ~values ~addends = | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match addends with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let _test_serialization_subcmul self other = - let out__ = CArray.make t 1 in - stubs__test_serialization_subcmul (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_serialization_subcmul self other |> with_tensor_gc ;; let _test_string_default ~dummy ~a ~b = - let out__ = CArray.make t 1 in - stubs__test_string_default (CArray.start out__) dummy a b; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_string_default dummy a b |> with_tensor_gc ;; -let _test_warn_in_autograd self = - let out__ = CArray.make t 1 in - stubs__test_warn_in_autograd (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _test_warn_in_autograd self = stubs__test_warn_in_autograd self |> with_tensor_gc let _test_warn_in_autograd_out ~out self = - let out__ = CArray.make t 1 in - stubs__test_warn_in_autograd_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__test_warn_in_autograd_out out self |> with_tensor_gc ;; let _thnn_differentiable_gru_cell_backward @@ -6204,7 +4558,7 @@ let _thnn_differentiable_gru_cell_backward ~input_bias ~hidden_bias = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs__thnn_differentiable_gru_cell_backward (CArray.start out__) grad_hy @@ -6213,20 +4567,15 @@ let _thnn_differentiable_gru_cell_backward hx (match input_bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match hidden_bias with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; @@ -6240,40 +4589,35 @@ let _thnn_differentiable_lstm_cell_backward ~cx ~cy = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs__thnn_differentiable_lstm_cell_backward (CArray.start out__) (match grad_hy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_cy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) input_gates hidden_gates (match input_bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match hidden_bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) cx cy; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free 
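(* Editorial note, not part of the patch: an optional int list is encoded
   as a possibly-null pointer plus a sentinel length of -1, as in
   [_test_optional_intlist] above:

     (match addends with
      | None -> from_voidp int64_t null
      | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start)
     (match addends with
      | None -> -1
      | Some v -> List.length v)

   [from_voidp int64_t null] is ctypes' way of writing a typed null
   [int64 ptr], so the C side can recognise None from either field. *)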
t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; let _thnn_fused_gru_cell ~input_gates ~hidden_gates ~hx ~input_bias ~hidden_bias = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__thnn_fused_gru_cell (CArray.start out__) input_gates @@ -6281,34 +4625,27 @@ let _thnn_fused_gru_cell ~input_gates ~hidden_gates ~hx ~input_bias ~hidden_bias hx (match input_bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match hidden_bias with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _thnn_fused_gru_cell_backward ~grad_hy ~workspace ~has_bias = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs__thnn_fused_gru_cell_backward (CArray.start out__) grad_hy workspace (if has_bias then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; @@ -6322,7 +4659,7 @@ let _thnn_fused_gru_cell_backward_out ~workspace ~has_bias = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs__thnn_fused_gru_cell_backward_out (CArray.start out__) out0 @@ -6333,16 +4670,11 @@ let _thnn_fused_gru_cell_backward_out grad_hy workspace (if has_bias then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; @@ -6355,7 +4687,7 @@ let _thnn_fused_gru_cell_out ~input_bias ~hidden_bias = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__thnn_fused_gru_cell_out (CArray.start out__) out0 @@ -6365,19 +4697,17 @@ let _thnn_fused_gru_cell_out hx (match input_bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match hidden_bias with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + | None -> 
none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _thnn_fused_lstm_cell ~input_gates ~hidden_gates ~cx ~input_bias ~hidden_bias = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__thnn_fused_lstm_cell (CArray.start out__) input_gates @@ -6385,66 +4715,55 @@ let _thnn_fused_lstm_cell ~input_gates ~hidden_gates ~cx ~input_bias ~hidden_bia cx (match input_bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match hidden_bias with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _thnn_fused_lstm_cell_backward ~grad_hy ~grad_cy ~cx ~cy ~workspace ~has_bias = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs__thnn_fused_lstm_cell_backward (CArray.start out__) (match grad_hy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_cy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) cx cy workspace (if has_bias then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; let _thnn_fused_lstm_cell_backward_impl ~grad_hy ~grad_cy ~cx ~cy ~workspace ~has_bias = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__thnn_fused_lstm_cell_backward_impl (CArray.start out__) (match grad_hy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_cy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) cx cy workspace (if has_bias then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -6459,7 +4778,7 @@ let _thnn_fused_lstm_cell_backward_impl_out ~workspace ~has_bias = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__thnn_fused_lstm_cell_backward_impl_out (CArray.start out__) out0 @@ -6467,20 +4786,17 @@ let _thnn_fused_lstm_cell_backward_impl_out out2 (match grad_hy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_cy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) cx cy workspace (if has_bias then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> 
with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -6494,7 +4810,7 @@ let _thnn_fused_lstm_cell_out ~input_bias ~hidden_bias = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__thnn_fused_lstm_cell_out (CArray.start out__) out0 @@ -6505,74 +4821,46 @@ let _thnn_fused_lstm_cell_out cx (match input_bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match hidden_bias with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _to_copy self ~options ~non_blocking = - let out__ = CArray.make t 1 in stubs__to_copy - (CArray.start out__) self (Kind.packed_to_int (fst options)) (Device.to_int (snd options)) - (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if non_blocking then 1 else 0) + |> with_tensor_gc ;; let _to_copy_out ~out self ~non_blocking = - let out__ = CArray.make t 1 in - stubs__to_copy_out (CArray.start out__) out self (if non_blocking then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__to_copy_out out self (if non_blocking then 1 else 0) |> with_tensor_gc ;; let _to_cpu tensors = - stubs__to_cpu (CArray.of_list t tensors |> CArray.start) (List.length tensors) + stubs__to_cpu (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) |> to_tensor_list ;; let _to_dense self ~dtype ~masked_grad = - let out__ = CArray.make t 1 in - stubs__to_dense - (CArray.start out__) - self - (Kind.packed_to_int dtype) - (if masked_grad then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__to_dense self (Kind.packed_to_int dtype) (if masked_grad then 1 else 0) + |> with_tensor_gc ;; let _to_dense_out ~out self ~dtype ~masked_grad = - let out__ = CArray.make t 1 in - stubs__to_dense_out - (CArray.start out__) - out - self - (Kind.packed_to_int dtype) - (if masked_grad then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__to_dense_out out self (Kind.packed_to_int dtype) (if masked_grad then 1 else 0) + |> with_tensor_gc ;; let _to_sparse_bsc self ~blocksize ~dense_dim = - let out__ = CArray.make t 1 in stubs__to_sparse_bsc - (CArray.start out__) self (List.map Int64.of_int blocksize |> CArray.of_list int64_t |> CArray.start) (List.length blocksize) @@ -6581,16 +4869,12 @@ let _to_sparse_bsc self ~blocksize ~dense_dim = | Some v -> Int64.of_int v) (match dense_dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _to_sparse_bsc_out ~out self ~blocksize ~dense_dim = - let out__ = CArray.make t 1 in stubs__to_sparse_bsc_out - (CArray.start out__) out self (List.map Int64.of_int blocksize |> CArray.of_list int64_t |> CArray.start) @@ -6600,16 +4884,12 @@ let _to_sparse_bsc_out ~out self ~blocksize ~dense_dim = | Some v -> Int64.of_int v) (match dense_dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _to_sparse_bsr 
self ~blocksize ~dense_dim = - let out__ = CArray.make t 1 in stubs__to_sparse_bsr - (CArray.start out__) self (List.map Int64.of_int blocksize |> CArray.of_list int64_t |> CArray.start) (List.length blocksize) @@ -6618,16 +4898,12 @@ let _to_sparse_bsr self ~blocksize ~dense_dim = | Some v -> Int64.of_int v) (match dense_dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _to_sparse_bsr_out ~out self ~blocksize ~dense_dim = - let out__ = CArray.make t 1 in stubs__to_sparse_bsr_out - (CArray.start out__) out self (List.map Int64.of_int blocksize |> CArray.of_list int64_t |> CArray.start) @@ -6637,32 +4913,24 @@ let _to_sparse_bsr_out ~out self ~blocksize ~dense_dim = | Some v -> Int64.of_int v) (match dense_dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _to_sparse_csc self ~dense_dim = - let out__ = CArray.make t 1 in stubs__to_sparse_csc - (CArray.start out__) self (match dense_dim with | None -> Int64.zero | Some v -> Int64.of_int v) (match dense_dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _to_sparse_csc_out ~out self ~dense_dim = - let out__ = CArray.make t 1 in stubs__to_sparse_csc_out - (CArray.start out__) out self (match dense_dim with @@ -6670,32 +4938,24 @@ let _to_sparse_csc_out ~out self ~dense_dim = | Some v -> Int64.of_int v) (match dense_dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _to_sparse_csr self ~dense_dim = - let out__ = CArray.make t 1 in stubs__to_sparse_csr - (CArray.start out__) self (match dense_dim with | None -> Int64.zero | Some v -> Int64.of_int v) (match dense_dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _to_sparse_csr_out ~out self ~dense_dim = - let out__ = CArray.make t 1 in stubs__to_sparse_csr_out - (CArray.start out__) out self (match dense_dim with @@ -6703,40 +4963,33 @@ let _to_sparse_csr_out ~out self ~dense_dim = | Some v -> Int64.of_int v) (match dense_dim with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _to_sparse_semi_structured ~dense = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__to_sparse_semi_structured (CArray.start out__) dense; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _transform_bias_rescale_qkv ~qkv ~qkv_bias ~num_heads = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__transform_bias_rescale_qkv (CArray.start out__) qkv qkv_bias (Int64.of_int num_heads); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _transform_bias_rescale_qkv_out ~out0 ~out1 
~out2 ~qkv ~qkv_bias ~num_heads = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__transform_bias_rescale_qkv_out (CArray.start out__) out0 @@ -6745,12 +4998,9 @@ let _transform_bias_rescale_qkv_out ~out0 ~out1 ~out2 ~qkv ~qkv_bias ~num_heads qkv qkv_bias (Int64.of_int num_heads); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -6776,9 +5026,7 @@ let _transformer_encoder_layer_fwd ~mask ~mask_type = - let out__ = CArray.make t 1 in stubs__transformer_encoder_layer_fwd - (CArray.start out__) src (Int64.of_int embed_dim) (Int64.of_int num_heads) @@ -6799,16 +5047,14 @@ let _transformer_encoder_layer_fwd ffn_bias_2 (match mask with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match mask_type with | None -> Int64.zero | Some v -> Int64.of_int v) (match mask_type with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _transformer_encoder_layer_fwd_out @@ -6834,9 +5080,7 @@ let _transformer_encoder_layer_fwd_out ~mask ~mask_type = - let out__ = CArray.make t 1 in stubs__transformer_encoder_layer_fwd_out - (CArray.start out__) out src (Int64.of_int embed_dim) @@ -6858,22 +5102,18 @@ let _transformer_encoder_layer_fwd_out ffn_bias_2 (match mask with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match mask_type with | None -> Int64.zero | Some v -> Int64.of_int v) (match mask_type with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _trilinear ~i1 ~i2 ~i3 ~expand1 ~expand2 ~expand3 ~sumdim ~unroll_dim = - let out__ = CArray.make t 1 in stubs__trilinear - (CArray.start out__) i1 i2 i3 @@ -6885,16 +5125,12 @@ let _trilinear ~i1 ~i2 ~i3 ~expand1 ~expand2 ~expand3 ~sumdim ~unroll_dim = (List.length expand3) (List.map Int64.of_int sumdim |> CArray.of_list int64_t |> CArray.start) (List.length sumdim) - (Int64.of_int unroll_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int unroll_dim) + |> with_tensor_gc ;; let _trilinear_out ~out ~i1 ~i2 ~i3 ~expand1 ~expand2 ~expand3 ~sumdim ~unroll_dim = - let out__ = CArray.make t 1 in stubs__trilinear_out - (CArray.start out__) out i1 i2 @@ -6907,10 +5143,8 @@ let _trilinear_out ~out ~i1 ~i2 ~i3 ~expand1 ~expand2 ~expand3 ~sumdim ~unroll_d (List.length expand3) (List.map Int64.of_int sumdim |> CArray.of_list int64_t |> CArray.start) (List.length sumdim) - (Int64.of_int unroll_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int unroll_dim) + |> with_tensor_gc ;; let _triton_multi_head_attention @@ -6925,9 +5159,7 @@ let _triton_multi_head_attention ~proj_bias ~mask = - let out__ = CArray.make t 1 in stubs__triton_multi_head_attention - (CArray.start out__) query key value @@ -6939,10 +5171,8 @@ let _triton_multi_head_attention proj_bias (match mask with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let _triton_multi_head_attention_out @@ -6958,9 +5188,7 @@ let _triton_multi_head_attention_out ~proj_bias ~mask = - let out__ = 
CArray.make t 1 in stubs__triton_multi_head_attention_out - (CArray.start out__) out query key @@ -6973,61 +5201,46 @@ let _triton_multi_head_attention_out proj_bias (match mask with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let _triton_scaled_dot_attention ~q ~k ~v ~dropout_p = - let out__ = CArray.make t 1 in - stubs__triton_scaled_dot_attention (CArray.start out__) q k v dropout_p; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__triton_scaled_dot_attention q k v dropout_p |> with_tensor_gc ;; let _triton_scaled_dot_attention_out ~out ~q ~k ~v ~dropout_p = - let out__ = CArray.make t 1 in - stubs__triton_scaled_dot_attention_out (CArray.start out__) out q k v dropout_p; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs__triton_scaled_dot_attention_out out q k v dropout_p |> with_tensor_gc ;; let _unique self ~sorted ~return_inverse = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__unique (CArray.start out__) self (if sorted then 1 else 0) (if return_inverse then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _unique2 self ~sorted ~return_inverse ~return_counts = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__unique2 (CArray.start out__) self (if sorted then 1 else 0) (if return_inverse then 1 else 0) (if return_counts then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _unique2_out ~out0 ~out1 ~out2 self ~sorted ~return_inverse ~return_counts = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs__unique2_out (CArray.start out__) out0 @@ -7037,17 +5250,14 @@ let _unique2_out ~out0 ~out1 ~out2 self ~sorted ~return_inverse ~return_counts = (if sorted then 1 else 0) (if return_inverse then 1 else 0) (if return_counts then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let _unique_out ~out0 ~out1 self ~sorted ~return_inverse = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__unique_out (CArray.start out__) out0 @@ -7055,90 +5265,68 @@ let _unique_out ~out0 ~out1 self ~sorted ~return_inverse = self (if sorted then 1 else 0) (if return_inverse then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _unpack_dual ~dual ~level = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__unpack_dual (CArray.start out__) dual 
(Int64.of_int level); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _unsafe_index self ~indices = - let out__ = CArray.make t 1 in stubs__unsafe_index - (CArray.start out__) self (List.map (function | Some x -> x - | None -> null) + | None -> none_gc_tensor) indices - |> CArray.of_list t + |> CArray.of_list gc_tensor |> CArray.start) - (List.length indices); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length indices) + |> with_tensor_gc ;; let _unsafe_index_put self ~indices ~values ~accumulate = - let out__ = CArray.make t 1 in stubs__unsafe_index_put - (CArray.start out__) self (List.map (function | Some x -> x - | None -> null) + | None -> none_gc_tensor) indices - |> CArray.of_list t + |> CArray.of_list gc_tensor |> CArray.start) (List.length indices) values - (if accumulate then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if accumulate then 1 else 0) + |> with_tensor_gc ;; let _unsafe_view self ~size = - let out__ = CArray.make t 1 in stubs__unsafe_view - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let _unsafe_view_out ~out self ~size = - let out__ = CArray.make t 1 in stubs__unsafe_view_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let _upsample_bicubic2d_aa self ~output_size ~align_corners ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_bicubic2d_aa - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -7150,10 +5338,8 @@ let _upsample_bicubic2d_aa self ~output_size ~align_corners ~scales_h ~scales_w (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_bicubic2d_aa_backward @@ -7164,9 +5350,7 @@ let _upsample_bicubic2d_aa_backward ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_bicubic2d_aa_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -7180,10 +5364,8 @@ let _upsample_bicubic2d_aa_backward (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_bicubic2d_aa_backward_grad_input @@ -7195,9 +5377,7 @@ let _upsample_bicubic2d_aa_backward_grad_input ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_bicubic2d_aa_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7212,16 +5392,12 @@ let _upsample_bicubic2d_aa_backward_grad_input (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None 
-> 1) + |> with_tensor_gc ;; let _upsample_bicubic2d_aa_out ~out self ~output_size ~align_corners ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_bicubic2d_aa_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7234,16 +5410,12 @@ let _upsample_bicubic2d_aa_out ~out self ~output_size ~align_corners ~scales_h ~ (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_bicubic2d_aa_vec input ~output_size ~align_corners ~scale_factors = - let out__ = CArray.make t 1 in stubs__upsample_bicubic2d_aa_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -7253,16 +5425,12 @@ let _upsample_bicubic2d_aa_vec input ~output_size ~align_corners ~scale_factors | Some v -> List.length v) (if align_corners then 1 else 0) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let _upsample_bilinear2d_aa self ~output_size ~align_corners ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_bilinear2d_aa - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -7274,10 +5442,8 @@ let _upsample_bilinear2d_aa self ~output_size ~align_corners ~scales_h ~scales_w (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_bilinear2d_aa_backward @@ -7288,9 +5454,7 @@ let _upsample_bilinear2d_aa_backward ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_bilinear2d_aa_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -7304,10 +5468,8 @@ let _upsample_bilinear2d_aa_backward (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_bilinear2d_aa_backward_grad_input @@ -7319,9 +5481,7 @@ let _upsample_bilinear2d_aa_backward_grad_input ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_bilinear2d_aa_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7336,16 +5496,12 @@ let _upsample_bilinear2d_aa_backward_grad_input (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_bilinear2d_aa_out ~out self ~output_size ~align_corners ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_bilinear2d_aa_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7358,16 +5514,12 @@ let _upsample_bilinear2d_aa_out ~out self ~output_size ~align_corners ~scales_h (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; 
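(* --- Editorial sketch (not part of the generated file) ------------------
   Every hunk in this stretch of the diff applies the same mechanical
   rewrite, so it is spelled out once here.  [raw_tensor], [gc_tensor],
   [with_tensor_gc] and [none_gc_tensor] are the real names used by the
   wrapper; their definitions below are simplified stand-ins so that this
   sketch runs on its own, and [stubs_example] is a hypothetical stub, not
   an actual torch binding. *)

type raw_tensor = nativeint          (* stand-in: bare pointer a ctypes stub returns *)
type gc_tensor = { ptr : nativeint } (* stand-in: pointer held in a finalised block *)

(* Stand-in for the hand-written, non-ctypes conversion: wrap the raw
   pointer in a value whose finalizer frees the C++ tensor once OCaml
   collects it. *)
let with_tensor_gc (raw : raw_tensor) : gc_tensor =
  let t = { ptr = raw } in
  Gc.finalise (fun t -> Printf.printf "freeing tensor at %nd\n" t.ptr) t;
  t

(* Stand-in for the sentinel the new code passes where the old code passed
   [null] for an omitted optional tensor argument. *)
let none_gc_tensor : gc_tensor = { ptr = 0n }

(* Hypothetical single-output stub: consumes a GC tensor, returns raw. *)
let stubs_example (self : gc_tensor) : raw_tensor = Nativeint.succ self.ptr

(* Old shape (the removed lines): allocate a one-element out-array of type
   [t], let the stub fill it, pull the cell out, and attach
   [Gc.finalise C.Tensor.free] by hand.  New shape (the added lines): the
   stub returns the raw pointer and the wrapper pipes it through
   [with_tensor_gc], so only GC tensors ever reach the caller: *)
let example self = stubs_example self |> with_tensor_gc

(* Functions with several results keep the out-array, but it is declared as
   [CArray.make raw_tensor n] instead of [CArray.make t n], and each cell is
   converted on extraction:
     let t0 = CArray.get out__ 0 |> with_tensor_gc in ...
   Optional scalars are unchanged by this diff: they still travel as a
   (value, is-none) pair (a list length of [-1], or a null-flag of [1], in
   the [None] branch), because C has no option type. *)

let () =
  let input = with_tensor_gc 41n in
  let output = example input in
  ignore none_gc_tensor;
  Printf.printf "result pointer = %nd\n" output.ptr;
  Gc.full_major () (* forces the stand-in finalizers to run in this sketch *)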
let _upsample_bilinear2d_aa_vec input ~output_size ~align_corners ~scale_factors = - let out__ = CArray.make t 1 in stubs__upsample_bilinear2d_aa_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -7377,32 +5529,24 @@ let _upsample_bilinear2d_aa_vec input ~output_size ~align_corners ~scale_factors | Some v -> List.length v) (if align_corners then 1 else 0) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let _upsample_nearest_exact1d self ~output_size ~scales = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact1d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact1d_backward ~grad_output ~output_size ~input_size ~scales = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact1d_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -7411,10 +5555,8 @@ let _upsample_nearest_exact1d_backward ~grad_output ~output_size ~input_size ~sc (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact1d_backward_grad_input @@ -7424,9 +5566,7 @@ let _upsample_nearest_exact1d_backward_grad_input ~input_size ~scales = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact1d_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7436,16 +5576,12 @@ let _upsample_nearest_exact1d_backward_grad_input (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact1d_out ~out self ~output_size ~scales = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact1d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7453,16 +5589,12 @@ let _upsample_nearest_exact1d_out ~out self ~output_size ~scales = (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact1d_vec input ~output_size ~scale_factors = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact1d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -7471,16 +5603,12 @@ let _upsample_nearest_exact1d_vec input ~output_size ~scale_factors = | None -> -1 | Some v -> List.length v) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let _upsample_nearest_exact2d self ~output_size ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact2d - (CArray.start out__) self (List.map 
Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -7491,10 +5619,8 @@ let _upsample_nearest_exact2d self ~output_size ~scales_h ~scales_w = (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact2d_backward @@ -7504,9 +5630,7 @@ let _upsample_nearest_exact2d_backward ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact2d_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -7519,10 +5643,8 @@ let _upsample_nearest_exact2d_backward (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact2d_backward_grad_input @@ -7533,9 +5655,7 @@ let _upsample_nearest_exact2d_backward_grad_input ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact2d_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7549,16 +5669,12 @@ let _upsample_nearest_exact2d_backward_grad_input (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact2d_out ~out self ~output_size ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact2d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7570,16 +5686,12 @@ let _upsample_nearest_exact2d_out ~out self ~output_size ~scales_h ~scales_w = (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact2d_vec input ~output_size ~scale_factors = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact2d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -7588,16 +5700,12 @@ let _upsample_nearest_exact2d_vec input ~output_size ~scale_factors = | None -> -1 | Some v -> List.length v) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let _upsample_nearest_exact3d self ~output_size ~scales_d ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact3d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -7612,10 +5720,8 @@ let _upsample_nearest_exact3d self ~output_size ~scales_d ~scales_h ~scales_w = (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact3d_backward @@ -7626,9 +5732,7 @@ let _upsample_nearest_exact3d_backward ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact3d_backward - (CArray.start out__) grad_output (List.map 
Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -7645,10 +5749,8 @@ let _upsample_nearest_exact3d_backward (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact3d_backward_grad_input @@ -7660,9 +5762,7 @@ let _upsample_nearest_exact3d_backward_grad_input ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact3d_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7680,16 +5780,12 @@ let _upsample_nearest_exact3d_backward_grad_input (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact3d_out ~out self ~output_size ~scales_d ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact3d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -7705,16 +5801,12 @@ let _upsample_nearest_exact3d_out ~out self ~output_size ~scales_d ~scales_h ~sc (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let _upsample_nearest_exact3d_vec input ~output_size ~scale_factors = - let out__ = CArray.make t 1 in stubs__upsample_nearest_exact3d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -7723,10 +5815,8 @@ let _upsample_nearest_exact3d_vec input ~output_size ~scale_factors = | None -> -1 | Some v -> List.length v) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let _use_cudnn_ctc_loss ~log_probs ~targets ~input_lengths ~target_lengths ~blank = @@ -7795,42 +5885,14 @@ let _validate_sparse_csc_tensor_args ~ccol_indices ~row_indices ~values ~size = (List.length size) ;; -let _values self = - let out__ = CArray.make t 1 in - stubs__values (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _values_copy self = - let out__ = CArray.make t 1 in - stubs__values_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let _values_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs__values_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - +let _values self = stubs__values self |> with_tensor_gc +let _values_copy self = stubs__values_copy self |> with_tensor_gc +let _values_copy_out ~out self = stubs__values_copy_out out self |> with_tensor_gc let _version self = stubs__version self - -let _weight_norm ~v ~g ~dim = - let out__ = CArray.make t 1 in - stubs__weight_norm (CArray.start out__) v g (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let _weight_norm ~v ~g ~dim = stubs__weight_norm v g (Int64.of_int dim) |> with_tensor_gc let _weight_norm_differentiable_backward ~grad_w ~saved_v ~saved_g ~saved_norms ~dim = - let out__ = CArray.make t 2 in + let out__ 
= CArray.make raw_tensor 2 in stubs__weight_norm_differentiable_backward (CArray.start out__) grad_w @@ -7838,25 +5900,21 @@ let _weight_norm_differentiable_backward ~grad_w ~saved_v ~saved_g ~saved_norms saved_g saved_norms (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _weight_norm_interface ~v ~g ~dim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__weight_norm_interface (CArray.start out__) v g (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _weight_norm_interface_backward ~grad_w ~saved_v ~saved_g ~saved_norms ~dim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__weight_norm_interface_backward (CArray.start out__) grad_w @@ -7864,10 +5922,8 @@ let _weight_norm_interface_backward ~grad_w ~saved_v ~saved_g ~saved_norms ~dim saved_g saved_norms (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -7880,7 +5936,7 @@ let _weight_norm_interface_backward_out ~saved_norms ~dim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__weight_norm_interface_backward_out (CArray.start out__) out0 @@ -7890,240 +5946,113 @@ let _weight_norm_interface_backward_out saved_g saved_norms (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let _weight_norm_interface_out ~out0 ~out1 ~v ~g ~dim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs__weight_norm_interface_out (CArray.start out__) out0 out1 v g (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let abs self = - let out__ = CArray.make t 1 in - stubs_abs (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let abs_ self = - let out__ = CArray.make t 1 in - stubs_abs_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let abs_out ~out self = - let out__ = CArray.make t 1 in - stubs_abs_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let absolute self = - let out__ = CArray.make t 1 in - stubs_absolute (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let absolute_ self = - let out__ = CArray.make t 1 in - stubs_absolute_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let absolute_out ~out self = - let out__ = CArray.make t 1 in - stubs_absolute_out (CArray.start out__) out self; - let t0 = 
CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let acos self = - let out__ = CArray.make t 1 in - stubs_acos (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let acos_ self = - let out__ = CArray.make t 1 in - stubs_acos_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let acos_out ~out self = - let out__ = CArray.make t 1 in - stubs_acos_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let acosh self = - let out__ = CArray.make t 1 in - stubs_acosh (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let acosh_ self = - let out__ = CArray.make t 1 in - stubs_acosh_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let acosh_out ~out self = - let out__ = CArray.make t 1 in - stubs_acosh_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let abs self = stubs_abs self |> with_tensor_gc +let abs_ self = stubs_abs_ self |> with_tensor_gc +let abs_out ~out self = stubs_abs_out out self |> with_tensor_gc +let absolute self = stubs_absolute self |> with_tensor_gc +let absolute_ self = stubs_absolute_ self |> with_tensor_gc +let absolute_out ~out self = stubs_absolute_out out self |> with_tensor_gc +let acos self = stubs_acos self |> with_tensor_gc +let acos_ self = stubs_acos_ self |> with_tensor_gc +let acos_out ~out self = stubs_acos_out out self |> with_tensor_gc +let acosh self = stubs_acosh self |> with_tensor_gc +let acosh_ self = stubs_acosh_ self |> with_tensor_gc +let acosh_out ~out self = stubs_acosh_out out self |> with_tensor_gc let adaptive_avg_pool1d self ~output_size = - let out__ = CArray.make t 1 in stubs_adaptive_avg_pool1d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let adaptive_avg_pool2d self ~output_size = - let out__ = CArray.make t 1 in stubs_adaptive_avg_pool2d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let adaptive_avg_pool2d_out ~out self ~output_size = - let out__ = CArray.make t 1 in stubs_adaptive_avg_pool2d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let adaptive_avg_pool3d self ~output_size = - let out__ = CArray.make t 1 in stubs_adaptive_avg_pool3d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let adaptive_avg_pool3d_backward ~grad_input ~grad_output self = - let out__ = CArray.make t 1 in - stubs_adaptive_avg_pool3d_backward (CArray.start out__) grad_input grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + 
stubs_adaptive_avg_pool3d_backward grad_input grad_output self |> with_tensor_gc ;; let adaptive_avg_pool3d_out ~out self ~output_size = - let out__ = CArray.make t 1 in stubs_adaptive_avg_pool3d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let adaptive_max_pool1d self ~output_size = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_adaptive_max_pool1d (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let adaptive_max_pool2d self ~output_size = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_adaptive_max_pool2d (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let adaptive_max_pool2d_backward ~grad_output self ~indices = - let out__ = CArray.make t 1 in - stubs_adaptive_max_pool2d_backward (CArray.start out__) grad_output self indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_adaptive_max_pool2d_backward grad_output self indices |> with_tensor_gc ;; let adaptive_max_pool2d_backward_grad_input ~grad_input ~grad_output self ~indices = - let out__ = CArray.make t 1 in - stubs_adaptive_max_pool2d_backward_grad_input - (CArray.start out__) - grad_input - grad_output - self - indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_adaptive_max_pool2d_backward_grad_input grad_input grad_output self indices + |> with_tensor_gc ;; let adaptive_max_pool2d_out ~out ~indices self ~output_size = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_adaptive_max_pool2d_out (CArray.start out__) out @@ -8131,50 +6060,34 @@ let adaptive_max_pool2d_out ~out ~indices self ~output_size = self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let adaptive_max_pool3d self ~output_size = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_adaptive_max_pool3d (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let adaptive_max_pool3d_backward ~grad_output self ~indices = - let out__ = CArray.make t 1 in - stubs_adaptive_max_pool3d_backward (CArray.start out__) grad_output 
self indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_adaptive_max_pool3d_backward grad_output self indices |> with_tensor_gc ;; let adaptive_max_pool3d_backward_grad_input ~grad_input ~grad_output self ~indices = - let out__ = CArray.make t 1 in - stubs_adaptive_max_pool3d_backward_grad_input - (CArray.start out__) - grad_input - grad_output - self - indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_adaptive_max_pool3d_backward_grad_input grad_input grad_output self indices + |> with_tensor_gc ;; let adaptive_max_pool3d_out ~out ~indices self ~output_size = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_adaptive_max_pool3d_out (CArray.start out__) out @@ -8182,325 +6095,104 @@ let adaptive_max_pool3d_out ~out ~indices self ~output_size = self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let add self other = - let out__ = CArray.make t 1 in - stubs_add (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let add_ self other = - let out__ = CArray.make t 1 in - stubs_add_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let add_out ~out self other = - let out__ = CArray.make t 1 in - stubs_add_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let add_scalar self other = - let out__ = CArray.make t 1 in - stubs_add_scalar (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let add_scalar_ self other = - let out__ = CArray.make t 1 in - stubs_add_scalar_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let add_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_add_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addbmm self ~batch1 ~batch2 = - let out__ = CArray.make t 1 in - stubs_addbmm (CArray.start out__) self batch1 batch2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addbmm_ self ~batch1 ~batch2 = - let out__ = CArray.make t 1 in - stubs_addbmm_ (CArray.start out__) self batch1 batch2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let add self other = stubs_add self other |> with_tensor_gc +let add_ self other = stubs_add_ self other |> with_tensor_gc +let add_out ~out self other = stubs_add_out out self other |> with_tensor_gc +let add_scalar self other = stubs_add_scalar self other |> with_tensor_gc +let add_scalar_ self other = stubs_add_scalar_ self other |> with_tensor_gc +let add_scalar_out ~out self other = stubs_add_scalar_out out self other |> with_tensor_gc +let addbmm self ~batch1 ~batch2 = stubs_addbmm self batch1 batch2 |> with_tensor_gc +let addbmm_ self ~batch1 ~batch2 = stubs_addbmm_ self batch1 batch2 |> with_tensor_gc let addbmm_out ~out self ~batch1 ~batch2 = - let out__ = CArray.make t 1 in - stubs_addbmm_out (CArray.start out__) out self batch1 batch2; - let t0 = CArray.get out__ 0 in - Gc.finalise 
C.Tensor.free t0; - t0 + stubs_addbmm_out out self batch1 batch2 |> with_tensor_gc ;; -let addcdiv self ~tensor1 ~tensor2 = - let out__ = CArray.make t 1 in - stubs_addcdiv (CArray.start out__) self tensor1 tensor2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let addcdiv self ~tensor1 ~tensor2 = stubs_addcdiv self tensor1 tensor2 |> with_tensor_gc let addcdiv_ self ~tensor1 ~tensor2 = - let out__ = CArray.make t 1 in - stubs_addcdiv_ (CArray.start out__) self tensor1 tensor2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_addcdiv_ self tensor1 tensor2 |> with_tensor_gc ;; let addcdiv_out ~out self ~tensor1 ~tensor2 = - let out__ = CArray.make t 1 in - stubs_addcdiv_out (CArray.start out__) out self tensor1 tensor2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_addcdiv_out out self tensor1 tensor2 |> with_tensor_gc ;; -let addcmul self ~tensor1 ~tensor2 = - let out__ = CArray.make t 1 in - stubs_addcmul (CArray.start out__) self tensor1 tensor2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let addcmul self ~tensor1 ~tensor2 = stubs_addcmul self tensor1 tensor2 |> with_tensor_gc let addcmul_ self ~tensor1 ~tensor2 = - let out__ = CArray.make t 1 in - stubs_addcmul_ (CArray.start out__) self tensor1 tensor2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_addcmul_ self tensor1 tensor2 |> with_tensor_gc ;; let addcmul_out ~out self ~tensor1 ~tensor2 = - let out__ = CArray.make t 1 in - stubs_addcmul_out (CArray.start out__) out self tensor1 tensor2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addmm self ~mat1 ~mat2 = - let out__ = CArray.make t 1 in - stubs_addmm (CArray.start out__) self mat1 mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addmm_ self ~mat1 ~mat2 = - let out__ = CArray.make t 1 in - stubs_addmm_ (CArray.start out__) self mat1 mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_addcmul_out out self tensor1 tensor2 |> with_tensor_gc ;; -let addmm_out ~out self ~mat1 ~mat2 = - let out__ = CArray.make t 1 in - stubs_addmm_out (CArray.start out__) out self mat1 mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addmv self ~mat ~vec = - let out__ = CArray.make t 1 in - stubs_addmv (CArray.start out__) self mat vec; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addmv_ self ~mat ~vec = - let out__ = CArray.make t 1 in - stubs_addmv_ (CArray.start out__) self mat vec; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addmv_out ~out self ~mat ~vec = - let out__ = CArray.make t 1 in - stubs_addmv_out (CArray.start out__) out self mat vec; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addr self ~vec1 ~vec2 = - let out__ = CArray.make t 1 in - stubs_addr (CArray.start out__) self vec1 vec2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addr_ self ~vec1 ~vec2 = - let out__ = CArray.make t 1 in - stubs_addr_ (CArray.start out__) self vec1 vec2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let addr_out ~out self ~vec1 ~vec2 = - let out__ = CArray.make t 1 in - stubs_addr_out (CArray.start out__) out self vec1 vec2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let adjoint self = - let out__ = CArray.make 
t 1 in - stubs_adjoint (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let addmm self ~mat1 ~mat2 = stubs_addmm self mat1 mat2 |> with_tensor_gc +let addmm_ self ~mat1 ~mat2 = stubs_addmm_ self mat1 mat2 |> with_tensor_gc +let addmm_out ~out self ~mat1 ~mat2 = stubs_addmm_out out self mat1 mat2 |> with_tensor_gc +let addmv self ~mat ~vec = stubs_addmv self mat vec |> with_tensor_gc +let addmv_ self ~mat ~vec = stubs_addmv_ self mat vec |> with_tensor_gc +let addmv_out ~out self ~mat ~vec = stubs_addmv_out out self mat vec |> with_tensor_gc +let addr self ~vec1 ~vec2 = stubs_addr self vec1 vec2 |> with_tensor_gc +let addr_ self ~vec1 ~vec2 = stubs_addr_ self vec1 vec2 |> with_tensor_gc +let addr_out ~out self ~vec1 ~vec2 = stubs_addr_out out self vec1 vec2 |> with_tensor_gc +let adjoint self = stubs_adjoint self |> with_tensor_gc let affine_grid_generator ~theta ~size ~align_corners = - let out__ = CArray.make t 1 in stubs_affine_grid_generator - (CArray.start out__) theta (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; let affine_grid_generator_backward ~grad ~size ~align_corners = - let out__ = CArray.make t 1 in stubs_affine_grid_generator_backward - (CArray.start out__) grad (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; let affine_grid_generator_out ~out ~theta ~size ~align_corners = - let out__ = CArray.make t 1 in stubs_affine_grid_generator_out - (CArray.start out__) out theta (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let alias self = - let out__ = CArray.make t 1 in - stubs_alias (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let alias_copy self = - let out__ = CArray.make t 1 in - stubs_alias_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let alias_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs_alias_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; -let align_as self other = - let out__ = CArray.make t 1 in - stubs_align_as (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let alias self = stubs_alias self |> with_tensor_gc +let alias_copy self = stubs_alias_copy self |> with_tensor_gc +let alias_copy_out ~out self = stubs_alias_copy_out out self |> with_tensor_gc +let align_as self other = stubs_align_as self other |> with_tensor_gc let align_tensors tensors = - stubs_align_tensors (CArray.of_list t tensors |> CArray.start) (List.length tensors) + stubs_align_tensors + (CArray.of_list gc_tensor tensors |> CArray.start) + (List.length tensors) |> to_tensor_list ;; -let all self = - let out__ = CArray.make t 1 in - stubs_all (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let all_all_out ~out self = - let 
-  let out__ = CArray.make t 1 in
-  stubs_all_all_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let all self = stubs_all self |> with_tensor_gc
+let all_all_out ~out self = stubs_all_all_out out self |> with_tensor_gc

 let all_dim self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
-  stubs_all_dim (CArray.start out__) self (Int64.of_int dim) (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_all_dim self (Int64.of_int dim) (if keepdim then 1 else 0) |> with_tensor_gc
 ;;

 let all_out ~out self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
-  stubs_all_out
-    (CArray.start out__)
-    out
-    self
-    (Int64.of_int dim)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_all_out out self (Int64.of_int dim) (if keepdim then 1 else 0) |> with_tensor_gc
 ;;

 let allclose self other ~rtol ~atol ~equal_nan =
@@ -8508,77 +6200,53 @@ let allclose self other ~rtol ~atol ~equal_nan =
 ;;

 let alpha_dropout input ~p ~train =
-  let out__ = CArray.make t 1 in
-  stubs_alpha_dropout (CArray.start out__) input p (if train then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_alpha_dropout input p (if train then 1 else 0) |> with_tensor_gc
 ;;

 let alpha_dropout_ self ~p ~train =
-  let out__ = CArray.make t 1 in
-  stubs_alpha_dropout_ (CArray.start out__) self p (if train then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_alpha_dropout_ self p (if train then 1 else 0) |> with_tensor_gc
 ;;

 let amax self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_amax
-    (CArray.start out__)
     self
     (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start)
     (List.length dim)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let amax_out ~out self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_amax_out
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start)
     (List.length dim)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let amin self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_amin
-    (CArray.start out__)
     self
     (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start)
     (List.length dim)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let amin_out ~out self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_amin_out
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start)
     (List.length dim)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let aminmax self ~dim ~keepdim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_aminmax
     (CArray.start out__)
     self
@@ -8589,15 +6257,13 @@ let aminmax self ~dim ~keepdim =
      | Some _ -> 0
      | None -> 1)
     (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;
 let aminmax_out ~min ~max self ~dim ~keepdim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_aminmax_out
     (CArray.start out__)
     min
@@ -8610,276 +6276,71 @@ let aminmax_out ~min ~max self ~dim ~keepdim =
      | Some _ -> 0
      | None -> 1)
     (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

-let angle self =
-  let out__ = CArray.make t 1 in
-  stubs_angle (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let angle_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_angle_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let any self =
-  let out__ = CArray.make t 1 in
-  stubs_any (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let any_all_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_any_all_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let angle self = stubs_angle self |> with_tensor_gc
+let angle_out ~out self = stubs_angle_out out self |> with_tensor_gc
+let any self = stubs_any self |> with_tensor_gc
+let any_all_out ~out self = stubs_any_all_out out self |> with_tensor_gc

 let any_dim self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
-  stubs_any_dim (CArray.start out__) self (Int64.of_int dim) (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_any_dim self (Int64.of_int dim) (if keepdim then 1 else 0) |> with_tensor_gc
 ;;

 let any_out ~out self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
-  stubs_any_out
-    (CArray.start out__)
-    out
-    self
-    (Int64.of_int dim)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_any_out out self (Int64.of_int dim) (if keepdim then 1 else 0) |> with_tensor_gc
 ;;

 let arange ~end_ ~options =
-  let out__ = CArray.make t 1 in
-  stubs_arange
-    (CArray.start out__)
-    end_
-    (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_arange end_ (Kind.packed_to_int (fst options)) (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;

 let arange_start ~start ~end_ ~options =
-  let out__ = CArray.make t 1 in
   stubs_arange_start
-    (CArray.start out__)
     start
     end_
     (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;

 let arange_start_step ~start ~end_ ~options =
-  let out__ = CArray.make t 1 in
   stubs_arange_start_step
-    (CArray.start out__)
     start
     end_
     (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arccos self =
-  let out__ = CArray.make t 1 in
-  stubs_arccos (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arccos_ self =
-  let out__ = CArray.make t 1 in
-  stubs_arccos_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arccos_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_arccos_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arccosh self =
-  let out__ = CArray.make t 1 in
-  stubs_arccosh (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arccosh_ self =
-  let out__ = CArray.make t 1 in
-  stubs_arccosh_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arccosh_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_arccosh_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arcsin self =
-  let out__ = CArray.make t 1 in
-  stubs_arcsin (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arcsin_ self =
-  let out__ = CArray.make t 1 in
-  stubs_arcsin_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arcsin_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_arcsin_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arcsinh self =
-  let out__ = CArray.make t 1 in
-  stubs_arcsinh (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arcsinh_ self =
-  let out__ = CArray.make t 1 in
-  stubs_arcsinh_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arcsinh_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_arcsinh_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arctan self =
-  let out__ = CArray.make t 1 in
-  stubs_arctan (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arctan2 self other =
-  let out__ = CArray.make t 1 in
-  stubs_arctan2 (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arctan2_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_arctan2_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arctan2_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_arctan2_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arctan_ self =
-  let out__ = CArray.make t 1 in
-  stubs_arctan_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arctan_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_arctan_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arctanh self =
-  let out__ = CArray.make t 1 in
-  stubs_arctanh (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arctanh_ self =
-  let out__ = CArray.make t 1 in
-  stubs_arctanh_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let arctanh_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_arctanh_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+    (Device.to_int (snd options))
+  |> with_tensor_gc
+;;
+
+let arccos self = stubs_arccos self |> with_tensor_gc
+let arccos_ self = stubs_arccos_ self |> with_tensor_gc
+let arccos_out ~out self = stubs_arccos_out out self |> with_tensor_gc
+let arccosh self = stubs_arccosh self |> with_tensor_gc
+let arccosh_ self = stubs_arccosh_ self |> with_tensor_gc
+let arccosh_out ~out self = stubs_arccosh_out out self |> with_tensor_gc
+let arcsin self = stubs_arcsin self |> with_tensor_gc
+let arcsin_ self = stubs_arcsin_ self |> with_tensor_gc
+let arcsin_out ~out self = stubs_arcsin_out out self |> with_tensor_gc
+let arcsinh self = stubs_arcsinh self |> with_tensor_gc
+let arcsinh_ self = stubs_arcsinh_ self |> with_tensor_gc
+let arcsinh_out ~out self = stubs_arcsinh_out out self |> with_tensor_gc
+let arctan self = stubs_arctan self |> with_tensor_gc
+let arctan2 self other = stubs_arctan2 self other |> with_tensor_gc
+let arctan2_ self other = stubs_arctan2_ self other |> with_tensor_gc
+let arctan2_out ~out self other = stubs_arctan2_out out self other |> with_tensor_gc
+let arctan_ self = stubs_arctan_ self |> with_tensor_gc
+let arctan_out ~out self = stubs_arctan_out out self |> with_tensor_gc
+let arctanh self = stubs_arctanh self |> with_tensor_gc
+let arctanh_ self = stubs_arctanh_ self |> with_tensor_gc
+let arctanh_out ~out self = stubs_arctanh_out out self |> with_tensor_gc

 let argmax self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_argmax
-    (CArray.start out__)
     self
     (match dim with
      | None -> Int64.zero
@@ -8887,16 +6348,12 @@ let argmax self ~dim ~keepdim =
     (match dim with
      | Some _ -> 0
      | None -> 1)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let argmax_out ~out self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_argmax_out
-    (CArray.start out__)
     out
     self
     (match dim with
@@ -8905,16 +6362,12 @@ let argmax_out ~out self ~dim ~keepdim =
     (match dim with
      | Some _ -> 0
      | None -> 1)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let argmin self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_argmin
-    (CArray.start out__)
     self
     (match dim with
      | None -> Int64.zero
@@ -8922,16 +6375,12 @@ let argmin self ~dim ~keepdim =
     (match dim with
      | Some _ -> 0
      | None -> 1)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let argmin_out ~out self ~dim ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_argmin_out
-    (CArray.start out__)
     out
     self
     (match dim with
@@ -8940,59 +6389,37 @@ let argmin_out ~out self ~dim ~keepdim =
     (match dim with
      | Some _ -> 0
      | None -> 1)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let argsort self ~dim ~descending =
-  let out__ = CArray.make t 1 in
-  stubs_argsort (CArray.start out__) self (Int64.of_int dim) (if descending then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_argsort self (Int64.of_int dim) (if descending then 1 else 0) |> with_tensor_gc
 ;;

 let argsort_stable self ~stable ~dim ~descending =
-  let out__ = CArray.make t 1 in
   stubs_argsort_stable
-    (CArray.start out__)
     self
     (if stable then 1 else 0)
     (Int64.of_int dim)
-    (if descending then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if descending then 1 else 0)
+  |> with_tensor_gc
 ;;

 let argsort_stable_out ~out self ~stable ~dim ~descending =
-  let out__ = CArray.make t 1 in
   stubs_argsort_stable_out
-    (CArray.start out__)
     out
     self
     (if stable then 1 else 0)
     (Int64.of_int dim)
-    (if descending then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if descending then 1 else 0)
+  |> with_tensor_gc
 ;;

-let argwhere self =
-  let out__ = CArray.make t 1 in
-  stubs_argwhere (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let argwhere self = stubs_argwhere self |> with_tensor_gc

 let as_strided self ~size ~stride ~storage_offset =
-  let out__ = CArray.make t 1 in
   stubs_as_strided
-    (CArray.start out__)
     self
     (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start)
     (List.length size)
@@ -9003,16 +6430,12 @@ let as_strided self ~size ~stride ~storage_offset =
      | Some v -> Int64.of_int v)
     (match storage_offset with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let as_strided_ self ~size ~stride ~storage_offset =
-  let out__ = CArray.make t 1 in
   stubs_as_strided_
-    (CArray.start out__)
     self
     (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start)
     (List.length size)
@@ -9023,16 +6446,12 @@ let as_strided_ self ~size ~stride ~storage_offset =
      | Some v -> Int64.of_int v)
     (match storage_offset with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let as_strided_copy self ~size ~stride ~storage_offset =
-  let out__ = CArray.make t 1 in
   stubs_as_strided_copy
-    (CArray.start out__)
     self
     (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start)
     (List.length size)
@@ -9043,16 +6462,12 @@ let as_strided_copy self ~size ~stride ~storage_offset =
      | Some v -> Int64.of_int v)
     (match storage_offset with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let as_strided_copy_out ~out self ~size ~stride ~storage_offset =
-  let out__ = CArray.make t 1 in
   stubs_as_strided_copy_out
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start)
@@ -9064,16 +6479,12 @@ let as_strided_copy_out ~out self ~size ~stride ~storage_offset =
      | Some v -> Int64.of_int v)
     (match storage_offset with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let as_strided_scatter self ~src ~size ~stride ~storage_offset =
-  let out__ = CArray.make t 1 in
   stubs_as_strided_scatter
-    (CArray.start out__)
     self
     src
     (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start)
@@ -9085,16 +6496,12 @@ let as_strided_scatter self ~src ~size ~stride ~storage_offset =
      | Some v -> Int64.of_int v)
     (match storage_offset with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let as_strided_scatter_out ~out self ~src ~size ~stride ~storage_offset =
-  let out__ = CArray.make t 1 in
   stubs_as_strided_scatter_out
-    (CArray.start out__)
     out
     self
     src
@@ -9107,181 +6514,54 @@ let as_strided_scatter_out ~out self ~src ~size ~stride ~storage_offset =
      | Some v -> Int64.of_int v)
     (match storage_offset with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let asin self =
-  let out__ = CArray.make t 1 in
-  stubs_asin (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let asin_ self =
-  let out__ = CArray.make t 1 in
-  stubs_asin_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let asin_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_asin_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let asinh self =
-  let out__ = CArray.make t 1 in
-  stubs_asinh (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let asinh_ self =
-  let out__ = CArray.make t 1 in
-  stubs_asinh_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let asinh_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_asinh_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atan self =
-  let out__ = CArray.make t 1 in
-  stubs_atan (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atan2 self other =
-  let out__ = CArray.make t 1 in
-  stubs_atan2 (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atan2_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_atan2_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atan2_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_atan2_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atan_ self =
-  let out__ = CArray.make t 1 in
-  stubs_atan_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atan_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_atan_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atanh self =
-  let out__ = CArray.make t 1 in
-  stubs_atanh (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atanh_ self =
-  let out__ = CArray.make t 1 in
-  stubs_atanh_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atanh_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_atanh_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let atleast_1d self =
-  let out__ = CArray.make t 1 in
-  stubs_atleast_1d (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+     | None -> 1)
+  |> with_tensor_gc
+;;
+
+let asin self = stubs_asin self |> with_tensor_gc
+let asin_ self = stubs_asin_ self |> with_tensor_gc
+let asin_out ~out self = stubs_asin_out out self |> with_tensor_gc
+let asinh self = stubs_asinh self |> with_tensor_gc
+let asinh_ self = stubs_asinh_ self |> with_tensor_gc
+let asinh_out ~out self = stubs_asinh_out out self |> with_tensor_gc
+let atan self = stubs_atan self |> with_tensor_gc
+let atan2 self other = stubs_atan2 self other |> with_tensor_gc
+let atan2_ self other = stubs_atan2_ self other |> with_tensor_gc
+let atan2_out ~out self other = stubs_atan2_out out self other |> with_tensor_gc
+let atan_ self = stubs_atan_ self |> with_tensor_gc
+let atan_out ~out self = stubs_atan_out out self |> with_tensor_gc
+let atanh self = stubs_atanh self |> with_tensor_gc
+let atanh_ self = stubs_atanh_ self |> with_tensor_gc
+let atanh_out ~out self = stubs_atanh_out out self |> with_tensor_gc
+let atleast_1d self = stubs_atleast_1d self |> with_tensor_gc

 let atleast_1d_sequence tensors =
   stubs_atleast_1d_sequence
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
   |> to_tensor_list
 ;;

-let atleast_2d self =
-  let out__ = CArray.make t 1 in
-  stubs_atleast_2d (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let atleast_2d self = stubs_atleast_2d self |> with_tensor_gc

 let atleast_2d_sequence tensors =
   stubs_atleast_2d_sequence
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
   |> to_tensor_list
 ;;

-let atleast_3d self =
-  let out__ = CArray.make t 1 in
-  stubs_atleast_3d (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let atleast_3d self = stubs_atleast_3d self |> with_tensor_gc

 let atleast_3d_sequence tensors =
   stubs_atleast_3d_sequence
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
   |> to_tensor_list
 ;;

 let avg_pool1d self ~kernel_size ~stride ~padding ~ceil_mode ~count_include_pad =
-  let out__ = CArray.make t 1 in
   stubs_avg_pool1d
-    (CArray.start out__)
     self
     (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
     (List.length kernel_size)
@@ -9290,10 +6570,8 @@ let avg_pool1d self ~kernel_size ~stride ~padding ~ceil_mode ~count_include_pad
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (if ceil_mode then 1 else 0)
-    (if count_include_pad then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if count_include_pad then 1 else 0)
+  |> with_tensor_gc
 ;;

 let avg_pool2d
@@ -9305,9 +6583,7 @@ let avg_pool2d
   ~count_include_pad
   ~divisor_override
   =
-  let out__ = CArray.make t 1 in
   stubs_avg_pool2d
-    (CArray.start out__)
     self
     (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
     (List.length kernel_size)
@@ -9322,10 +6598,8 @@ let avg_pool2d
      | Some v -> Int64.of_int v)
     (match divisor_override with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let avg_pool2d_backward
@@ -9338,9 +6612,7 @@ let avg_pool2d_backward
   ~count_include_pad
   ~divisor_override
   =
-  let out__ = CArray.make t 1 in
   stubs_avg_pool2d_backward
-    (CArray.start out__)
     grad_output
     self
     (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
@@ -9356,10 +6628,8 @@ let avg_pool2d_backward
      | Some v -> Int64.of_int v)
     (match divisor_override with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let avg_pool2d_backward_grad_input
@@ -9373,9 +6643,7 @@ let avg_pool2d_backward_grad_input
   ~count_include_pad
   ~divisor_override
   =
-  let out__ = CArray.make t 1 in
   stubs_avg_pool2d_backward_grad_input
-    (CArray.start out__)
     grad_input
     grad_output
     self
@@ -9392,10 +6660,8 @@ let avg_pool2d_backward_grad_input
      | Some v -> Int64.of_int v)
     (match divisor_override with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let avg_pool2d_out
@@ -9408,9 +6674,7 @@ let avg_pool2d_out
   ~count_include_pad
   ~divisor_override
   =
-  let out__ = CArray.make t 1 in
   stubs_avg_pool2d_out
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
@@ -9426,10 +6690,8 @@ let avg_pool2d_out
      | Some v -> Int64.of_int v)
     (match divisor_override with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let avg_pool3d
@@ -9441,9 +6703,7 @@ let avg_pool3d
   ~count_include_pad
   ~divisor_override
   =
-  let out__ = CArray.make t 1 in
   stubs_avg_pool3d
-    (CArray.start out__)
     self
     (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
     (List.length kernel_size)
@@ -9458,10 +6718,8 @@ let avg_pool3d
      | Some v -> Int64.of_int v)
     (match divisor_override with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let avg_pool3d_backward
@@ -9474,9 +6732,7 @@ let avg_pool3d_backward
   ~count_include_pad
   ~divisor_override
   =
-  let out__ = CArray.make t 1 in
   stubs_avg_pool3d_backward
-    (CArray.start out__)
     grad_output
     self
     (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
@@ -9492,10 +6748,8 @@ let avg_pool3d_backward
      | Some v -> Int64.of_int v)
     (match divisor_override with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let avg_pool3d_backward_grad_input
@@ -9509,9 +6763,7 @@ let avg_pool3d_backward_grad_input
   ~count_include_pad
   ~divisor_override
   =
-  let out__ = CArray.make t 1 in
   stubs_avg_pool3d_backward_grad_input
-    (CArray.start out__)
     grad_input
     grad_output
     self
@@ -9528,10 +6780,8 @@ let avg_pool3d_backward_grad_input
      | Some v -> Int64.of_int v)
     (match divisor_override with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let avg_pool3d_out
@@ -9544,9 +6794,7 @@ let avg_pool3d_out
   ~count_include_pad
   ~divisor_override
   =
-  let out__ = CArray.make t 1 in
   stubs_avg_pool3d_out
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
@@ -9562,79 +6810,44 @@ let avg_pool3d_out
      | Some v -> Int64.of_int v)
     (match divisor_override with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let baddbmm self ~batch1 ~batch2 =
-  let out__ = CArray.make t 1 in
-  stubs_baddbmm (CArray.start out__) self batch1 batch2;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

-let baddbmm_ self ~batch1 ~batch2 =
-  let out__ = CArray.make t 1 in
-  stubs_baddbmm_ (CArray.start out__) self batch1 batch2;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let baddbmm self ~batch1 ~batch2 = stubs_baddbmm self batch1 batch2 |> with_tensor_gc
+let baddbmm_ self ~batch1 ~batch2 = stubs_baddbmm_ self batch1 batch2 |> with_tensor_gc

 let baddbmm_out ~out self ~batch1 ~batch2 =
-  let out__ = CArray.make t 1 in
-  stubs_baddbmm_out (CArray.start out__) out self batch1 batch2;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_baddbmm_out out self batch1 batch2 |> with_tensor_gc
 ;;

 let bartlett_window ~window_length ~options =
-  let out__ = CArray.make t 1 in
   stubs_bartlett_window
-    (CArray.start out__)
     (Int64.of_int window_length)
     (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;

 let bartlett_window_out ~out ~window_length =
-  let out__ = CArray.make t 1 in
-  stubs_bartlett_window_out (CArray.start out__) out (Int64.of_int window_length);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bartlett_window_out out (Int64.of_int window_length) |> with_tensor_gc
 ;;

 let bartlett_window_periodic ~window_length ~periodic ~options =
-  let out__ = CArray.make t 1 in
   stubs_bartlett_window_periodic
-    (CArray.start out__)
     (Int64.of_int window_length)
     (if periodic then 1 else 0)
     (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;

 let bartlett_window_periodic_out ~out ~window_length ~periodic =
-  let out__ = CArray.make t 1 in
   stubs_bartlett_window_periodic_out
-    (CArray.start out__)
     out
     (Int64.of_int window_length)
-    (if periodic then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if periodic then 1 else 0)
+  |> with_tensor_gc
 ;;

 let batch_norm
@@ -9648,29 +6861,25 @@ let batch_norm
   ~eps
   ~cudnn_enabled
   =
-  let out__ = CArray.make t 1 in
   stubs_batch_norm
-    (CArray.start out__)
     input
     (match weight with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match bias with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match running_mean with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match running_var with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (if training then 1 else 0)
     momentum
     eps
-    (if cudnn_enabled then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if cudnn_enabled then 1 else 0)
+  |> with_tensor_gc
 ;;

 let batch_norm_backward_elemt
@@ -9683,22 +6892,18 @@ let batch_norm_backward_elemt
   ~sum_dy_xmu
   ~count
   =
-  let out__ = CArray.make t 1 in
   stubs_batch_norm_backward_elemt
-    (CArray.start out__)
     grad_out
     input
     mean
     invstd
     (match weight with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     sum_dy
     sum_dy_xmu
-    count;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    count
+  |> with_tensor_gc
 ;;

 let batch_norm_backward_elemt_out
@@ -9712,9 +6917,7 @@ let batch_norm_backward_elemt_out
   ~sum_dy_xmu
   ~count
   =
-  let out__ = CArray.make t 1 in
   stubs_batch_norm_backward_elemt_out
-    (CArray.start out__)
     out
     grad_out
     input
@@ -9722,13 +6925,11 @@ let batch_norm_backward_elemt_out
     invstd
     (match weight with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     sum_dy
     sum_dy_xmu
-    count;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    count
+  |> with_tensor_gc
 ;;

 let batch_norm_backward_reduce
@@ -9741,7 +6942,7 @@ let batch_norm_backward_reduce
   ~weight_g
   ~bias_g
   =
-  let out__ = CArray.make t 4 in
+  let out__ = CArray.make raw_tensor 4 in
   stubs_batch_norm_backward_reduce
     (CArray.start out__)
     grad_out
@@ -9750,18 +6951,14 @@ let batch_norm_backward_reduce
     invstd
     (match weight with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (if input_g then 1 else 0)
     (if weight_g then 1 else 0)
     (if bias_g then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
-  let t3 = CArray.get out__ 3 in
-  Gc.finalise C.Tensor.free t3;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
+  let t3 = CArray.get out__ 3 |> with_tensor_gc in
   t0, t1, t2, t3
 ;;

 let batch_norm_backward_reduce_out
@@ -9779,7 +6976,7 @@ let batch_norm_backward_reduce_out
   ~weight_g
   ~bias_g
   =
-  let out__ = CArray.make t 4 in
+  let out__ = CArray.make raw_tensor 4 in
   stubs_batch_norm_backward_reduce_out
     (CArray.start out__)
     out0
@@ -9792,58 +6989,46 @@ let batch_norm_backward_reduce_out
     invstd
     (match weight with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (if input_g then 1 else 0)
     (if weight_g then 1 else 0)
     (if bias_g then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
-  let t3 = CArray.get out__ 3 in
-  Gc.finalise C.Tensor.free t3;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
+  let t3 = CArray.get out__ 3 |> with_tensor_gc in
   t0, t1, t2, t3
 ;;

 let batch_norm_elemt input ~weight ~bias ~mean ~invstd ~eps =
-  let out__ = CArray.make t 1 in
   stubs_batch_norm_elemt
-    (CArray.start out__)
     input
     (match weight with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match bias with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     mean
     invstd
-    eps;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    eps
+  |> with_tensor_gc
 ;;

 let batch_norm_elemt_out ~out input ~weight ~bias ~mean ~invstd ~eps =
-  let out__ = CArray.make t 1 in
   stubs_batch_norm_elemt_out
-    (CArray.start out__)
     out
     input
     (match weight with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match bias with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     mean
     invstd
-    eps;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    eps
+  |> with_tensor_gc
 ;;

 let batch_norm_gather_stats
@@ -9856,7 +7041,7 @@ let batch_norm_gather_stats
   ~eps
   ~count
   =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_batch_norm_gather_stats
     (CArray.start out__)
     input
@@ -9864,17 +7049,15 @@ let batch_norm_gather_stats
     invstd
     (match running_mean with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match running_var with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     momentum
     eps
     (Int64.of_int count);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let batch_norm_gather_stats_out
@@ -9890,7 +7073,7 @@ let batch_norm_gather_stats_out
   ~eps
   ~count
   =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_batch_norm_gather_stats_out
     (CArray.start out__)
     out0
@@ -9900,17 +7083,15 @@ let batch_norm_gather_stats_out
     invstd
     (match running_mean with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match running_var with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     momentum
     eps
     (Int64.of_int count);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let batch_norm_gather_stats_with_counts
@@ -9924,7 +7105,7 @@ let batch_norm_gather_stats_with_counts
   ~eps
   ~counts
   =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_batch_norm_gather_stats_with_counts
     (CArray.start out__)
     input
@@ -9932,17 +7113,15 @@ let batch_norm_gather_stats_with_counts
     invstd
     (match running_mean with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match running_var with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     momentum
     eps
     counts;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let batch_norm_gather_stats_with_counts_out
@@ -9958,7 +7137,7 @@ let batch_norm_gather_stats_with_counts_out
   ~eps
   ~counts
   =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_batch_norm_gather_stats_with_counts_out
     (CArray.start out__)
     out0
@@ -9968,61 +7147,53 @@ let batch_norm_gather_stats_with_counts_out
     invstd
     (match running_mean with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match running_var with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     momentum
     eps
     counts;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let batch_norm_stats input ~eps =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_batch_norm_stats (CArray.start out__) input eps;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let batch_norm_stats_out ~out0 ~out1 input ~eps =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_batch_norm_stats_out (CArray.start out__) out0 out1 input eps;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let batch_norm_update_stats input ~running_mean ~running_var ~momentum =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_batch_norm_update_stats
     (CArray.start out__)
     input
     (match running_mean with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match running_var with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     momentum;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let batch_norm_update_stats_out ~out0 ~out1 input ~running_mean ~running_var ~momentum =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_batch_norm_update_stats_out
     (CArray.start out__)
     out0
@@ -10030,102 +7201,54 @@ let batch_norm_update_stats_out ~out0 ~out1 input ~running_mean ~running_var ~mo
     input
     (match running_mean with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match running_var with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     momentum;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

-let bernoulli self =
-  let out__ = CArray.make t 1 in
-  stubs_bernoulli (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bernoulli_ self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_bernoulli_ (CArray.start out__) self p;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bernoulli_float_ self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_bernoulli_float_ (CArray.start out__) self p;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bernoulli_p self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_bernoulli_p (CArray.start out__) self p;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bernoulli_tensor self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_bernoulli_tensor (CArray.start out__) self p;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let bernoulli self = stubs_bernoulli self |> with_tensor_gc
+let bernoulli_ self ~p = stubs_bernoulli_ self p |> with_tensor_gc
+let bernoulli_float_ self ~p = stubs_bernoulli_float_ self p |> with_tensor_gc
+let bernoulli_p self ~p = stubs_bernoulli_p self p |> with_tensor_gc
+let bernoulli_tensor self ~p = stubs_bernoulli_tensor self p |> with_tensor_gc

 let bilinear ~input1 ~input2 ~weight ~bias =
-  let out__ = CArray.make t 1 in
   stubs_bilinear
-    (CArray.start out__)
     input1
     input2
     weight
     (match bias with
      | Some v -> v
-     | None -> null);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+  |> with_tensor_gc
 ;;

 let binary_cross_entropy self ~target ~weight ~reduction =
-  let out__ = CArray.make t 1 in
   stubs_binary_cross_entropy
-    (CArray.start out__)
     self
     target
     (match weight with
      | Some v -> v
-     | None -> null)
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (Reduction.to_int reduction |> Int64.of_int)
+  |> with_tensor_gc
 ;;

 let binary_cross_entropy_backward ~grad_output self ~target ~weight ~reduction =
-  let out__ = CArray.make t 1 in
   stubs_binary_cross_entropy_backward
-    (CArray.start out__)
     grad_output
     self
     target
     (match weight with
      | Some v -> v
-     | None -> null)
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (Reduction.to_int reduction |> Int64.of_int)
+  |> with_tensor_gc
 ;;

 let binary_cross_entropy_backward_grad_input
@@ -10136,617 +7259,313 @@ let binary_cross_entropy_backward_grad_input
   ~weight
   ~reduction
   =
-  let out__ = CArray.make t 1 in
   stubs_binary_cross_entropy_backward_grad_input
-    (CArray.start out__)
     grad_input
     grad_output
     self
     target
     (match weight with
      | Some v -> v
-     | None -> null)
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (Reduction.to_int reduction |> Int64.of_int)
+  |> with_tensor_gc
 ;;

 let binary_cross_entropy_out ~out self ~target ~weight ~reduction =
-  let out__ = CArray.make t 1 in
   stubs_binary_cross_entropy_out
-    (CArray.start out__)
     out
     self
     target
     (match weight with
      | Some v -> v
-     | None -> null)
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (Reduction.to_int reduction |> Int64.of_int)
+  |> with_tensor_gc
 ;;

 let binary_cross_entropy_with_logits self ~target ~weight ~pos_weight ~reduction =
-  let out__ = CArray.make t 1 in
   stubs_binary_cross_entropy_with_logits
-    (CArray.start out__)
     self
     target
     (match weight with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match pos_weight with
      | Some v -> v
-     | None -> null)
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (Reduction.to_int reduction |> Int64.of_int)
+  |> with_tensor_gc
 ;;

 let binary_cross_entropy_with_logits_out ~out self ~target ~weight ~pos_weight ~reduction =
-  let out__ = CArray.make t 1 in
   stubs_binary_cross_entropy_with_logits_out
-    (CArray.start out__)
     out
     self
     target
     (match weight with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (match pos_weight with
      | Some v -> v
-     | None -> null)
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (Reduction.to_int reduction |> Int64.of_int)
+  |> with_tensor_gc
 ;;

 let bincount self ~weights ~minlength =
-  let out__ = CArray.make t 1 in
   stubs_bincount
-    (CArray.start out__)
     self
     (match weights with
      | Some v -> v
-     | None -> null)
-    (Int64.of_int minlength);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (Int64.of_int minlength)
+  |> with_tensor_gc
 ;;

 let bincount_out ~out self ~weights ~minlength =
-  let out__ = CArray.make t 1 in
   stubs_bincount_out
-    (CArray.start out__)
     out
     self
     (match weights with
      | Some v -> v
-     | None -> null)
-    (Int64.of_int minlength);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let binomial ~count ~prob =
-  let out__ = CArray.make t 1 in
-  stubs_binomial (CArray.start out__) count prob;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let binomial_out ~out ~count ~prob =
-  let out__ = CArray.make t 1 in
-  stubs_binomial_out (CArray.start out__) out count prob;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bitwise_and self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_and (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (Int64.of_int minlength)
+  |> with_tensor_gc
 ;;

-let bitwise_and_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_and_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let binomial ~count ~prob = stubs_binomial count prob |> with_tensor_gc
+let binomial_out ~out ~count ~prob = stubs_binomial_out out count prob |> with_tensor_gc
+let bitwise_and self other = stubs_bitwise_and self other |> with_tensor_gc
+let bitwise_and_ self other = stubs_bitwise_and_ self other |> with_tensor_gc

 let bitwise_and_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_and_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_and_scalar_out out self other |> with_tensor_gc
 ;;

 let bitwise_and_scalar_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_and_scalar_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_and_scalar_tensor self other |> with_tensor_gc
 ;;

 let bitwise_and_scalar_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_and_scalar_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_and_scalar_tensor_out out self other |> with_tensor_gc
 ;;

-let bitwise_and_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_and_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let bitwise_and_tensor self other = stubs_bitwise_and_tensor self other |> with_tensor_gc

 let bitwise_and_tensor_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_and_tensor_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_and_tensor_ self other |> with_tensor_gc
 ;;

 let bitwise_and_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_and_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_and_tensor_out out self other |> with_tensor_gc
 ;;

-let bitwise_left_shift self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_left_shift (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let bitwise_left_shift self other = stubs_bitwise_left_shift self other |> with_tensor_gc

 let bitwise_left_shift_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_left_shift_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_left_shift_ self other |> with_tensor_gc
 ;;

 let bitwise_left_shift_scalar_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_left_shift_scalar_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_left_shift_scalar_tensor self other |> with_tensor_gc
 ;;

 let bitwise_left_shift_scalar_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_left_shift_scalar_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_left_shift_scalar_tensor_out out self other |> with_tensor_gc
 ;;

 let bitwise_left_shift_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_left_shift_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_left_shift_tensor_out out self other |> with_tensor_gc
 ;;

 let bitwise_left_shift_tensor_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_left_shift_tensor_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_left_shift_tensor_scalar self other |> with_tensor_gc
 ;;

 let bitwise_left_shift_tensor_scalar_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_left_shift_tensor_scalar_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_left_shift_tensor_scalar_ self other |> with_tensor_gc
 ;;

 let bitwise_left_shift_tensor_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_left_shift_tensor_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bitwise_not self =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_not (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_left_shift_tensor_scalar_out out self other |> with_tensor_gc
 ;;

-let bitwise_not_ self =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_not_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bitwise_not_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_not_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bitwise_or self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_or (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bitwise_or_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_or_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let bitwise_not self = stubs_bitwise_not self |> with_tensor_gc
+let bitwise_not_ self = stubs_bitwise_not_ self |> with_tensor_gc
+let bitwise_not_out ~out self = stubs_bitwise_not_out out self |> with_tensor_gc
+let bitwise_or self other = stubs_bitwise_or self other |> with_tensor_gc
+let bitwise_or_ self other = stubs_bitwise_or_ self other |> with_tensor_gc

 let bitwise_or_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_or_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_or_scalar_out out self other |> with_tensor_gc
 ;;

 let bitwise_or_scalar_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_or_scalar_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_or_scalar_tensor self other |> with_tensor_gc
 ;;

 let bitwise_or_scalar_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_or_scalar_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bitwise_or_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_or_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_or_scalar_tensor_out out self other |> with_tensor_gc
 ;;

-let bitwise_or_tensor_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_or_tensor_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let bitwise_or_tensor self other = stubs_bitwise_or_tensor self other |> with_tensor_gc
+let bitwise_or_tensor_ self other = stubs_bitwise_or_tensor_ self other |> with_tensor_gc

 let bitwise_or_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_or_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_or_tensor_out out self other |> with_tensor_gc
 ;;

 let bitwise_right_shift self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_right_shift (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_right_shift self other |> with_tensor_gc
 ;;

 let bitwise_right_shift_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_right_shift_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_right_shift_ self other |> with_tensor_gc
 ;;

 let bitwise_right_shift_scalar_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_right_shift_scalar_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_right_shift_scalar_tensor self other |> with_tensor_gc
 ;;

 let bitwise_right_shift_scalar_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_right_shift_scalar_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_right_shift_scalar_tensor_out out self other |> with_tensor_gc
 ;;

 let bitwise_right_shift_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_right_shift_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_right_shift_tensor_out out self other |> with_tensor_gc
 ;;

 let bitwise_right_shift_tensor_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_right_shift_tensor_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_right_shift_tensor_scalar self other |> with_tensor_gc
 ;;

 let bitwise_right_shift_tensor_scalar_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_right_shift_tensor_scalar_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_right_shift_tensor_scalar_ self other |> with_tensor_gc
 ;;

 let bitwise_right_shift_tensor_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_right_shift_tensor_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bitwise_xor self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_xor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_right_shift_tensor_scalar_out out self other |> with_tensor_gc
 ;;

-let bitwise_xor_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_xor_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let bitwise_xor self other = stubs_bitwise_xor self other |> with_tensor_gc
+let bitwise_xor_ self other = stubs_bitwise_xor_ self other |> with_tensor_gc

 let bitwise_xor_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_xor_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_xor_scalar_out out self other |> with_tensor_gc
 ;;

 let bitwise_xor_scalar_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_xor_scalar_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_xor_scalar_tensor self other |> with_tensor_gc
 ;;

 let bitwise_xor_scalar_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_xor_scalar_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_xor_scalar_tensor_out out self other |> with_tensor_gc
 ;;

-let bitwise_xor_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_xor_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let bitwise_xor_tensor self other = stubs_bitwise_xor_tensor self other |> with_tensor_gc

 let bitwise_xor_tensor_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_xor_tensor_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_xor_tensor_ self other |> with_tensor_gc
 ;;

 let bitwise_xor_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_bitwise_xor_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_bitwise_xor_tensor_out out self other |> with_tensor_gc
 ;;

 let blackman_window ~window_length ~options =
-  let out__ = CArray.make t 1 in
   stubs_blackman_window
-    (CArray.start out__)
     (Int64.of_int window_length)
     (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;

 let blackman_window_out ~out ~window_length =
-  let out__ = CArray.make t 1 in
-  stubs_blackman_window_out (CArray.start out__) out (Int64.of_int window_length);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_blackman_window_out out (Int64.of_int window_length) |> with_tensor_gc
 ;;

 let blackman_window_periodic ~window_length ~periodic ~options =
-  let out__ = CArray.make t 1 in
   stubs_blackman_window_periodic
-    (CArray.start out__)
    (Int64.of_int window_length)
     (if periodic then 1 else 0)
     (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;

 let blackman_window_periodic_out ~out ~window_length ~periodic =
-  let out__ = CArray.make t 1 in
   stubs_blackman_window_periodic_out
-    (CArray.start out__)
     out
     (Int64.of_int window_length)
-    (if periodic then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if periodic then 1 else 0)
+  |> with_tensor_gc
 ;;

 let block_diag tensors =
-  let out__ = CArray.make t 1 in
   stubs_block_diag
-    (CArray.start out__)
-    (CArray.of_list t tensors |> CArray.start)
-    (List.length tensors);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (CArray.of_list gc_tensor tensors |> CArray.start)
+    (List.length tensors)
+  |> with_tensor_gc
 ;;

 let block_diag_out ~out tensors =
-  let out__ = CArray.make t 1 in
   stubs_block_diag_out
-    (CArray.start out__)
     out
-    (CArray.of_list t tensors |> CArray.start)
-    (List.length tensors);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let bmm self ~mat2 =
-  let out__ = CArray.make t 1 in
-  stubs_bmm (CArray.start out__) self mat2;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (CArray.of_list gc_tensor tensors |> CArray.start)
+    (List.length tensors)
+  |> with_tensor_gc
 ;;

-let bmm_out ~out self ~mat2 =
-  let out__ = CArray.make t 1 in
-  stubs_bmm_out (CArray.start out__) out self mat2;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let bmm self ~mat2 = stubs_bmm self mat2 |> with_tensor_gc
+let bmm_out ~out self ~mat2 = stubs_bmm_out out self mat2 |> with_tensor_gc

 let broadcast_tensors tensors =
-  stubs_broadcast_tensors (CArray.of_list t tensors |> CArray.start) (List.length tensors)
+  stubs_broadcast_tensors
+    (CArray.of_list gc_tensor tensors |> CArray.start)
+    (List.length tensors)
   |> to_tensor_list
 ;;

 let broadcast_to self ~size =
-  let out__ = CArray.make t 1 in
   stubs_broadcast_to
-    (CArray.start out__)
     self
     (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start)
-    (List.length size);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length size)
+  |> with_tensor_gc
 ;;

 let bucketize self ~boundaries ~out_int32 ~right =
-  let out__ = CArray.make t 1 in
stubs_bucketize - (CArray.start out__) - self - boundaries - (if out_int32 then 1 else 0) - (if right then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_bucketize self boundaries (if out_int32 then 1 else 0) (if right then 1 else 0) + |> with_tensor_gc ;; let bucketize_scalar self ~boundaries ~out_int32 ~right = - let out__ = CArray.make t 1 in stubs_bucketize_scalar - (CArray.start out__) self boundaries (if out_int32 then 1 else 0) - (if right then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if right then 1 else 0) + |> with_tensor_gc ;; let bucketize_scalar_out ~out self ~boundaries ~out_int32 ~right = - let out__ = CArray.make t 1 in stubs_bucketize_scalar_out - (CArray.start out__) out self boundaries (if out_int32 then 1 else 0) - (if right then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if right then 1 else 0) + |> with_tensor_gc ;; let bucketize_tensor_out ~out self ~boundaries ~out_int32 ~right = - let out__ = CArray.make t 1 in stubs_bucketize_tensor_out - (CArray.start out__) out self boundaries (if out_int32 then 1 else 0) - (if right then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if right then 1 else 0) + |> with_tensor_gc ;; let can_cast ~from ~to_ = @@ -10754,93 +7573,45 @@ let can_cast ~from ~to_ = ;; let cartesian_prod tensors = - let out__ = CArray.make t 1 in stubs_cartesian_prod - (CArray.start out__) - (CArray.of_list t tensors |> CArray.start) - (List.length tensors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (CArray.of_list gc_tensor tensors |> CArray.start) + (List.length tensors) + |> with_tensor_gc ;; let cat tensors ~dim = - let out__ = CArray.make t 1 in stubs_cat - (CArray.start out__) - (CArray.of_list t tensors |> CArray.start) + (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) - (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dim) + |> with_tensor_gc ;; let cat_out ~out tensors ~dim = - let out__ = CArray.make t 1 in stubs_cat_out - (CArray.start out__) out - (CArray.of_list t tensors |> CArray.start) + (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) - (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let cauchy self ~median ~sigma = - let out__ = CArray.make t 1 in - stubs_cauchy (CArray.start out__) self median sigma; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dim) + |> with_tensor_gc ;; -let cauchy_ self ~median ~sigma = - let out__ = CArray.make t 1 in - stubs_cauchy_ (CArray.start out__) self median sigma; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let cauchy self ~median ~sigma = stubs_cauchy self median sigma |> with_tensor_gc +let cauchy_ self ~median ~sigma = stubs_cauchy_ self median sigma |> with_tensor_gc let cauchy_out ~out self ~median ~sigma = - let out__ = CArray.make t 1 in - stubs_cauchy_out (CArray.start out__) out self median sigma; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ccol_indices self = - let out__ = CArray.make t 1 in - stubs_ccol_indices (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_cauchy_out out self median sigma |> with_tensor_gc ;; -let ccol_indices_copy self = - let out__ = CArray.make t 1 in - 
stubs_ccol_indices_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let ccol_indices self = stubs_ccol_indices self |> with_tensor_gc +let ccol_indices_copy self = stubs_ccol_indices_copy self |> with_tensor_gc let ccol_indices_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs_ccol_indices_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_ccol_indices_copy_out out self |> with_tensor_gc ;; let cdist ~x1 ~x2 ~p ~compute_mode = - let out__ = CArray.make t 1 in stubs_cdist - (CArray.start out__) x1 x2 p @@ -10849,157 +7620,66 @@ let cdist ~x1 ~x2 ~p ~compute_mode = | Some v -> Int64.of_int v) (match compute_mode with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ceil self = - let out__ = CArray.make t 1 in - stubs_ceil (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ceil_ self = - let out__ = CArray.make t 1 in - stubs_ceil_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ceil_out ~out self = - let out__ = CArray.make t 1 in - stubs_ceil_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let celu self = - let out__ = CArray.make t 1 in - stubs_celu (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let celu_ self = - let out__ = CArray.make t 1 in - stubs_celu_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; -let celu_out ~out self = - let out__ = CArray.make t 1 in - stubs_celu_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let ceil self = stubs_ceil self |> with_tensor_gc +let ceil_ self = stubs_ceil_ self |> with_tensor_gc +let ceil_out ~out self = stubs_ceil_out out self |> with_tensor_gc +let celu self = stubs_celu self |> with_tensor_gc +let celu_ self = stubs_celu_ self |> with_tensor_gc +let celu_out ~out self = stubs_celu_out out self |> with_tensor_gc let chain_matmul ~matrices = - let out__ = CArray.make t 1 in stubs_chain_matmul - (CArray.start out__) - (CArray.of_list t matrices |> CArray.start) - (List.length matrices); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (CArray.of_list gc_tensor matrices |> CArray.start) + (List.length matrices) + |> with_tensor_gc ;; let chain_matmul_out ~out ~matrices = - let out__ = CArray.make t 1 in stubs_chain_matmul_out - (CArray.start out__) out - (CArray.of_list t matrices |> CArray.start) - (List.length matrices); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (CArray.of_list gc_tensor matrices |> CArray.start) + (List.length matrices) + |> with_tensor_gc ;; -let chalf self = - let out__ = CArray.make t 1 in - stubs_chalf (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let chalf self = stubs_chalf self |> with_tensor_gc let channel_shuffle self ~groups = - let out__ = CArray.make t 1 in - stubs_channel_shuffle (CArray.start out__) self (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_channel_shuffle self (Int64.of_int groups) |> with_tensor_gc ;; let channel_shuffle_out ~out self ~groups = - let out__ = 
CArray.make t 1 in - stubs_channel_shuffle_out (CArray.start out__) out self (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_channel_shuffle_out out self (Int64.of_int groups) |> with_tensor_gc ;; -let cholesky self ~upper = - let out__ = CArray.make t 1 in - stubs_cholesky (CArray.start out__) self (if upper then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let cholesky self ~upper = stubs_cholesky self (if upper then 1 else 0) |> with_tensor_gc let cholesky_inverse self ~upper = - let out__ = CArray.make t 1 in - stubs_cholesky_inverse (CArray.start out__) self (if upper then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_cholesky_inverse self (if upper then 1 else 0) |> with_tensor_gc ;; let cholesky_inverse_out ~out self ~upper = - let out__ = CArray.make t 1 in - stubs_cholesky_inverse_out (CArray.start out__) out self (if upper then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_cholesky_inverse_out out self (if upper then 1 else 0) |> with_tensor_gc ;; let cholesky_out ~out self ~upper = - let out__ = CArray.make t 1 in - stubs_cholesky_out (CArray.start out__) out self (if upper then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_cholesky_out out self (if upper then 1 else 0) |> with_tensor_gc ;; let cholesky_solve self ~input2 ~upper = - let out__ = CArray.make t 1 in - stubs_cholesky_solve (CArray.start out__) self input2 (if upper then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_cholesky_solve self input2 (if upper then 1 else 0) |> with_tensor_gc ;; let cholesky_solve_out ~out self ~input2 ~upper = - let out__ = CArray.make t 1 in - stubs_cholesky_solve_out (CArray.start out__) out self input2 (if upper then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_cholesky_solve_out out self input2 (if upper then 1 else 0) |> with_tensor_gc ;; let choose_qparams_optimized input ~numel ~n_bins ~ratio ~bit_width = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_choose_qparams_optimized (CArray.start out__) input @@ -11007,10 +7687,8 @@ let choose_qparams_optimized input ~numel ~n_bins ~ratio ~bit_width = (Int64.of_int n_bins) ratio (Int64.of_int bit_width); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -11018,276 +7696,114 @@ let chunk self ~chunks ~dim = stubs_chunk self (Int64.of_int chunks) (Int64.of_int dim) |> to_tensor_list ;; -let clamp self ~min ~max = - let out__ = CArray.make t 1 in - stubs_clamp (CArray.start out__) self min max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clamp_ self ~min ~max = - let out__ = CArray.make t 1 in - stubs_clamp_ (CArray.start out__) self min max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clamp_max self ~max = - let out__ = CArray.make t 1 in - stubs_clamp_max (CArray.start out__) self max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clamp_max_ self ~max = - let out__ = CArray.make t 1 in - stubs_clamp_max_ (CArray.start out__) self max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 
-;; - -let clamp_max_out ~out self ~max = - let out__ = CArray.make t 1 in - stubs_clamp_max_out (CArray.start out__) out self max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clamp_max_tensor self ~max = - let out__ = CArray.make t 1 in - stubs_clamp_max_tensor (CArray.start out__) self max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clamp_max_tensor_ self ~max = - let out__ = CArray.make t 1 in - stubs_clamp_max_tensor_ (CArray.start out__) self max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let clamp self ~min ~max = stubs_clamp self min max |> with_tensor_gc +let clamp_ self ~min ~max = stubs_clamp_ self min max |> with_tensor_gc +let clamp_max self ~max = stubs_clamp_max self max |> with_tensor_gc +let clamp_max_ self ~max = stubs_clamp_max_ self max |> with_tensor_gc +let clamp_max_out ~out self ~max = stubs_clamp_max_out out self max |> with_tensor_gc +let clamp_max_tensor self ~max = stubs_clamp_max_tensor self max |> with_tensor_gc +let clamp_max_tensor_ self ~max = stubs_clamp_max_tensor_ self max |> with_tensor_gc let clamp_max_tensor_out ~out self ~max = - let out__ = CArray.make t 1 in - stubs_clamp_max_tensor_out (CArray.start out__) out self max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clamp_min self ~min = - let out__ = CArray.make t 1 in - stubs_clamp_min (CArray.start out__) self min; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clamp_min_ self ~min = - let out__ = CArray.make t 1 in - stubs_clamp_min_ (CArray.start out__) self min; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clamp_min_out ~out self ~min = - let out__ = CArray.make t 1 in - stubs_clamp_min_out (CArray.start out__) out self min; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clamp_min_tensor self ~min = - let out__ = CArray.make t 1 in - stubs_clamp_min_tensor (CArray.start out__) self min; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_clamp_max_tensor_out out self max |> with_tensor_gc ;; -let clamp_min_tensor_ self ~min = - let out__ = CArray.make t 1 in - stubs_clamp_min_tensor_ (CArray.start out__) self min; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let clamp_min self ~min = stubs_clamp_min self min |> with_tensor_gc +let clamp_min_ self ~min = stubs_clamp_min_ self min |> with_tensor_gc +let clamp_min_out ~out self ~min = stubs_clamp_min_out out self min |> with_tensor_gc +let clamp_min_tensor self ~min = stubs_clamp_min_tensor self min |> with_tensor_gc +let clamp_min_tensor_ self ~min = stubs_clamp_min_tensor_ self min |> with_tensor_gc let clamp_min_tensor_out ~out self ~min = - let out__ = CArray.make t 1 in - stubs_clamp_min_tensor_out (CArray.start out__) out self min; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_clamp_min_tensor_out out self min |> with_tensor_gc ;; -let clamp_out ~out self ~min ~max = - let out__ = CArray.make t 1 in - stubs_clamp_out (CArray.start out__) out self min max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let clamp_out ~out self ~min ~max = stubs_clamp_out out self min max |> with_tensor_gc let clamp_tensor self ~min ~max = - let out__ = CArray.make t 1 in stubs_clamp_tensor - (CArray.start out__) self (match min with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match max 
with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let clamp_tensor_ self ~min ~max = - let out__ = CArray.make t 1 in stubs_clamp_tensor_ - (CArray.start out__) self (match min with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match max with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let clamp_tensor_out ~out self ~min ~max = - let out__ = CArray.make t 1 in stubs_clamp_tensor_out - (CArray.start out__) out self (match min with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match max with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clip self ~min ~max = - let out__ = CArray.make t 1 in - stubs_clip (CArray.start out__) self min max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; -let clip_ self ~min ~max = - let out__ = CArray.make t 1 in - stubs_clip_ (CArray.start out__) self min max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clip_out ~out self ~min ~max = - let out__ = CArray.make t 1 in - stubs_clip_out (CArray.start out__) out self min max; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let clip self ~min ~max = stubs_clip self min max |> with_tensor_gc +let clip_ self ~min ~max = stubs_clip_ self min max |> with_tensor_gc +let clip_out ~out self ~min ~max = stubs_clip_out out self min max |> with_tensor_gc let clip_tensor self ~min ~max = - let out__ = CArray.make t 1 in stubs_clip_tensor - (CArray.start out__) self (match min with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match max with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let clip_tensor_ self ~min ~max = - let out__ = CArray.make t 1 in stubs_clip_tensor_ - (CArray.start out__) self (match min with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match max with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let clip_tensor_out ~out self ~min ~max = - let out__ = CArray.make t 1 in stubs_clip_tensor_out - (CArray.start out__) out self (match min with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match max with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let clone self = - let out__ = CArray.make t 1 in - stubs_clone (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; -let clone_out ~out self = - let out__ = CArray.make t 1 in - stubs_clone_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let coalesce self = - let out__ = CArray.make t 1 in - stubs_coalesce (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let clone self = stubs_clone self |> with_tensor_gc +let clone_out ~out self = stubs_clone_out out self |> with_tensor_gc +let coalesce self = stubs_coalesce self |> with_tensor_gc let col2im self ~output_size ~kernel_size ~dilation ~padding 
-  let out__ = CArray.make t 1 in
   stubs_col2im
-    (CArray.start out__)
     self
     (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start)
     (List.length output_size)
@@ -11298,16 +7814,12 @@ let col2im self ~output_size ~kernel_size ~dilation ~padding ~stride =
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
-    (List.length stride);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length stride)
+  |> with_tensor_gc
 ;;

 let col2im_out ~out self ~output_size ~kernel_size ~dilation ~padding ~stride =
-  let out__ = CArray.make t 1 in
   stubs_col2im_out
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start)
@@ -11319,351 +7831,214 @@ let col2im_out ~out self ~output_size ~kernel_size ~dilation ~padding ~stride =
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
-    (List.length stride);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let col_indices self =
-  let out__ = CArray.make t 1 in
-  stubs_col_indices (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let col_indices_copy self =
-  let out__ = CArray.make t 1 in
-  stubs_col_indices_copy (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length stride)
+  |> with_tensor_gc
 ;;

-let col_indices_copy_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_col_indices_copy_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let col_indices self = stubs_col_indices self |> with_tensor_gc
+let col_indices_copy self = stubs_col_indices_copy self |> with_tensor_gc
+let col_indices_copy_out ~out self = stubs_col_indices_copy_out out self |> with_tensor_gc

 let column_stack tensors =
-  let out__ = CArray.make t 1 in
   stubs_column_stack
-    (CArray.start out__)
-    (CArray.of_list t tensors |> CArray.start)
-    (List.length tensors);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (CArray.of_list gc_tensor tensors |> CArray.start)
+    (List.length tensors)
+  |> with_tensor_gc
 ;;

 let column_stack_out ~out tensors =
-  let out__ = CArray.make t 1 in
   stubs_column_stack_out
-    (CArray.start out__)
     out
-    (CArray.of_list t tensors |> CArray.start)
-    (List.length tensors);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (CArray.of_list gc_tensor tensors |> CArray.start)
+    (List.length tensors)
+  |> with_tensor_gc
 ;;

 let combinations self ~r ~with_replacement =
-  let out__ = CArray.make t 1 in
-  stubs_combinations
-    (CArray.start out__)
-    self
-    (Int64.of_int r)
-    (if with_replacement then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let complex ~real ~imag =
-  let out__ = CArray.make t 1 in
-  stubs_complex (CArray.start out__) real imag;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_combinations self (Int64.of_int r) (if with_replacement then 1 else 0)
+  |> with_tensor_gc
 ;;

-let complex_out ~out ~real ~imag =
-  let out__ = CArray.make t 1 in
-  stubs_complex_out (CArray.start out__) out real imag;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let complex ~real ~imag = stubs_complex real imag |> with_tensor_gc
+let complex_out ~out ~real ~imag = stubs_complex_out out real imag |> with_tensor_gc

 let concat tensors ~dim =
-  let out__ = CArray.make t 1 in
   stubs_concat
-    (CArray.start out__)
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
-    (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int dim)
+  |> with_tensor_gc
 ;;

 let concat_out ~out tensors ~dim =
-  let out__ = CArray.make t 1 in
   stubs_concat_out
-    (CArray.start out__)
     out
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
-    (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int dim)
+  |> with_tensor_gc
 ;;

 let concatenate tensors ~dim =
-  let out__ = CArray.make t 1 in
   stubs_concatenate
-    (CArray.start out__)
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
-    (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int dim)
+  |> with_tensor_gc
 ;;

 let concatenate_out ~out tensors ~dim =
-  let out__ = CArray.make t 1 in
   stubs_concatenate_out
-    (CArray.start out__)
     out
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
-    (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let conj self =
-  let out__ = CArray.make t 1 in
-  stubs_conj (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let conj_physical self =
-  let out__ = CArray.make t 1 in
-  stubs_conj_physical (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let conj_physical_ self =
-  let out__ = CArray.make t 1 in
-  stubs_conj_physical_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int dim)
+  |> with_tensor_gc
 ;;

-let conj_physical_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_conj_physical_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let conj self = stubs_conj self |> with_tensor_gc
+let conj_physical self = stubs_conj_physical self |> with_tensor_gc
+let conj_physical_ self = stubs_conj_physical_ self |> with_tensor_gc
+let conj_physical_out ~out self = stubs_conj_physical_out out self |> with_tensor_gc

 let constant_pad_nd self ~pad =
-  let out__ = CArray.make t 1 in
   stubs_constant_pad_nd
-    (CArray.start out__)
     self
     (List.map Int64.of_int pad |> CArray.of_list int64_t |> CArray.start)
-    (List.length pad);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length pad)
+  |> with_tensor_gc
 ;;

 let constant_pad_nd_out ~out self ~pad =
-  let out__ = CArray.make t 1 in
   stubs_constant_pad_nd_out
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int pad |> CArray.of_list int64_t |> CArray.start)
-    (List.length pad);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length pad)
+  |> with_tensor_gc
 ;;

-let contiguous self =
-  let out__ = CArray.make t 1 in
-  stubs_contiguous (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let contiguous self = stubs_contiguous self |> with_tensor_gc

 let conv1d input ~weight ~bias ~stride ~padding ~dilation ~groups =
-  let out__ = CArray.make t 1 in
   stubs_conv1d
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let conv1d_padding input ~weight ~bias ~stride ~padding ~dilation ~groups =
-  let out__ = CArray.make t 1 in
   stubs_conv1d_padding
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     padding
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let conv2d input ~weight ~bias ~stride ~padding ~dilation ~groups =
-  let out__ = CArray.make t 1 in
   stubs_conv2d
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let conv2d_padding input ~weight ~bias ~stride ~padding ~dilation ~groups =
-  let out__ = CArray.make t 1 in
   stubs_conv2d_padding
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     padding
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let conv3d input ~weight ~bias ~stride ~padding ~dilation ~groups =
-  let out__ = CArray.make t 1 in
   stubs_conv3d
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let conv3d_padding input ~weight ~bias ~stride ~padding ~dilation ~groups =
-  let out__ = CArray.make t 1 in
   stubs_conv3d_padding
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     padding
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let conv_depthwise3d self ~weight ~kernel_size ~bias ~stride ~padding ~dilation =
-  let out__ = CArray.make t 1 in
   stubs_conv_depthwise3d
-    (CArray.start out__)
     self
     weight
     (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
     (List.length kernel_size)
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
-    (List.length dilation);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dilation)
+  |> with_tensor_gc
 ;;

 let conv_depthwise3d_out ~out self ~weight ~kernel_size ~bias ~stride ~padding ~dilation =
-  let out__ = CArray.make t 1 in
   stubs_conv_depthwise3d_out
-    (CArray.start out__)
     out
     self
     weight
@@ -11671,44 +8046,31 @@ let conv_depthwise3d_out ~out self ~weight ~kernel_size ~bias ~stride ~padding ~
     (List.length kernel_size)
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
-    (List.length dilation);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dilation)
+  |> with_tensor_gc
 ;;

 let conv_tbc self ~weight ~bias ~pad =
-  let out__ = CArray.make t 1 in
-  stubs_conv_tbc (CArray.start out__) self weight bias (Int64.of_int pad);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_conv_tbc self weight bias (Int64.of_int pad) |> with_tensor_gc
 ;;

 let conv_tbc_backward self input ~weight ~bias ~pad =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_conv_tbc_backward (CArray.start out__) self input weight bias (Int64.of_int pad);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let conv_tbc_out ~out self ~weight ~bias ~pad =
-  let out__ = CArray.make t 1 in
-  stubs_conv_tbc_out (CArray.start out__) out self weight bias (Int64.of_int pad);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_conv_tbc_out out self weight bias (Int64.of_int pad) |> with_tensor_gc
 ;;

 let conv_transpose1d
@@ -11721,14 +8083,12 @@ let conv_transpose1d
   ~groups
   ~dilation
   =
-  let out__ = CArray.make t 1 in
   stubs_conv_transpose1d
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
@@ -11737,10 +8097,8 @@ let conv_transpose1d
     (List.length output_padding)
     (Int64.of_int groups)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
-    (List.length dilation);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dilation)
+  |> with_tensor_gc
 ;;

 let conv_transpose2d
@@ -11753,14 +8111,12 @@ let conv_transpose2d
   ~groups
   ~dilation
   =
-  let out__ = CArray.make t 1 in
   stubs_conv_transpose2d
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
@@ -11769,10 +8125,8 @@ let conv_transpose2d
     (List.length output_padding)
     (Int64.of_int groups)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
-    (List.length dilation);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dilation)
+  |> with_tensor_gc
 ;;

 let conv_transpose3d
@@ -11785,14 +8139,12 @@ let conv_transpose3d
   ~groups
   ~dilation
   =
-  let out__ = CArray.make t 1 in
   stubs_conv_transpose3d
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
@@ -11801,10 +8153,8 @@ let conv_transpose3d
     (List.length output_padding)
     (Int64.of_int groups)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
-    (List.length dilation);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dilation)
+  |> with_tensor_gc
 ;;

 let convolution
@@ -11818,14 +8168,12 @@ let convolution
   ~output_padding
   ~groups
   =
-  let out__ = CArray.make t 1 in
   stubs_convolution
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
@@ -11835,10 +8183,8 @@ let convolution
     (if transposed then 1 else 0)
     (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start)
     (List.length output_padding)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let convolution_out
@@ -11853,15 +8199,13 @@ let convolution_out
   ~output_padding
   ~groups
   =
-  let out__ = CArray.make t 1 in
   stubs_convolution_out
-    (CArray.start out__)
     out
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
@@ -11871,10 +8215,8 @@ let convolution_out
     (if transposed then 1 else 0)
     (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start)
     (List.length output_padding)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let convolution_overrideable
@@ -11888,14 +8230,12 @@ let convolution_overrideable
   ~output_padding
   ~groups
   =
-  let out__ = CArray.make t 1 in
   stubs_convolution_overrideable
-    (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
@@ -11905,10 +8245,8 @@ let convolution_overrideable
     (if transposed then 1 else 0)
     (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start)
     (List.length output_padding)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let convolution_overrideable_out
@@ -11923,15 +8261,13 @@ let convolution_overrideable_out
   ~output_padding
   ~groups
   =
-  let out__ = CArray.make t 1 in
   stubs_convolution_overrideable_out
-    (CArray.start out__)
     out
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
@@ -11941,208 +8277,74 @@ let convolution_overrideable_out
     (if transposed then 1 else 0)
     (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start)
     (List.length output_padding)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let copy self ~src ~non_blocking =
-  let out__ = CArray.make t 1 in
-  stubs_copy (CArray.start out__) self src (if non_blocking then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_copy self src (if non_blocking then 1 else 0) |> with_tensor_gc
 ;;

 let copy_out ~out self ~src ~non_blocking =
-  let out__ = CArray.make t 1 in
-  stubs_copy_out (CArray.start out__) out self src (if non_blocking then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_copy_out out self src (if non_blocking then 1 else 0) |> with_tensor_gc
 ;;

 let copy_sparse_to_sparse self ~src ~non_blocking =
-  let out__ = CArray.make t 1 in
-  stubs_copy_sparse_to_sparse
-    (CArray.start out__)
-    self
-    src
-    (if non_blocking then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_copy_sparse_to_sparse self src (if non_blocking then 1 else 0) |> with_tensor_gc
 ;;

 let copy_sparse_to_sparse_ self ~src ~non_blocking =
-  let out__ = CArray.make t 1 in
-  stubs_copy_sparse_to_sparse_
-    (CArray.start out__)
-    self
-    src
-    (if non_blocking then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_copy_sparse_to_sparse_ self src (if non_blocking then 1 else 0) |> with_tensor_gc
 ;;

 let copy_sparse_to_sparse_out ~out self ~src ~non_blocking =
-  let out__ = CArray.make t 1 in
-  stubs_copy_sparse_to_sparse_out
-    (CArray.start out__)
-    out
-    self
-    src
-    (if non_blocking then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let copysign self other =
-  let out__ = CArray.make t 1 in
-  stubs_copysign (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let copysign_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_copysign_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let copysign_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_copysign_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let copysign_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_copysign_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_copy_sparse_to_sparse_out out self src (if non_blocking then 1 else 0)
+  |> with_tensor_gc
 ;;

-let copysign_scalar_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_copysign_scalar_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let copysign self other = stubs_copysign self other |> with_tensor_gc
+let copysign_ self other = stubs_copysign_ self other |> with_tensor_gc
+let copysign_out ~out self other = stubs_copysign_out out self other |> with_tensor_gc
+let copysign_scalar self other = stubs_copysign_scalar self other |> with_tensor_gc
+let copysign_scalar_ self other = stubs_copysign_scalar_ self other |> with_tensor_gc

 let copysign_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_copysign_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let corrcoef self =
-  let out__ = CArray.make t 1 in
-  stubs_corrcoef (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let cos self =
-  let out__ = CArray.make t 1 in
-  stubs_cos (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let cos_ self =
-  let out__ = CArray.make t 1 in
-  stubs_cos_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_copysign_scalar_out out self other |> with_tensor_gc
 ;;

-let cos_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_cos_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let cosh self =
-  let out__ = CArray.make t 1 in
-  stubs_cosh (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let cosh_ self =
-  let out__ = CArray.make t 1 in
-  stubs_cosh_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let cosh_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_cosh_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let corrcoef self = stubs_corrcoef self |> with_tensor_gc
+let cos self = stubs_cos self |> with_tensor_gc
+let cos_ self = stubs_cos_ self |> with_tensor_gc
+let cos_out ~out self = stubs_cos_out out self |> with_tensor_gc
+let cosh self = stubs_cosh self |> with_tensor_gc
+let cosh_ self = stubs_cosh_ self |> with_tensor_gc
+let cosh_out ~out self = stubs_cosh_out out self |> with_tensor_gc

 let cosine_embedding_loss ~input1 ~input2 ~target ~margin ~reduction =
-  let out__ = CArray.make t 1 in
   stubs_cosine_embedding_loss
-    (CArray.start out__)
     input1
     input2
     target
     margin
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Reduction.to_int reduction |> Int64.of_int)
+  |> with_tensor_gc
 ;;

 let cosine_similarity ~x1 ~x2 ~dim ~eps =
-  let out__ = CArray.make t 1 in
-  stubs_cosine_similarity (CArray.start out__) x1 x2 (Int64.of_int dim) eps;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cosine_similarity x1 x2 (Int64.of_int dim) eps |> with_tensor_gc
 ;;

 let count_nonzero ~out self ~dim =
-  let out__ = CArray.make t 1 in
   stubs_count_nonzero
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start)
-    (List.length dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dim)
+  |> with_tensor_gc
 ;;

 let count_nonzero_out ~out self ~dim =
-  let out__ = CArray.make t 1 in
   stubs_count_nonzero_out
-    (CArray.start out__)
     out
     self
     (match dim with
@@ -12150,33 +8352,25 @@ let count_nonzero_out ~out self ~dim =
     | Some v -> Int64.of_int v)
     (match dim with
     | Some _ -> 0
-    | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    | None -> 1)
+  |> with_tensor_gc
 ;;

 let cov self ~correction ~fweights ~aweights =
-  let out__ = CArray.make t 1 in
   stubs_cov
-    (CArray.start out__)
     self
     (Int64.of_int correction)
     (match fweights with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match aweights with
     | Some v -> v
-    | None -> null);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    | None -> none_gc_tensor)
+  |> with_tensor_gc
 ;;

 let cross self other ~dim =
-  let out__ = CArray.make t 1 in
   stubs_cross
-    (CArray.start out__)
     self
     other
     (match dim with
@@ -12184,33 +8378,25 @@ let cross self other ~dim =
     | Some v -> Int64.of_int v)
     (match dim with
     | Some _ -> 0
-    | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    | None -> 1)
+  |> with_tensor_gc
 ;;

 let cross_entropy_loss self ~target ~weight ~reduction ~ignore_index ~label_smoothing =
-  let out__ = CArray.make t 1 in
   stubs_cross_entropy_loss
-    (CArray.start out__)
     self
     target
     (match weight with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (Reduction.to_int reduction |> Int64.of_int)
     (Int64.of_int ignore_index)
-    label_smoothing;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    label_smoothing
+  |> with_tensor_gc
 ;;

 let cross_out ~out self other ~dim =
-  let out__ = CArray.make t 1 in
   stubs_cross_out
-    (CArray.start out__)
     out
     self
     other
@@ -12219,34 +8405,15 @@ let cross_out ~out self other ~dim =
    | Some v -> Int64.of_int v)
     (match dim with
     | Some _ -> 0
-    | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let crow_indices self =
-  let out__ = CArray.make t 1 in
-  stubs_crow_indices (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    | None -> 1)
+  |> with_tensor_gc
 ;;

-let crow_indices_copy self =
-  let out__ = CArray.make t 1 in
-  stubs_crow_indices_copy (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let crow_indices self = stubs_crow_indices self |> with_tensor_gc
+let crow_indices_copy self = stubs_crow_indices_copy self |> with_tensor_gc

 let crow_indices_copy_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_crow_indices_copy_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_crow_indices_copy_out out self |> with_tensor_gc
 ;;

 let ctc_loss
@@ -12258,9 +8425,7 @@ let ctc_loss
   ~reduction
   ~zero_infinity
   =
-  let out__ = CArray.make t 1 in
   stubs_ctc_loss
-    (CArray.start out__)
     log_probs
     targets
     (List.map Int64.of_int input_lengths |> CArray.of_list int64_t |> CArray.start)
@@ -12269,10 +8434,8 @@ let ctc_loss
     (List.length target_lengths)
     (Int64.of_int blank)
     (Reduction.to_int reduction |> Int64.of_int)
-    (if zero_infinity then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if zero_infinity then 1 else 0)
+  |> with_tensor_gc
 ;;

 let ctc_loss_tensor
@@ -12284,77 +8447,57 @@ let ctc_loss_tensor
   ~reduction
   ~zero_infinity
   =
-  let out__ = CArray.make t 1 in
   stubs_ctc_loss_tensor
-    (CArray.start out__)
     log_probs
     targets
     input_lengths
     target_lengths
     (Int64.of_int blank)
     (Reduction.to_int reduction |> Int64.of_int)
-    (if zero_infinity then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if zero_infinity then 1 else 0)
+  |> with_tensor_gc
 ;;

 let cudnn_affine_grid_generator ~theta ~n ~c ~h ~w =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_affine_grid_generator
-    (CArray.start out__)
     theta
     (Int64.of_int n)
     (Int64.of_int c)
     (Int64.of_int h)
-    (Int64.of_int w);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int w)
+  |> with_tensor_gc
 ;;

 let cudnn_affine_grid_generator_backward ~grad ~n ~c ~h ~w =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_affine_grid_generator_backward
-    (CArray.start out__)
     grad
     (Int64.of_int n)
     (Int64.of_int c)
     (Int64.of_int h)
-    (Int64.of_int w);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int w)
+  |> with_tensor_gc
 ;;

 let cudnn_affine_grid_generator_backward_out ~out ~grad ~n ~c ~h ~w =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_affine_grid_generator_backward_out
-    (CArray.start out__)
     out
     grad
     (Int64.of_int n)
     (Int64.of_int c)
     (Int64.of_int h)
-    (Int64.of_int w);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int w)
+  |> with_tensor_gc
 ;;

 let cudnn_affine_grid_generator_out ~out ~theta ~n ~c ~h ~w =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_affine_grid_generator_out
-    (CArray.start out__)
     out
     theta
     (Int64.of_int n)
     (Int64.of_int c)
     (Int64.of_int h)
-    (Int64.of_int w);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int w)
+  |> with_tensor_gc
 ;;

 let cudnn_batch_norm
@@ -12367,31 +8510,27 @@ let cudnn_batch_norm
   ~exponential_average_factor
   ~epsilon
   =
-  let out__ = CArray.make t 4 in
+  let out__ = CArray.make raw_tensor 4 in
   stubs_cudnn_batch_norm
     (CArray.start out__)
     input
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match running_mean with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match running_var with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (if training then 1 else 0)
     exponential_average_factor
     epsilon;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
-  let t3 = CArray.get out__ 3 in
-  Gc.finalise C.Tensor.free t3;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
+  let t3 = CArray.get out__ 3 |> with_tensor_gc in
  t0, t1, t2, t3
 ;;

 let cudnn_batch_norm_backward
@@ -12406,7 +8545,7 @@ let cudnn_batch_norm_backward
   ~epsilon
   ~reservespace
   =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_cudnn_batch_norm_backward
     (CArray.start out__)
     input
@@ -12414,24 +8553,21 @@ let cudnn_batch_norm_backward
     weight
     (match running_mean with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match running_var with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match save_mean with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match save_var with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     epsilon
     reservespace;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let cudnn_batch_norm_backward_out
@@ -12449,7 +8585,7 @@ let cudnn_batch_norm_backward_out
   ~epsilon
   ~reservespace
   =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_cudnn_batch_norm_backward_out
     (CArray.start out__)
     out0
@@ -12460,24 +8596,21 @@ let cudnn_batch_norm_backward_out
     weight
     (match running_mean with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match running_var with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match save_mean with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match save_var with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     epsilon
     reservespace;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let cudnn_batch_norm_out
@@ -12495,7 +8628,7 @@ let cudnn_batch_norm_out
   ~exponential_average_factor
   ~epsilon
   =
-  let out__ = CArray.make t 4 in
+  let out__ = CArray.make raw_tensor 4 in
   stubs_cudnn_batch_norm_out
     (CArray.start out__)
     out0
@@ -12506,24 +8639,20 @@ let cudnn_batch_norm_out
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match running_mean with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (match running_var with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (if training then 1 else 0)
     exponential_average_factor
     epsilon;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
-  let t3 = CArray.get out__ 3 in
-  Gc.finalise C.Tensor.free t3;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
+  let t3 = CArray.get out__ 3 |> with_tensor_gc in
   t0, t1, t2, t3
 ;;

 let cudnn_convolution
@@ -12538,9 +8667,7 @@ let cudnn_convolution
   ~deterministic
   ~allow_tf32
   =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_convolution
-    (CArray.start out__)
     self
     weight
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
@@ -12552,10 +8679,8 @@ let cudnn_convolution
     (Int64.of_int groups)
     (if benchmark then 1 else 0)
     (if deterministic then 1 else 0)
-    (if allow_tf32 then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if allow_tf32 then 1 else 0)
+  |> with_tensor_gc
 ;;

 let cudnn_convolution_add_relu
@@ -12569,26 +8694,22 @@ let cudnn_convolution_add_relu
   ~dilation
   ~groups
   =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_convolution_add_relu
-    (CArray.start out__)
     self
     weight
     z
     alpha
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let cudnn_convolution_add_relu_out
@@ -12603,9 +8724,7 @@ let cudnn_convolution_add_relu_out
   ~dilation
   ~groups
   =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_convolution_add_relu_out
-    (CArray.start out__)
     out
     self
     weight
@@ -12613,17 +8732,15 @@ let cudnn_convolution_add_relu_out
     alpha
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let cudnn_convolution_out
@@ -12638,9 +8755,7 @@ let cudnn_convolution_out
   ~deterministic
   ~allow_tf32
   =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_convolution_out
-    (CArray.start out__)
     out
     self
     weight
@@ -12653,53 +8768,43 @@ let cudnn_convolution_out
     (Int64.of_int groups)
     (if benchmark then 1 else 0)
     (if deterministic then 1 else 0)
-    (if allow_tf32 then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if allow_tf32 then 1 else 0)
+  |> with_tensor_gc
 ;;

 let cudnn_convolution_relu self ~weight ~bias ~stride ~padding ~dilation ~groups =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_convolution_relu
-    (CArray.start out__)
     self
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let cudnn_convolution_relu_out ~out self ~weight ~bias ~stride ~padding ~dilation ~groups =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_convolution_relu_out
-    (CArray.start out__)
     out
     self
     weight
     (match bias with
     | Some v -> v
-    | None -> null)
+    | None -> none_gc_tensor)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
     (List.length stride)
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start)
     (List.length dilation)
-    (Int64.of_int groups);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int groups)
+  |> with_tensor_gc
 ;;

 let cudnn_convolution_transpose
@@ -12714,9 +8819,7 @@ let cudnn_convolution_transpose
   ~deterministic
   ~allow_tf32
   =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_convolution_transpose
-    (CArray.start out__)
     self
     weight
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
@@ -12730,10 +8833,8 @@ let cudnn_convolution_transpose
     (Int64.of_int groups)
     (if benchmark then 1 else 0)
     (if deterministic then 1 else 0)
-    (if allow_tf32 then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if allow_tf32 then 1 else 0)
+  |> with_tensor_gc
 ;;

 let cudnn_convolution_transpose_out
@@ -12749,9 +8850,7 @@ let cudnn_convolution_transpose_out
   ~deterministic
   ~allow_tf32
   =
-  let out__ = CArray.make t 1 in
   stubs_cudnn_convolution_transpose_out
-    (CArray.start out__)
     out
     self
     weight
@@ -12766,32 +8865,22 @@ let cudnn_convolution_transpose_out
     (Int64.of_int groups)
     (if benchmark then 1 else 0)
     (if deterministic then 1 else 0)
-    (if allow_tf32 then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if allow_tf32 then 1 else 0)
+  |> with_tensor_gc
 ;;

-let cudnn_grid_sampler self ~grid =
-  let out__ = CArray.make t 1 in
-  stubs_cudnn_grid_sampler (CArray.start out__) self grid;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let cudnn_grid_sampler self ~grid = stubs_cudnn_grid_sampler self grid |> with_tensor_gc

 let cudnn_grid_sampler_backward self ~grid ~grad_output =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_cudnn_grid_sampler_backward (CArray.start out__) self grid grad_output;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let cudnn_grid_sampler_backward_out ~out0 ~out1 self ~grid ~grad_output =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_cudnn_grid_sampler_backward_out
     (CArray.start out__)
     out0
@@ -12799,692 +8888,309 @@ let cudnn_grid_sampler_backward_out ~out0 ~out1 self ~grid ~grad_output =
     self
     grid
     grad_output;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let cudnn_grid_sampler_out ~out self ~grid =
-  let out__ = CArray.make t 1 in
-  stubs_cudnn_grid_sampler_out (CArray.start out__) out self grid;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cudnn_grid_sampler_out out self grid |> with_tensor_gc
 ;;

 let cudnn_is_acceptable self = stubs_cudnn_is_acceptable self

 let cummax self ~dim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_cummax (CArray.start out__) self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let cummax_out ~values ~indices self ~dim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_cummax_out (CArray.start out__) values indices self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let cummaxmin_backward ~grad input ~indices ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_cummaxmin_backward (CArray.start out__) grad input indices (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cummaxmin_backward grad input indices (Int64.of_int dim) |> with_tensor_gc
 ;;

 let cummin self ~dim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_cummin (CArray.start out__) self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let cummin_out ~values ~indices self ~dim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_cummin_out (CArray.start out__) values indices self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let cumprod self ~dim ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_cumprod (CArray.start out__) self (Int64.of_int dim) (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cumprod self (Int64.of_int dim) (Kind.packed_to_int dtype) |> with_tensor_gc
 ;;

 let cumprod_ self ~dim ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_cumprod_ (CArray.start out__) self (Int64.of_int dim) (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cumprod_ self (Int64.of_int dim) (Kind.packed_to_int dtype) |> with_tensor_gc
 ;;

 let cumprod_backward ~grad input ~dim ~output =
-  let out__ = CArray.make t 1 in
-  stubs_cumprod_backward (CArray.start out__) grad input (Int64.of_int dim) output;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cumprod_backward grad input (Int64.of_int dim) output |> with_tensor_gc
 ;;

 let cumprod_out ~out self ~dim ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_cumprod_out
-    (CArray.start out__)
-    out
-    self
-    (Int64.of_int dim)
-    (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cumprod_out out self (Int64.of_int dim) (Kind.packed_to_int dtype)
+  |> with_tensor_gc
 ;;

 let cumsum self ~dim ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_cumsum (CArray.start out__) self (Int64.of_int dim) (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cumsum self (Int64.of_int dim) (Kind.packed_to_int dtype) |> with_tensor_gc
 ;;

 let cumsum_ self ~dim ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_cumsum_ (CArray.start out__) self (Int64.of_int dim) (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cumsum_ self (Int64.of_int dim) (Kind.packed_to_int dtype) |> with_tensor_gc
 ;;

 let cumsum_out ~out self ~dim ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_cumsum_out
-    (CArray.start out__)
-    out
-    self
-    (Int64.of_int dim)
-    (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cumsum_out out self (Int64.of_int dim) (Kind.packed_to_int dtype)
+  |> with_tensor_gc
 ;;

 let cumulative_trapezoid ~y ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_cumulative_trapezoid (CArray.start out__) y (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cumulative_trapezoid y (Int64.of_int dim) |> with_tensor_gc
 ;;

 let cumulative_trapezoid_x ~y ~x ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_cumulative_trapezoid_x (CArray.start out__) y x (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let data self =
-  let out__ = CArray.make t 1 in
-  stubs_data (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let deg2rad self =
-  let out__ = CArray.make t 1 in
-  stubs_deg2rad (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let deg2rad_ self =
-  let out__ = CArray.make t 1 in
-  stubs_deg2rad_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let deg2rad_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_deg2rad_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_cumulative_trapezoid_x y x (Int64.of_int dim) |> with_tensor_gc
 ;;

+let data self = stubs_data self |> with_tensor_gc
+let deg2rad self = stubs_deg2rad self |> with_tensor_gc
+let deg2rad_ self = stubs_deg2rad_ self |> with_tensor_gc
+let deg2rad_out ~out self = stubs_deg2rad_out out self |> with_tensor_gc
 let dense_dim self = stubs_dense_dim self
-
-let dequantize self =
-  let out__ = CArray.make t 1 in
-  stubs_dequantize (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let dequantize_self_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_dequantize_self_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let dequantize self = stubs_dequantize self |> with_tensor_gc
+let dequantize_self_out ~out self = stubs_dequantize_self_out out self |> with_tensor_gc

 let dequantize_tensors tensors =
   stubs_dequantize_tensors
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
   |> to_tensor_list
 ;;

 let dequantize_tensors_out ~out tensors =
   stubs_dequantize_tensors_out
-    (CArray.of_list t out |> CArray.start)
+    (CArray.of_list gc_tensor out |> CArray.start)
     (List.length out)
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
 ;;

-let det self =
-  let out__ = CArray.make t 1 in
-  stubs_det (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let detach self =
-  let out__ = CArray.make t 1 in
-  stubs_detach (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let detach_ self =
-  let out__ = CArray.make t 1 in
-  stubs_detach_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let detach_copy self =
-  let out__ = CArray.make t 1 in
-  stubs_detach_copy (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let detach_copy_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_detach_copy_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let diag self ~diagonal =
-  let out__ = CArray.make t 1 in
-  stubs_diag (CArray.start out__) self (Int64.of_int diagonal);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let det self = stubs_det self |> with_tensor_gc
+let detach self = stubs_detach self |> with_tensor_gc
+let detach_ self = stubs_detach_ self |> with_tensor_gc
+let detach_copy self = stubs_detach_copy self |> with_tensor_gc
+let detach_copy_out ~out self = stubs_detach_copy_out out self |> with_tensor_gc
+let diag self ~diagonal = stubs_diag self (Int64.of_int diagonal) |> with_tensor_gc

 let diag_embed self ~offset ~dim1 ~dim2 =
-  let out__ = CArray.make t 1 in
-  stubs_diag_embed
-    (CArray.start out__)
-    self
-    (Int64.of_int offset)
-    (Int64.of_int dim1)
-    (Int64.of_int dim2);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_diag_embed self (Int64.of_int offset) (Int64.of_int dim1) (Int64.of_int dim2)
+  |> with_tensor_gc
 ;;

 let diag_embed_out ~out self ~offset ~dim1 ~dim2 =
-  let out__ = CArray.make t 1 in
   stubs_diag_embed_out
-    (CArray.start out__)
     out
     self
     (Int64.of_int offset)
     (Int64.of_int dim1)
-    (Int64.of_int dim2);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int dim2)
+  |> with_tensor_gc
;; let diag_out ~out self ~diagonal = - let out__ = CArray.make t 1 in - stubs_diag_out (CArray.start out__) out self (Int64.of_int diagonal); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_diag_out out self (Int64.of_int diagonal) |> with_tensor_gc ;; -let diagflat self ~offset = - let out__ = CArray.make t 1 in - stubs_diagflat (CArray.start out__) self (Int64.of_int offset); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let diagflat self ~offset = stubs_diagflat self (Int64.of_int offset) |> with_tensor_gc let diagonal self ~offset ~dim1 ~dim2 = - let out__ = CArray.make t 1 in - stubs_diagonal - (CArray.start out__) - self - (Int64.of_int offset) - (Int64.of_int dim1) - (Int64.of_int dim2); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_diagonal self (Int64.of_int offset) (Int64.of_int dim1) (Int64.of_int dim2) + |> with_tensor_gc ;; let diagonal_backward ~grad_output ~input_sizes ~offset ~dim1 ~dim2 = - let out__ = CArray.make t 1 in stubs_diagonal_backward - (CArray.start out__) grad_output (List.map Int64.of_int input_sizes |> CArray.of_list int64_t |> CArray.start) (List.length input_sizes) (Int64.of_int offset) (Int64.of_int dim1) - (Int64.of_int dim2); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dim2) + |> with_tensor_gc ;; let diagonal_backward_out ~out ~grad_output ~input_sizes ~offset ~dim1 ~dim2 = - let out__ = CArray.make t 1 in stubs_diagonal_backward_out - (CArray.start out__) out grad_output (List.map Int64.of_int input_sizes |> CArray.of_list int64_t |> CArray.start) (List.length input_sizes) (Int64.of_int offset) (Int64.of_int dim1) - (Int64.of_int dim2); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dim2) + |> with_tensor_gc ;; let diagonal_copy self ~offset ~dim1 ~dim2 = - let out__ = CArray.make t 1 in - stubs_diagonal_copy - (CArray.start out__) - self - (Int64.of_int offset) - (Int64.of_int dim1) - (Int64.of_int dim2); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_diagonal_copy self (Int64.of_int offset) (Int64.of_int dim1) (Int64.of_int dim2) + |> with_tensor_gc ;; let diagonal_copy_out ~out self ~offset ~dim1 ~dim2 = - let out__ = CArray.make t 1 in stubs_diagonal_copy_out - (CArray.start out__) out self (Int64.of_int offset) (Int64.of_int dim1) - (Int64.of_int dim2); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dim2) + |> with_tensor_gc ;; let diagonal_scatter self ~src ~offset ~dim1 ~dim2 = - let out__ = CArray.make t 1 in stubs_diagonal_scatter - (CArray.start out__) self src (Int64.of_int offset) (Int64.of_int dim1) - (Int64.of_int dim2); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dim2) + |> with_tensor_gc ;; let diagonal_scatter_out ~out self ~src ~offset ~dim1 ~dim2 = - let out__ = CArray.make t 1 in stubs_diagonal_scatter_out - (CArray.start out__) out self src (Int64.of_int offset) (Int64.of_int dim1) - (Int64.of_int dim2); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dim2) + |> with_tensor_gc ;; let diff self ~n ~dim ~prepend ~append = - let out__ = CArray.make t 1 in stubs_diff - (CArray.start out__) self (Int64.of_int n) (Int64.of_int dim) (match prepend with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match append with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free 
t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let diff_out ~out self ~n ~dim ~prepend ~append = - let out__ = CArray.make t 1 in stubs_diff_out - (CArray.start out__) out self (Int64.of_int n) (Int64.of_int dim) (match prepend with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match append with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let digamma self = - let out__ = CArray.make t 1 in - stubs_digamma (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let digamma_ self = - let out__ = CArray.make t 1 in - stubs_digamma_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; -let digamma_out ~out self = - let out__ = CArray.make t 1 in - stubs_digamma_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let dist self other = - let out__ = CArray.make t 1 in - stubs_dist (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let dist_out ~out self other = - let out__ = CArray.make t 1 in - stubs_dist_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let div self other = - let out__ = CArray.make t 1 in - stubs_div (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let div_ self other = - let out__ = CArray.make t 1 in - stubs_div_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let div_out ~out self other = - let out__ = CArray.make t 1 in - stubs_div_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let digamma self = stubs_digamma self |> with_tensor_gc +let digamma_ self = stubs_digamma_ self |> with_tensor_gc +let digamma_out ~out self = stubs_digamma_out out self |> with_tensor_gc +let dist self other = stubs_dist self other |> with_tensor_gc +let dist_out ~out self other = stubs_dist_out out self other |> with_tensor_gc +let div self other = stubs_div self other |> with_tensor_gc +let div_ self other = stubs_div_ self other |> with_tensor_gc +let div_out ~out self other = stubs_div_out out self other |> with_tensor_gc let div_out_mode ~out self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_div_out_mode (CArray.start out__) out self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let div_scalar self other = - let out__ = CArray.make t 1 in - stubs_div_scalar (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_div_out_mode out self other rounding_mode |> with_tensor_gc ;; -let div_scalar_ self other = - let out__ = CArray.make t 1 in - stubs_div_scalar_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let div_scalar self other = stubs_div_scalar self other |> with_tensor_gc +let div_scalar_ self other = stubs_div_scalar_ self other |> with_tensor_gc let div_scalar_mode self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_div_scalar_mode (CArray.start out__) self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_div_scalar_mode self other 
rounding_mode |> with_tensor_gc ;; let div_scalar_mode_ self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_div_scalar_mode_ (CArray.start out__) self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_div_scalar_mode_ self other rounding_mode |> with_tensor_gc ;; let div_scalar_mode_out ~out self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_div_scalar_mode_out (CArray.start out__) out self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_div_scalar_mode_out out self other rounding_mode |> with_tensor_gc ;; -let div_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_div_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let div_scalar_out ~out self other = stubs_div_scalar_out out self other |> with_tensor_gc let div_tensor_mode self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_div_tensor_mode (CArray.start out__) self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_div_tensor_mode self other rounding_mode |> with_tensor_gc ;; let div_tensor_mode_ self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_div_tensor_mode_ (CArray.start out__) self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let divide self other = - let out__ = CArray.make t 1 in - stubs_divide (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let divide_ self other = - let out__ = CArray.make t 1 in - stubs_divide_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_div_tensor_mode_ self other rounding_mode |> with_tensor_gc ;; -let divide_out ~out self other = - let out__ = CArray.make t 1 in - stubs_divide_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let divide self other = stubs_divide self other |> with_tensor_gc +let divide_ self other = stubs_divide_ self other |> with_tensor_gc +let divide_out ~out self other = stubs_divide_out out self other |> with_tensor_gc let divide_out_mode ~out self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_divide_out_mode (CArray.start out__) out self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_divide_out_mode out self other rounding_mode |> with_tensor_gc ;; -let divide_scalar self other = - let out__ = CArray.make t 1 in - stubs_divide_scalar (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let divide_scalar_ self other = - let out__ = CArray.make t 1 in - stubs_divide_scalar_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let divide_scalar self other = stubs_divide_scalar self other |> with_tensor_gc +let divide_scalar_ self other = stubs_divide_scalar_ self other |> with_tensor_gc let divide_scalar_mode self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_divide_scalar_mode (CArray.start out__) self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_divide_scalar_mode self other rounding_mode |> with_tensor_gc ;; let divide_scalar_mode_ self other ~rounding_mode = - let out__ = CArray.make 
t 1 in - stubs_divide_scalar_mode_ (CArray.start out__) self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_divide_scalar_mode_ self other rounding_mode |> with_tensor_gc ;; let divide_tensor_mode self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_divide_tensor_mode (CArray.start out__) self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_divide_tensor_mode self other rounding_mode |> with_tensor_gc ;; let divide_tensor_mode_ self other ~rounding_mode = - let out__ = CArray.make t 1 in - stubs_divide_tensor_mode_ (CArray.start out__) self other rounding_mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let dot self tensor = - let out__ = CArray.make t 1 in - stubs_dot (CArray.start out__) self tensor; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_divide_tensor_mode_ self other rounding_mode |> with_tensor_gc ;; -let dot_out ~out self tensor = - let out__ = CArray.make t 1 in - stubs_dot_out (CArray.start out__) out self tensor; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let dot self tensor = stubs_dot self tensor |> with_tensor_gc +let dot_out ~out self tensor = stubs_dot_out out self tensor |> with_tensor_gc let dropout input ~p ~train = - let out__ = CArray.make t 1 in - stubs_dropout (CArray.start out__) input p (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_dropout input p (if train then 1 else 0) |> with_tensor_gc ;; let dropout_ self ~p ~train = - let out__ = CArray.make t 1 in - stubs_dropout_ (CArray.start out__) self p (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_dropout_ self p (if train then 1 else 0) |> with_tensor_gc ;; let dsplit self ~sections = stubs_dsplit self (Int64.of_int sections) |> to_tensor_list @@ -13498,75 +9204,44 @@ let dsplit_array self ~indices = ;; let dstack tensors = - let out__ = CArray.make t 1 in - stubs_dstack - (CArray.start out__) - (CArray.of_list t tensors |> CArray.start) - (List.length tensors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_dstack (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) + |> with_tensor_gc ;; let dstack_out ~out tensors = - let out__ = CArray.make t 1 in stubs_dstack_out - (CArray.start out__) out - (CArray.of_list t tensors |> CArray.start) - (List.length tensors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (CArray.of_list gc_tensor tensors |> CArray.start) + (List.length tensors) + |> with_tensor_gc ;; let einsum ~equation tensors ~path = - let out__ = CArray.make t 1 in stubs_einsum - (CArray.start out__) equation - (CArray.of_list t tensors |> CArray.start) + (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) (match path with | None -> from_voidp int64_t null | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match path with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let elu self = - let out__ = CArray.make t 1 in - stubs_elu (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; -let elu_ self = - let out__ = CArray.make t 1 in - stubs_elu_ (CArray.start out__) self; - let t0 = CArray.get 
out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let elu self = stubs_elu self |> with_tensor_gc +let elu_ self = stubs_elu_ self |> with_tensor_gc let elu_backward ~grad_output ~alpha ~scale ~input_scale ~is_result ~self_or_result = - let out__ = CArray.make t 1 in stubs_elu_backward - (CArray.start out__) grad_output alpha scale input_scale (if is_result then 1 else 0) - self_or_result; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + self_or_result + |> with_tensor_gc ;; let elu_backward_grad_input @@ -13578,41 +9253,27 @@ let elu_backward_grad_input ~is_result ~self_or_result = - let out__ = CArray.make t 1 in stubs_elu_backward_grad_input - (CArray.start out__) grad_input grad_output alpha scale input_scale (if is_result then 1 else 0) - self_or_result; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + self_or_result + |> with_tensor_gc ;; -let elu_out ~out self = - let out__ = CArray.make t 1 in - stubs_elu_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let elu_out ~out self = stubs_elu_out out self |> with_tensor_gc let embedding ~weight ~indices ~padding_idx ~scale_grad_by_freq ~sparse = - let out__ = CArray.make t 1 in stubs_embedding - (CArray.start out__) weight indices (Int64.of_int padding_idx) (if scale_grad_by_freq then 1 else 0) - (if sparse then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if sparse then 1 else 0) + |> with_tensor_gc ;; let embedding_backward @@ -13623,18 +9284,14 @@ let embedding_backward ~scale_grad_by_freq ~sparse = - let out__ = CArray.make t 1 in stubs_embedding_backward - (CArray.start out__) grad indices (Int64.of_int num_weights) (Int64.of_int padding_idx) (if scale_grad_by_freq then 1 else 0) - (if sparse then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if sparse then 1 else 0) + |> with_tensor_gc ;; let embedding_bag @@ -13647,7 +9304,7 @@ let embedding_bag ~per_sample_weights ~include_last_offset = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs_embedding_bag (CArray.start out__) weight @@ -13658,16 +9315,12 @@ let embedding_bag (if sparse then 1 else 0) (match per_sample_weights with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if include_last_offset then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; @@ -13682,7 +9335,7 @@ let embedding_bag_padding_idx ~include_last_offset ~padding_idx = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs_embedding_bag_padding_idx (CArray.start out__) weight @@ -13693,7 +9346,7 @@ let embedding_bag_padding_idx (if sparse then 1 else 0) (match per_sample_weights with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if include_last_offset then 1 else 0) (match padding_idx with | None -> Int64.zero @@ -13701,14 +9354,10 @@ let embedding_bag_padding_idx (match padding_idx with | Some _ -> 0 | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in 
- Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; @@ -13719,17 +9368,13 @@ let embedding_dense_backward ~padding_idx ~scale_grad_by_freq = - let out__ = CArray.make t 1 in stubs_embedding_dense_backward - (CArray.start out__) grad_output indices (Int64.of_int num_weights) (Int64.of_int padding_idx) - (if scale_grad_by_freq then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if scale_grad_by_freq then 1 else 0) + |> with_tensor_gc ;; let embedding_dense_backward_out @@ -13740,520 +9385,226 @@ let embedding_dense_backward_out ~padding_idx ~scale_grad_by_freq = - let out__ = CArray.make t 1 in stubs_embedding_dense_backward_out - (CArray.start out__) out grad_output indices (Int64.of_int num_weights) (Int64.of_int padding_idx) - (if scale_grad_by_freq then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if scale_grad_by_freq then 1 else 0) + |> with_tensor_gc ;; let embedding_out ~out ~weight ~indices ~padding_idx ~scale_grad_by_freq ~sparse = - let out__ = CArray.make t 1 in stubs_embedding_out - (CArray.start out__) out weight indices (Int64.of_int padding_idx) (if scale_grad_by_freq then 1 else 0) - (if sparse then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if sparse then 1 else 0) + |> with_tensor_gc ;; let embedding_renorm self ~indices ~max_norm ~norm_type = - let out__ = CArray.make t 1 in - stubs_embedding_renorm (CArray.start out__) self indices max_norm norm_type; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_embedding_renorm self indices max_norm norm_type |> with_tensor_gc ;; let embedding_renorm_ self ~indices ~max_norm ~norm_type = - let out__ = CArray.make t 1 in - stubs_embedding_renorm_ (CArray.start out__) self indices max_norm norm_type; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_embedding_renorm_ self indices max_norm norm_type |> with_tensor_gc ;; let embedding_renorm_out ~out self ~indices ~max_norm ~norm_type = - let out__ = CArray.make t 1 in - stubs_embedding_renorm_out (CArray.start out__) out self indices max_norm norm_type; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_embedding_renorm_out out self indices max_norm norm_type |> with_tensor_gc ;; let embedding_sparse_backward ~grad ~indices ~num_weights ~padding_idx ~scale_grad_by_freq = - let out__ = CArray.make t 1 in stubs_embedding_sparse_backward - (CArray.start out__) grad indices (Int64.of_int num_weights) (Int64.of_int padding_idx) - (if scale_grad_by_freq then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if scale_grad_by_freq then 1 else 0) + |> with_tensor_gc ;; let empty ~size ~options = - let out__ = CArray.make t 1 in stubs_empty - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let empty_like self = - let out__ = CArray.make t 1 in - stubs_empty_like (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise 
C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; -let empty_like_out ~out self = - let out__ = CArray.make t 1 in - stubs_empty_like_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let empty_like self = stubs_empty_like self |> with_tensor_gc +let empty_like_out ~out self = stubs_empty_like_out out self |> with_tensor_gc let empty_out ~out ~size = - let out__ = CArray.make t 1 in stubs_empty_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let empty_permuted ~size ~physical_layout ~options = - let out__ = CArray.make t 1 in stubs_empty_permuted - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (List.map Int64.of_int physical_layout |> CArray.of_list int64_t |> CArray.start) (List.length physical_layout) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let empty_permuted_out ~out ~size ~physical_layout = - let out__ = CArray.make t 1 in stubs_empty_permuted_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (List.map Int64.of_int physical_layout |> CArray.of_list int64_t |> CArray.start) - (List.length physical_layout); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length physical_layout) + |> with_tensor_gc ;; let empty_quantized ~size ~qtensor ~options = - let out__ = CArray.make t 1 in stubs_empty_quantized - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) qtensor (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let empty_quantized_out ~out ~size ~qtensor = - let out__ = CArray.make t 1 in stubs_empty_quantized_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - qtensor; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + qtensor + |> with_tensor_gc ;; let empty_strided ~size ~stride ~options = - let out__ = CArray.make t 1 in stubs_empty_strided - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let empty_strided_out ~out ~size ~stride = - let out__ = CArray.make t 1 in stubs_empty_strided_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) - (List.length stride); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let eq self other = - let out__ = CArray.make t 1 in - stubs_eq (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let eq_ self other = - 
let out__ = CArray.make t 1 in - stubs_eq_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length stride) + |> with_tensor_gc ;; -let eq_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_eq_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let eq self other = stubs_eq self other |> with_tensor_gc +let eq_ self other = stubs_eq_ self other |> with_tensor_gc +let eq_scalar_out ~out self other = stubs_eq_scalar_out out self other |> with_tensor_gc +let eq_tensor self other = stubs_eq_tensor self other |> with_tensor_gc +let eq_tensor_ self other = stubs_eq_tensor_ self other |> with_tensor_gc +let eq_tensor_out ~out self other = stubs_eq_tensor_out out self other |> with_tensor_gc +let equal self other = stubs_equal self other +let erf self = stubs_erf self |> with_tensor_gc +let erf_ self = stubs_erf_ self |> with_tensor_gc +let erf_out ~out self = stubs_erf_out out self |> with_tensor_gc +let erfc self = stubs_erfc self |> with_tensor_gc +let erfc_ self = stubs_erfc_ self |> with_tensor_gc +let erfc_out ~out self = stubs_erfc_out out self |> with_tensor_gc +let erfinv self = stubs_erfinv self |> with_tensor_gc +let erfinv_ self = stubs_erfinv_ self |> with_tensor_gc +let erfinv_out ~out self = stubs_erfinv_out out self |> with_tensor_gc +let exp self = stubs_exp self |> with_tensor_gc +let exp2 self = stubs_exp2 self |> with_tensor_gc +let exp2_ self = stubs_exp2_ self |> with_tensor_gc +let exp2_out ~out self = stubs_exp2_out out self |> with_tensor_gc +let exp_ self = stubs_exp_ self |> with_tensor_gc +let exp_out ~out self = stubs_exp_out out self |> with_tensor_gc -let eq_tensor self other = - let out__ = CArray.make t 1 in - stubs_eq_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 +let expand self ~size ~implicit = + stubs_expand + self + (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) + (List.length size) + (if implicit then 1 else 0) + |> with_tensor_gc ;; -let eq_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_eq_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let eq_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_eq_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let equal self other = stubs_equal self other - -let erf self = - let out__ = CArray.make t 1 in - stubs_erf (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let erf_ self = - let out__ = CArray.make t 1 in - stubs_erf_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let erf_out ~out self = - let out__ = CArray.make t 1 in - stubs_erf_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let erfc self = - let out__ = CArray.make t 1 in - stubs_erfc (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let erfc_ self = - let out__ = CArray.make t 1 in - stubs_erfc_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let erfc_out ~out self = - let out__ = CArray.make t 1 in - stubs_erfc_out (CArray.start out__) out self; - let t0 = CArray.get 
out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let erfinv self = - let out__ = CArray.make t 1 in - stubs_erfinv (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let erfinv_ self = - let out__ = CArray.make t 1 in - stubs_erfinv_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let erfinv_out ~out self = - let out__ = CArray.make t 1 in - stubs_erfinv_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let exp self = - let out__ = CArray.make t 1 in - stubs_exp (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let exp2 self = - let out__ = CArray.make t 1 in - stubs_exp2 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let exp2_ self = - let out__ = CArray.make t 1 in - stubs_exp2_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let exp2_out ~out self = - let out__ = CArray.make t 1 in - stubs_exp2_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let exp_ self = - let out__ = CArray.make t 1 in - stubs_exp_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let exp_out ~out self = - let out__ = CArray.make t 1 in - stubs_exp_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let expand self ~size ~implicit = - let out__ = CArray.make t 1 in - stubs_expand - (CArray.start out__) - self - (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size) - (if implicit then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let expand_as self other = - let out__ = CArray.make t 1 in - stubs_expand_as (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let expand_as self other = stubs_expand_as self other |> with_tensor_gc let expand_copy self ~size ~implicit = - let out__ = CArray.make t 1 in stubs_expand_copy - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - (if implicit then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if implicit then 1 else 0) + |> with_tensor_gc ;; let expand_copy_out ~out self ~size ~implicit = - let out__ = CArray.make t 1 in stubs_expand_copy_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - (if implicit then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if implicit then 1 else 0) + |> with_tensor_gc ;; -let expm1 self = - let out__ = CArray.make t 1 in - stubs_expm1 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let expm1_ self = - let out__ = CArray.make t 1 in - stubs_expm1_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let expm1_out ~out self = - let out__ = CArray.make t 1 in - stubs_expm1_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let exponential self ~lambd = - let out__ = CArray.make t 1 in - stubs_exponential (CArray.start out__) 
self lambd; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let exponential_ self ~lambd = - let out__ = CArray.make t 1 in - stubs_exponential_ (CArray.start out__) self lambd; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let expm1 self = stubs_expm1 self |> with_tensor_gc +let expm1_ self = stubs_expm1_ self |> with_tensor_gc +let expm1_out ~out self = stubs_expm1_out out self |> with_tensor_gc +let exponential self ~lambd = stubs_exponential self lambd |> with_tensor_gc +let exponential_ self ~lambd = stubs_exponential_ self lambd |> with_tensor_gc let exponential_out ~out self ~lambd = - let out__ = CArray.make t 1 in - stubs_exponential_out (CArray.start out__) out self lambd; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_exponential_out out self lambd |> with_tensor_gc ;; let eye ~n ~options = - let out__ = CArray.make t 1 in stubs_eye - (CArray.start out__) (Int64.of_int n) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let eye_m ~n ~m ~options = - let out__ = CArray.make t 1 in stubs_eye_m - (CArray.start out__) (Int64.of_int n) (Int64.of_int m) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let eye_m_out ~out ~n ~m = - let out__ = CArray.make t 1 in - stubs_eye_m_out (CArray.start out__) out (Int64.of_int n) (Int64.of_int m); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_eye_m_out out (Int64.of_int n) (Int64.of_int m) |> with_tensor_gc ;; -let eye_out ~out ~n = - let out__ = CArray.make t 1 in - stubs_eye_out (CArray.start out__) out (Int64.of_int n); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let eye_out ~out ~n = stubs_eye_out out (Int64.of_int n) |> with_tensor_gc let fake_quantize_per_channel_affine self ~scale ~zero_point ~axis ~quant_min ~quant_max = - let out__ = CArray.make t 1 in stubs_fake_quantize_per_channel_affine - (CArray.start out__) self scale zero_point (Int64.of_int axis) (Int64.of_int quant_min) - (Int64.of_int quant_max); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int quant_max) + |> with_tensor_gc ;; let fake_quantize_per_channel_affine_cachemask @@ -14264,7 +9615,7 @@ let fake_quantize_per_channel_affine_cachemask ~quant_min ~quant_max = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_fake_quantize_per_channel_affine_cachemask (CArray.start out__) self @@ -14273,19 +9624,13 @@ let fake_quantize_per_channel_affine_cachemask (Int64.of_int axis) (Int64.of_int quant_min) (Int64.of_int quant_max); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let fake_quantize_per_channel_affine_cachemask_backward ~grad ~mask = - let out__ = CArray.make t 1 in - stubs_fake_quantize_per_channel_affine_cachemask_backward (CArray.start out__) grad mask; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fake_quantize_per_channel_affine_cachemask_backward grad mask |> with_tensor_gc ;; let fake_quantize_per_channel_affine_cachemask_out @@ -14298,7 +9643,7 
@@ let fake_quantize_per_channel_affine_cachemask_out ~quant_min ~quant_max = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_fake_quantize_per_channel_affine_cachemask_out (CArray.start out__) out0 @@ -14309,25 +9654,19 @@ let fake_quantize_per_channel_affine_cachemask_out (Int64.of_int axis) (Int64.of_int quant_min) (Int64.of_int quant_max); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let fake_quantize_per_tensor_affine self ~scale ~zero_point ~quant_min ~quant_max = - let out__ = CArray.make t 1 in stubs_fake_quantize_per_tensor_affine - (CArray.start out__) self scale (Int64.of_int zero_point) (Int64.of_int quant_min) - (Int64.of_int quant_max); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int quant_max) + |> with_tensor_gc ;; let fake_quantize_per_tensor_affine_cachemask @@ -14337,7 +9676,7 @@ let fake_quantize_per_tensor_affine_cachemask ~quant_min ~quant_max = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_fake_quantize_per_tensor_affine_cachemask (CArray.start out__) self @@ -14345,19 +9684,13 @@ let fake_quantize_per_tensor_affine_cachemask (Int64.of_int zero_point) (Int64.of_int quant_min) (Int64.of_int quant_max); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let fake_quantize_per_tensor_affine_cachemask_backward ~grad ~mask = - let out__ = CArray.make t 1 in - stubs_fake_quantize_per_tensor_affine_cachemask_backward (CArray.start out__) grad mask; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fake_quantize_per_tensor_affine_cachemask_backward grad mask |> with_tensor_gc ;; let fake_quantize_per_tensor_affine_cachemask_out @@ -14369,7 +9702,7 @@ let fake_quantize_per_tensor_affine_cachemask_out ~quant_min ~quant_max = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_fake_quantize_per_tensor_affine_cachemask_out (CArray.start out__) out0 @@ -14379,10 +9712,8 @@ let fake_quantize_per_tensor_affine_cachemask_out (Int64.of_int zero_point) (Int64.of_int quant_min) (Int64.of_int quant_max); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -14393,37 +9724,22 @@ let fake_quantize_per_tensor_affine_tensor_qparams ~quant_min ~quant_max = - let out__ = CArray.make t 1 in stubs_fake_quantize_per_tensor_affine_tensor_qparams - (CArray.start out__) self scale zero_point (Int64.of_int quant_min) - (Int64.of_int quant_max); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int quant_max) + |> with_tensor_gc ;; let fbgemm_linear_fp16_weight input ~packed_weight ~bias = - let out__ = CArray.make t 1 in - stubs_fbgemm_linear_fp16_weight (CArray.start out__) input packed_weight bias; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fbgemm_linear_fp16_weight input packed_weight bias |> with_tensor_gc ;; let fbgemm_linear_fp16_weight_fp32_activation input ~packed_weight ~bias = - let out__ = 
CArray.make t 1 in - stubs_fbgemm_linear_fp16_weight_fp32_activation - (CArray.start out__) - input - packed_weight - bias; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fbgemm_linear_fp16_weight_fp32_activation input packed_weight bias + |> with_tensor_gc ;; let fbgemm_linear_int8_weight @@ -14435,19 +9751,15 @@ let fbgemm_linear_int8_weight ~weight_zero_point ~bias = - let out__ = CArray.make t 1 in stubs_fbgemm_linear_int8_weight - (CArray.start out__) input weight packed col_offsets weight_scale weight_zero_point - bias; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + bias + |> with_tensor_gc ;; let fbgemm_linear_int8_weight_fp32_activation @@ -14459,85 +9771,48 @@ let fbgemm_linear_int8_weight_fp32_activation ~weight_zero_point ~bias = - let out__ = CArray.make t 1 in stubs_fbgemm_linear_int8_weight_fp32_activation - (CArray.start out__) input weight packed col_offsets weight_scale weight_zero_point - bias; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + bias + |> with_tensor_gc ;; let fbgemm_pack_gemm_matrix_fp16 input = - let out__ = CArray.make t 1 in - stubs_fbgemm_pack_gemm_matrix_fp16 (CArray.start out__) input; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fbgemm_pack_gemm_matrix_fp16 input |> with_tensor_gc ;; let fbgemm_pack_quantized_matrix input = - let out__ = CArray.make t 1 in - stubs_fbgemm_pack_quantized_matrix (CArray.start out__) input; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fbgemm_pack_quantized_matrix input |> with_tensor_gc ;; let fbgemm_pack_quantized_matrix_kn input ~k ~n = - let out__ = CArray.make t 1 in - stubs_fbgemm_pack_quantized_matrix_kn - (CArray.start out__) - input - (Int64.of_int k) - (Int64.of_int n); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fbgemm_pack_quantized_matrix_kn input (Int64.of_int k) (Int64.of_int n) + |> with_tensor_gc ;; let feature_alpha_dropout input ~p ~train = - let out__ = CArray.make t 1 in - stubs_feature_alpha_dropout (CArray.start out__) input p (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_feature_alpha_dropout input p (if train then 1 else 0) |> with_tensor_gc ;; let feature_alpha_dropout_ self ~p ~train = - let out__ = CArray.make t 1 in - stubs_feature_alpha_dropout_ (CArray.start out__) self p (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_feature_alpha_dropout_ self p (if train then 1 else 0) |> with_tensor_gc ;; let feature_dropout input ~p ~train = - let out__ = CArray.make t 1 in - stubs_feature_dropout (CArray.start out__) input p (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_feature_dropout input p (if train then 1 else 0) |> with_tensor_gc ;; let feature_dropout_ self ~p ~train = - let out__ = CArray.make t 1 in - stubs_feature_dropout_ (CArray.start out__) self p (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_feature_dropout_ self p (if train then 1 else 0) |> with_tensor_gc ;; let fft_fft self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_fft - (CArray.start out__) self (match n with | None -> Int64.zero @@ -14546,16 +9821,12 @@ let fft_fft self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm 
+ |> with_tensor_gc ;; let fft_fft2 self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_fft2 - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -14565,16 +9836,12 @@ let fft_fft2 self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_fft2_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_fft2_out - (CArray.start out__) out self (match s with @@ -14585,16 +9852,12 @@ let fft_fft2_out ~out self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_fft_out ~out self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_fft_out - (CArray.start out__) out self (match n with @@ -14604,37 +9867,25 @@ let fft_fft_out ~out self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_fftfreq ~n ~d ~options = - let out__ = CArray.make t 1 in stubs_fft_fftfreq - (CArray.start out__) (Int64.of_int n) d (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let fft_fftfreq_out ~out ~n ~d = - let out__ = CArray.make t 1 in - stubs_fft_fftfreq_out (CArray.start out__) out (Int64.of_int n) d; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fft_fftfreq_out out (Int64.of_int n) d |> with_tensor_gc ;; let fft_fftn self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_fftn - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -14648,16 +9899,12 @@ let fft_fftn self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_fftn_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_fftn_out - (CArray.start out__) out self (match s with @@ -14672,32 +9919,24 @@ let fft_fftn_out ~out self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_fftshift self ~dim = - let out__ = CArray.make t 1 in stubs_fft_fftshift - (CArray.start out__) self (match dim with | None -> from_voidp int64_t null | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match dim with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let fft_hfft self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_hfft - (CArray.start out__) self (match n with | None -> Int64.zero @@ -14706,16 +9945,12 @@ let fft_hfft self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_hfft2 self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_hfft2 - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -14725,16 
+9960,12 @@ let fft_hfft2 self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_hfft2_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_hfft2_out - (CArray.start out__) out self (match s with @@ -14745,16 +9976,12 @@ let fft_hfft2_out ~out self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_hfft_out ~out self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_hfft_out - (CArray.start out__) out self (match n with @@ -14764,16 +9991,12 @@ let fft_hfft_out ~out self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_hfftn self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_hfftn - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -14787,16 +10010,12 @@ let fft_hfftn self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_hfftn_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_hfftn_out - (CArray.start out__) out self (match s with @@ -14811,16 +10030,12 @@ let fft_hfftn_out ~out self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ifft self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ifft - (CArray.start out__) self (match n with | None -> Int64.zero @@ -14829,16 +10044,12 @@ let fft_ifft self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ifft2 self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ifft2 - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -14848,16 +10059,12 @@ let fft_ifft2 self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ifft2_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ifft2_out - (CArray.start out__) out self (match s with @@ -14868,16 +10075,12 @@ let fft_ifft2_out ~out self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ifft_out ~out self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ifft_out - (CArray.start out__) out self (match n with @@ -14887,16 +10090,12 @@ let fft_ifft_out ~out self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ifftn self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ifftn - (CArray.start out__) self (match s 
with | None -> from_voidp int64_t null @@ -14910,16 +10109,12 @@ let fft_ifftn self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ifftn_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ifftn_out - (CArray.start out__) out self (match s with @@ -14934,32 +10129,24 @@ let fft_ifftn_out ~out self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ifftshift self ~dim = - let out__ = CArray.make t 1 in stubs_fft_ifftshift - (CArray.start out__) self (match dim with | None -> from_voidp int64_t null | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match dim with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let fft_ihfft self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ihfft - (CArray.start out__) self (match n with | None -> Int64.zero @@ -14968,16 +10155,12 @@ let fft_ihfft self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ihfft2 self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ihfft2 - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -14987,16 +10170,12 @@ let fft_ihfft2 self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ihfft2_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ihfft2_out - (CArray.start out__) out self (match s with @@ -15007,16 +10186,12 @@ let fft_ihfft2_out ~out self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ihfft_out ~out self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ihfft_out - (CArray.start out__) out self (match n with @@ -15026,16 +10201,12 @@ let fft_ihfft_out ~out self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ihfftn self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ihfftn - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -15049,16 +10220,12 @@ let fft_ihfftn self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_ihfftn_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_ihfftn_out - (CArray.start out__) out self (match s with @@ -15073,16 +10240,12 @@ let fft_ihfftn_out ~out self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_irfft self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_irfft - (CArray.start out__) 
self (match n with | None -> Int64.zero @@ -15091,16 +10254,12 @@ let fft_irfft self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_irfft2 self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_irfft2 - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -15110,16 +10269,12 @@ let fft_irfft2 self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_irfft2_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_irfft2_out - (CArray.start out__) out self (match s with @@ -15130,16 +10285,12 @@ let fft_irfft2_out ~out self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_irfft_out ~out self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_irfft_out - (CArray.start out__) out self (match n with @@ -15149,16 +10300,12 @@ let fft_irfft_out ~out self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_irfftn self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_irfftn - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -15172,16 +10319,12 @@ let fft_irfftn self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_irfftn_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_irfftn_out - (CArray.start out__) out self (match s with @@ -15196,16 +10339,12 @@ let fft_irfftn_out ~out self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_rfft self ~n ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_rfft - (CArray.start out__) self (match n with | None -> Int64.zero @@ -15214,16 +10353,12 @@ let fft_rfft self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_rfft2 self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_rfft2 - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -15233,16 +10368,12 @@ let fft_rfft2 self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_rfft2_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_rfft2_out - (CArray.start out__) out self (match s with @@ -15253,16 +10384,12 @@ let fft_rfft2_out ~out self ~s ~dim ~norm = | Some v -> List.length v) (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_rfft_out ~out self ~n ~dim 
~norm = - let out__ = CArray.make t 1 in stubs_fft_rfft_out - (CArray.start out__) out self (match n with @@ -15272,37 +10399,25 @@ let fft_rfft_out ~out self ~n ~dim ~norm = | Some _ -> 0 | None -> 1) (Int64.of_int dim) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_rfftfreq ~n ~d ~options = - let out__ = CArray.make t 1 in stubs_fft_rfftfreq - (CArray.start out__) (Int64.of_int n) d (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let fft_rfftfreq_out ~out ~n ~d = - let out__ = CArray.make t 1 in - stubs_fft_rfftfreq_out (CArray.start out__) out (Int64.of_int n) d; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fft_rfftfreq_out out (Int64.of_int n) d |> with_tensor_gc ;; let fft_rfftn self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_rfftn - (CArray.start out__) self (match s with | None -> from_voidp int64_t null @@ -15316,16 +10431,12 @@ let fft_rfftn self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; let fft_rfftn_out ~out self ~s ~dim ~norm = - let out__ = CArray.make t 1 in stubs_fft_rfftn_out - (CArray.start out__) out self (match s with @@ -15340,386 +10451,131 @@ let fft_rfftn_out ~out self ~s ~dim ~norm = (match dim with | None -> -1 | Some v -> List.length v) - norm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + norm + |> with_tensor_gc ;; -let fill self ~value = - let out__ = CArray.make t 1 in - stubs_fill (CArray.start out__) self value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fill_ self ~value = - let out__ = CArray.make t 1 in - stubs_fill_ (CArray.start out__) self value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let fill self ~value = stubs_fill self value |> with_tensor_gc +let fill_ self ~value = stubs_fill_ self value |> with_tensor_gc let fill_diagonal_ self ~fill_value ~wrap = - let out__ = CArray.make t 1 in - stubs_fill_diagonal_ (CArray.start out__) self fill_value (if wrap then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fill_diagonal_ self fill_value (if wrap then 1 else 0) |> with_tensor_gc ;; let fill_scalar_out ~out self ~value = - let out__ = CArray.make t 1 in - stubs_fill_scalar_out (CArray.start out__) out self value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fill_scalar_out out self value |> with_tensor_gc ;; -let fill_tensor self ~value = - let out__ = CArray.make t 1 in - stubs_fill_tensor (CArray.start out__) self value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fill_tensor_ self ~value = - let out__ = CArray.make t 1 in - stubs_fill_tensor_ (CArray.start out__) self value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let fill_tensor self ~value = stubs_fill_tensor self value |> with_tensor_gc +let fill_tensor_ self ~value = stubs_fill_tensor_ self value |> with_tensor_gc let fill_tensor_out ~out self ~value = - let out__ = CArray.make t 1 in - stubs_fill_tensor_out (CArray.start out__) out self value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fill_tensor_out out self value |> 
with_tensor_gc ;; -let fix self = - let out__ = CArray.make t 1 in - stubs_fix (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fix_ self = - let out__ = CArray.make t 1 in - stubs_fix_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fix_out ~out self = - let out__ = CArray.make t 1 in - stubs_fix_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let fix self = stubs_fix self |> with_tensor_gc +let fix_ self = stubs_fix_ self |> with_tensor_gc +let fix_out ~out self = stubs_fix_out out self |> with_tensor_gc let flatten self ~start_dim ~end_dim = - let out__ = CArray.make t 1 in - stubs_flatten (CArray.start out__) self (Int64.of_int start_dim) (Int64.of_int end_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_flatten self (Int64.of_int start_dim) (Int64.of_int end_dim) |> with_tensor_gc ;; let flatten_dense_tensors tensors = - let out__ = CArray.make t 1 in stubs_flatten_dense_tensors - (CArray.start out__) - (CArray.of_list t tensors |> CArray.start) - (List.length tensors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (CArray.of_list gc_tensor tensors |> CArray.start) + (List.length tensors) + |> with_tensor_gc ;; let flip self ~dims = - let out__ = CArray.make t 1 in stubs_flip - (CArray.start out__) self (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start) - (List.length dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dims) + |> with_tensor_gc ;; let flip_out ~out self ~dims = - let out__ = CArray.make t 1 in stubs_flip_out - (CArray.start out__) out self (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start) - (List.length dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dims) + |> with_tensor_gc ;; -let fliplr self = - let out__ = CArray.make t 1 in - stubs_fliplr (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let flipud self = - let out__ = CArray.make t 1 in - stubs_flipud (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let float_power self ~exponent = - let out__ = CArray.make t 1 in - stubs_float_power (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let float_power_ self ~exponent = - let out__ = CArray.make t 1 in - stubs_float_power_ (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let fliplr self = stubs_fliplr self |> with_tensor_gc +let flipud self = stubs_flipud self |> with_tensor_gc +let float_power self ~exponent = stubs_float_power self exponent |> with_tensor_gc +let float_power_ self ~exponent = stubs_float_power_ self exponent |> with_tensor_gc let float_power_scalar self ~exponent = - let out__ = CArray.make t 1 in - stubs_float_power_scalar (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_float_power_scalar self exponent |> with_tensor_gc ;; let float_power_scalar_out ~out self ~exponent = - let out__ = CArray.make t 1 in - stubs_float_power_scalar_out (CArray.start out__) out self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_float_power_scalar_out out self exponent 
|> with_tensor_gc ;; let float_power_tensor_ self ~exponent = - let out__ = CArray.make t 1 in - stubs_float_power_tensor_ (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_float_power_tensor_ self exponent |> with_tensor_gc ;; let float_power_tensor_scalar self ~exponent = - let out__ = CArray.make t 1 in - stubs_float_power_tensor_scalar (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_float_power_tensor_scalar self exponent |> with_tensor_gc ;; let float_power_tensor_scalar_out ~out self ~exponent = - let out__ = CArray.make t 1 in - stubs_float_power_tensor_scalar_out (CArray.start out__) out self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_float_power_tensor_scalar_out out self exponent |> with_tensor_gc ;; let float_power_tensor_tensor_out ~out self ~exponent = - let out__ = CArray.make t 1 in - stubs_float_power_tensor_tensor_out (CArray.start out__) out self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_float_power_tensor_tensor_out out self exponent |> with_tensor_gc ;; -let floor self = - let out__ = CArray.make t 1 in - stubs_floor (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let floor_ self = - let out__ = CArray.make t 1 in - stubs_floor_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let floor_divide self other = - let out__ = CArray.make t 1 in - stubs_floor_divide (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let floor_divide_ self other = - let out__ = CArray.make t 1 in - stubs_floor_divide_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let floor self = stubs_floor self |> with_tensor_gc +let floor_ self = stubs_floor_ self |> with_tensor_gc +let floor_divide self other = stubs_floor_divide self other |> with_tensor_gc +let floor_divide_ self other = stubs_floor_divide_ self other |> with_tensor_gc let floor_divide_out ~out self other = - let out__ = CArray.make t 1 in - stubs_floor_divide_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_floor_divide_out out self other |> with_tensor_gc ;; let floor_divide_scalar self other = - let out__ = CArray.make t 1 in - stubs_floor_divide_scalar (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_floor_divide_scalar self other |> with_tensor_gc ;; let floor_divide_scalar_ self other = - let out__ = CArray.make t 1 in - stubs_floor_divide_scalar_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let floor_out ~out self = - let out__ = CArray.make t 1 in - stubs_floor_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fmax self other = - let out__ = CArray.make t 1 in - stubs_fmax (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fmax_out ~out self other = - let out__ = CArray.make t 1 in - stubs_fmax_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fmin self other = - let out__ = CArray.make t 1 in - 
stubs_fmin (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_floor_divide_scalar_ self other |> with_tensor_gc ;; -let fmin_out ~out self other = - let out__ = CArray.make t 1 in - stubs_fmin_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fmod self other = - let out__ = CArray.make t 1 in - stubs_fmod (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fmod_ self other = - let out__ = CArray.make t 1 in - stubs_fmod_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let floor_out ~out self = stubs_floor_out out self |> with_tensor_gc +let fmax self other = stubs_fmax self other |> with_tensor_gc +let fmax_out ~out self other = stubs_fmax_out out self other |> with_tensor_gc +let fmin self other = stubs_fmin self other |> with_tensor_gc +let fmin_out ~out self other = stubs_fmin_out out self other |> with_tensor_gc +let fmod self other = stubs_fmod self other |> with_tensor_gc +let fmod_ self other = stubs_fmod_ self other |> with_tensor_gc let fmod_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_fmod_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let fmod_tensor self other = - let out__ = CArray.make t 1 in - stubs_fmod_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fmod_scalar_out out self other |> with_tensor_gc ;; -let fmod_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_fmod_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let fmod_tensor self other = stubs_fmod_tensor self other |> with_tensor_gc +let fmod_tensor_ self other = stubs_fmod_tensor_ self other |> with_tensor_gc let fmod_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_fmod_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let frac self = - let out__ = CArray.make t 1 in - stubs_frac (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_fmod_tensor_out out self other |> with_tensor_gc ;; -let frac_ self = - let out__ = CArray.make t 1 in - stubs_frac_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let frac_out ~out self = - let out__ = CArray.make t 1 in - stubs_frac_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let frac self = stubs_frac self |> with_tensor_gc +let frac_ self = stubs_frac_ self |> with_tensor_gc +let frac_out ~out self = stubs_frac_out out self |> with_tensor_gc let fractional_max_pool2d self ~kernel_size ~output_size ~random_samples = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_fractional_max_pool2d (CArray.start out__) self @@ -15728,27 +10584,21 @@ let fractional_max_pool2d self ~kernel_size ~output_size ~random_samples = (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) random_samples; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> 
with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let fractional_max_pool2d_backward ~grad_output self ~kernel_size ~output_size ~indices = - let out__ = CArray.make t 1 in stubs_fractional_max_pool2d_backward - (CArray.start out__) grad_output self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) - indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + indices + |> with_tensor_gc ;; let fractional_max_pool2d_backward_grad_input @@ -15759,9 +10609,7 @@ let fractional_max_pool2d_backward_grad_input ~output_size ~indices = - let out__ = CArray.make t 1 in stubs_fractional_max_pool2d_backward_grad_input - (CArray.start out__) grad_input grad_output self @@ -15769,10 +10617,8 @@ let fractional_max_pool2d_backward_grad_input (List.length kernel_size) (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) - indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + indices + |> with_tensor_gc ;; let fractional_max_pool2d_output @@ -15783,7 +10629,7 @@ let fractional_max_pool2d_output ~output_size ~random_samples = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_fractional_max_pool2d_output (CArray.start out__) output @@ -15794,15 +10640,13 @@ let fractional_max_pool2d_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) random_samples; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let fractional_max_pool3d self ~kernel_size ~output_size ~random_samples = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_fractional_max_pool3d (CArray.start out__) self @@ -15811,27 +10655,21 @@ let fractional_max_pool3d self ~kernel_size ~output_size ~random_samples = (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) random_samples; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let fractional_max_pool3d_backward ~grad_output self ~kernel_size ~output_size ~indices = - let out__ = CArray.make t 1 in stubs_fractional_max_pool3d_backward - (CArray.start out__) grad_output self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) - indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + indices + |> with_tensor_gc ;; let fractional_max_pool3d_backward_grad_input @@ -15842,9 +10680,7 @@ let fractional_max_pool3d_backward_grad_input ~output_size ~indices = - let out__ = CArray.make t 1 in stubs_fractional_max_pool3d_backward_grad_input - (CArray.start out__) grad_input grad_output self @@ -15852,10 +10688,8 @@ let fractional_max_pool3d_backward_grad_input (List.length kernel_size) (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) - indices; - let t0 = 
CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + indices + |> with_tensor_gc ;; let fractional_max_pool3d_output @@ -15866,7 +10700,7 @@ let fractional_max_pool3d_output ~output_size ~random_samples = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_fractional_max_pool3d_output (CArray.start out__) output @@ -15877,64 +10711,48 @@ let fractional_max_pool3d_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) random_samples; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let frexp self = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_frexp (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let frexp_tensor_out ~mantissa ~exponent self = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_frexp_tensor_out (CArray.start out__) mantissa exponent self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let frobenius_norm self ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_frobenius_norm - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let frobenius_norm_out ~out self ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_frobenius_norm_out - (CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let from_file ~filename ~shared ~size ~options = - let out__ = CArray.make t 1 in stubs_from_file - (CArray.start out__) filename (if shared then 1 else 0) (match size with @@ -15944,16 +10762,12 @@ let from_file ~filename ~shared ~size ~options = | Some _ -> 0 | None -> 1) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let from_file_out ~out ~filename ~shared ~size = - let out__ = CArray.make t 1 in stubs_from_file_out - (CArray.start out__) out filename (if shared then 1 else 0) @@ -15962,53 +10776,33 @@ let from_file_out ~out ~filename ~shared ~size = | Some v -> Int64.of_int v) (match size with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let full ~size ~fill_value ~options = - let out__ = CArray.make t 1 in stubs_full - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) fill_value (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + 
(Device.to_int (snd options)) + |> with_tensor_gc ;; -let full_like self ~fill_value = - let out__ = CArray.make t 1 in - stubs_full_like (CArray.start out__) self fill_value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let full_like self ~fill_value = stubs_full_like self fill_value |> with_tensor_gc let full_like_out ~out self ~fill_value = - let out__ = CArray.make t 1 in - stubs_full_like_out (CArray.start out__) out self fill_value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_full_like_out out self fill_value |> with_tensor_gc ;; let full_out ~out ~size ~fill_value = - let out__ = CArray.make t 1 in stubs_full_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - fill_value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + fill_value + |> with_tensor_gc ;; let fused_moving_avg_obs_fake_quant @@ -16026,9 +10820,7 @@ let fused_moving_avg_obs_fake_quant ~per_row_fake_quant ~symmetric_quant = - let out__ = CArray.make t 1 in stubs_fused_moving_avg_obs_fake_quant - (CArray.start out__) self observer_on fake_quant_on @@ -16041,507 +10833,197 @@ let fused_moving_avg_obs_fake_quant (Int64.of_int quant_max) (Int64.of_int ch_axis) (if per_row_fake_quant then 1 else 0) - (if symmetric_quant then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if symmetric_quant then 1 else 0) + |> with_tensor_gc ;; let gather self ~dim ~index ~sparse_grad = - let out__ = CArray.make t 1 in - stubs_gather - (CArray.start out__) - self - (Int64.of_int dim) - index - (if sparse_grad then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_gather self (Int64.of_int dim) index (if sparse_grad then 1 else 0) + |> with_tensor_gc ;; let gather_backward ~grad self ~dim ~index ~sparse_grad = - let out__ = CArray.make t 1 in - stubs_gather_backward - (CArray.start out__) - grad - self - (Int64.of_int dim) - index - (if sparse_grad then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_gather_backward grad self (Int64.of_int dim) index (if sparse_grad then 1 else 0) + |> with_tensor_gc ;; let gather_out ~out self ~dim ~index ~sparse_grad = - let out__ = CArray.make t 1 in - stubs_gather_out - (CArray.start out__) - out - self - (Int64.of_int dim) - index - (if sparse_grad then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gcd self other = - let out__ = CArray.make t 1 in - stubs_gcd (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gcd_ self other = - let out__ = CArray.make t 1 in - stubs_gcd_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gcd_out ~out self other = - let out__ = CArray.make t 1 in - stubs_gcd_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ge self other = - let out__ = CArray.make t 1 in - stubs_ge (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ge_ self other = - let out__ = CArray.make t 1 in - stubs_ge_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ge_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_ge_scalar_out (CArray.start out__) out 
self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ge_tensor self other = - let out__ = CArray.make t 1 in - stubs_ge_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ge_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_ge_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ge_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_ge_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gelu self ~approximate = - let out__ = CArray.make t 1 in - stubs_gelu (CArray.start out__) self approximate; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gelu_ self ~approximate = - let out__ = CArray.make t 1 in - stubs_gelu_ (CArray.start out__) self approximate; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; + stubs_gather_out out self (Int64.of_int dim) index (if sparse_grad then 1 else 0) + |> with_tensor_gc +;; + +let gcd self other = stubs_gcd self other |> with_tensor_gc +let gcd_ self other = stubs_gcd_ self other |> with_tensor_gc +let gcd_out ~out self other = stubs_gcd_out out self other |> with_tensor_gc +let ge self other = stubs_ge self other |> with_tensor_gc +let ge_ self other = stubs_ge_ self other |> with_tensor_gc +let ge_scalar_out ~out self other = stubs_ge_scalar_out out self other |> with_tensor_gc +let ge_tensor self other = stubs_ge_tensor self other |> with_tensor_gc +let ge_tensor_ self other = stubs_ge_tensor_ self other |> with_tensor_gc +let ge_tensor_out ~out self other = stubs_ge_tensor_out out self other |> with_tensor_gc +let gelu self ~approximate = stubs_gelu self approximate |> with_tensor_gc +let gelu_ self ~approximate = stubs_gelu_ self approximate |> with_tensor_gc let gelu_backward ~grad_output self ~approximate = - let out__ = CArray.make t 1 in - stubs_gelu_backward (CArray.start out__) grad_output self approximate; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_gelu_backward grad_output self approximate |> with_tensor_gc ;; let gelu_backward_grad_input ~grad_input ~grad_output self ~approximate = - let out__ = CArray.make t 1 in - stubs_gelu_backward_grad_input - (CArray.start out__) - grad_input - grad_output - self - approximate; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_gelu_backward_grad_input grad_input grad_output self approximate |> with_tensor_gc ;; let gelu_out ~out self ~approximate = - let out__ = CArray.make t 1 in - stubs_gelu_out (CArray.start out__) out self approximate; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_gelu_out out self approximate |> with_tensor_gc ;; -let geometric self ~p = - let out__ = CArray.make t 1 in - stubs_geometric (CArray.start out__) self p; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let geometric_ self ~p = - let out__ = CArray.make t 1 in - stubs_geometric_ (CArray.start out__) self p; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let geometric_out ~out self ~p = - let out__ = CArray.make t 1 in - stubs_geometric_out (CArray.start out__) out self p; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let geometric self ~p = stubs_geometric self p |> with_tensor_gc +let geometric_ self ~p = 
stubs_geometric_ self p |> with_tensor_gc +let geometric_out ~out self ~p = stubs_geometric_out out self p |> with_tensor_gc let geqrf self = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_geqrf (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let geqrf_a ~a ~tau self = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_geqrf_a (CArray.start out__) a tau self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let ger self ~vec2 = - let out__ = CArray.make t 1 in - stubs_ger (CArray.start out__) self vec2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ger_out ~out self ~vec2 = - let out__ = CArray.make t 1 in - stubs_ger_out (CArray.start out__) out self vec2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let glu self ~dim = - let out__ = CArray.make t 1 in - stubs_glu (CArray.start out__) self (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let ger self ~vec2 = stubs_ger self vec2 |> with_tensor_gc +let ger_out ~out self ~vec2 = stubs_ger_out out self vec2 |> with_tensor_gc +let glu self ~dim = stubs_glu self (Int64.of_int dim) |> with_tensor_gc let glu_backward ~grad_output self ~dim = - let out__ = CArray.make t 1 in - stubs_glu_backward (CArray.start out__) grad_output self (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_glu_backward grad_output self (Int64.of_int dim) |> with_tensor_gc ;; let glu_backward_grad_input ~grad_input ~grad_output self ~dim = - let out__ = CArray.make t 1 in - stubs_glu_backward_grad_input - (CArray.start out__) - grad_input - grad_output - self - (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_glu_backward_grad_input grad_input grad_output self (Int64.of_int dim) + |> with_tensor_gc ;; let glu_backward_jvp ~grad_x ~grad_glu ~x ~dgrad_glu ~dx ~dim = - let out__ = CArray.make t 1 in - stubs_glu_backward_jvp - (CArray.start out__) - grad_x - grad_glu - x - dgrad_glu - dx - (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_glu_backward_jvp grad_x grad_glu x dgrad_glu dx (Int64.of_int dim) + |> with_tensor_gc ;; let glu_backward_jvp_out ~out ~grad_x ~grad_glu ~x ~dgrad_glu ~dx ~dim = - let out__ = CArray.make t 1 in - stubs_glu_backward_jvp_out - (CArray.start out__) - out - grad_x - grad_glu - x - dgrad_glu - dx - (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_glu_backward_jvp_out out grad_x grad_glu x dgrad_glu dx (Int64.of_int dim) + |> with_tensor_gc ;; -let glu_jvp ~glu ~x ~dx ~dim = - let out__ = CArray.make t 1 in - stubs_glu_jvp (CArray.start out__) glu x dx (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let glu_jvp ~glu ~x ~dx ~dim = stubs_glu_jvp glu x dx (Int64.of_int dim) |> with_tensor_gc let glu_jvp_out ~out ~glu ~x ~dx ~dim = - let out__ = CArray.make t 1 in - stubs_glu_jvp_out (CArray.start out__) out glu x dx (Int64.of_int 
dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let glu_out ~out self ~dim = - let out__ = CArray.make t 1 in - stubs_glu_out (CArray.start out__) out self (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let grad self = - let out__ = CArray.make t 1 in - stubs_grad (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let greater self other = - let out__ = CArray.make t 1 in - stubs_greater (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let greater_ self other = - let out__ = CArray.make t 1 in - stubs_greater_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let greater_equal self other = - let out__ = CArray.make t 1 in - stubs_greater_equal (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_glu_jvp_out out glu x dx (Int64.of_int dim) |> with_tensor_gc ;; -let greater_equal_ self other = - let out__ = CArray.make t 1 in - stubs_greater_equal_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let glu_out ~out self ~dim = stubs_glu_out out self (Int64.of_int dim) |> with_tensor_gc +let grad self = stubs_grad self |> with_tensor_gc +let greater self other = stubs_greater self other |> with_tensor_gc +let greater_ self other = stubs_greater_ self other |> with_tensor_gc +let greater_equal self other = stubs_greater_equal self other |> with_tensor_gc +let greater_equal_ self other = stubs_greater_equal_ self other |> with_tensor_gc let greater_equal_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_greater_equal_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_greater_equal_scalar_out out self other |> with_tensor_gc ;; let greater_equal_tensor self other = - let out__ = CArray.make t 1 in - stubs_greater_equal_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_greater_equal_tensor self other |> with_tensor_gc ;; let greater_equal_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_greater_equal_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_greater_equal_tensor_ self other |> with_tensor_gc ;; let greater_equal_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_greater_equal_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_greater_equal_tensor_out out self other |> with_tensor_gc ;; let greater_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_greater_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let greater_tensor self other = - let out__ = CArray.make t 1 in - stubs_greater_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_greater_scalar_out out self other |> with_tensor_gc ;; -let greater_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_greater_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let greater_tensor self other = 
stubs_greater_tensor self other |> with_tensor_gc +let greater_tensor_ self other = stubs_greater_tensor_ self other |> with_tensor_gc let greater_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_greater_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_greater_tensor_out out self other |> with_tensor_gc ;; let grid_sampler input ~grid ~interpolation_mode ~padding_mode ~align_corners = - let out__ = CArray.make t 1 in stubs_grid_sampler - (CArray.start out__) input grid (Int64.of_int interpolation_mode) (Int64.of_int padding_mode) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; let grid_sampler_2d input ~grid ~interpolation_mode ~padding_mode ~align_corners = - let out__ = CArray.make t 1 in stubs_grid_sampler_2d - (CArray.start out__) input grid (Int64.of_int interpolation_mode) (Int64.of_int padding_mode) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; let grid_sampler_2d_out ~out input ~grid ~interpolation_mode ~padding_mode ~align_corners = - let out__ = CArray.make t 1 in stubs_grid_sampler_2d_out - (CArray.start out__) out input grid (Int64.of_int interpolation_mode) (Int64.of_int padding_mode) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; let grid_sampler_3d input ~grid ~interpolation_mode ~padding_mode ~align_corners = - let out__ = CArray.make t 1 in stubs_grid_sampler_3d - (CArray.start out__) input grid (Int64.of_int interpolation_mode) (Int64.of_int padding_mode) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; let grid_sampler_3d_out ~out input ~grid ~interpolation_mode ~padding_mode ~align_corners = - let out__ = CArray.make t 1 in stubs_grid_sampler_3d_out - (CArray.start out__) out input grid (Int64.of_int interpolation_mode) (Int64.of_int padding_mode) - (if align_corners then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if align_corners then 1 else 0) + |> with_tensor_gc ;; let group_norm input ~num_groups ~weight ~bias ~eps ~cudnn_enabled = - let out__ = CArray.make t 1 in stubs_group_norm - (CArray.start out__) input (Int64.of_int num_groups) (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) eps - (if cudnn_enabled then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if cudnn_enabled then 1 else 0) + |> with_tensor_gc ;; let gru @@ -16555,12 +11037,12 @@ let gru ~bidirectional ~batch_first = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_gru (CArray.start out__) input hx - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) @@ -16568,30 +11050,24 @@ let gru (if train then 1 else 0) (if bidirectional then 1 else 0) (if batch_first then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = 
CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let gru_cell input ~hx ~w_ih ~w_hh ~b_ih ~b_hh = - let out__ = CArray.make t 1 in stubs_gru_cell - (CArray.start out__) input hx w_ih w_hh (match b_ih with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match b_hh with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let gru_data @@ -16605,429 +11081,197 @@ let gru_data ~train ~bidirectional = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_gru_data (CArray.start out__) data batch_sizes hx - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) dropout (if train then 1 else 0) (if bidirectional then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let gt self other = - let out__ = CArray.make t 1 in - stubs_gt (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gt_ self other = - let out__ = CArray.make t 1 in - stubs_gt_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gt_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_gt_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gt_tensor self other = - let out__ = CArray.make t 1 in - stubs_gt_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gt_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_gt_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let gt_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_gt_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let gt self other = stubs_gt self other |> with_tensor_gc +let gt_ self other = stubs_gt_ self other |> with_tensor_gc +let gt_scalar_out ~out self other = stubs_gt_scalar_out out self other |> with_tensor_gc +let gt_tensor self other = stubs_gt_tensor self other |> with_tensor_gc +let gt_tensor_ self other = stubs_gt_tensor_ self other |> with_tensor_gc +let gt_tensor_out ~out self other = stubs_gt_tensor_out out self other |> with_tensor_gc let hamming_window ~window_length ~options = - let out__ = CArray.make t 1 in stubs_hamming_window - (CArray.start out__) (Int64.of_int window_length) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let hamming_window_out ~out ~window_length = - let out__ = CArray.make t 1 in - stubs_hamming_window_out (CArray.start out__) out (Int64.of_int window_length); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_hamming_window_out out (Int64.of_int window_length) |> with_tensor_gc ;; let hamming_window_periodic ~window_length ~periodic ~options = - let out__ = CArray.make t 1 in stubs_hamming_window_periodic - 
(CArray.start out__) (Int64.of_int window_length) (if periodic then 1 else 0) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let hamming_window_periodic_alpha ~window_length ~periodic ~alpha ~options = - let out__ = CArray.make t 1 in stubs_hamming_window_periodic_alpha - (CArray.start out__) (Int64.of_int window_length) (if periodic then 1 else 0) alpha (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let hamming_window_periodic_alpha_beta ~window_length ~periodic ~alpha ~beta ~options = - let out__ = CArray.make t 1 in stubs_hamming_window_periodic_alpha_beta - (CArray.start out__) (Int64.of_int window_length) (if periodic then 1 else 0) alpha beta (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let hamming_window_periodic_alpha_beta_out ~out ~window_length ~periodic ~alpha ~beta = - let out__ = CArray.make t 1 in stubs_hamming_window_periodic_alpha_beta_out - (CArray.start out__) out (Int64.of_int window_length) (if periodic then 1 else 0) alpha - beta; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + beta + |> with_tensor_gc ;; let hamming_window_periodic_alpha_out ~out ~window_length ~periodic ~alpha = - let out__ = CArray.make t 1 in stubs_hamming_window_periodic_alpha_out - (CArray.start out__) out (Int64.of_int window_length) (if periodic then 1 else 0) - alpha; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + alpha + |> with_tensor_gc ;; let hamming_window_periodic_out ~out ~window_length ~periodic = - let out__ = CArray.make t 1 in stubs_hamming_window_periodic_out - (CArray.start out__) out (Int64.of_int window_length) - (if periodic then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if periodic then 1 else 0) + |> with_tensor_gc ;; let hann_window ~window_length ~options = - let out__ = CArray.make t 1 in stubs_hann_window - (CArray.start out__) (Int64.of_int window_length) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let hann_window_out ~out ~window_length = - let out__ = CArray.make t 1 in - stubs_hann_window_out (CArray.start out__) out (Int64.of_int window_length); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_hann_window_out out (Int64.of_int window_length) |> with_tensor_gc ;; let hann_window_periodic ~window_length ~periodic ~options = - let out__ = CArray.make t 1 in stubs_hann_window_periodic - (CArray.start out__) (Int64.of_int window_length) (if periodic then 1 else 0) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let hann_window_periodic_out ~out ~window_length ~periodic = - let out__ = CArray.make t 1 in stubs_hann_window_periodic_out - (CArray.start out__) out (Int64.of_int window_length) - (if periodic then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if periodic then 1 else 0) + |> 
with_tensor_gc ;; -let hardshrink self = - let out__ = CArray.make t 1 in - stubs_hardshrink (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let hardshrink self = stubs_hardshrink self |> with_tensor_gc let hardshrink_backward ~grad_out self ~lambd = - let out__ = CArray.make t 1 in - stubs_hardshrink_backward (CArray.start out__) grad_out self lambd; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_hardshrink_backward grad_out self lambd |> with_tensor_gc ;; let hardshrink_backward_grad_input ~grad_input ~grad_out self ~lambd = - let out__ = CArray.make t 1 in - stubs_hardshrink_backward_grad_input (CArray.start out__) grad_input grad_out self lambd; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_hardshrink_backward_grad_input grad_input grad_out self lambd |> with_tensor_gc ;; -let hardshrink_out ~out self = - let out__ = CArray.make t 1 in - stubs_hardshrink_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let hardsigmoid self = - let out__ = CArray.make t 1 in - stubs_hardsigmoid (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let hardsigmoid_ self = - let out__ = CArray.make t 1 in - stubs_hardsigmoid_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let hardshrink_out ~out self = stubs_hardshrink_out out self |> with_tensor_gc +let hardsigmoid self = stubs_hardsigmoid self |> with_tensor_gc +let hardsigmoid_ self = stubs_hardsigmoid_ self |> with_tensor_gc let hardsigmoid_backward ~grad_output self = - let out__ = CArray.make t 1 in - stubs_hardsigmoid_backward (CArray.start out__) grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_hardsigmoid_backward grad_output self |> with_tensor_gc ;; let hardsigmoid_backward_grad_input ~grad_input ~grad_output self = - let out__ = CArray.make t 1 in - stubs_hardsigmoid_backward_grad_input (CArray.start out__) grad_input grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let hardsigmoid_out ~out self = - let out__ = CArray.make t 1 in - stubs_hardsigmoid_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_hardsigmoid_backward_grad_input grad_input grad_output self |> with_tensor_gc ;; -let hardswish self = - let out__ = CArray.make t 1 in - stubs_hardswish (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let hardswish_ self = - let out__ = CArray.make t 1 in - stubs_hardswish_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let hardsigmoid_out ~out self = stubs_hardsigmoid_out out self |> with_tensor_gc +let hardswish self = stubs_hardswish self |> with_tensor_gc +let hardswish_ self = stubs_hardswish_ self |> with_tensor_gc let hardswish_backward ~grad_output self = - let out__ = CArray.make t 1 in - stubs_hardswish_backward (CArray.start out__) grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_hardswish_backward grad_output self |> with_tensor_gc ;; let hardswish_backward_out ~out ~grad_output self = - let out__ = CArray.make t 1 in - stubs_hardswish_backward_out (CArray.start out__) out grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise 
C.Tensor.free t0;
-  t0
+  stubs_hardswish_backward_out out grad_output self |> with_tensor_gc
 ;;
 
-let hardswish_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_hardswish_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let hardtanh self =
-  let out__ = CArray.make t 1 in
-  stubs_hardtanh (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let hardtanh_ self =
-  let out__ = CArray.make t 1 in
-  stubs_hardtanh_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let hardswish_out ~out self = stubs_hardswish_out out self |> with_tensor_gc
+let hardtanh self = stubs_hardtanh self |> with_tensor_gc
+let hardtanh_ self = stubs_hardtanh_ self |> with_tensor_gc
 
 let hardtanh_backward ~grad_output self ~min_val ~max_val =
-  let out__ = CArray.make t 1 in
-  stubs_hardtanh_backward (CArray.start out__) grad_output self min_val max_val;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_hardtanh_backward grad_output self min_val max_val |> with_tensor_gc
 ;;
 
 let hardtanh_backward_grad_input ~grad_input ~grad_output self ~min_val ~max_val =
-  let out__ = CArray.make t 1 in
-  stubs_hardtanh_backward_grad_input
-    (CArray.start out__)
-    grad_input
-    grad_output
-    self
-    min_val
-    max_val;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let hardtanh_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_hardtanh_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_hardtanh_backward_grad_input grad_input grad_output self min_val max_val
+  |> with_tensor_gc
 ;;
 
-let heaviside self ~values =
-  let out__ = CArray.make t 1 in
-  stubs_heaviside (CArray.start out__) self values;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let heaviside_ self ~values =
-  let out__ = CArray.make t 1 in
-  stubs_heaviside_ (CArray.start out__) self values;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let hardtanh_out ~out self = stubs_hardtanh_out out self |> with_tensor_gc
+let heaviside self ~values = stubs_heaviside self values |> with_tensor_gc
+let heaviside_ self ~values = stubs_heaviside_ self values |> with_tensor_gc
 
 let heaviside_out ~out self ~values =
-  let out__ = CArray.make t 1 in
-  stubs_heaviside_out (CArray.start out__) out self values;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_heaviside_out out self values |> with_tensor_gc
 ;;
 
 let hinge_embedding_loss self ~target ~margin ~reduction =
-  let out__ = CArray.make t 1 in
   stubs_hinge_embedding_loss
-    (CArray.start out__)
     self
     target
     margin
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Reduction.to_int reduction |> Int64.of_int)
+  |> with_tensor_gc
 ;;
 
-let histc self ~bins =
-  let out__ = CArray.make t 1 in
-  stubs_histc (CArray.start out__) self (Int64.of_int bins);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let histc self ~bins = stubs_histc self (Int64.of_int bins) |> with_tensor_gc
 
 let histc_out ~out self ~bins =
-  let out__ = CArray.make t 1 in
-  stubs_histc_out (CArray.start out__) out self (Int64.of_int bins);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_histc_out out self (Int64.of_int bins) |> with_tensor_gc
 ;;
 
 let hsplit self ~sections =
   stubs_hsplit self (Int64.of_int sections) |> to_tensor_list
@@ -17040,201 +11284,68 @@ let hsplit_array self ~indices =
   |> to_tensor_list
 ;;
 
-let hspmm ~mat1 ~mat2 =
-  let out__ = CArray.make t 1 in
-  stubs_hspmm (CArray.start out__) mat1 mat2;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let hspmm_out ~out ~mat1 ~mat2 =
-  let out__ = CArray.make t 1 in
-  stubs_hspmm_out (CArray.start out__) out mat1 mat2;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let hstack tensors =
-  let out__ = CArray.make t 1 in
-  stubs_hstack
-    (CArray.start out__)
-    (CArray.of_list t tensors |> CArray.start)
-    (List.length tensors);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let hstack_out ~out tensors =
-  let out__ = CArray.make t 1 in
-  stubs_hstack_out
-    (CArray.start out__)
-    out
-    (CArray.of_list t tensors |> CArray.start)
-    (List.length tensors);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let huber_loss self ~target ~reduction ~delta =
-  let out__ = CArray.make t 1 in
-  stubs_huber_loss
-    (CArray.start out__)
-    self
-    target
-    (Reduction.to_int reduction |> Int64.of_int)
-    delta;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let huber_loss_backward ~grad_output self ~target ~reduction ~delta =
-  let out__ = CArray.make t 1 in
-  stubs_huber_loss_backward
-    (CArray.start out__)
-    grad_output
-    self
-    target
-    (Reduction.to_int reduction |> Int64.of_int)
-    delta;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let huber_loss_backward_out ~grad_input ~grad_output self ~target ~reduction ~delta =
-  let out__ = CArray.make t 1 in
-  stubs_huber_loss_backward_out
-    (CArray.start out__)
-    grad_input
-    grad_output
-    self
-    target
-    (Reduction.to_int reduction |> Int64.of_int)
-    delta;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let huber_loss_out ~out self ~target ~reduction ~delta =
-  let out__ = CArray.make t 1 in
-  stubs_huber_loss_out
-    (CArray.start out__)
-    out
-    self
-    target
-    (Reduction.to_int reduction |> Int64.of_int)
-    delta;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let hypot self other =
-  let out__ = CArray.make t 1 in
-  stubs_hypot (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let hypot_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_hypot_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let hypot_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_hypot_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let i0 self =
-  let out__ = CArray.make t 1 in
-  stubs_i0 (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let i0_ self =
-  let out__ = CArray.make t 1 in
-  stubs_i0_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let i0_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_i0_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let hspmm ~mat1 ~mat2 = stubs_hspmm mat1 mat2 |> with_tensor_gc
+let hspmm_out ~out ~mat1 ~mat2 = stubs_hspmm_out out mat1 mat2 |> with_tensor_gc
 
-let igamma self other =
-  let out__ = CArray.make t 1 in
-  stubs_igamma (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+let hstack tensors =
+  stubs_hstack (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors)
+  |> with_tensor_gc
 ;;
 
-let igamma_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_igamma_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+let hstack_out ~out tensors =
+  stubs_hstack_out
+    out
+    (CArray.of_list gc_tensor tensors |> CArray.start)
+    (List.length tensors)
+  |> with_tensor_gc
 ;;
 
-let igamma_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_igamma_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+let huber_loss self ~target ~reduction ~delta =
+  stubs_huber_loss self target (Reduction.to_int reduction |> Int64.of_int) delta
+  |> with_tensor_gc
 ;;
 
-let igammac self other =
-  let out__ = CArray.make t 1 in
-  stubs_igammac (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+let huber_loss_backward ~grad_output self ~target ~reduction ~delta =
+  stubs_huber_loss_backward
+    grad_output
+    self
+    target
+    (Reduction.to_int reduction |> Int64.of_int)
+    delta
+  |> with_tensor_gc
 ;;
 
-let igammac_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_igammac_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+let huber_loss_backward_out ~grad_input ~grad_output self ~target ~reduction ~delta =
+  stubs_huber_loss_backward_out
+    grad_input
+    grad_output
+    self
+    target
+    (Reduction.to_int reduction |> Int64.of_int)
+    delta
+  |> with_tensor_gc
 ;;
 
-let igammac_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_igammac_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let huber_loss_out ~out self ~target ~reduction ~delta =
+  stubs_huber_loss_out out self target (Reduction.to_int reduction |> Int64.of_int) delta
+  |> with_tensor_gc
+;;
+
+let hypot self other = stubs_hypot self other |> with_tensor_gc
+let hypot_ self other = stubs_hypot_ self other |> with_tensor_gc
+let hypot_out ~out self other = stubs_hypot_out out self other |> with_tensor_gc
+let i0 self = stubs_i0 self |> with_tensor_gc
+let i0_ self = stubs_i0_ self |> with_tensor_gc
+let i0_out ~out self = stubs_i0_out out self |> with_tensor_gc
+let igamma self other = stubs_igamma self other |> with_tensor_gc
+let igamma_ self other = stubs_igamma_ self other |> with_tensor_gc
+let igamma_out ~out self other = stubs_igamma_out out self other |> with_tensor_gc
+let igammac self other = stubs_igammac self other |> with_tensor_gc
+let igammac_ self other = stubs_igammac_ self other |> with_tensor_gc
+let igammac_out ~out self other = stubs_igammac_out out self other |> with_tensor_gc
 
 let im2col self ~kernel_size ~dilation ~padding ~stride =
-  let out__ = CArray.make t 1 in
   stubs_im2col
-    (CArray.start out__)
     self
     (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
     (List.length kernel_size)
@@ -17243,16 +11354,12 @@ let im2col self ~kernel_size ~dilation ~padding ~stride =
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
-    (List.length stride);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length stride)
+  |> with_tensor_gc
 ;;
 
 let im2col_out ~out self ~kernel_size ~dilation ~padding ~stride =
-  let out__ = CArray.make t 1 in
   stubs_im2col_out
-    (CArray.start out__)
    out
    self
    (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start)
@@ -17262,349 +11369,202 @@ let im2col_out ~out self ~kernel_size ~dilation ~padding ~stride =
     (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start)
     (List.length padding)
     (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start)
-    (List.length stride);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length stride)
+  |> with_tensor_gc
 ;;
 
-let imag self =
-  let out__ = CArray.make t 1 in
-  stubs_imag (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let imag self = stubs_imag self |> with_tensor_gc
 
 let index self ~indices =
-  let out__ = CArray.make t 1 in
   stubs_index
-    (CArray.start out__)
     self
     (List.map
        (function
         | Some x -> x
-        | None -> null)
+        | None -> none_gc_tensor)
       indices
-     |> CArray.of_list t
+     |> CArray.of_list gc_tensor
     |> CArray.start)
-    (List.length indices);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length indices)
+  |> with_tensor_gc
 ;;
 
 let index_add self ~dim ~index ~source =
-  let out__ = CArray.make t 1 in
-  stubs_index_add (CArray.start out__) self (Int64.of_int dim) index source;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_add self (Int64.of_int dim) index source |> with_tensor_gc
 ;;
 
 let index_add_ self ~dim ~index ~source =
-  let out__ = CArray.make t 1 in
-  stubs_index_add_ (CArray.start out__) self (Int64.of_int dim) index source;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_add_ self (Int64.of_int dim) index source |> with_tensor_gc
 ;;
 
 let index_add_out ~out self ~dim ~index ~source =
-  let out__ = CArray.make t 1 in
-  stubs_index_add_out (CArray.start out__) out self (Int64.of_int dim) index source;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_add_out out self (Int64.of_int dim) index source |> with_tensor_gc
 ;;
 
 let index_copy self ~dim ~index ~source =
-  let out__ = CArray.make t 1 in
-  stubs_index_copy (CArray.start out__) self (Int64.of_int dim) index source;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_copy self (Int64.of_int dim) index source |> with_tensor_gc
 ;;
 
 let index_copy_ self ~dim ~index ~source =
-  let out__ = CArray.make t 1 in
-  stubs_index_copy_ (CArray.start out__) self (Int64.of_int dim) index source;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_copy_ self (Int64.of_int dim) index source |> with_tensor_gc
 ;;
 
 let index_copy_out ~out self ~dim ~index ~source =
-  let out__ = CArray.make t 1 in
-  stubs_index_copy_out (CArray.start out__) out self (Int64.of_int dim) index source;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_copy_out out self (Int64.of_int dim) index source |> with_tensor_gc
 ;;
 
 let index_fill self ~dim ~index ~value =
-  let out__ = CArray.make t 1 in
-  stubs_index_fill (CArray.start out__) self (Int64.of_int dim) index value;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_fill self (Int64.of_int dim) index value |> with_tensor_gc
 ;;
 
 let index_fill_ self ~dim ~index ~value =
-  let out__ = CArray.make t 1 in
-  stubs_index_fill_ (CArray.start out__) self (Int64.of_int dim) index value;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_fill_ self (Int64.of_int dim) index value |> with_tensor_gc
 ;;
 
 let index_fill_int_scalar_out ~out self ~dim ~index ~value =
-  let out__ = CArray.make t 1 in
-  stubs_index_fill_int_scalar_out
-    (CArray.start out__)
-    out
-    self
-    (Int64.of_int dim)
-    index
-    value;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_fill_int_scalar_out out self (Int64.of_int dim) index value
+  |> with_tensor_gc
 ;;
 
 let index_fill_int_tensor self ~dim ~index ~value =
-  let out__ = CArray.make t 1 in
-  stubs_index_fill_int_tensor (CArray.start out__) self (Int64.of_int dim) index value;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_fill_int_tensor self (Int64.of_int dim) index value |> with_tensor_gc
 ;;
 
 let index_fill_int_tensor_ self ~dim ~index ~value =
-  let out__ = CArray.make t 1 in
-  stubs_index_fill_int_tensor_ (CArray.start out__) self (Int64.of_int dim) index value;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_fill_int_tensor_ self (Int64.of_int dim) index value |> with_tensor_gc
 ;;
 
 let index_fill_int_tensor_out ~out self ~dim ~index ~value =
-  let out__ = CArray.make t 1 in
-  stubs_index_fill_int_tensor_out
-    (CArray.start out__)
-    out
-    self
-    (Int64.of_int dim)
-    index
-    value;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_fill_int_tensor_out out self (Int64.of_int dim) index value
+  |> with_tensor_gc
 ;;
 
 let index_put self ~indices ~values ~accumulate =
-  let out__ = CArray.make t 1 in
   stubs_index_put
-    (CArray.start out__)
     self
     (List.map
        (function
        | Some x -> x
-        | None -> null)
+        | None -> none_gc_tensor)
       indices
-     |> CArray.of_list t
+     |> CArray.of_list gc_tensor
     |> CArray.start)
     (List.length indices)
    values
-    (if accumulate then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if accumulate then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let index_put_ self ~indices ~values ~accumulate =
-  let out__ = CArray.make t 1 in
   stubs_index_put_
-    (CArray.start out__)
     self
     (List.map
        (function
        | Some x -> x
-        | None -> null)
+        | None -> none_gc_tensor)
      indices
-     |> CArray.of_list t
+     |> CArray.of_list gc_tensor
    |> CArray.start)
     (List.length indices)
    values
-    (if accumulate then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if accumulate then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let index_put_out ~out self ~indices ~values ~accumulate =
-  let out__ = CArray.make t 1 in
   stubs_index_put_out
-    (CArray.start out__)
    out
    self
    (List.map
       (function
        | Some x -> x
-        | None -> null)
      indices
-     |> CArray.of_list t
+        | None -> none_gc_tensor)
+     |> CArray.of_list gc_tensor
    |> CArray.start)
    (List.length indices)
    values
-    (if accumulate then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if accumulate then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let index_reduce self ~dim ~index ~source ~reduce ~include_self =
-  let out__ = CArray.make t 1 in
   stubs_index_reduce
-    (CArray.start out__)
    self
    (Int64.of_int dim)
    index
    source
    reduce
-    (if include_self then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if include_self then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let index_reduce_ self ~dim ~index ~source ~reduce ~include_self =
-  let out__ = CArray.make t 1 in
   stubs_index_reduce_
-    (CArray.start out__)
    self
    (Int64.of_int dim)
    index
    source
    reduce
-    (if include_self then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if include_self then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let index_reduce_out ~out self ~dim ~index ~source ~reduce ~include_self =
-  let out__ = CArray.make t 1 in
   stubs_index_reduce_out
-    (CArray.start out__)
    out
    self
    (Int64.of_int dim)
    index
    source
    reduce
-    (if include_self then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if include_self then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let index_select self ~dim ~index =
-  let out__ = CArray.make t 1 in
-  stubs_index_select (CArray.start out__) self (Int64.of_int dim) index;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_select self (Int64.of_int dim) index |> with_tensor_gc
 ;;
 
 let index_select_backward ~grad ~self_sizes ~dim ~index =
-  let out__ = CArray.make t 1 in
   stubs_index_select_backward
-    (CArray.start out__)
    grad
    (List.map Int64.of_int self_sizes |> CArray.of_list int64_t |> CArray.start)
    (List.length self_sizes)
    (Int64.of_int dim)
-    index;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    index
+  |> with_tensor_gc
 ;;
 
 let index_select_out ~out self ~dim ~index =
-  let out__ = CArray.make t 1 in
-  stubs_index_select_out (CArray.start out__) out self (Int64.of_int dim) index;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_index_select_out out self (Int64.of_int dim) index |> with_tensor_gc
 ;;
 
 let index_tensor_out ~out self ~indices =
-  let out__ = CArray.make t 1 in
   stubs_index_tensor_out
-    (CArray.start out__)
    out
    self
    (List.map
       (function
        | Some x -> x
-        | None -> null)
+        | None -> none_gc_tensor)
      indices
-     |> CArray.of_list t
+     |> CArray.of_list gc_tensor
    |> CArray.start)
-    (List.length indices);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let indices self =
-  let out__ = CArray.make t 1 in
-  stubs_indices (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let indices_copy self =
-  let out__ = CArray.make t 1 in
-  stubs_indices_copy (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length indices)
+  |> with_tensor_gc
 ;;
 
-let indices_copy_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_indices_copy_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let indices self = stubs_indices self |> with_tensor_gc
+let indices_copy self = stubs_indices_copy self |> with_tensor_gc
+let indices_copy_out ~out self = stubs_indices_copy_out out self |> with_tensor_gc
 
 let infinitely_differentiable_gelu_backward ~grad self =
-  let out__ = CArray.make t 1 in
-  stubs_infinitely_differentiable_gelu_backward (CArray.start out__) grad self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_infinitely_differentiable_gelu_backward grad self |> with_tensor_gc
 ;;
 
-let inner self other =
-  let out__ = CArray.make t 1 in
-  stubs_inner (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let inner_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_inner_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let inner self other = stubs_inner self other |> with_tensor_gc
+let inner_out ~out self other = stubs_inner_out out self other |> with_tensor_gc
 
 let instance_norm
   input
@@ -17617,63 +11577,31 @@ let instance_norm
   ~eps
   ~cudnn_enabled
   =
-  let out__ = CArray.make t 1 in
   stubs_instance_norm
-    (CArray.start out__)
    input
    (match weight with
     | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    (match bias with
     | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    (match running_mean with
     | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    (match running_var with
     | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    (if use_input_stats then 1 else 0)
    momentum
    eps
-    (if cudnn_enabled then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let int_repr self =
-  let out__ = CArray.make t 1 in
-  stubs_int_repr (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let int_repr_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_int_repr_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let inverse self =
-  let out__ = CArray.make t 1 in
-  stubs_inverse (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let inverse_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_inverse_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if cudnn_enabled then 1 else 0)
+  |> with_tensor_gc
 ;;
 
+let int_repr self = stubs_int_repr self |> with_tensor_gc
+let int_repr_out ~out self = stubs_int_repr_out out self |> with_tensor_gc
+let inverse self = stubs_inverse self |> with_tensor_gc
+let inverse_out ~out self = stubs_inverse_out out self |> with_tensor_gc
 let is_coalesced self = stubs_is_coalesced self
 let is_complex self = stubs_is_complex self
 let is_conj self = stubs_is_conj self
@@ -17690,173 +11618,77 @@ let is_signed self = stubs_is_signed self
 let is_vulkan_available = stubs_is_vulkan_available
 
 let isclose self other ~rtol ~atol ~equal_nan =
-  let out__ = CArray.make t 1 in
-  stubs_isclose (CArray.start out__) self other rtol atol (if equal_nan then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_isclose self other rtol atol (if equal_nan then 1 else 0) |> with_tensor_gc
 ;;
 
-let isfinite self =
-  let out__ = CArray.make t 1 in
-  stubs_isfinite (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let isfinite self = stubs_isfinite self |> with_tensor_gc
 
 let isin ~elements ~test_elements ~assume_unique ~invert =
-  let out__ = CArray.make t 1 in
   stubs_isin
-    (CArray.start out__)
    elements
    test_elements
    (if assume_unique then 1 else 0)
-    (if invert then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if invert then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let isin_scalar_tensor ~element ~test_elements ~assume_unique ~invert =
-  let out__ = CArray.make t 1 in
   stubs_isin_scalar_tensor
-    (CArray.start out__)
    element
    test_elements
    (if assume_unique then 1 else 0)
-    (if invert then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if invert then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let isin_scalar_tensor_out ~out ~element ~test_elements ~assume_unique ~invert =
-  let out__ = CArray.make t 1 in
   stubs_isin_scalar_tensor_out
-    (CArray.start out__)
    out
    element
    test_elements
    (if assume_unique then 1 else 0)
-    (if invert then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if invert then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let isin_tensor_scalar ~elements ~test_element ~assume_unique ~invert =
-  let out__ = CArray.make t 1 in
   stubs_isin_tensor_scalar
-    (CArray.start out__)
    elements
    test_element
    (if assume_unique then 1 else 0)
-    (if invert then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if invert then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let isin_tensor_scalar_out ~out ~elements ~test_element ~assume_unique ~invert =
-  let out__ = CArray.make t 1 in
   stubs_isin_tensor_scalar_out
-    (CArray.start out__)
    out
    elements
    test_element
    (if assume_unique then 1 else 0)
-    (if invert then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if invert then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let isin_tensor_tensor_out ~out ~elements ~test_elements ~assume_unique ~invert =
-  let out__ = CArray.make t 1 in
   stubs_isin_tensor_tensor_out
-    (CArray.start out__)
    out
    elements
    test_elements
    (if assume_unique then 1 else 0)
-    (if invert then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let isinf self =
-  let out__ = CArray.make t 1 in
-  stubs_isinf (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let isinf_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_isinf_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if invert then 1 else 0)
+  |> with_tensor_gc
 ;;
 
-let isnan self =
-  let out__ = CArray.make t 1 in
-  stubs_isnan (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let isnan_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_isnan_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let isneginf self =
-  let out__ = CArray.make t 1 in
-  stubs_isneginf (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let isneginf_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_isneginf_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let isposinf self =
-  let out__ = CArray.make t 1 in
-  stubs_isposinf (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let isposinf_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_isposinf_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let isreal self =
-  let out__ = CArray.make t 1 in
-  stubs_isreal (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let isinf self = stubs_isinf self |> with_tensor_gc
+let isinf_out ~out self = stubs_isinf_out out self |> with_tensor_gc
+let isnan self = stubs_isnan self |> with_tensor_gc
+let isnan_out ~out self = stubs_isnan_out out self |> with_tensor_gc
+let isneginf self = stubs_isneginf self |> with_tensor_gc
+let isneginf_out ~out self = stubs_isneginf_out out self |> with_tensor_gc
+let isposinf self = stubs_isposinf self |> with_tensor_gc
+let isposinf_out ~out self = stubs_isposinf_out out self |> with_tensor_gc
+let isreal self = stubs_isreal self |> with_tensor_gc
 
 let istft
   self
@@ -17870,9 +11702,7 @@ let istft
   ~length
   ~return_complex
   =
-  let out__ = CArray.make t 1 in
   stubs_istft
-    (CArray.start out__)
    self
    (Int64.of_int n_fft)
    (match hop_length with
@@ -17889,7 +11719,7 @@ let istft
     | None -> 1)
    (match window with
     | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    (if center then 1 else 0)
    (if normalized then 1 else 0)
    (if onesided then 1 else 0)
@@ -17899,130 +11729,85 @@ let istft
    (match length with
     | Some _ -> 0
     | None -> 1)
-    (if return_complex then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if return_complex then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let kaiser_window ~window_length ~options =
-  let out__ = CArray.make t 1 in
   stubs_kaiser_window
-    (CArray.start out__)
    (Int64.of_int window_length)
    (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;
 
 let kaiser_window_beta ~window_length ~periodic ~beta ~options =
-  let out__ = CArray.make t 1 in
   stubs_kaiser_window_beta
-    (CArray.start out__)
    (Int64.of_int window_length)
    (if periodic then 1 else 0)
    beta
    (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;
 
 let kaiser_window_beta_out ~out ~window_length ~periodic ~beta =
-  let out__ = CArray.make t 1 in
   stubs_kaiser_window_beta_out
-    (CArray.start out__)
    out
    (Int64.of_int window_length)
    (if periodic then 1 else 0)
-    beta;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    beta
+  |> with_tensor_gc
 ;;
 
 let kaiser_window_out ~out ~window_length =
-  let out__ = CArray.make t 1 in
-  stubs_kaiser_window_out (CArray.start out__) out (Int64.of_int window_length);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_kaiser_window_out out (Int64.of_int window_length) |> with_tensor_gc
 ;;
 
 let kaiser_window_periodic ~window_length ~periodic ~options =
-  let out__ = CArray.make t 1 in
   stubs_kaiser_window_periodic
-    (CArray.start out__)
    (Int64.of_int window_length)
    (if periodic then 1 else 0)
    (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;
 
 let kaiser_window_periodic_out ~out ~window_length ~periodic =
-  let out__ = CArray.make t 1 in
   stubs_kaiser_window_periodic_out
-    (CArray.start out__)
    out
    (Int64.of_int window_length)
-    (if periodic then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if periodic then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let kl_div self ~target ~reduction ~log_target =
-  let out__ = CArray.make t 1 in
   stubs_kl_div
-    (CArray.start out__)
    self
    target
    (Reduction.to_int reduction |> Int64.of_int)
-    (if log_target then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let kron self other =
-  let out__ = CArray.make t 1 in
-  stubs_kron (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if log_target then 1 else 0)
+  |> with_tensor_gc
 ;;
 
-let kron_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_kron_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let kron self other = stubs_kron self other |> with_tensor_gc
+let kron_out ~out self other = stubs_kron_out out self other |> with_tensor_gc
 
 let kthvalue self ~k ~dim ~keepdim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_kthvalue
    (CArray.start out__)
    self
    (Int64.of_int k)
    (Int64.of_int dim)
    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let kthvalue_values ~values ~indices self ~k ~dim ~keepdim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_kthvalue_values
    (CArray.start out__)
    values
@@ -18031,168 +11816,53 @@ let kthvalue_values ~values ~indices self ~k ~dim ~keepdim =
    (Int64.of_int k)
    (Int64.of_int dim)
    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let l1_loss self ~target ~reduction =
-  let out__ = CArray.make t 1 in
-  stubs_l1_loss
-    (CArray.start out__)
-    self
-    target
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_l1_loss self target (Reduction.to_int reduction |> Int64.of_int) |> with_tensor_gc
 ;;
 
 let layer_norm input ~normalized_shape ~weight ~bias ~eps ~cudnn_enable =
-  let out__ = CArray.make t 1 in
   stubs_layer_norm
-    (CArray.start out__)
    input
    (List.map Int64.of_int normalized_shape |> CArray.of_list int64_t |> CArray.start)
    (List.length normalized_shape)
    (match weight with
    | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    (match bias with
    | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    eps
-    (if cudnn_enable then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lcm self other =
-  let out__ = CArray.make t 1 in
-  stubs_lcm (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lcm_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_lcm_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lcm_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_lcm_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let ldexp self other =
-  let out__ = CArray.make t 1 in
-  stubs_ldexp (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let ldexp_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_ldexp_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let ldexp_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_ldexp_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let le self other =
-  let out__ = CArray.make t 1 in
-  stubs_le (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let le_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_le_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let le_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_le_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let le_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_le_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let le_tensor_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_le_tensor_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let le_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_le_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let leaky_relu self =
-  let out__ = CArray.make t 1 in
-  stubs_leaky_relu (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let leaky_relu_ self =
-  let out__ = CArray.make t 1 in
-  stubs_leaky_relu_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+  (if cudnn_enable then 1 else 0)
+  |> with_tensor_gc
+;;
+
+let lcm self other = stubs_lcm self other |> with_tensor_gc
+let lcm_ self other = stubs_lcm_ self other |> with_tensor_gc
+let lcm_out ~out self other = stubs_lcm_out out self other |> with_tensor_gc
+let ldexp self other = stubs_ldexp self other |> with_tensor_gc
+let ldexp_ self other = stubs_ldexp_ self other |> with_tensor_gc
+let ldexp_out ~out self other = stubs_ldexp_out out self other |> with_tensor_gc
+let le self other = stubs_le self other |> with_tensor_gc
+let le_ self other = stubs_le_ self other |> with_tensor_gc
+let le_scalar_out ~out self other = stubs_le_scalar_out out self other |> with_tensor_gc
+let le_tensor self other = stubs_le_tensor self other |> with_tensor_gc
+let le_tensor_ self other = stubs_le_tensor_ self other |> with_tensor_gc
+let le_tensor_out ~out self other = stubs_le_tensor_out out self other |> with_tensor_gc
+let leaky_relu self = stubs_leaky_relu self |> with_tensor_gc
+let leaky_relu_ self = stubs_leaky_relu_ self |> with_tensor_gc
 
 let leaky_relu_backward ~grad_output self ~negative_slope ~self_is_result =
-  let out__ = CArray.make t 1 in
   stubs_leaky_relu_backward
-    (CArray.start out__)
    grad_output
    self
    negative_slope
-    (if self_is_result then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if self_is_result then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let leaky_relu_backward_grad_input
@@ -18202,259 +11872,87 @@ let leaky_relu_backward_grad_input
   ~negative_slope
   ~self_is_result
   =
-  let out__ = CArray.make t 1 in
   stubs_leaky_relu_backward_grad_input
-    (CArray.start out__)
    grad_input
    grad_output
    self
    negative_slope
-    (if self_is_result then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let leaky_relu_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_leaky_relu_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lerp self ~end_ ~weight =
-  let out__ = CArray.make t 1 in
-  stubs_lerp (CArray.start out__) self end_ weight;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if self_is_result then 1 else 0)
+  |> with_tensor_gc
 ;;
 
-let lerp_ self ~end_ ~weight =
-  let out__ = CArray.make t 1 in
-  stubs_lerp_ (CArray.start out__) self end_ weight;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let leaky_relu_out ~out self = stubs_leaky_relu_out out self |> with_tensor_gc
+let lerp self ~end_ ~weight = stubs_lerp self end_ weight |> with_tensor_gc
+let lerp_ self ~end_ ~weight = stubs_lerp_ self end_ weight |> with_tensor_gc
 
 let lerp_scalar_out ~out self ~end_ ~weight =
-  let out__ = CArray.make t 1 in
-  stubs_lerp_scalar_out (CArray.start out__) out self end_ weight;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_lerp_scalar_out out self end_ weight |> with_tensor_gc
 ;;
 
-let lerp_tensor self ~end_ ~weight =
-  let out__ = CArray.make t 1 in
-  stubs_lerp_tensor (CArray.start out__) self end_ weight;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let lerp_tensor self ~end_ ~weight = stubs_lerp_tensor self end_ weight |> with_tensor_gc
 
 let lerp_tensor_ self ~end_ ~weight =
-  let out__ = CArray.make t 1 in
-  stubs_lerp_tensor_ (CArray.start out__) self end_ weight;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_lerp_tensor_ self end_ weight |> with_tensor_gc
 ;;
 
 let lerp_tensor_out ~out self ~end_ ~weight =
-  let out__ = CArray.make t 1 in
-  stubs_lerp_tensor_out (CArray.start out__) out self end_ weight;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let less self other =
-  let out__ = CArray.make t 1 in
-  stubs_less (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let less_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let less_equal self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_equal (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_lerp_tensor_out out self end_ weight |> with_tensor_gc
 ;;
 
-let less_equal_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_equal_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let less self other = stubs_less self other |> with_tensor_gc
+let less_ self other = stubs_less_ self other |> with_tensor_gc
+let less_equal self other = stubs_less_equal self other |> with_tensor_gc
+let less_equal_ self other = stubs_less_equal_ self other |> with_tensor_gc
 
 let less_equal_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_equal_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_less_equal_scalar_out out self other |> with_tensor_gc
 ;;
 
-let less_equal_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_equal_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let less_equal_tensor_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_equal_tensor_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let less_equal_tensor self other = stubs_less_equal_tensor self other |> with_tensor_gc
+let less_equal_tensor_ self other = stubs_less_equal_tensor_ self other |> with_tensor_gc
 
 let less_equal_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_equal_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_less_equal_tensor_out out self other |> with_tensor_gc
 ;;
 
 let less_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let less_tensor self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_tensor (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_less_scalar_out out self other |> with_tensor_gc
 ;;
 
-let less_tensor_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_tensor_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let less_tensor self other = stubs_less_tensor self other |> with_tensor_gc
+let less_tensor_ self other = stubs_less_tensor_ self other |> with_tensor_gc
 
 let less_tensor_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_less_tensor_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lgamma self =
-  let out__ = CArray.make t 1 in
-  stubs_lgamma (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lgamma_ self =
-  let out__ = CArray.make t 1 in
-  stubs_lgamma_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lgamma_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_lgamma_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lift self =
-  let out__ = CArray.make t 1 in
-  stubs_lift (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lift_fresh self =
-  let out__ = CArray.make t 1 in
-  stubs_lift_fresh (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lift_fresh_copy self =
-  let out__ = CArray.make t 1 in
-  stubs_lift_fresh_copy (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let lift_fresh_copy_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_lift_fresh_copy_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_less_tensor_out out self other |> with_tensor_gc
 ;;
 
-let lift_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_lift_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let lgamma self = stubs_lgamma self |> with_tensor_gc
+let lgamma_ self = stubs_lgamma_ self |> with_tensor_gc
+let lgamma_out ~out self = stubs_lgamma_out out self |> with_tensor_gc
+let lift self = stubs_lift self |> with_tensor_gc
+let lift_fresh self = stubs_lift_fresh self |> with_tensor_gc
+let lift_fresh_copy self = stubs_lift_fresh_copy self |> with_tensor_gc
+let lift_fresh_copy_out ~out self = stubs_lift_fresh_copy_out out self |> with_tensor_gc
+let lift_out ~out self = stubs_lift_out out self |> with_tensor_gc
 
 let linalg_cholesky self ~upper =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_cholesky (CArray.start out__) self (if upper then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_cholesky self (if upper then 1 else 0) |> with_tensor_gc
 ;;
 
 let linalg_cholesky_ex self ~upper ~check_errors =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_cholesky_ex
    (CArray.start out__)
    self
    (if upper then 1 else 0)
    (if check_errors then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_cholesky_ex_l ~l ~info self ~upper ~check_errors =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_cholesky_ex_l
    (CArray.start out__)
    l
@@ -18462,255 +11960,135 @@ let linalg_cholesky_ex_l ~l ~info self ~upper ~check_errors =
    self
    (if upper then 1 else 0)
    (if check_errors then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_cholesky_out ~out self ~upper =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_cholesky_out (CArray.start out__) out self (if upper then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let linalg_cond self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_cond (CArray.start out__) self p;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let linalg_cond_out ~out self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_cond_out (CArray.start out__) out self p;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_cholesky_out out self (if upper then 1 else 0) |> with_tensor_gc
 ;;
 
-let linalg_cond_p_str self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_cond_p_str (CArray.start out__) self p;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let linalg_cond self ~p = stubs_linalg_cond self p |> with_tensor_gc
+let linalg_cond_out ~out self ~p = stubs_linalg_cond_out out self p |> with_tensor_gc
+let linalg_cond_p_str self ~p = stubs_linalg_cond_p_str self p |> with_tensor_gc
 
 let linalg_cond_p_str_out ~out self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_cond_p_str_out (CArray.start out__) out self p;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_cond_p_str_out out self p |> with_tensor_gc
 ;;
 
 let linalg_cross self other ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_cross (CArray.start out__) self other (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_cross self other (Int64.of_int dim) |> with_tensor_gc
 ;;
 
 let linalg_cross_out ~out self other ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_cross_out (CArray.start out__) out self other (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let linalg_det ~a =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_det (CArray.start out__) a;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_cross_out out self other (Int64.of_int dim) |> with_tensor_gc
 ;;
 
-let linalg_det_out ~out ~a =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_det_out (CArray.start out__) out a;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let linalg_det ~a = stubs_linalg_det a |> with_tensor_gc
+let linalg_det_out ~out ~a = stubs_linalg_det_out out a |> with_tensor_gc
 
 let linalg_diagonal ~a ~offset ~dim1 ~dim2 =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_diagonal
-    (CArray.start out__)
-    a
-    (Int64.of_int offset)
-    (Int64.of_int dim1)
-    (Int64.of_int dim2);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_diagonal a (Int64.of_int offset) (Int64.of_int dim1) (Int64.of_int dim2)
+  |> with_tensor_gc
 ;;
 
 let linalg_eig self =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_eig (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_eig_out ~eigenvalues ~eigenvectors self =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_eig_out (CArray.start out__) eigenvalues eigenvectors self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_eigh self ~uplo =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_eigh (CArray.start out__) self uplo;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_eigh_eigvals ~eigvals ~eigvecs self ~uplo =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_eigh_eigvals (CArray.start out__) eigvals eigvecs self uplo;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
-let linalg_eigvals self =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_eigvals (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let linalg_eigvals_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_eigvals_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let linalg_eigvalsh self ~uplo =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_eigvalsh (CArray.start out__) self uplo;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let linalg_eigvals self = stubs_linalg_eigvals self |> with_tensor_gc
+let linalg_eigvals_out ~out self = stubs_linalg_eigvals_out out self |> with_tensor_gc
+let linalg_eigvalsh self ~uplo = stubs_linalg_eigvalsh self uplo |> with_tensor_gc
 
 let linalg_eigvalsh_out ~out self ~uplo =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_eigvalsh_out (CArray.start out__) out self uplo;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_eigvalsh_out out self uplo |> with_tensor_gc
 ;;
 
 let linalg_householder_product input ~tau =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_householder_product (CArray.start out__) input tau;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_householder_product input tau |> with_tensor_gc
 ;;
 
 let linalg_householder_product_out ~out input ~tau =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_householder_product_out (CArray.start out__) out input tau;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_householder_product_out out input tau |> with_tensor_gc
 ;;
 
-let linalg_inv ~a =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_inv (CArray.start out__) a;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let linalg_inv ~a = stubs_linalg_inv a |> with_tensor_gc
 
 let linalg_inv_ex ~a ~check_errors =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_inv_ex (CArray.start out__) a (if check_errors then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_inv_ex_inverse ~inverse ~info ~a ~check_errors =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_inv_ex_inverse
    (CArray.start out__)
    inverse
    info
    a
    (if check_errors then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
-let linalg_inv_out ~out ~a =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_inv_out (CArray.start out__) out a;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let linalg_inv_out ~out ~a = stubs_linalg_inv_out out a |> with_tensor_gc
 
 let linalg_ldl_factor self ~hermitian =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_ldl_factor (CArray.start out__) self (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_ldl_factor_ex self ~hermitian ~check_errors =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_linalg_ldl_factor_ex
    (CArray.start out__)
    self
    (if hermitian then 1 else 0)
    (if check_errors then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
  t0, t1, t2
 ;;
 
 let linalg_ldl_factor_ex_out ~ld ~pivots ~info self ~hermitian ~check_errors =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_linalg_ldl_factor_ex_out
    (CArray.start out__)
    ld
@@ -18719,54 +12097,36 @@ let linalg_ldl_factor_ex_out ~ld ~pivots ~info self ~hermitian ~check_errors =
    self
    (if hermitian then 1 else 0)
    (if check_errors then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
  t0, t1, t2
 ;;
 
 let linalg_ldl_factor_out ~ld ~pivots self ~hermitian =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_ldl_factor_out
    (CArray.start out__)
    ld
   pivots
    self
    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_ldl_solve ~ld ~pivots ~b ~hermitian =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_ldl_solve (CArray.start out__) ld pivots b (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_ldl_solve ld pivots b (if hermitian then 1 else 0) |> with_tensor_gc
 ;;
 
 let linalg_ldl_solve_out ~out ~ld ~pivots ~b ~hermitian =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_ldl_solve_out
-    (CArray.start out__)
-    out
-    ld
-    pivots
-    b
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_ldl_solve_out out ld pivots b (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_lstsq self ~b ~rcond ~driver =
-  let out__ = CArray.make t 4 in
+  let out__ = CArray.make raw_tensor 4 in
   stubs_linalg_lstsq
    (CArray.start out__)
    self
@@ -18776,19 +12136,15 @@ let linalg_lstsq self ~b ~rcond ~driver =
     | Some _ -> 0
     | None -> 1)
    driver;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
-  let t3 = CArray.get out__ 3 in
-  Gc.finalise C.Tensor.free t3;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
+  let t3 = CArray.get out__ 3 |> with_tensor_gc in
  t0, t1, t2, t3
 ;;
 
 let linalg_lstsq_out ~solution ~residuals ~rank ~singular_values self ~b ~rcond ~driver =
-  let out__ = CArray.make t 4 in
+  let out__ = CArray.make raw_tensor 4 in
   stubs_linalg_lstsq_out
    (CArray.start out__)
    solution
@@ -18802,57 +12158,45 @@ let linalg_lstsq_out ~solution ~residuals ~rank ~singular_values self ~b ~rcond
     | Some _ -> 0
     | None -> 1)
    driver;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
-  let t3 = CArray.get out__ 3 in
-  Gc.finalise C.Tensor.free t3;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
+  let t3 = CArray.get out__ 3 |> with_tensor_gc in
  t0, t1, t2, t3
 ;;
 
 let linalg_lu ~a ~pivot =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_linalg_lu (CArray.start out__) a (if pivot then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
  t0, t1, t2
 ;;
 
 let linalg_lu_factor ~a ~pivot =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_lu_factor (CArray.start out__) a (if pivot then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_lu_factor_ex ~a ~pivot ~check_errors =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_linalg_lu_factor_ex
    (CArray.start out__)
    a
   (if pivot then 1 else 0)
    (if check_errors then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
  t0, t1, t2
 ;;
 
 let linalg_lu_factor_ex_out ~lu ~pivots ~info ~a ~pivot ~check_errors =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_linalg_lu_factor_ex_out
    (CArray.start out__)
    lu
@@ -18861,126 +12205,71 @@ let linalg_lu_factor_ex_out ~lu ~pivots ~info ~a ~pivot ~check_errors =
    a
   (if pivot then 1 else 0)
    (if check_errors then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
  t0, t1, t2
 ;;
 
 let linalg_lu_factor_out ~lu ~pivots ~a ~pivot =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_linalg_lu_factor_out (CArray.start out__) lu pivots a (if pivot then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
 ;;
 
 let linalg_lu_out ~p ~l ~u ~a ~pivot =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_linalg_lu_out (CArray.start out__) p l u a (if pivot then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
  t0, t1, t2
 ;;
 
 let linalg_lu_solve ~lu ~pivots ~b ~left ~adjoint =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_lu_solve
-    (CArray.start out__)
-    lu
-    pivots
-    b
-    (if left then 1 else 0)
-    (if adjoint then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_lu_solve lu pivots b (if left then 1 else 0) (if adjoint then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_lu_solve_out ~out ~lu ~pivots ~b ~left ~adjoint =
-  let out__ = CArray.make t 1 in
   stubs_linalg_lu_solve_out
-    (CArray.start out__)
    out
    lu
   pivots
    b
   (if left then 1 else 0)
-    (if adjoint then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if adjoint then 1 else 0)
+  |> with_tensor_gc
 ;;
 
-let linalg_matmul self other =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matmul (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let linalg_matmul self other = stubs_linalg_matmul self other |> with_tensor_gc
 
 let linalg_matmul_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matmul_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_matmul_out out self other |> with_tensor_gc
 ;;
 
-let linalg_matrix_exp self =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matrix_exp (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let linalg_matrix_exp self = stubs_linalg_matrix_exp self |> with_tensor_gc
 
 let linalg_matrix_exp_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matrix_exp_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_matrix_exp_out out self |> with_tensor_gc
 ;;
 
 let linalg_matrix_power self ~n =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matrix_power (CArray.start out__) self (Int64.of_int n);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_matrix_power self (Int64.of_int n) |> with_tensor_gc
 ;;
 
 let linalg_matrix_power_out ~out self ~n =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matrix_power_out (CArray.start out__) out self (Int64.of_int n);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_matrix_power_out out self (Int64.of_int n) |> with_tensor_gc
 ;;
 
 let linalg_matrix_rank self ~tol ~hermitian =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matrix_rank (CArray.start out__) self tol (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_matrix_rank self tol (if hermitian then 1 else 0) |> with_tensor_gc
 ;;
 
 let linalg_matrix_rank_atol_rtol_float self ~atol ~rtol ~hermitian =
-  let out__ = CArray.make t 1 in
   stubs_linalg_matrix_rank_atol_rtol_float
-    (CArray.start out__)
    self
    (Option.value atol ~default:0.0)
    (match atol with
@@ -18990,16 +12279,12 @@ let linalg_matrix_rank_atol_rtol_float self ~atol ~rtol ~hermitian =
    (match rtol with
     | Some _ -> 0
     | None -> 1)
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_matrix_rank_atol_rtol_float_out ~out self ~atol ~rtol ~hermitian =
-  let out__ = CArray.make t 1 in
   stubs_linalg_matrix_rank_atol_rtol_float_out
-    (CArray.start out__)
    out
    self
    (Option.value atol ~default:0.0)
@@ -19010,120 +12295,72 @@ let linalg_matrix_rank_atol_rtol_float_out ~out self ~atol ~rtol ~hermitian =
    (match rtol with
     | Some _ -> 0
     | None -> 1)
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_matrix_rank_atol_rtol_tensor input ~atol ~rtol ~hermitian =
-  let out__ = CArray.make t 1 in
   stubs_linalg_matrix_rank_atol_rtol_tensor
-    (CArray.start out__)
    input
    (match atol with
     | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    (match rtol with
     | Some v -> v
-     | None -> null)
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_matrix_rank_atol_rtol_tensor_out ~out input ~atol ~rtol ~hermitian =
-  let out__ = CArray.make t 1 in
   stubs_linalg_matrix_rank_atol_rtol_tensor_out
-    (CArray.start out__)
    out
    input
   (match atol with
    | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    (match rtol with
    | Some v -> v
-     | None -> null)
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_matrix_rank_out ~out self ~tol ~hermitian =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matrix_rank_out
-    (CArray.start out__)
-    out
-    self
-    tol
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_matrix_rank_out out self tol (if hermitian then 1 else 0) |> with_tensor_gc
 ;;
 
 let linalg_matrix_rank_out_tol_tensor ~out input ~tol ~hermitian =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matrix_rank_out_tol_tensor
-    (CArray.start out__)
-    out
-    input
-    tol
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_matrix_rank_out_tol_tensor out input tol (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_matrix_rank_tol_tensor input ~tol ~hermitian =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_matrix_rank_tol_tensor
-    (CArray.start out__)
-    input
-    tol
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_matrix_rank_tol_tensor input tol (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_multi_dot tensors =
-  let out__ = CArray.make t 1 in
   stubs_linalg_multi_dot
-    (CArray.start out__)
-    (CArray.of_list t tensors |> CArray.start)
-    (List.length tensors);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (CArray.of_list gc_tensor tensors |> CArray.start)
+    (List.length tensors)
+  |> with_tensor_gc
 ;;
 
 let linalg_multi_dot_out ~out tensors =
-  let out__ = CArray.make t 1 in
   stubs_linalg_multi_dot_out
-    (CArray.start out__)
    out
-    (CArray.of_list t tensors |> CArray.start)
-    (List.length tensors);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (CArray.of_list gc_tensor tensors |> CArray.start)
+    (List.length tensors)
+  |> with_tensor_gc
 ;;
 
 let linalg_pinv self ~rcond ~hermitian =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_pinv (CArray.start out__) self rcond (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_linalg_pinv self rcond (if hermitian then 1 else 0) |> with_tensor_gc
 ;;
 
 let linalg_pinv_atol_rtol_float self ~atol ~rtol ~hermitian =
-  let out__ = CArray.make t 1 in
   stubs_linalg_pinv_atol_rtol_float
-    (CArray.start out__)
    self
    (Option.value atol ~default:0.0)
    (match atol with
@@ -19133,16 +12370,12 @@ let linalg_pinv_atol_rtol_float self ~atol ~rtol ~hermitian =
    (match rtol with
     | Some _ -> 0
     | None -> 1)
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_pinv_atol_rtol_float_out ~out self ~atol ~rtol ~hermitian =
-  let out__ = CArray.make t 1 in
   stubs_linalg_pinv_atol_rtol_float_out
-    (CArray.start out__)
    out
    self
    (Option.value atol ~default:0.0)
@@ -19153,145 +12386,101 @@ let linalg_pinv_atol_rtol_float_out ~out self ~atol ~rtol ~hermitian =
    (match rtol with
     | Some _ -> 0
     | None -> 1)
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_pinv_atol_rtol_tensor self ~atol ~rtol ~hermitian =
-  let out__ = CArray.make t 1 in
   stubs_linalg_pinv_atol_rtol_tensor
-    (CArray.start out__)
    self
   (match atol with
    | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
    (match rtol with
    | Some v -> v
-     | None -> null)
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_pinv_atol_rtol_tensor_out ~out self ~atol ~rtol ~hermitian =
-  let out__ = CArray.make t 1 in
   stubs_linalg_pinv_atol_rtol_tensor_out
-    (CArray.start out__)
   out
   self
   (match atol with
   | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
   (match rtol with
   | Some v -> v
-     | None -> null)
-    (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> none_gc_tensor)
+    (if hermitian then 1 else 0)
+  |> with_tensor_gc
 ;;
 
 let linalg_pinv_out ~out self ~rcond ~hermitian =
-  let out__ = CArray.make t 1 in
-  stubs_linalg_pinv_out (CArray.start out__) out self rcond (if hermitian then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
- -let linalg_pinv_out_rcond_tensor ~out self ~rcond ~hermitian = - let out__ = CArray.make t 1 in - stubs_linalg_pinv_out_rcond_tensor - (CArray.start out__) - out - self - rcond - (if hermitian then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_linalg_pinv_out out self rcond (if hermitian then 1 else 0) |> with_tensor_gc +;; + +let linalg_pinv_out_rcond_tensor ~out self ~rcond ~hermitian = + stubs_linalg_pinv_out_rcond_tensor out self rcond (if hermitian then 1 else 0) + |> with_tensor_gc ;; let linalg_pinv_rcond_tensor self ~rcond ~hermitian = - let out__ = CArray.make t 1 in - stubs_linalg_pinv_rcond_tensor - (CArray.start out__) - self - rcond - (if hermitian then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_linalg_pinv_rcond_tensor self rcond (if hermitian then 1 else 0) |> with_tensor_gc ;; let linalg_qr ~a ~mode = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_linalg_qr (CArray.start out__) a mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let linalg_qr_out ~q ~r ~a ~mode = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_linalg_qr_out (CArray.start out__) q r a mode; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let linalg_slogdet ~a = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_linalg_slogdet (CArray.start out__) a; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let linalg_slogdet_out ~sign ~logabsdet ~a = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_linalg_slogdet_out (CArray.start out__) sign logabsdet a; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let linalg_solve ~a ~b ~left = - let out__ = CArray.make t 1 in - stubs_linalg_solve (CArray.start out__) a b (if left then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_linalg_solve a b (if left then 1 else 0) |> with_tensor_gc ;; let linalg_solve_ex ~a ~b ~left ~check_errors = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_linalg_solve_ex (CArray.start out__) a b (if left then 1 else 0) (if check_errors then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let linalg_solve_ex_out result ~info ~a ~b ~left ~check_errors = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_linalg_solve_ex_out (CArray.start out__) result @@ -19300,110 +12489,70 @@ let linalg_solve_ex_out result ~info ~a ~b ~left ~check_errors = b (if left then 1 else 
0) (if check_errors then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let linalg_solve_out ~out ~a ~b ~left = - let out__ = CArray.make t 1 in - stubs_linalg_solve_out (CArray.start out__) out a b (if left then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_linalg_solve_out out a b (if left then 1 else 0) |> with_tensor_gc ;; let linalg_solve_triangular self ~b ~upper ~left ~unitriangular = - let out__ = CArray.make t 1 in stubs_linalg_solve_triangular - (CArray.start out__) self b (if upper then 1 else 0) (if left then 1 else 0) - (if unitriangular then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if unitriangular then 1 else 0) + |> with_tensor_gc ;; let linalg_solve_triangular_out ~out self ~b ~upper ~left ~unitriangular = - let out__ = CArray.make t 1 in stubs_linalg_solve_triangular_out - (CArray.start out__) out self b (if upper then 1 else 0) (if left then 1 else 0) - (if unitriangular then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if unitriangular then 1 else 0) + |> with_tensor_gc ;; let linalg_svd ~a ~full_matrices ~driver = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_linalg_svd (CArray.start out__) a (if full_matrices then 1 else 0) driver; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let linalg_svd_u ~u ~s ~vh ~a ~full_matrices ~driver = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_linalg_svd_u (CArray.start out__) u s vh a (if full_matrices then 1 else 0) driver; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; -let linalg_svdvals ~a ~driver = - let out__ = CArray.make t 1 in - stubs_linalg_svdvals (CArray.start out__) a driver; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let linalg_svdvals ~a ~driver = stubs_linalg_svdvals a driver |> with_tensor_gc let linalg_svdvals_out ~out ~a ~driver = - let out__ = CArray.make t 1 in - stubs_linalg_svdvals_out (CArray.start out__) out a driver; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_linalg_svdvals_out out a driver |> with_tensor_gc ;; let linalg_tensorinv self ~ind = - let out__ = CArray.make t 1 in - stubs_linalg_tensorinv (CArray.start out__) self (Int64.of_int ind); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_linalg_tensorinv self (Int64.of_int ind) |> with_tensor_gc ;; let linalg_tensorinv_out ~out self ~ind = - let out__ = CArray.make t 1 in - stubs_linalg_tensorinv_out (CArray.start out__) out self (Int64.of_int ind); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_linalg_tensorinv_out 
out self (Int64.of_int ind) |> with_tensor_gc ;; let linalg_tensorsolve self other ~dims = - let out__ = CArray.make t 1 in stubs_linalg_tensorsolve - (CArray.start out__) self other (match dims with @@ -19411,16 +12560,12 @@ let linalg_tensorsolve self other ~dims = | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match dims with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let linalg_tensorsolve_out ~out self other ~dims = - let out__ = CArray.make t 1 in stubs_linalg_tensorsolve_out - (CArray.start out__) out self other @@ -19429,551 +12574,223 @@ let linalg_tensorsolve_out ~out self other ~dims = | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match dims with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let linalg_vander ~x ~n = - let out__ = CArray.make t 1 in stubs_linalg_vander - (CArray.start out__) x (match n with | None -> Int64.zero | Some v -> Int64.of_int v) (match n with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let linalg_vecdot ~x ~y ~dim = - let out__ = CArray.make t 1 in - stubs_linalg_vecdot (CArray.start out__) x y (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_linalg_vecdot x y (Int64.of_int dim) |> with_tensor_gc ;; let linalg_vecdot_out ~out ~x ~y ~dim = - let out__ = CArray.make t 1 in - stubs_linalg_vecdot_out (CArray.start out__) out x y (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_linalg_vecdot_out out x y (Int64.of_int dim) |> with_tensor_gc ;; let linear input ~weight ~bias = - let out__ = CArray.make t 1 in stubs_linear - (CArray.start out__) input weight (match bias with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let linear_out ~out input ~weight ~bias = - let out__ = CArray.make t 1 in stubs_linear_out - (CArray.start out__) out input weight (match bias with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let linspace ~start ~end_ ~steps ~options = - let out__ = CArray.make t 1 in stubs_linspace - (CArray.start out__) start end_ (Int64.of_int steps) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let linspace_out ~out ~start ~end_ ~steps = - let out__ = CArray.make t 1 in - stubs_linspace_out (CArray.start out__) out start end_ (Int64.of_int steps); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log self = - let out__ = CArray.make t 1 in - stubs_log (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log10 self = - let out__ = CArray.make t 1 in - stubs_log10 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log10_ self = - let out__ = CArray.make t 1 in - stubs_log10_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 
-;; - -let log10_out ~out self = - let out__ = CArray.make t 1 in - stubs_log10_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log1p self = - let out__ = CArray.make t 1 in - stubs_log1p (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log1p_ self = - let out__ = CArray.make t 1 in - stubs_log1p_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log1p_out ~out self = - let out__ = CArray.make t 1 in - stubs_log1p_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log2 self = - let out__ = CArray.make t 1 in - stubs_log2 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log2_ self = - let out__ = CArray.make t 1 in - stubs_log2_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log2_out ~out self = - let out__ = CArray.make t 1 in - stubs_log2_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log_ self = - let out__ = CArray.make t 1 in - stubs_log_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log_normal self ~mean ~std = - let out__ = CArray.make t 1 in - stubs_log_normal (CArray.start out__) self mean std; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log_normal_ self ~mean ~std = - let out__ = CArray.make t 1 in - stubs_log_normal_ (CArray.start out__) self mean std; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; + stubs_linspace_out out start end_ (Int64.of_int steps) |> with_tensor_gc +;; + +let log self = stubs_log self |> with_tensor_gc +let log10 self = stubs_log10 self |> with_tensor_gc +let log10_ self = stubs_log10_ self |> with_tensor_gc +let log10_out ~out self = stubs_log10_out out self |> with_tensor_gc +let log1p self = stubs_log1p self |> with_tensor_gc +let log1p_ self = stubs_log1p_ self |> with_tensor_gc +let log1p_out ~out self = stubs_log1p_out out self |> with_tensor_gc +let log2 self = stubs_log2 self |> with_tensor_gc +let log2_ self = stubs_log2_ self |> with_tensor_gc +let log2_out ~out self = stubs_log2_out out self |> with_tensor_gc +let log_ self = stubs_log_ self |> with_tensor_gc +let log_normal self ~mean ~std = stubs_log_normal self mean std |> with_tensor_gc +let log_normal_ self ~mean ~std = stubs_log_normal_ self mean std |> with_tensor_gc let log_normal_out ~out self ~mean ~std = - let out__ = CArray.make t 1 in - stubs_log_normal_out (CArray.start out__) out self mean std; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let log_out ~out self = - let out__ = CArray.make t 1 in - stubs_log_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_log_normal_out out self mean std |> with_tensor_gc ;; -let log_sigmoid self = - let out__ = CArray.make t 1 in - stubs_log_sigmoid (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let log_out ~out self = stubs_log_out out self |> with_tensor_gc +let log_sigmoid self = stubs_log_sigmoid self |> with_tensor_gc let log_sigmoid_backward ~grad_output self ~buffer = - let out__ = CArray.make t 1 in - stubs_log_sigmoid_backward 
(CArray.start out__) grad_output self buffer; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_log_sigmoid_backward grad_output self buffer |> with_tensor_gc ;; let log_sigmoid_backward_grad_input ~grad_input ~grad_output self ~buffer = - let out__ = CArray.make t 1 in - stubs_log_sigmoid_backward_grad_input - (CArray.start out__) - grad_input - grad_output - self - buffer; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_log_sigmoid_backward_grad_input grad_input grad_output self buffer + |> with_tensor_gc ;; -let log_sigmoid_out ~out self = - let out__ = CArray.make t 1 in - stubs_log_sigmoid_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let log_sigmoid_out ~out self = stubs_log_sigmoid_out out self |> with_tensor_gc let log_softmax self ~dim ~dtype = - let out__ = CArray.make t 1 in - stubs_log_softmax - (CArray.start out__) - self - (Int64.of_int dim) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_log_softmax self (Int64.of_int dim) (Kind.packed_to_int dtype) |> with_tensor_gc ;; let log_softmax_int_out ~out self ~dim ~dtype = - let out__ = CArray.make t 1 in - stubs_log_softmax_int_out - (CArray.start out__) - out - self - (Int64.of_int dim) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logaddexp self other = - let out__ = CArray.make t 1 in - stubs_logaddexp (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logaddexp2 self other = - let out__ = CArray.make t 1 in - stubs_logaddexp2 (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logaddexp2_out ~out self other = - let out__ = CArray.make t 1 in - stubs_logaddexp2_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_log_softmax_int_out out self (Int64.of_int dim) (Kind.packed_to_int dtype) + |> with_tensor_gc ;; -let logaddexp_out ~out self other = - let out__ = CArray.make t 1 in - stubs_logaddexp_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logcumsumexp self ~dim = - let out__ = CArray.make t 1 in - stubs_logcumsumexp (CArray.start out__) self (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let logaddexp self other = stubs_logaddexp self other |> with_tensor_gc +let logaddexp2 self other = stubs_logaddexp2 self other |> with_tensor_gc +let logaddexp2_out ~out self other = stubs_logaddexp2_out out self other |> with_tensor_gc +let logaddexp_out ~out self other = stubs_logaddexp_out out self other |> with_tensor_gc +let logcumsumexp self ~dim = stubs_logcumsumexp self (Int64.of_int dim) |> with_tensor_gc let logcumsumexp_out ~out self ~dim = - let out__ = CArray.make t 1 in - stubs_logcumsumexp_out (CArray.start out__) out self (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logdet self = - let out__ = CArray.make t 1 in - stubs_logdet (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logical_and self other = - let out__ = CArray.make t 1 in - stubs_logical_and (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + 
stubs_logcumsumexp_out out self (Int64.of_int dim) |> with_tensor_gc ;; -let logical_and_ self other = - let out__ = CArray.make t 1 in - stubs_logical_and_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let logdet self = stubs_logdet self |> with_tensor_gc +let logical_and self other = stubs_logical_and self other |> with_tensor_gc +let logical_and_ self other = stubs_logical_and_ self other |> with_tensor_gc let logical_and_out ~out self other = - let out__ = CArray.make t 1 in - stubs_logical_and_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_logical_and_out out self other |> with_tensor_gc ;; -let logical_not self = - let out__ = CArray.make t 1 in - stubs_logical_not (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logical_not_ self = - let out__ = CArray.make t 1 in - stubs_logical_not_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logical_not_out ~out self = - let out__ = CArray.make t 1 in - stubs_logical_not_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logical_or self other = - let out__ = CArray.make t 1 in - stubs_logical_or (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logical_or_ self other = - let out__ = CArray.make t 1 in - stubs_logical_or_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logical_or_out ~out self other = - let out__ = CArray.make t 1 in - stubs_logical_or_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logical_xor self other = - let out__ = CArray.make t 1 in - stubs_logical_xor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let logical_xor_ self other = - let out__ = CArray.make t 1 in - stubs_logical_xor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let logical_not self = stubs_logical_not self |> with_tensor_gc +let logical_not_ self = stubs_logical_not_ self |> with_tensor_gc +let logical_not_out ~out self = stubs_logical_not_out out self |> with_tensor_gc +let logical_or self other = stubs_logical_or self other |> with_tensor_gc +let logical_or_ self other = stubs_logical_or_ self other |> with_tensor_gc +let logical_or_out ~out self other = stubs_logical_or_out out self other |> with_tensor_gc +let logical_xor self other = stubs_logical_xor self other |> with_tensor_gc +let logical_xor_ self other = stubs_logical_xor_ self other |> with_tensor_gc let logical_xor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_logical_xor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_logical_xor_out out self other |> with_tensor_gc ;; let logit self ~eps = - let out__ = CArray.make t 1 in stubs_logit - (CArray.start out__) self (Option.value eps ~default:0.0) (match eps with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let logit_ self ~eps = - let out__ = CArray.make t 1 in stubs_logit_ - (CArray.start out__) self (Option.value eps ~default:0.0) 
(match eps with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let logit_backward ~grad_output self ~eps = - let out__ = CArray.make t 1 in stubs_logit_backward - (CArray.start out__) grad_output self (Option.value eps ~default:0.0) (match eps with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let logit_backward_grad_input ~grad_input ~grad_output self ~eps = - let out__ = CArray.make t 1 in stubs_logit_backward_grad_input - (CArray.start out__) grad_input grad_output self (Option.value eps ~default:0.0) (match eps with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let logit_out ~out self ~eps = - let out__ = CArray.make t 1 in stubs_logit_out - (CArray.start out__) out self (Option.value eps ~default:0.0) (match eps with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let logspace ~start ~end_ ~steps ~base ~options = - let out__ = CArray.make t 1 in stubs_logspace - (CArray.start out__) start end_ (Int64.of_int steps) base (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let logspace_out ~out ~start ~end_ ~steps ~base = - let out__ = CArray.make t 1 in - stubs_logspace_out (CArray.start out__) out start end_ (Int64.of_int steps) base; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_logspace_out out start end_ (Int64.of_int steps) base |> with_tensor_gc ;; let logsumexp self ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_logsumexp - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let logsumexp_out ~out self ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_logsumexp_out - (CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let lstm @@ -19987,13 +12804,13 @@ let lstm ~bidirectional ~batch_first = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_lstm (CArray.start out__) input - (CArray.of_list t hx |> CArray.start) + (CArray.of_list gc_tensor hx |> CArray.start) (List.length hx) - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) @@ -20001,34 +12818,29 @@ let lstm (if train then 1 else 0) (if bidirectional then 1 else 0) (if batch_first then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let lstm_cell input ~hx ~w_ih ~w_hh ~b_ih ~b_hh = - let out__ = 
CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_lstm_cell (CArray.start out__) input - (CArray.of_list t hx |> CArray.start) + (CArray.of_list gc_tensor hx |> CArray.start) (List.length hx) w_ih w_hh (match b_ih with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match b_hh with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -20043,26 +12855,23 @@ let lstm_data ~train ~bidirectional = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_lstm_data (CArray.start out__) data batch_sizes - (CArray.of_list t hx |> CArray.start) + (CArray.of_list gc_tensor hx |> CArray.start) (List.length hx) - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) dropout (if train then 1 else 0) (if bidirectional then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -20088,26 +12897,26 @@ let lstm_mps_backward = stubs_lstm_mps_backward out0 - (CArray.of_list t out1 |> CArray.start) + (CArray.of_list gc_tensor out1 |> CArray.start) (List.length out1) - (CArray.of_list t out2 |> CArray.start) + (CArray.of_list gc_tensor out2 |> CArray.start) (List.length out2) (match grad_y with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_hy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_cy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) z_state cell_state_fwd input layersoutputs - (CArray.of_list t hx |> CArray.start) + (CArray.of_list gc_tensor hx |> CArray.start) (List.length hx) - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) @@ -20117,89 +12926,37 @@ let lstm_mps_backward (if batch_first then 1 else 0) ;; -let lt self other = - let out__ = CArray.make t 1 in - stubs_lt (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let lt_ self other = - let out__ = CArray.make t 1 in - stubs_lt_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let lt_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_lt_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let lt_tensor self other = - let out__ = CArray.make t 1 in - stubs_lt_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let lt_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_lt_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let lt_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_lt_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in 
- Gc.finalise C.Tensor.free t0; - t0 -;; +let lt self other = stubs_lt self other |> with_tensor_gc +let lt_ self other = stubs_lt_ self other |> with_tensor_gc +let lt_scalar_out ~out self other = stubs_lt_scalar_out out self other |> with_tensor_gc +let lt_tensor self other = stubs_lt_tensor self other |> with_tensor_gc +let lt_tensor_ self other = stubs_lt_tensor_ self other |> with_tensor_gc +let lt_tensor_out ~out self other = stubs_lt_tensor_out out self other |> with_tensor_gc let lu_solve self ~lu_data ~lu_pivots = - let out__ = CArray.make t 1 in - stubs_lu_solve (CArray.start out__) self lu_data lu_pivots; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_lu_solve self lu_data lu_pivots |> with_tensor_gc ;; let lu_solve_out ~out self ~lu_data ~lu_pivots = - let out__ = CArray.make t 1 in - stubs_lu_solve_out (CArray.start out__) out self lu_data lu_pivots; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_lu_solve_out out self lu_data lu_pivots |> with_tensor_gc ;; let lu_unpack ~lu_data ~lu_pivots ~unpack_data ~unpack_pivots = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_lu_unpack (CArray.start out__) lu_data lu_pivots (if unpack_data then 1 else 0) (if unpack_pivots then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let lu_unpack_out ~p ~l ~u ~lu_data ~lu_pivots ~unpack_data ~unpack_pivots = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_lu_unpack_out (CArray.start out__) p @@ -20209,201 +12966,86 @@ let lu_unpack_out ~p ~l ~u ~lu_data ~lu_pivots ~unpack_data ~unpack_pivots = lu_pivots (if unpack_data then 1 else 0) (if unpack_pivots then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let margin_ranking_loss ~input1 ~input2 ~target ~margin ~reduction = - let out__ = CArray.make t 1 in stubs_margin_ranking_loss - (CArray.start out__) input1 input2 target margin - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let masked_fill self ~mask ~value = - let out__ = CArray.make t 1 in - stubs_masked_fill (CArray.start out__) self mask value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; -let masked_fill_ self ~mask ~value = - let out__ = CArray.make t 1 in - stubs_masked_fill_ (CArray.start out__) self mask value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let masked_fill self ~mask ~value = stubs_masked_fill self mask value |> with_tensor_gc +let masked_fill_ self ~mask ~value = stubs_masked_fill_ self mask value |> with_tensor_gc let masked_fill_scalar_out ~out self ~mask ~value = - let out__ = CArray.make t 1 in - stubs_masked_fill_scalar_out (CArray.start out__) out self mask value; - let t0 = CArray.get out__ 0 in - 
Gc.finalise C.Tensor.free t0; - t0 + stubs_masked_fill_scalar_out out self mask value |> with_tensor_gc ;; let masked_fill_tensor self ~mask ~value = - let out__ = CArray.make t 1 in - stubs_masked_fill_tensor (CArray.start out__) self mask value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_masked_fill_tensor self mask value |> with_tensor_gc ;; let masked_fill_tensor_ self ~mask ~value = - let out__ = CArray.make t 1 in - stubs_masked_fill_tensor_ (CArray.start out__) self mask value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_masked_fill_tensor_ self mask value |> with_tensor_gc ;; let masked_fill_tensor_out ~out self ~mask ~value = - let out__ = CArray.make t 1 in - stubs_masked_fill_tensor_out (CArray.start out__) out self mask value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_masked_fill_tensor_out out self mask value |> with_tensor_gc ;; let masked_scatter self ~mask ~source = - let out__ = CArray.make t 1 in - stubs_masked_scatter (CArray.start out__) self mask source; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_masked_scatter self mask source |> with_tensor_gc ;; let masked_scatter_ self ~mask ~source = - let out__ = CArray.make t 1 in - stubs_masked_scatter_ (CArray.start out__) self mask source; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_masked_scatter_ self mask source |> with_tensor_gc ;; let masked_scatter_out ~out self ~mask ~source = - let out__ = CArray.make t 1 in - stubs_masked_scatter_out (CArray.start out__) out self mask source; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_masked_scatter_out out self mask source |> with_tensor_gc ;; -let masked_select self ~mask = - let out__ = CArray.make t 1 in - stubs_masked_select (CArray.start out__) self mask; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let masked_select self ~mask = stubs_masked_select self mask |> with_tensor_gc let masked_select_backward ~grad input ~mask = - let out__ = CArray.make t 1 in - stubs_masked_select_backward (CArray.start out__) grad input mask; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_masked_select_backward grad input mask |> with_tensor_gc ;; let masked_select_out ~out self ~mask = - let out__ = CArray.make t 1 in - stubs_masked_select_out (CArray.start out__) out self mask; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let matmul self other = - let out__ = CArray.make t 1 in - stubs_matmul (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let matmul_out ~out self other = - let out__ = CArray.make t 1 in - stubs_matmul_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let matrix_exp self = - let out__ = CArray.make t 1 in - stubs_matrix_exp (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let matrix_exp_backward self ~grad = - let out__ = CArray.make t 1 in - stubs_matrix_exp_backward (CArray.start out__) self grad; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let matrix_h self = - let out__ = CArray.make t 1 in - stubs_matrix_h (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_masked_select_out out self mask |> with_tensor_gc ;; 
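(* The hunks above and below all apply one mechanical rewrite across the
   generated wrapper. A hedged sketch of the pattern, where stubs_foo and
   stubs_foo2 are placeholder names standing in for any generated stub:
   single-output wrappers used to allocate an out-array, read the result
   back, and attach a finalizer by hand,

     let foo self =
       let out__ = CArray.make t 1 in
       stubs_foo (CArray.start out__) self;
       let t0 = CArray.get out__ 0 in
       Gc.finalise C.Tensor.free t0;
       t0
     ;;

   whereas the new code lets the stub return a raw_tensor directly and pipes
   it through with_tensor_gc:

     let foo self = stubs_foo self |> with_tensor_gc

   Multi-output wrappers keep the out-array, but allocate it at type
   raw_tensor and convert each element on the way out:

     let foo2 self =
       let out__ = CArray.make raw_tensor 2 in
       stubs_foo2 (CArray.start out__) self;
       let t0 = CArray.get out__ 0 |> with_tensor_gc in
       let t1 = CArray.get out__ 1 |> with_tensor_gc in
       t0, t1
     ;;

   Optional tensor arguments likewise switch from null to none_gc_tensor, and
   tensor-list arguments from CArray.of_list t to CArray.of_list gc_tensor,
   since inputs passed to the stubs are now GC tensors rather than bare
   pointers. *)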
-let matrix_power self ~n = - let out__ = CArray.make t 1 in - stubs_matrix_power (CArray.start out__) self (Int64.of_int n); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let matmul self other = stubs_matmul self other |> with_tensor_gc +let matmul_out ~out self other = stubs_matmul_out out self other |> with_tensor_gc +let matrix_exp self = stubs_matrix_exp self |> with_tensor_gc +let matrix_exp_backward self ~grad = stubs_matrix_exp_backward self grad |> with_tensor_gc +let matrix_h self = stubs_matrix_h self |> with_tensor_gc +let matrix_power self ~n = stubs_matrix_power self (Int64.of_int n) |> with_tensor_gc let matrix_power_out ~out self ~n = - let out__ = CArray.make t 1 in - stubs_matrix_power_out (CArray.start out__) out self (Int64.of_int n); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_matrix_power_out out self (Int64.of_int n) |> with_tensor_gc ;; -let max self = - let out__ = CArray.make t 1 in - stubs_max (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let max self = stubs_max self |> with_tensor_gc let max_dim self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_max_dim (CArray.start out__) self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let max_dim_max ~max ~max_values self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_max_dim_max (CArray.start out__) max @@ -20411,33 +13053,16 @@ let max_dim_max ~max ~max_values self ~dim ~keepdim = self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let max_other self other = - let out__ = CArray.make t 1 in - stubs_max_other (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let max_out ~out self other = - let out__ = CArray.make t 1 in - stubs_max_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let max_other self other = stubs_max_other self other |> with_tensor_gc +let max_out ~out self other = stubs_max_out out self other |> with_tensor_gc let max_pool1d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_max_pool1d - (CArray.start out__) self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) @@ -20447,14 +13072,12 @@ let max_pool1d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let max_pool1d_with_indices self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_max_pool1d_with_indices (CArray.start out__) self @@ -20467,17 +13090,13 @@ let 
max_pool1d_with_indices self ~kernel_size ~stride ~padding ~dilation ~ceil_m (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let max_pool2d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_max_pool2d - (CArray.start out__) self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) @@ -20487,10 +13106,8 @@ let max_pool2d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let max_pool2d_backward @@ -20502,9 +13119,7 @@ let max_pool2d_backward ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_max_pool2d_backward - (CArray.start out__) grad_output self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) @@ -20515,10 +13130,8 @@ let max_pool2d_backward (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let max_pool2d_backward_out @@ -20531,9 +13144,7 @@ let max_pool2d_backward_out ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_max_pool2d_backward_out - (CArray.start out__) out grad_output self @@ -20545,14 +13156,12 @@ let max_pool2d_backward_out (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let max_pool2d_with_indices self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_max_pool2d_with_indices (CArray.start out__) self @@ -20565,10 +13174,8 @@ let max_pool2d_with_indices self ~kernel_size ~stride ~padding ~dilation ~ceil_m (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -20582,9 +13189,7 @@ let max_pool2d_with_indices_backward ~ceil_mode ~indices = - let out__ = CArray.make t 1 in stubs_max_pool2d_with_indices_backward - (CArray.start out__) grad_output self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) @@ -20596,10 +13201,8 @@ let max_pool2d_with_indices_backward (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) (if ceil_mode then 1 else 0) - indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + indices + |> with_tensor_gc ;; let max_pool2d_with_indices_backward_grad_input @@ -20613,9 +13216,7 @@ 
let max_pool2d_with_indices_backward_grad_input ~ceil_mode ~indices = - let out__ = CArray.make t 1 in stubs_max_pool2d_with_indices_backward_grad_input - (CArray.start out__) grad_input grad_output self @@ -20628,10 +13229,8 @@ let max_pool2d_with_indices_backward_grad_input (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) (if ceil_mode then 1 else 0) - indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + indices + |> with_tensor_gc ;; let max_pool2d_with_indices_out @@ -20644,7 +13243,7 @@ let max_pool2d_with_indices_out ~dilation ~ceil_mode = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_max_pool2d_with_indices_out (CArray.start out__) out @@ -20659,17 +13258,13 @@ let max_pool2d_with_indices_out (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let max_pool3d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_max_pool3d - (CArray.start out__) self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) @@ -20679,14 +13274,12 @@ let max_pool3d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let max_pool3d_with_indices self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_max_pool3d_with_indices (CArray.start out__) self @@ -20699,10 +13292,8 @@ let max_pool3d_with_indices self ~kernel_size ~stride ~padding ~dilation ~ceil_m (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -20716,9 +13307,7 @@ let max_pool3d_with_indices_backward ~ceil_mode ~indices = - let out__ = CArray.make t 1 in stubs_max_pool3d_with_indices_backward - (CArray.start out__) grad_output self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) @@ -20730,10 +13319,8 @@ let max_pool3d_with_indices_backward (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) (if ceil_mode then 1 else 0) - indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + indices + |> with_tensor_gc ;; let max_pool3d_with_indices_backward_grad_input @@ -20747,9 +13334,7 @@ let max_pool3d_with_indices_backward_grad_input ~ceil_mode ~indices = - let out__ = CArray.make t 1 in stubs_max_pool3d_with_indices_backward_grad_input - (CArray.start out__) grad_input grad_output self @@ -20762,10 +13347,8 @@ let max_pool3d_with_indices_backward_grad_input (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) (if ceil_mode 
then 1 else 0) - indices; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + indices + |> with_tensor_gc ;; let max_pool3d_with_indices_out @@ -20778,7 +13361,7 @@ let max_pool3d_with_indices_out ~dilation ~ceil_mode = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_max_pool3d_with_indices_out (CArray.start out__) out @@ -20793,52 +13376,34 @@ let max_pool3d_with_indices_out (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let max_unary_out ~out self = - let out__ = CArray.make t 1 in - stubs_max_unary_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let max_unary_out ~out self = stubs_max_unary_out out self |> with_tensor_gc let max_unpool2d self ~indices ~output_size = - let out__ = CArray.make t 1 in stubs_max_unpool2d - (CArray.start out__) self indices (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let max_unpool2d_out ~out self ~indices ~output_size = - let out__ = CArray.make t 1 in stubs_max_unpool2d_out - (CArray.start out__) out self indices (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let max_unpool3d self ~indices ~output_size ~stride ~padding = - let out__ = CArray.make t 1 in stubs_max_unpool3d - (CArray.start out__) self indices (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -20846,16 +13411,12 @@ let max_unpool3d self ~indices ~output_size ~stride ~padding = (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let max_unpool3d_out ~out self ~indices ~output_size ~stride ~padding = - let out__ = CArray.make t 1 in stubs_max_unpool3d_out - (CArray.start out__) out self indices @@ -20864,40 +13425,16 @@ let max_unpool3d_out ~out self ~indices ~output_size ~stride ~padding = (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let maximum self other = - let out__ = CArray.make t 1 in - stubs_maximum (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let maximum_out ~out self other = - let out__ = CArray.make t 1 in - stubs_maximum_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; -let mean self ~dtype = - let out__ = CArray.make t 1 in - stubs_mean (CArray.start out__) self (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 
in - Gc.finalise C.Tensor.free t0; - t0 -;; +let maximum self other = stubs_maximum self other |> with_tensor_gc +let maximum_out ~out self other = stubs_maximum_out out self other |> with_tensor_gc +let mean self ~dtype = stubs_mean self (Kind.packed_to_int dtype) |> with_tensor_gc let mean_dim self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_mean_dim - (CArray.start out__) self (match dim with | None -> from_voidp int64_t null @@ -20906,16 +13443,12 @@ let mean_dim self ~dim ~keepdim ~dtype = | None -> -1 | Some v -> List.length v) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let mean_out ~out self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_mean_out - (CArray.start out__) out self (match dim with @@ -20925,32 +13458,22 @@ let mean_out ~out self ~dim ~keepdim ~dtype = | None -> -1 | Some v -> List.length v) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; -let median self = - let out__ = CArray.make t 1 in - stubs_median (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let median self = stubs_median self |> with_tensor_gc let median_dim self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_median_dim (CArray.start out__) self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let median_dim_values ~values ~indices self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_median_dim_values (CArray.start out__) values @@ -20958,62 +13481,39 @@ let median_dim_values ~values ~indices self ~dim ~keepdim = self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let median_out ~out self = - let out__ = CArray.make t 1 in - stubs_median_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let median_out ~out self = stubs_median_out out self |> with_tensor_gc let meshgrid tensors = - stubs_meshgrid (CArray.of_list t tensors |> CArray.start) (List.length tensors) + stubs_meshgrid (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) |> to_tensor_list ;; let meshgrid_indexing tensors ~indexing = stubs_meshgrid_indexing - (CArray.of_list t tensors |> CArray.start) + (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) indexing |> to_tensor_list ;; -let mh self = - let out__ = CArray.make t 1 in - stubs_mh (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let min self = - let out__ = CArray.make t 1 in - stubs_min (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let mh self = stubs_mh self |> with_tensor_gc +let min self = stubs_min self |> with_tensor_gc let min_dim self ~dim ~keepdim = - let 
out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_min_dim (CArray.start out__) self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let min_dim_min ~min ~min_indices self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_min_dim_min (CArray.start out__) min @@ -21021,52 +13521,16 @@ let min_dim_min ~min ~min_indices self ~dim ~keepdim = self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let min_other self other = - let out__ = CArray.make t 1 in - stubs_min_other (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let min_out ~out self other = - let out__ = CArray.make t 1 in - stubs_min_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let min_unary_out ~out self = - let out__ = CArray.make t 1 in - stubs_min_unary_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let minimum self other = - let out__ = CArray.make t 1 in - stubs_minimum (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let minimum_out ~out self other = - let out__ = CArray.make t 1 in - stubs_minimum_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let min_other self other = stubs_min_other self other |> with_tensor_gc +let min_out ~out self other = stubs_min_out out self other |> with_tensor_gc +let min_unary_out ~out self = stubs_min_unary_out out self |> with_tensor_gc +let minimum self other = stubs_minimum self other |> with_tensor_gc +let minimum_out ~out self other = stubs_minimum_out out self other |> with_tensor_gc let miopen_batch_norm input @@ -21078,29 +13542,26 @@ let miopen_batch_norm ~exponential_average_factor ~epsilon = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_miopen_batch_norm (CArray.start out__) input weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match running_mean with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match running_var with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if training then 1 else 0) exponential_average_factor epsilon; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -21114,7 +13575,7 @@ let miopen_batch_norm_backward ~save_var ~epsilon = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_miopen_batch_norm_backward (CArray.start out__) input @@ -21122,23 +13583,20 @@ let miopen_batch_norm_backward weight (match running_mean with | Some v -> v - | None -> null) + | None -> 
none_gc_tensor) (match running_var with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match save_mean with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match save_var with | Some v -> v - | None -> null) + | None -> none_gc_tensor) epsilon; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -21155,7 +13613,7 @@ let miopen_batch_norm_backward_out ~save_var ~epsilon = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_miopen_batch_norm_backward_out (CArray.start out__) out0 @@ -21166,23 +13624,20 @@ let miopen_batch_norm_backward_out weight (match running_mean with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match running_var with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match save_mean with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match save_var with | Some v -> v - | None -> null) + | None -> none_gc_tensor) epsilon; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -21199,7 +13654,7 @@ let miopen_batch_norm_out ~exponential_average_factor ~epsilon = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_miopen_batch_norm_out (CArray.start out__) out0 @@ -21209,22 +13664,19 @@ let miopen_batch_norm_out weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match running_mean with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match running_var with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if training then 1 else 0) exponential_average_factor epsilon; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -21239,14 +13691,12 @@ let miopen_convolution ~benchmark ~deterministic = - let out__ = CArray.make t 1 in stubs_miopen_convolution - (CArray.start out__) self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) @@ -21255,10 +13705,8 @@ let miopen_convolution (List.length dilation) (Int64.of_int groups) (if benchmark then 1 else 0) - (if deterministic then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if deterministic then 1 else 0) + |> with_tensor_gc ;; let miopen_convolution_add_relu @@ -21272,26 +13720,22 @@ let miopen_convolution_add_relu ~dilation ~groups = - let out__ = CArray.make t 1 in stubs_miopen_convolution_add_relu - (CArray.start out__) self weight z alpha (match bias with | Some v -> v - | None -> null) + | None -> 
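(* Editorial sketch, not part of the generated patch ([stubs_baz] is a
   stand-in for any stub taking an optional tensor): the absent-tensor case
   now passes the dedicated [none_gc_tensor] value rather than a bare [null]
   pointer, matching the gc_tensor calling convention for inputs. *)
let baz self ~bias =
  stubs_baz
    self
    (match bias with
     | Some v -> v
     | None -> none_gc_tensor)
  |> with_tensor_gc
;;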
none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let miopen_convolution_out @@ -21306,15 +13750,13 @@ let miopen_convolution_out ~benchmark ~deterministic = - let out__ = CArray.make t 1 in stubs_miopen_convolution_out - (CArray.start out__) out self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) @@ -21323,31 +13765,25 @@ let miopen_convolution_out (List.length dilation) (Int64.of_int groups) (if benchmark then 1 else 0) - (if deterministic then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if deterministic then 1 else 0) + |> with_tensor_gc ;; let miopen_convolution_relu self ~weight ~bias ~stride ~padding ~dilation ~groups = - let out__ = CArray.make t 1 in stubs_miopen_convolution_relu - (CArray.start out__) self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let miopen_convolution_transpose @@ -21362,14 +13798,12 @@ let miopen_convolution_transpose ~benchmark ~deterministic = - let out__ = CArray.make t 1 in stubs_miopen_convolution_transpose - (CArray.start out__) self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start) @@ -21380,10 +13814,8 @@ let miopen_convolution_transpose (List.length dilation) (Int64.of_int groups) (if benchmark then 1 else 0) - (if deterministic then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if deterministic then 1 else 0) + |> with_tensor_gc ;; let miopen_convolution_transpose_out @@ -21399,15 +13831,13 @@ let miopen_convolution_transpose_out ~benchmark ~deterministic = - let out__ = CArray.make t 1 in stubs_miopen_convolution_transpose_out - (CArray.start out__) out self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start) @@ -21418,10 +13848,8 @@ let miopen_convolution_transpose_out (List.length dilation) (Int64.of_int groups) (if benchmark then 1 else 0) - (if deterministic then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if deterministic then 1 else 0) + |> with_tensor_gc ;; let miopen_depthwise_convolution @@ -21435,14 +13863,12 @@ let miopen_depthwise_convolution ~benchmark 
~deterministic = - let out__ = CArray.make t 1 in stubs_miopen_depthwise_convolution - (CArray.start out__) self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) @@ -21451,10 +13877,8 @@ let miopen_depthwise_convolution (List.length dilation) (Int64.of_int groups) (if benchmark then 1 else 0) - (if deterministic then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if deterministic then 1 else 0) + |> with_tensor_gc ;; let miopen_depthwise_convolution_out @@ -21469,15 +13893,13 @@ let miopen_depthwise_convolution_out ~benchmark ~deterministic = - let out__ = CArray.make t 1 in stubs_miopen_depthwise_convolution_out - (CArray.start out__) out self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) @@ -21486,10 +13908,8 @@ let miopen_depthwise_convolution_out (List.length dilation) (Int64.of_int groups) (if benchmark then 1 else 0) - (if deterministic then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if deterministic then 1 else 0) + |> with_tensor_gc ;; let miopen_rnn @@ -21508,17 +13928,17 @@ let miopen_rnn ~batch_sizes ~dropout_state = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs_miopen_rnn (CArray.start out__) input - (CArray.of_list t weight |> CArray.start) + (CArray.of_list gc_tensor weight |> CArray.start) (List.length weight) (Int64.of_int weight_stride0) hx (match cx with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int mode) (Int64.of_int hidden_size) (Int64.of_int num_layers) @@ -21530,17 +13950,12 @@ let miopen_rnn (List.length batch_sizes) (match dropout_state with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; @@ -21565,7 +13980,7 @@ let miopen_rnn_out ~batch_sizes ~dropout_state = - let out__ = CArray.make t 5 in + let out__ = CArray.make raw_tensor 5 in stubs_miopen_rnn_out (CArray.start out__) out0 @@ -21574,13 +13989,13 @@ let miopen_rnn_out out3 out4 input - (CArray.of_list t weight |> CArray.start) + (CArray.of_list gc_tensor weight |> CArray.start) (List.length weight) (Int64.of_int weight_stride0) hx (match cx with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int mode) (Int64.of_int hidden_size) (Int64.of_int num_layers) @@ -21592,189 +14007,123 @@ let miopen_rnn_out (List.length batch_sizes) (match dropout_state with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 
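(* Editorial sketch, not part of the generated patch ([stubs_stack_like] is
   hypothetical): tensor-list arguments are marshalled with
   [CArray.of_list gc_tensor] in place of the old [t] element type, passing
   the array start plus an explicit length, as the real [meshgrid] and
   [miopen_rnn] hunks above do. *)
let stack_like tensors =
  stubs_stack_like
    (CArray.of_list gc_tensor tensors |> CArray.start)
    (List.length tensors)
  |> with_tensor_gc
;;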
in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; + | None -> none_gc_tensor); + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in t0, t1, t2, t3, t4 ;; -let mish self = - let out__ = CArray.make t 1 in - stubs_mish (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mish_ self = - let out__ = CArray.make t 1 in - stubs_mish_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let mish self = stubs_mish self |> with_tensor_gc +let mish_ self = stubs_mish_ self |> with_tensor_gc let mish_backward ~grad_output self = - let out__ = CArray.make t 1 in - stubs_mish_backward (CArray.start out__) grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_mish_backward grad_output self |> with_tensor_gc ;; -let mish_out ~out self = - let out__ = CArray.make t 1 in - stubs_mish_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let mish_out ~out self = stubs_mish_out out self |> with_tensor_gc let mkldnn_adaptive_avg_pool2d self ~output_size = - let out__ = CArray.make t 1 in stubs_mkldnn_adaptive_avg_pool2d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let mkldnn_adaptive_avg_pool2d_backward ~grad_output self = - let out__ = CArray.make t 1 in - stubs_mkldnn_adaptive_avg_pool2d_backward (CArray.start out__) grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_mkldnn_adaptive_avg_pool2d_backward grad_output self |> with_tensor_gc ;; let mkldnn_adaptive_avg_pool2d_backward_out ~out ~grad_output self = - let out__ = CArray.make t 1 in - stubs_mkldnn_adaptive_avg_pool2d_backward_out (CArray.start out__) out grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_mkldnn_adaptive_avg_pool2d_backward_out out grad_output self |> with_tensor_gc ;; let mkldnn_adaptive_avg_pool2d_out ~out self ~output_size = - let out__ = CArray.make t 1 in stubs_mkldnn_adaptive_avg_pool2d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) - (List.length output_size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length output_size) + |> with_tensor_gc ;; let mkldnn_convolution self ~weight ~bias ~padding ~stride ~dilation ~groups = - let out__ = CArray.make t 1 in stubs_mkldnn_convolution - (CArray.start out__) self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let 
mkldnn_convolution_out ~out self ~weight ~bias ~padding ~stride ~dilation ~groups = - let out__ = CArray.make t 1 in stubs_mkldnn_convolution_out - (CArray.start out__) out self weight (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let mkldnn_linear self ~weight ~bias = - let out__ = CArray.make t 1 in stubs_mkldnn_linear - (CArray.start out__) self weight (match bias with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let mkldnn_linear_backward_input ~input_size ~grad_output ~weight = - let out__ = CArray.make t 1 in stubs_mkldnn_linear_backward_input - (CArray.start out__) (List.map Int64.of_int input_size |> CArray.of_list int64_t |> CArray.start) (List.length input_size) grad_output - weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + weight + |> with_tensor_gc ;; let mkldnn_linear_backward_input_out ~out ~input_size ~grad_output ~weight = - let out__ = CArray.make t 1 in stubs_mkldnn_linear_backward_input_out - (CArray.start out__) out (List.map Int64.of_int input_size |> CArray.of_list int64_t |> CArray.start) (List.length input_size) grad_output - weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + weight + |> with_tensor_gc ;; let mkldnn_linear_backward_weights ~grad_output input ~weight ~bias_defined = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_mkldnn_linear_backward_weights (CArray.start out__) grad_output input weight (if bias_defined then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -21786,7 +14135,7 @@ let mkldnn_linear_backward_weights_out ~weight ~bias_defined = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_mkldnn_linear_backward_weights_out (CArray.start out__) out0 @@ -21795,32 +14144,24 @@ let mkldnn_linear_backward_weights_out input weight (if bias_defined then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let mkldnn_linear_out ~out self ~weight ~bias = - let out__ = CArray.make t 1 in stubs_mkldnn_linear_out - (CArray.start out__) out self weight (match bias with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let mkldnn_max_pool2d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_mkldnn_max_pool2d - (CArray.start out__) self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) @@ -21830,10 +14171,8 @@ let mkldnn_max_pool2d self ~kernel_size ~stride ~padding ~dilation 
~ceil_mode = (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let mkldnn_max_pool2d_backward @@ -21846,9 +14185,7 @@ let mkldnn_max_pool2d_backward ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_mkldnn_max_pool2d_backward - (CArray.start out__) grad_output output input @@ -21860,10 +14197,8 @@ let mkldnn_max_pool2d_backward (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let mkldnn_max_pool2d_backward_out @@ -21877,9 +14212,7 @@ let mkldnn_max_pool2d_backward_out ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_mkldnn_max_pool2d_backward_out - (CArray.start out__) out grad_output output @@ -21892,16 +14225,12 @@ let mkldnn_max_pool2d_backward_out (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let mkldnn_max_pool2d_out ~out self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_mkldnn_max_pool2d_out - (CArray.start out__) out self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) @@ -21912,16 +14241,12 @@ let mkldnn_max_pool2d_out ~out self ~kernel_size ~stride ~padding ~dilation ~cei (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let mkldnn_max_pool3d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_mkldnn_max_pool3d - (CArray.start out__) self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) @@ -21931,10 +14256,8 @@ let mkldnn_max_pool3d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let mkldnn_max_pool3d_backward @@ -21947,9 +14270,7 @@ let mkldnn_max_pool3d_backward ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_mkldnn_max_pool3d_backward - (CArray.start out__) grad_output output input @@ -21961,10 +14282,8 @@ let mkldnn_max_pool3d_backward (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let mkldnn_max_pool3d_backward_out @@ -21978,9 +14297,7 @@ let mkldnn_max_pool3d_backward_out ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_mkldnn_max_pool3d_backward_out - (CArray.start out__) out grad_output output @@ -21993,16 +14310,12 @@ let 
mkldnn_max_pool3d_backward_out (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let mkldnn_max_pool3d_out ~out self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_mkldnn_max_pool3d_out - (CArray.start out__) out self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) @@ -22013,16 +14326,12 @@ let mkldnn_max_pool3d_out ~out self ~kernel_size ~stride ~padding ~dilation ~cei (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let mkldnn_reorder_conv2d_weight self ~padding ~stride ~dilation ~groups ~input_size = - let out__ = CArray.make t 1 in stubs_mkldnn_reorder_conv2d_weight - (CArray.start out__) self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) @@ -22036,10 +14345,8 @@ let mkldnn_reorder_conv2d_weight self ~padding ~stride ~dilation ~groups ~input_ | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match input_size with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let mkldnn_reorder_conv2d_weight_out @@ -22051,9 +14358,7 @@ let mkldnn_reorder_conv2d_weight_out ~groups ~input_size = - let out__ = CArray.make t 1 in stubs_mkldnn_reorder_conv2d_weight_out - (CArray.start out__) out self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -22068,16 +14373,12 @@ let mkldnn_reorder_conv2d_weight_out | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match input_size with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let mkldnn_reorder_conv3d_weight self ~padding ~stride ~dilation ~groups = - let out__ = CArray.make t 1 in stubs_mkldnn_reorder_conv3d_weight - (CArray.start out__) self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) @@ -22085,16 +14386,12 @@ let mkldnn_reorder_conv3d_weight self ~padding ~stride ~dilation ~groups = (List.length stride) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let mkldnn_reorder_conv3d_weight_out ~out self ~padding ~stride ~dilation ~groups = - let out__ = CArray.make t 1 in stubs_mkldnn_reorder_conv3d_weight_out - (CArray.start out__) out self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -22103,10 +14400,8 @@ let mkldnn_reorder_conv3d_weight_out ~out self ~padding ~stride ~dilation ~group (List.length stride) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int groups) + |> with_tensor_gc ;; let mkldnn_rnn_layer @@ -22127,7 +14422,7 @@ let mkldnn_rnn_layer 
~batch_first ~train = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs_mkldnn_rnn_layer (CArray.start out__) input @@ -22147,14 +14442,10 @@ let mkldnn_rnn_layer (if bidirectional then 1 else 0) (if batch_first then 1 else 0) (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; @@ -22183,7 +14474,7 @@ let mkldnn_rnn_layer_backward ~batch_first ~workspace = - let out__ = CArray.make t 7 in + let out__ = CArray.make raw_tensor 7 in stubs_mkldnn_rnn_layer_backward (CArray.start out__) input @@ -22198,13 +14489,13 @@ let mkldnn_rnn_layer_backward cy_ (match grad_output with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_hy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_cy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if reverse then 1 else 0) (Int64.of_int mode) (Int64.of_int hidden_size) @@ -22216,20 +14507,13 @@ let mkldnn_rnn_layer_backward (List.length batch_sizes) (if batch_first then 1 else 0) workspace; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; - let t5 = CArray.get out__ 5 in - Gc.finalise C.Tensor.free t5; - let t6 = CArray.get out__ 6 in - Gc.finalise C.Tensor.free t6; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in + let t5 = CArray.get out__ 5 |> with_tensor_gc in + let t6 = CArray.get out__ 6 |> with_tensor_gc in t0, t1, t2, t3, t4, t5, t6 ;; @@ -22265,7 +14549,7 @@ let mkldnn_rnn_layer_backward_out ~batch_first ~workspace = - let out__ = CArray.make t 7 in + let out__ = CArray.make raw_tensor 7 in stubs_mkldnn_rnn_layer_backward_out (CArray.start out__) out0 @@ -22287,13 +14571,13 @@ let mkldnn_rnn_layer_backward_out cy_ (match grad_output with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_hy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match grad_cy with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if reverse then 1 else 0) (Int64.of_int mode) (Int64.of_int hidden_size) @@ -22305,20 +14589,13 @@ let mkldnn_rnn_layer_backward_out (List.length batch_sizes) (if batch_first then 1 else 0) workspace; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; - let t4 = CArray.get out__ 4 in - Gc.finalise C.Tensor.free t4; - let t5 = CArray.get out__ 5 in - Gc.finalise C.Tensor.free t5; - let t6 = CArray.get out__ 6 in - Gc.finalise C.Tensor.free t6; + let t0 = CArray.get out__ 0 |> 
with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in + let t4 = CArray.get out__ 4 |> with_tensor_gc in + let t5 = CArray.get out__ 5 |> with_tensor_gc in + let t6 = CArray.get out__ 6 |> with_tensor_gc in t0, t1, t2, t3, t4, t5, t6 ;; @@ -22344,7 +14621,7 @@ let mkldnn_rnn_layer_out ~batch_first ~train = - let out__ = CArray.make t 4 in + let out__ = CArray.make raw_tensor 4 in stubs_mkldnn_rnn_layer_out (CArray.start out__) out0 @@ -22368,45 +14645,26 @@ let mkldnn_rnn_layer_out (if bidirectional then 1 else 0) (if batch_first then 1 else 0) (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; - let t3 = CArray.get out__ 3 in - Gc.finalise C.Tensor.free t3; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in + let t3 = CArray.get out__ 3 |> with_tensor_gc in t0, t1, t2, t3 ;; -let mm self ~mat2 = - let out__ = CArray.make t 1 in - stubs_mm (CArray.start out__) self mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mm_out ~out self ~mat2 = - let out__ = CArray.make t 1 in - stubs_mm_out (CArray.start out__) out self mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let mm self ~mat2 = stubs_mm self mat2 |> with_tensor_gc +let mm_out ~out self ~mat2 = stubs_mm_out out self mat2 |> with_tensor_gc let mode self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_mode (CArray.start out__) self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let mode_values ~values ~indices self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_mode_values (CArray.start out__) values @@ -22414,193 +14672,82 @@ let mode_values ~values ~indices self ~dim ~keepdim = self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let moveaxis self ~source ~destination = - let out__ = CArray.make t 1 in stubs_moveaxis - (CArray.start out__) self (List.map Int64.of_int source |> CArray.of_list int64_t |> CArray.start) (List.length source) (List.map Int64.of_int destination |> CArray.of_list int64_t |> CArray.start) - (List.length destination); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length destination) + |> with_tensor_gc ;; let moveaxis_int self ~source ~destination = - let out__ = CArray.make t 1 in - stubs_moveaxis_int - (CArray.start out__) - self - (Int64.of_int source) - (Int64.of_int destination); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_moveaxis_int self (Int64.of_int source) (Int64.of_int destination) + |> with_tensor_gc ;; let movedim self ~source ~destination = - let out__ = CArray.make t 1 in stubs_movedim - (CArray.start out__) 
self (List.map Int64.of_int source |> CArray.of_list int64_t |> CArray.start) (List.length source) (List.map Int64.of_int destination |> CArray.of_list int64_t |> CArray.start) - (List.length destination); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length destination) + |> with_tensor_gc ;; let movedim_int self ~source ~destination = - let out__ = CArray.make t 1 in - stubs_movedim_int - (CArray.start out__) - self - (Int64.of_int source) - (Int64.of_int destination); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_movedim_int self (Int64.of_int source) (Int64.of_int destination) + |> with_tensor_gc ;; let mse_loss self ~target ~reduction = - let out__ = CArray.make t 1 in - stubs_mse_loss - (CArray.start out__) - self - target - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_mse_loss self target (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let mse_loss_backward ~grad_output self ~target ~reduction = - let out__ = CArray.make t 1 in stubs_mse_loss_backward - (CArray.start out__) grad_output self target - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let mse_loss_backward_grad_input ~grad_input ~grad_output self ~target ~reduction = - let out__ = CArray.make t 1 in stubs_mse_loss_backward_grad_input - (CArray.start out__) grad_input grad_output self target - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let mse_loss_out ~out self ~target ~reduction = - let out__ = CArray.make t 1 in - stubs_mse_loss_out - (CArray.start out__) - out - self - target - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let msort self = - let out__ = CArray.make t 1 in - stubs_msort (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let msort_out ~out self = - let out__ = CArray.make t 1 in - stubs_msort_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mt self = - let out__ = CArray.make t 1 in - stubs_mt (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mul self other = - let out__ = CArray.make t 1 in - stubs_mul (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mul_ self other = - let out__ = CArray.make t 1 in - stubs_mul_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mul_out ~out self other = - let out__ = CArray.make t 1 in - stubs_mul_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mul_scalar self other = - let out__ = CArray.make t 1 in - stubs_mul_scalar (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mul_scalar_ self other = - let out__ = CArray.make t 1 in - stubs_mul_scalar_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_mse_loss_out out self target (Reduction.to_int reduction |> 
Int64.of_int) + |> with_tensor_gc ;; -let mul_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_mul_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let msort self = stubs_msort self |> with_tensor_gc +let msort_out ~out self = stubs_msort_out out self |> with_tensor_gc +let mt self = stubs_mt self |> with_tensor_gc +let mul self other = stubs_mul self other |> with_tensor_gc +let mul_ self other = stubs_mul_ self other |> with_tensor_gc +let mul_out ~out self other = stubs_mul_out out self other |> with_tensor_gc +let mul_scalar self other = stubs_mul_scalar self other |> with_tensor_gc +let mul_scalar_ self other = stubs_mul_scalar_ self other |> with_tensor_gc +let mul_scalar_out ~out self other = stubs_mul_scalar_out out self other |> with_tensor_gc let multi_margin_loss_backward ~grad_output self ~target ~p ~margin ~weight ~reduction = - let out__ = CArray.make t 1 in stubs_multi_margin_loss_backward - (CArray.start out__) grad_output self target @@ -22608,11 +14755,9 @@ let multi_margin_loss_backward ~grad_output self ~target ~p ~margin ~weight ~red margin (match weight with | Some v -> v - | None -> null) - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let multi_margin_loss_backward_grad_input @@ -22625,9 +14770,7 @@ let multi_margin_loss_backward_grad_input ~weight ~reduction = - let out__ = CArray.make t 1 in stubs_multi_margin_loss_backward_grad_input - (CArray.start out__) grad_input grad_output self @@ -22636,37 +14779,24 @@ let multi_margin_loss_backward_grad_input margin (match weight with | Some v -> v - | None -> null) - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let multilabel_margin_loss self ~target ~reduction = - let out__ = CArray.make t 1 in - stubs_multilabel_margin_loss - (CArray.start out__) - self - target - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_multilabel_margin_loss self target (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let multilabel_margin_loss_backward ~grad_output self ~target ~reduction ~is_target = - let out__ = CArray.make t 1 in stubs_multilabel_margin_loss_backward - (CArray.start out__) grad_output self target (Reduction.to_int reduction |> Int64.of_int) - is_target; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + is_target + |> with_tensor_gc ;; let multilabel_margin_loss_backward_grad_input @@ -22677,142 +14807,51 @@ let multilabel_margin_loss_backward_grad_input ~reduction ~is_target = - let out__ = CArray.make t 1 in stubs_multilabel_margin_loss_backward_grad_input - (CArray.start out__) grad_input grad_output self target (Reduction.to_int reduction |> Int64.of_int) - is_target; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + is_target + |> with_tensor_gc ;; let multilabel_margin_loss_out ~out self ~target ~reduction = - let out__ = CArray.make t 1 in stubs_multilabel_margin_loss_out - (CArray.start out__) out self target - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Reduction.to_int reduction 
|> Int64.of_int) + |> with_tensor_gc ;; let multinomial self ~num_samples ~replacement = - let out__ = CArray.make t 1 in - stubs_multinomial - (CArray.start out__) - self - (Int64.of_int num_samples) - (if replacement then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_multinomial self (Int64.of_int num_samples) (if replacement then 1 else 0) + |> with_tensor_gc ;; let multinomial_out ~out self ~num_samples ~replacement = - let out__ = CArray.make t 1 in - stubs_multinomial_out - (CArray.start out__) - out - self - (Int64.of_int num_samples) - (if replacement then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let multiply self other = - let out__ = CArray.make t 1 in - stubs_multiply (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let multiply_ self other = - let out__ = CArray.make t 1 in - stubs_multiply_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_multinomial_out out self (Int64.of_int num_samples) (if replacement then 1 else 0) + |> with_tensor_gc ;; -let multiply_out ~out self other = - let out__ = CArray.make t 1 in - stubs_multiply_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let multiply_scalar self other = - let out__ = CArray.make t 1 in - stubs_multiply_scalar (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let multiply_scalar_ self other = - let out__ = CArray.make t 1 in - stubs_multiply_scalar_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mv self ~vec = - let out__ = CArray.make t 1 in - stubs_mv (CArray.start out__) self vec; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mv_out ~out self ~vec = - let out__ = CArray.make t 1 in - stubs_mv_out (CArray.start out__) out self vec; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mvlgamma self ~p = - let out__ = CArray.make t 1 in - stubs_mvlgamma (CArray.start out__) self (Int64.of_int p); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let mvlgamma_ self ~p = - let out__ = CArray.make t 1 in - stubs_mvlgamma_ (CArray.start out__) self (Int64.of_int p); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let multiply self other = stubs_multiply self other |> with_tensor_gc +let multiply_ self other = stubs_multiply_ self other |> with_tensor_gc +let multiply_out ~out self other = stubs_multiply_out out self other |> with_tensor_gc +let multiply_scalar self other = stubs_multiply_scalar self other |> with_tensor_gc +let multiply_scalar_ self other = stubs_multiply_scalar_ self other |> with_tensor_gc +let mv self ~vec = stubs_mv self vec |> with_tensor_gc +let mv_out ~out self ~vec = stubs_mv_out out self vec |> with_tensor_gc +let mvlgamma self ~p = stubs_mvlgamma self (Int64.of_int p) |> with_tensor_gc +let mvlgamma_ self ~p = stubs_mvlgamma_ self (Int64.of_int p) |> with_tensor_gc let mvlgamma_out ~out self ~p = - let out__ = CArray.make t 1 in - stubs_mvlgamma_out (CArray.start out__) out self (Int64.of_int p); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_mvlgamma_out out self (Int64.of_int p) |> with_tensor_gc ;; let nan_to_num self ~nan ~posinf ~neginf = - let out__ 
= CArray.make t 1 in stubs_nan_to_num - (CArray.start out__) self (Option.value nan ~default:0.0) (match nan with @@ -22825,16 +14864,12 @@ let nan_to_num self ~nan ~posinf ~neginf = (Option.value neginf ~default:0.0) (match neginf with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let nan_to_num_ self ~nan ~posinf ~neginf = - let out__ = CArray.make t 1 in stubs_nan_to_num_ - (CArray.start out__) self (Option.value nan ~default:0.0) (match nan with @@ -22847,16 +14882,12 @@ let nan_to_num_ self ~nan ~posinf ~neginf = (Option.value neginf ~default:0.0) (match neginf with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let nan_to_num_out ~out self ~nan ~posinf ~neginf = - let out__ = CArray.make t 1 in stubs_nan_to_num_out - (CArray.start out__) out self (Option.value nan ~default:0.0) @@ -22870,16 +14901,12 @@ let nan_to_num_out ~out self ~nan ~posinf ~neginf = (Option.value neginf ~default:0.0) (match neginf with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let nanmean self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_nanmean - (CArray.start out__) self (match dim with | None -> from_voidp int64_t null @@ -22888,16 +14915,12 @@ let nanmean self ~dim ~keepdim ~dtype = | None -> -1 | Some v -> List.length v) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let nanmean_out ~out self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_nanmean_out - (CArray.start out__) out self (match dim with @@ -22907,36 +14930,26 @@ let nanmean_out ~out self ~dim ~keepdim ~dtype = | None -> -1 | Some v -> List.length v) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; -let nanmedian self = - let out__ = CArray.make t 1 in - stubs_nanmedian (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let nanmedian self = stubs_nanmedian self |> with_tensor_gc let nanmedian_dim self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_nanmedian_dim (CArray.start out__) self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let nanmedian_dim_values ~values ~indices self ~dim ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_nanmedian_dim_values (CArray.start out__) values @@ -22944,25 +14957,15 @@ let nanmedian_dim_values ~values ~indices self ~dim ~keepdim = self (Int64.of_int dim) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; -let nanmedian_out ~out self = - let out__ = CArray.make t 1 in - stubs_nanmedian_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - 
Gc.finalise C.Tensor.free t0; - t0 -;; +let nanmedian_out ~out self = stubs_nanmedian_out out self |> with_tensor_gc let nanquantile self ~q ~dim ~keepdim ~interpolation = - let out__ = CArray.make t 1 in stubs_nanquantile - (CArray.start out__) self q (match dim with @@ -22972,16 +14975,12 @@ let nanquantile self ~q ~dim ~keepdim ~interpolation = | Some _ -> 0 | None -> 1) (if keepdim then 1 else 0) - interpolation; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + interpolation + |> with_tensor_gc ;; let nanquantile_out ~out self ~q ~dim ~keepdim ~interpolation = - let out__ = CArray.make t 1 in stubs_nanquantile_out - (CArray.start out__) out self q @@ -22992,16 +14991,12 @@ let nanquantile_out ~out self ~q ~dim ~keepdim ~interpolation = | Some _ -> 0 | None -> 1) (if keepdim then 1 else 0) - interpolation; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + interpolation + |> with_tensor_gc ;; let nanquantile_scalar self ~q ~dim ~keepdim ~interpolation = - let out__ = CArray.make t 1 in stubs_nanquantile_scalar - (CArray.start out__) self q (match dim with @@ -23011,16 +15006,12 @@ let nanquantile_scalar self ~q ~dim ~keepdim ~interpolation = | Some _ -> 0 | None -> 1) (if keepdim then 1 else 0) - interpolation; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + interpolation + |> with_tensor_gc ;; let nanquantile_scalar_out ~out self ~q ~dim ~keepdim ~interpolation = - let out__ = CArray.make t 1 in stubs_nanquantile_scalar_out - (CArray.start out__) out self q @@ -23031,16 +15022,12 @@ let nanquantile_scalar_out ~out self ~q ~dim ~keepdim ~interpolation = | Some _ -> 0 | None -> 1) (if keepdim then 1 else 0) - interpolation; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + interpolation + |> with_tensor_gc ;; let nansum self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_nansum - (CArray.start out__) self (match dim with | None -> from_voidp int64_t null @@ -23049,16 +15036,12 @@ let nansum self ~dim ~keepdim ~dtype = | None -> -1 | Some v -> List.length v) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let nansum_out ~out self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_nansum_out - (CArray.start out__) out self (match dim with @@ -23068,63 +15051,33 @@ let nansum_out ~out self ~dim ~keepdim ~dtype = | None -> -1 | Some v -> List.length v) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let narrow self ~dim ~start ~length = - let out__ = CArray.make t 1 in - stubs_narrow - (CArray.start out__) - self - (Int64.of_int dim) - (Int64.of_int start) - (Int64.of_int length); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_narrow self (Int64.of_int dim) (Int64.of_int start) (Int64.of_int length) + |> with_tensor_gc ;; let narrow_copy self ~dim ~start ~length = - let out__ = CArray.make t 1 in - stubs_narrow_copy - (CArray.start out__) - self - (Int64.of_int dim) - (Int64.of_int start) - (Int64.of_int length); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_narrow_copy self (Int64.of_int dim) (Int64.of_int start) (Int64.of_int length) + |> with_tensor_gc ;; let narrow_copy_out ~out self ~dim ~start ~length = - let out__ = CArray.make t 1 in 
stubs_narrow_copy_out - (CArray.start out__) out self (Int64.of_int dim) (Int64.of_int start) - (Int64.of_int length); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int length) + |> with_tensor_gc ;; let narrow_tensor self ~dim ~start ~length = - let out__ = CArray.make t 1 in - stubs_narrow_tensor - (CArray.start out__) - self - (Int64.of_int dim) - start - (Int64.of_int length); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_narrow_tensor self (Int64.of_int dim) start (Int64.of_int length) + |> with_tensor_gc ;; let native_batch_norm @@ -23137,31 +15090,28 @@ let native_batch_norm ~momentum ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_native_batch_norm (CArray.start out__) input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match running_mean with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match running_var with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if training then 1 else 0) momentum eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; @@ -23178,7 +15128,7 @@ let native_batch_norm_out ~momentum ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_native_batch_norm_out (CArray.start out__) out @@ -23187,99 +15137,77 @@ let native_batch_norm_out input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match running_mean with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match running_var with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (if training then 1 else 0) momentum eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let native_channel_shuffle self ~groups = - let out__ = CArray.make t 1 in - stubs_native_channel_shuffle (CArray.start out__) self (Int64.of_int groups); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_native_channel_shuffle self (Int64.of_int groups) |> with_tensor_gc ;; let native_dropout input ~p ~train = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_native_dropout (CArray.start out__) input p (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let native_dropout_backward ~grad_output ~mask ~scale = - let out__ = CArray.make t 1 in - stubs_native_dropout_backward (CArray.start out__) grad_output mask scale; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_native_dropout_backward grad_output mask scale |> with_tensor_gc 
;; let native_dropout_backward_out ~out ~grad_output ~mask ~scale = - let out__ = CArray.make t 1 in - stubs_native_dropout_backward_out (CArray.start out__) out grad_output mask scale; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_native_dropout_backward_out out grad_output mask scale |> with_tensor_gc ;; let native_dropout_out ~out0 ~out1 input ~p ~train = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_native_dropout_out (CArray.start out__) out0 out1 input p (if train then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let native_group_norm input ~weight ~bias ~n ~c ~hxw ~group ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_native_group_norm (CArray.start out__) input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int n) (Int64.of_int c) (Int64.of_int hxw) (Int64.of_int group) eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let native_group_norm_out ~out0 ~out1 ~out2 input ~weight ~bias ~n ~c ~hxw ~group ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_native_group_norm_out (CArray.start out__) out0 @@ -23288,26 +15216,23 @@ let native_group_norm_out ~out0 ~out1 ~out2 input ~weight ~bias ~n ~c ~hxw ~grou input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int n) (Int64.of_int c) (Int64.of_int hxw) (Int64.of_int group) eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let native_layer_norm input ~normalized_shape ~weight ~bias ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_native_layer_norm (CArray.start out__) input @@ -23315,22 +15240,19 @@ let native_layer_norm input ~normalized_shape ~weight ~bias ~eps = (List.length normalized_shape) (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; let native_layer_norm_out ~out0 ~out1 ~out2 input ~normalized_shape ~weight ~bias ~eps = - let out__ = CArray.make t 3 in + let out__ = CArray.make raw_tensor 3 in stubs_native_layer_norm_out 
(CArray.start out__) out0 @@ -23341,167 +15263,58 @@ let native_layer_norm_out ~out0 ~out1 ~out2 input ~normalized_shape ~weight ~bia (List.length normalized_shape) (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) eps; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; - let t2 = CArray.get out__ 2 in - Gc.finalise C.Tensor.free t2; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in + let t2 = CArray.get out__ 2 |> with_tensor_gc in t0, t1, t2 ;; -let native_norm self = - let out__ = CArray.make t 1 in - stubs_native_norm (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let native_norm_out ~out self = - let out__ = CArray.make t 1 in - stubs_native_norm_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let native_norm self = stubs_native_norm self |> with_tensor_gc +let native_norm_out ~out self = stubs_native_norm_out out self |> with_tensor_gc let native_norm_scalaropt_dim_dtype self ~p ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_native_norm_scalaropt_dim_dtype - (CArray.start out__) self p (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let native_norm_scalaropt_dim_dtype_out ~out self ~p ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_native_norm_scalaropt_dim_dtype_out - (CArray.start out__) out self p (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ne self other = - let out__ = CArray.make t 1 in - stubs_ne (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ne_ self other = - let out__ = CArray.make t 1 in - stubs_ne_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ne_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_ne_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ne_tensor self other = - let out__ = CArray.make t 1 in - stubs_ne_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ne_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_ne_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ne_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_ne_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let neg self = - let out__ = CArray.make t 1 in - stubs_neg (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let neg_ self = - let out__ = CArray.make t 1 in - stubs_neg_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let neg_out ~out self = - let 
out__ = CArray.make t 1 in - stubs_neg_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let negative self = - let out__ = CArray.make t 1 in - stubs_negative (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let negative_ self = - let out__ = CArray.make t 1 in - stubs_negative_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let negative_out ~out self = - let out__ = CArray.make t 1 in - stubs_negative_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; + (Kind.packed_to_int dtype) + |> with_tensor_gc +;; + +let ne self other = stubs_ne self other |> with_tensor_gc +let ne_ self other = stubs_ne_ self other |> with_tensor_gc +let ne_scalar_out ~out self other = stubs_ne_scalar_out out self other |> with_tensor_gc +let ne_tensor self other = stubs_ne_tensor self other |> with_tensor_gc +let ne_tensor_ self other = stubs_ne_tensor_ self other |> with_tensor_gc +let ne_tensor_out ~out self other = stubs_ne_tensor_out out self other |> with_tensor_gc +let neg self = stubs_neg self |> with_tensor_gc +let neg_ self = stubs_neg_ self |> with_tensor_gc +let neg_out ~out self = stubs_neg_out out self |> with_tensor_gc +let negative self = stubs_negative self |> with_tensor_gc +let negative_ self = stubs_negative_ self |> with_tensor_gc +let negative_out ~out self = stubs_negative_out out self |> with_tensor_gc let nested_to_padded_tensor self ~padding ~output_size = - let out__ = CArray.make t 1 in stubs_nested_to_padded_tensor - (CArray.start out__) self padding (match output_size with @@ -23509,207 +15322,137 @@ let nested_to_padded_tensor self ~padding ~output_size = | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start) (match output_size with | None -> -1 - | Some v -> List.length v); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | Some v -> List.length v) + |> with_tensor_gc ;; let new_empty self ~size ~options = - let out__ = CArray.make t 1 in stubs_new_empty - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let new_empty_out ~out self ~size = - let out__ = CArray.make t 1 in stubs_new_empty_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let new_empty_strided self ~size ~stride ~options = - let out__ = CArray.make t 1 in stubs_new_empty_strided - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let new_empty_strided_out ~out self ~size ~stride = - let out__ = CArray.make t 1 in stubs_new_empty_strided_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length 
size) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) - (List.length stride); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length stride) + |> with_tensor_gc ;; let new_full self ~size ~fill_value ~options = - let out__ = CArray.make t 1 in stubs_new_full - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) fill_value (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let new_full_out ~out self ~size ~fill_value = - let out__ = CArray.make t 1 in stubs_new_full_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) - fill_value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + fill_value + |> with_tensor_gc ;; let new_ones self ~size ~options = - let out__ = CArray.make t 1 in stubs_new_ones - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let new_ones_out ~out self ~size = - let out__ = CArray.make t 1 in stubs_new_ones_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let new_zeros self ~size ~options = - let out__ = CArray.make t 1 in stubs_new_zeros - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let new_zeros_out ~out self ~size = - let out__ = CArray.make t 1 in stubs_new_zeros_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let nextafter self other = - let out__ = CArray.make t 1 in - stubs_nextafter (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let nextafter_ self other = - let out__ = CArray.make t 1 in - stubs_nextafter_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; -let nextafter_out ~out self other = - let out__ = CArray.make t 1 in - stubs_nextafter_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let nextafter self other = stubs_nextafter self other |> with_tensor_gc +let nextafter_ self other = stubs_nextafter_ self other |> with_tensor_gc +let nextafter_out ~out self other = stubs_nextafter_out out self other |> with_tensor_gc let nll_loss self ~target ~weight ~reduction ~ignore_index = - let out__ = CArray.make t 1 in stubs_nll_loss - (CArray.start out__) self target (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Reduction.to_int reduction |> Int64.of_int) - (Int64.of_int ignore_index); - let t0 = 
CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int ignore_index) + |> with_tensor_gc ;; let nll_loss2d self ~target ~weight ~reduction ~ignore_index = - let out__ = CArray.make t 1 in stubs_nll_loss2d - (CArray.start out__) self target (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Reduction.to_int reduction |> Int64.of_int) - (Int64.of_int ignore_index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int ignore_index) + |> with_tensor_gc ;; let nll_loss2d_backward @@ -23721,21 +15464,17 @@ let nll_loss2d_backward ~ignore_index ~total_weight = - let out__ = CArray.make t 1 in stubs_nll_loss2d_backward - (CArray.start out__) grad_output self target (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Reduction.to_int reduction |> Int64.of_int) (Int64.of_int ignore_index) - total_weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + total_weight + |> with_tensor_gc ;; let nll_loss2d_backward_grad_input @@ -23748,39 +15487,31 @@ let nll_loss2d_backward_grad_input ~ignore_index ~total_weight = - let out__ = CArray.make t 1 in stubs_nll_loss2d_backward_grad_input - (CArray.start out__) grad_input grad_output self target (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Reduction.to_int reduction |> Int64.of_int) (Int64.of_int ignore_index) - total_weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + total_weight + |> with_tensor_gc ;; let nll_loss2d_out ~out self ~target ~weight ~reduction ~ignore_index = - let out__ = CArray.make t 1 in stubs_nll_loss2d_out - (CArray.start out__) out self target (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Reduction.to_int reduction |> Int64.of_int) - (Int64.of_int ignore_index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int ignore_index) + |> with_tensor_gc ;; let nll_loss_backward @@ -23792,21 +15523,17 @@ let nll_loss_backward ~ignore_index ~total_weight = - let out__ = CArray.make t 1 in stubs_nll_loss_backward - (CArray.start out__) grad_output self target (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Reduction.to_int reduction |> Int64.of_int) (Int64.of_int ignore_index) - total_weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + total_weight + |> with_tensor_gc ;; let nll_loss_backward_grad_input @@ -23819,436 +15546,217 @@ let nll_loss_backward_grad_input ~ignore_index ~total_weight = - let out__ = CArray.make t 1 in stubs_nll_loss_backward_grad_input - (CArray.start out__) grad_input grad_output self target (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Reduction.to_int reduction |> Int64.of_int) (Int64.of_int ignore_index) - total_weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + total_weight + |> with_tensor_gc ;; let nll_loss_nd self ~target ~weight ~reduction ~ignore_index = - let out__ = CArray.make t 1 in stubs_nll_loss_nd - (CArray.start out__) self target (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Reduction.to_int reduction |> Int64.of_int) - (Int64.of_int ignore_index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int ignore_index) + |> with_tensor_gc ;; let nll_loss_out ~out self ~target ~weight ~reduction ~ignore_index = - let out__ = CArray.make t 1 in stubs_nll_loss_out - 
(CArray.start out__) out self target (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Reduction.to_int reduction |> Int64.of_int) - (Int64.of_int ignore_index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let nonzero self = - let out__ = CArray.make t 1 in - stubs_nonzero (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int ignore_index) + |> with_tensor_gc ;; +let nonzero self = stubs_nonzero self |> with_tensor_gc let nonzero_numpy self = stubs_nonzero_numpy self |> to_tensor_list - -let nonzero_out ~out self = - let out__ = CArray.make t 1 in - stubs_nonzero_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let nonzero_out ~out self = stubs_nonzero_out out self |> with_tensor_gc let nonzero_static self ~size ~fill_value = - let out__ = CArray.make t 1 in - stubs_nonzero_static - (CArray.start out__) - self - (Int64.of_int size) - (Int64.of_int fill_value); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_nonzero_static self (Int64.of_int size) (Int64.of_int fill_value) + |> with_tensor_gc ;; let nonzero_static_out ~out self ~size ~fill_value = - let out__ = CArray.make t 1 in - stubs_nonzero_static_out - (CArray.start out__) - out - self - (Int64.of_int size) - (Int64.of_int fill_value); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_nonzero_static_out out self (Int64.of_int size) (Int64.of_int fill_value) + |> with_tensor_gc ;; -let norm self = - let out__ = CArray.make t 1 in - stubs_norm (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let norm self = stubs_norm self |> with_tensor_gc let norm_dtype_out ~out self ~p ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_norm_dtype_out - (CArray.start out__) out self p (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let norm_except_dim ~v ~pow ~dim = - let out__ = CArray.make t 1 in - stubs_norm_except_dim (CArray.start out__) v (Int64.of_int pow) (Int64.of_int dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_norm_except_dim v (Int64.of_int pow) (Int64.of_int dim) |> with_tensor_gc ;; let norm_out ~out self ~p ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_norm_out - (CArray.start out__) out self p (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; -let norm_scalar_out ~out self = - let out__ = CArray.make t 1 in - stubs_norm_scalar_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let norm_scalar_out ~out self = stubs_norm_scalar_out out self |> with_tensor_gc let norm_scalaropt_dim self ~p ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_norm_scalaropt_dim - (CArray.start out__) self p (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let 
norm_scalaropt_dim_dtype self ~p ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_norm_scalaropt_dim_dtype - (CArray.start out__) self p (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let norm_scalaropt_dtype self ~p ~dtype = - let out__ = CArray.make t 1 in - stubs_norm_scalaropt_dtype (CArray.start out__) self p (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_norm_scalaropt_dtype self p (Kind.packed_to_int dtype) |> with_tensor_gc ;; let norm_scalaropt_dtype_out ~out self ~p ~dtype = - let out__ = CArray.make t 1 in - stubs_norm_scalaropt_dtype_out - (CArray.start out__) - out - self - p - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_norm_scalaropt_dtype_out out self p (Kind.packed_to_int dtype) |> with_tensor_gc ;; -let normal_ self ~mean ~std = - let out__ = CArray.make t 1 in - stubs_normal_ (CArray.start out__) self mean std; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let normal_ self ~mean ~std = stubs_normal_ self mean std |> with_tensor_gc let normal_functional self ~mean ~std = - let out__ = CArray.make t 1 in - stubs_normal_functional (CArray.start out__) self mean std; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let not_equal self other = - let out__ = CArray.make t 1 in - stubs_not_equal (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_normal_functional self mean std |> with_tensor_gc ;; -let not_equal_ self other = - let out__ = CArray.make t 1 in - stubs_not_equal_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let not_equal self other = stubs_not_equal self other |> with_tensor_gc +let not_equal_ self other = stubs_not_equal_ self other |> with_tensor_gc let not_equal_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_not_equal_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let not_equal_tensor self other = - let out__ = CArray.make t 1 in - stubs_not_equal_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_not_equal_scalar_out out self other |> with_tensor_gc ;; -let not_equal_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_not_equal_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let not_equal_tensor self other = stubs_not_equal_tensor self other |> with_tensor_gc +let not_equal_tensor_ self other = stubs_not_equal_tensor_ self other |> with_tensor_gc let not_equal_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_not_equal_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_not_equal_tensor_out out self other |> with_tensor_gc ;; let nuclear_norm self ~keepdim = - let out__ = CArray.make t 1 in - stubs_nuclear_norm (CArray.start out__) self (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_nuclear_norm self (if keepdim then 1 else 0) |> with_tensor_gc 
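(* Single-result functions drop the out-array entirely: the generated stub now
   returns a [raw_tensor] directly, and piping it through [with_tensor_gc]
   produces the user-facing GC tensor with a finalizer attached. *)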
;; let nuclear_norm_dim self ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_nuclear_norm_dim - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let nuclear_norm_dim_out ~out self ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_nuclear_norm_dim_out - (CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let nuclear_norm_out ~out self ~keepdim = - let out__ = CArray.make t 1 in - stubs_nuclear_norm_out (CArray.start out__) out self (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_nuclear_norm_out out self (if keepdim then 1 else 0) |> with_tensor_gc ;; -let numpy_t self = - let out__ = CArray.make t 1 in - stubs_numpy_t (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let numpy_t self = stubs_numpy_t self |> with_tensor_gc let one_hot self ~num_classes = - let out__ = CArray.make t 1 in - stubs_one_hot (CArray.start out__) self (Int64.of_int num_classes); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_one_hot self (Int64.of_int num_classes) |> with_tensor_gc ;; let ones ~size ~options = - let out__ = CArray.make t 1 in stubs_ones - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ones_like self = - let out__ = CArray.make t 1 in - stubs_ones_like (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; -let ones_like_out ~out self = - let out__ = CArray.make t 1 in - stubs_ones_like_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let ones_like self = stubs_ones_like self |> with_tensor_gc +let ones_like_out ~out self = stubs_ones_like_out out self |> with_tensor_gc let ones_out ~out ~size = - let out__ = CArray.make t 1 in stubs_ones_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let orgqr self ~input2 = - let out__ = CArray.make t 1 in - stubs_orgqr (CArray.start out__) self input2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; -let orgqr_out ~out self ~input2 = - let out__ = CArray.make t 1 in - stubs_orgqr_out (CArray.start out__) out self input2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let orgqr self ~input2 = stubs_orgqr self input2 |> with_tensor_gc +let orgqr_out ~out self ~input2 = stubs_orgqr_out out self input2 |> with_tensor_gc let ormqr self ~input2 ~input3 ~left ~transpose = - let out__ = CArray.make t 1 in - stubs_ormqr - (CArray.start out__) - self - input2 - input3 - (if left then 1 else 0) - (if transpose then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; 
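(* The removed pattern above relied on [Gc.finalise] alone, which frees the
   tensor on collection but gives the OCaml GC no information about the
   tensor's true size; [with_tensor_gc] replaces it with a custom block
   carrying the finalizer. *)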
- t0 + stubs_ormqr self input2 input3 (if left then 1 else 0) (if transpose then 1 else 0) + |> with_tensor_gc ;; let ormqr_out ~out self ~input2 ~input3 ~left ~transpose = - let out__ = CArray.make t 1 in stubs_ormqr_out - (CArray.start out__) out self input2 input3 (if left then 1 else 0) - (if transpose then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let outer self ~vec2 = - let out__ = CArray.make t 1 in - stubs_outer (CArray.start out__) self vec2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let outer_out ~out self ~vec2 = - let out__ = CArray.make t 1 in - stubs_outer_out (CArray.start out__) out self vec2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if transpose then 1 else 0) + |> with_tensor_gc ;; +let outer self ~vec2 = stubs_outer self vec2 |> with_tensor_gc +let outer_out ~out self ~vec2 = stubs_outer_out out self vec2 |> with_tensor_gc let output_nr self = stubs_output_nr self let pad self ~pad ~mode ~value = - let out__ = CArray.make t 1 in stubs_pad - (CArray.start out__) self (List.map Int64.of_int pad |> CArray.of_list int64_t |> CArray.start) (List.length pad) @@ -24256,405 +15764,192 @@ let pad self ~pad ~mode ~value = (Option.value value ~default:0.0) (match value with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let pad_sequence ~sequences ~batch_first ~padding_value = - let out__ = CArray.make t 1 in stubs_pad_sequence - (CArray.start out__) - (CArray.of_list t sequences |> CArray.start) + (CArray.of_list gc_tensor sequences |> CArray.start) (List.length sequences) (if batch_first then 1 else 0) - padding_value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + padding_value + |> with_tensor_gc ;; let pairwise_distance ~x1 ~x2 ~p ~eps ~keepdim = - let out__ = CArray.make t 1 in - stubs_pairwise_distance (CArray.start out__) x1 x2 p eps (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_pairwise_distance x1 x2 p eps (if keepdim then 1 else 0) |> with_tensor_gc ;; -let pdist self ~p = - let out__ = CArray.make t 1 in - stubs_pdist (CArray.start out__) self p; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let pdist self ~p = stubs_pdist self p |> with_tensor_gc let permute self ~dims = - let out__ = CArray.make t 1 in stubs_permute - (CArray.start out__) self (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start) - (List.length dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dims) + |> with_tensor_gc ;; let permute_copy self ~dims = - let out__ = CArray.make t 1 in stubs_permute_copy - (CArray.start out__) self (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start) - (List.length dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dims) + |> with_tensor_gc ;; let permute_copy_out ~out self ~dims = - let out__ = CArray.make t 1 in stubs_permute_copy_out - (CArray.start out__) out self (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start) - (List.length dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dims) + |> with_tensor_gc ;; let pin_memory self ~device = - let out__ = CArray.make t 1 in - stubs_pin_memory (CArray.start out__) self (Device.to_int device); - let t0 = CArray.get out__ 0 in - 
Gc.finalise C.Tensor.free t0; - t0 + stubs_pin_memory self (Device.to_int device) |> with_tensor_gc ;; -let pinverse self ~rcond = - let out__ = CArray.make t 1 in - stubs_pinverse (CArray.start out__) self rcond; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let pinverse self ~rcond = stubs_pinverse self rcond |> with_tensor_gc let pixel_shuffle self ~upscale_factor = - let out__ = CArray.make t 1 in - stubs_pixel_shuffle (CArray.start out__) self (Int64.of_int upscale_factor); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_pixel_shuffle self (Int64.of_int upscale_factor) |> with_tensor_gc ;; let pixel_shuffle_out ~out self ~upscale_factor = - let out__ = CArray.make t 1 in - stubs_pixel_shuffle_out (CArray.start out__) out self (Int64.of_int upscale_factor); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_pixel_shuffle_out out self (Int64.of_int upscale_factor) |> with_tensor_gc ;; let pixel_unshuffle self ~downscale_factor = - let out__ = CArray.make t 1 in - stubs_pixel_unshuffle (CArray.start out__) self (Int64.of_int downscale_factor); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_pixel_unshuffle self (Int64.of_int downscale_factor) |> with_tensor_gc ;; let pixel_unshuffle_out ~out self ~downscale_factor = - let out__ = CArray.make t 1 in - stubs_pixel_unshuffle_out (CArray.start out__) out self (Int64.of_int downscale_factor); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_pixel_unshuffle_out out self (Int64.of_int downscale_factor) |> with_tensor_gc ;; -let poisson self = - let out__ = CArray.make t 1 in - stubs_poisson (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let poisson self = stubs_poisson self |> with_tensor_gc let poisson_nll_loss input ~target ~log_input ~full ~eps ~reduction = - let out__ = CArray.make t 1 in stubs_poisson_nll_loss - (CArray.start out__) input target (if log_input then 1 else 0) (if full then 1 else 0) eps - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let poisson_out ~out self = - let out__ = CArray.make t 1 in - stubs_poisson_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let polar ~abs ~angle = - let out__ = CArray.make t 1 in - stubs_polar (CArray.start out__) abs angle; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let polar_out ~out ~abs ~angle = - let out__ = CArray.make t 1 in - stubs_polar_out (CArray.start out__) out abs angle; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let polygamma ~n self = - let out__ = CArray.make t 1 in - stubs_polygamma (CArray.start out__) (Int64.of_int n) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; -let polygamma_ self ~n = - let out__ = CArray.make t 1 in - stubs_polygamma_ (CArray.start out__) self (Int64.of_int n); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let poisson_out ~out self = stubs_poisson_out out self |> with_tensor_gc +let polar ~abs ~angle = stubs_polar abs angle |> with_tensor_gc +let polar_out ~out ~abs ~angle = stubs_polar_out out abs angle |> with_tensor_gc +let polygamma ~n self = stubs_polygamma (Int64.of_int n) self |> with_tensor_gc +let polygamma_ self 
~n = stubs_polygamma_ self (Int64.of_int n) |> with_tensor_gc let polygamma_out ~out ~n self = - let out__ = CArray.make t 1 in - stubs_polygamma_out (CArray.start out__) out (Int64.of_int n) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let positive self = - let out__ = CArray.make t 1 in - stubs_positive (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let pow self ~exponent = - let out__ = CArray.make t 1 in - stubs_pow (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let pow_ self ~exponent = - let out__ = CArray.make t 1 in - stubs_pow_ (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_polygamma_out out (Int64.of_int n) self |> with_tensor_gc ;; -let pow_scalar self ~exponent = - let out__ = CArray.make t 1 in - stubs_pow_scalar (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let positive self = stubs_positive self |> with_tensor_gc +let pow self ~exponent = stubs_pow self exponent |> with_tensor_gc +let pow_ self ~exponent = stubs_pow_ self exponent |> with_tensor_gc +let pow_scalar self ~exponent = stubs_pow_scalar self exponent |> with_tensor_gc let pow_scalar_out ~out self ~exponent = - let out__ = CArray.make t 1 in - stubs_pow_scalar_out (CArray.start out__) out self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_pow_scalar_out out self exponent |> with_tensor_gc ;; -let pow_tensor_ self ~exponent = - let out__ = CArray.make t 1 in - stubs_pow_tensor_ (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let pow_tensor_ self ~exponent = stubs_pow_tensor_ self exponent |> with_tensor_gc let pow_tensor_scalar self ~exponent = - let out__ = CArray.make t 1 in - stubs_pow_tensor_scalar (CArray.start out__) self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_pow_tensor_scalar self exponent |> with_tensor_gc ;; let pow_tensor_scalar_out ~out self ~exponent = - let out__ = CArray.make t 1 in - stubs_pow_tensor_scalar_out (CArray.start out__) out self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_pow_tensor_scalar_out out self exponent |> with_tensor_gc ;; let pow_tensor_tensor_out ~out self ~exponent = - let out__ = CArray.make t 1 in - stubs_pow_tensor_tensor_out (CArray.start out__) out self exponent; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let prelu self ~weight = - let out__ = CArray.make t 1 in - stubs_prelu (CArray.start out__) self weight; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_pow_tensor_tensor_out out self exponent |> with_tensor_gc ;; -let prod self ~dtype = - let out__ = CArray.make t 1 in - stubs_prod (CArray.start out__) self (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let prelu self ~weight = stubs_prelu self weight |> with_tensor_gc +let prod self ~dtype = stubs_prod self (Kind.packed_to_int dtype) |> with_tensor_gc let prod_dim_int self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_prod_dim_int - (CArray.start out__) self (Int64.of_int dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + 
(Kind.packed_to_int dtype) + |> with_tensor_gc ;; let prod_int_out ~out self ~dim ~keepdim ~dtype = - let out__ = CArray.make t 1 in stubs_prod_int_out - (CArray.start out__) out self (Int64.of_int dim) (if keepdim then 1 else 0) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let prod_out ~out self ~dtype = - let out__ = CArray.make t 1 in - stubs_prod_out (CArray.start out__) out self (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_prod_out out self (Kind.packed_to_int dtype) |> with_tensor_gc ;; let put self ~index ~source ~accumulate = - let out__ = CArray.make t 1 in - stubs_put (CArray.start out__) self index source (if accumulate then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_put self index source (if accumulate then 1 else 0) |> with_tensor_gc ;; let put_ self ~index ~source ~accumulate = - let out__ = CArray.make t 1 in - stubs_put_ (CArray.start out__) self index source (if accumulate then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_put_ self index source (if accumulate then 1 else 0) |> with_tensor_gc ;; let put_out ~out self ~index ~source ~accumulate = - let out__ = CArray.make t 1 in - stubs_put_out (CArray.start out__) out self index source (if accumulate then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_put_out out self index source (if accumulate then 1 else 0) |> with_tensor_gc ;; let q_per_channel_axis self = stubs_q_per_channel_axis self - -let q_per_channel_scales self = - let out__ = CArray.make t 1 in - stubs_q_per_channel_scales (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let q_per_channel_scales self = stubs_q_per_channel_scales self |> with_tensor_gc let q_per_channel_scales_out ~out self = - let out__ = CArray.make t 1 in - stubs_q_per_channel_scales_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_q_per_channel_scales_out out self |> with_tensor_gc ;; let q_per_channel_zero_points self = - let out__ = CArray.make t 1 in - stubs_q_per_channel_zero_points (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_q_per_channel_zero_points self |> with_tensor_gc ;; let q_per_channel_zero_points_out ~out self = - let out__ = CArray.make t 1 in - stubs_q_per_channel_zero_points_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_q_per_channel_zero_points_out out self |> with_tensor_gc ;; let q_scale self = stubs_q_scale self let q_zero_point self = stubs_q_zero_point self let qr self ~some = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_qr (CArray.start out__) self (if some then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let qr_q ~q ~r self ~some = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_qr_q (CArray.start out__) q r self (if some then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free 
t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let quantile self ~q ~dim ~keepdim ~interpolation = - let out__ = CArray.make t 1 in stubs_quantile - (CArray.start out__) self q (match dim with @@ -24664,16 +15959,12 @@ let quantile self ~q ~dim ~keepdim ~interpolation = | Some _ -> 0 | None -> 1) (if keepdim then 1 else 0) - interpolation; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + interpolation + |> with_tensor_gc ;; let quantile_out ~out self ~q ~dim ~keepdim ~interpolation = - let out__ = CArray.make t 1 in stubs_quantile_out - (CArray.start out__) out self q @@ -24684,16 +15975,12 @@ let quantile_out ~out self ~q ~dim ~keepdim ~interpolation = | Some _ -> 0 | None -> 1) (if keepdim then 1 else 0) - interpolation; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + interpolation + |> with_tensor_gc ;; let quantile_scalar self ~q ~dim ~keepdim ~interpolation = - let out__ = CArray.make t 1 in stubs_quantile_scalar - (CArray.start out__) self q (match dim with @@ -24703,16 +15990,12 @@ let quantile_scalar self ~q ~dim ~keepdim ~interpolation = | Some _ -> 0 | None -> 1) (if keepdim then 1 else 0) - interpolation; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + interpolation + |> with_tensor_gc ;; let quantile_scalar_out ~out self ~q ~dim ~keepdim ~interpolation = - let out__ = CArray.make t 1 in stubs_quantile_scalar_out - (CArray.start out__) out self q @@ -24723,123 +16006,89 @@ let quantile_scalar_out ~out self ~q ~dim ~keepdim ~interpolation = | Some _ -> 0 | None -> 1) (if keepdim then 1 else 0) - interpolation; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + interpolation + |> with_tensor_gc ;; let quantize_per_channel self ~scales ~zero_points ~axis ~dtype = - let out__ = CArray.make t 1 in stubs_quantize_per_channel - (CArray.start out__) self scales zero_points (Int64.of_int axis) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let quantize_per_channel_out ~out self ~scales ~zero_points ~axis ~dtype = - let out__ = CArray.make t 1 in stubs_quantize_per_channel_out - (CArray.start out__) out self scales zero_points (Int64.of_int axis) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let quantize_per_tensor self ~scale ~zero_point ~dtype = - let out__ = CArray.make t 1 in stubs_quantize_per_tensor - (CArray.start out__) self scale (Int64.of_int zero_point) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let quantize_per_tensor_dynamic self ~dtype ~reduce_range = - let out__ = CArray.make t 1 in stubs_quantize_per_tensor_dynamic - (CArray.start out__) self (Kind.packed_to_int dtype) - (if reduce_range then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if reduce_range then 1 else 0) + |> with_tensor_gc ;; let quantize_per_tensor_dynamic_out ~out self ~dtype ~reduce_range = - let out__ = CArray.make t 1 in stubs_quantize_per_tensor_dynamic_out - (CArray.start out__) out self (Kind.packed_to_int dtype) - (if reduce_range then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if reduce_range then 1 else 0) + |> with_tensor_gc ;; let 
quantize_per_tensor_out ~out self ~scale ~zero_point ~dtype = - let out__ = CArray.make t 1 in stubs_quantize_per_tensor_out - (CArray.start out__) out self scale (Int64.of_int zero_point) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let quantize_per_tensor_tensor_qparams self ~scale ~zero_point ~dtype = - let out__ = CArray.make t 1 in stubs_quantize_per_tensor_tensor_qparams - (CArray.start out__) self scale zero_point - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let quantize_per_tensor_tensor_qparams_out ~out self ~scale ~zero_point ~dtype = - let out__ = CArray.make t 1 in stubs_quantize_per_tensor_tensor_qparams_out - (CArray.start out__) out self scale zero_point - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let quantize_per_tensor_tensors tensors ~scales ~zero_points ~dtype = stubs_quantize_per_tensor_tensors - (CArray.of_list t tensors |> CArray.start) + (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) scales zero_points @@ -24849,9 +16098,9 @@ let quantize_per_tensor_tensors tensors ~scales ~zero_points ~dtype = let quantize_per_tensor_tensors_out ~out tensors ~scales ~zero_points ~dtype = stubs_quantize_per_tensor_tensors_out - (CArray.of_list t out |> CArray.start) + (CArray.of_list gc_tensor out |> CArray.start) (List.length out) - (CArray.of_list t tensors |> CArray.start) + (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) scales zero_points @@ -24868,24 +16117,20 @@ let quantized_batch_norm ~output_scale ~output_zero_point = - let out__ = CArray.make t 1 in stubs_quantized_batch_norm - (CArray.start out__) input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) mean var eps output_scale - (Int64.of_int output_zero_point); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int output_zero_point) + |> with_tensor_gc ;; let quantized_batch_norm_out @@ -24899,25 +16144,21 @@ let quantized_batch_norm_out ~output_scale ~output_zero_point = - let out__ = CArray.make t 1 in stubs_quantized_batch_norm_out - (CArray.start out__) out input (match weight with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) mean var eps output_scale - (Int64.of_int output_zero_point); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int output_zero_point) + |> with_tensor_gc ;; let quantized_gru_cell @@ -24936,9 +16177,7 @@ let quantized_gru_cell ~zero_point_ih ~zero_point_hh = - let out__ = CArray.make t 1 in stubs_quantized_gru_cell - (CArray.start out__) input hx w_ih @@ -24952,10 +16191,8 @@ let quantized_gru_cell scale_ih scale_hh zero_point_ih - zero_point_hh; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + zero_point_hh + |> with_tensor_gc ;; let quantized_lstm_cell @@ -24974,11 +16211,11 @@ let quantized_lstm_cell ~zero_point_ih ~zero_point_hh = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_quantized_lstm_cell (CArray.start out__) input - (CArray.of_list t hx |> CArray.start) + (CArray.of_list gc_tensor hx |> CArray.start) 
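(* Tensor-list arguments such as [hx] are marshalled as a C array of
   [gc_tensor] pointers plus an explicit length argument. *)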
(List.length hx) w_ih w_hh @@ -24992,17 +16229,13 @@ let quantized_lstm_cell scale_hh zero_point_ih zero_point_hh; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let quantized_max_pool1d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_quantized_max_pool1d - (CArray.start out__) self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) @@ -25012,16 +16245,12 @@ let quantized_max_pool1d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let quantized_max_pool1d_out ~out self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_quantized_max_pool1d_out - (CArray.start out__) out self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) @@ -25032,16 +16261,12 @@ let quantized_max_pool1d_out ~out self ~kernel_size ~stride ~padding ~dilation ~ (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let quantized_max_pool2d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_quantized_max_pool2d - (CArray.start out__) self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) @@ -25051,16 +16276,12 @@ let quantized_max_pool2d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let quantized_max_pool2d_out ~out self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_quantized_max_pool2d_out - (CArray.start out__) out self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) @@ -25071,16 +16292,12 @@ let quantized_max_pool2d_out ~out self ~kernel_size ~stride ~padding ~dilation ~ (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let quantized_max_pool3d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_quantized_max_pool3d - (CArray.start out__) self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) @@ -25090,16 +16307,12 @@ let quantized_max_pool3d self ~kernel_size ~stride ~padding ~dilation ~ceil_mode (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise 
C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let quantized_max_pool3d_out ~out self ~kernel_size ~stride ~padding ~dilation ~ceil_mode = - let out__ = CArray.make t 1 in stubs_quantized_max_pool3d_out - (CArray.start out__) out self (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) @@ -25110,10 +16323,8 @@ let quantized_max_pool3d_out ~out self ~kernel_size ~stride ~padding ~dilation ~ (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) (List.length dilation) - (if ceil_mode then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if ceil_mode then 1 else 0) + |> with_tensor_gc ;; let quantized_rnn_relu_cell @@ -25132,9 +16343,7 @@ let quantized_rnn_relu_cell ~zero_point_ih ~zero_point_hh = - let out__ = CArray.make t 1 in stubs_quantized_rnn_relu_cell - (CArray.start out__) input hx w_ih @@ -25148,10 +16357,8 @@ let quantized_rnn_relu_cell scale_ih scale_hh zero_point_ih - zero_point_hh; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + zero_point_hh + |> with_tensor_gc ;; let quantized_rnn_tanh_cell @@ -25170,9 +16377,7 @@ let quantized_rnn_tanh_cell ~zero_point_ih ~zero_point_hh = - let out__ = CArray.make t 1 in stubs_quantized_rnn_tanh_cell - (CArray.start out__) input hx w_ih @@ -25186,235 +16391,117 @@ let quantized_rnn_tanh_cell scale_ih scale_hh zero_point_ih - zero_point_hh; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let rad2deg self = - let out__ = CArray.make t 1 in - stubs_rad2deg (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let rad2deg_ self = - let out__ = CArray.make t 1 in - stubs_rad2deg_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + zero_point_hh + |> with_tensor_gc ;; -let rad2deg_out ~out self = - let out__ = CArray.make t 1 in - stubs_rad2deg_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let rad2deg self = stubs_rad2deg self |> with_tensor_gc +let rad2deg_ self = stubs_rad2deg_ self |> with_tensor_gc +let rad2deg_out ~out self = stubs_rad2deg_out out self |> with_tensor_gc let rand ~size ~options = - let out__ = CArray.make t 1 in stubs_rand - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let rand_like self = - let out__ = CArray.make t 1 in - stubs_rand_like (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; -let rand_like_out ~out self = - let out__ = CArray.make t 1 in - stubs_rand_like_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let rand_like self = stubs_rand_like self |> with_tensor_gc +let rand_like_out ~out self = stubs_rand_like_out out self |> with_tensor_gc let rand_out ~out ~size = - let out__ = CArray.make t 1 in stubs_rand_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let randint ~high ~size ~options = - let out__ = CArray.make t 1 in 
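(* Old single-result pattern: allocate a one-element out-array, let the C stub
   write into it, then fetch index 0 and attach a finalizer by hand. The
   replacement code calls the stub directly and pipes the result through
   [with_tensor_gc]. *)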
stubs_randint - (CArray.start out__) (Int64.of_int high) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let randint_like self ~high = - let out__ = CArray.make t 1 in - stubs_randint_like (CArray.start out__) self (Int64.of_int high); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_randint_like self (Int64.of_int high) |> with_tensor_gc ;; let randint_like_low_dtype self ~low ~high = - let out__ = CArray.make t 1 in - stubs_randint_like_low_dtype - (CArray.start out__) - self - (Int64.of_int low) - (Int64.of_int high); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_randint_like_low_dtype self (Int64.of_int low) (Int64.of_int high) + |> with_tensor_gc ;; let randint_like_low_dtype_out ~out self ~low ~high = - let out__ = CArray.make t 1 in - stubs_randint_like_low_dtype_out - (CArray.start out__) - out - self - (Int64.of_int low) - (Int64.of_int high); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_randint_like_low_dtype_out out self (Int64.of_int low) (Int64.of_int high) + |> with_tensor_gc ;; let randint_like_out ~out self ~high = - let out__ = CArray.make t 1 in - stubs_randint_like_out (CArray.start out__) out self (Int64.of_int high); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_randint_like_out out self (Int64.of_int high) |> with_tensor_gc ;; let randint_low ~low ~high ~size ~options = - let out__ = CArray.make t 1 in stubs_randint_low - (CArray.start out__) (Int64.of_int low) (Int64.of_int high) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let randint_low_out ~out ~low ~high ~size = - let out__ = CArray.make t 1 in stubs_randint_low_out - (CArray.start out__) out (Int64.of_int low) (Int64.of_int high) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let randint_out ~out ~high ~size = - let out__ = CArray.make t 1 in stubs_randint_out - (CArray.start out__) out (Int64.of_int high) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let randn ~size ~options = - let out__ = CArray.make t 1 in stubs_randn - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let randn_like self = - let out__ = CArray.make t 1 in - stubs_randn_like (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; -let randn_like_out ~out self = - let out__ = CArray.make t 1 in - stubs_randn_like_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let randn_like self = 
stubs_randn_like self |> with_tensor_gc +let randn_like_out ~out self = stubs_randn_like_out out self |> with_tensor_gc let randn_out ~out ~size = - let out__ = CArray.make t 1 in stubs_randn_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let random self = - let out__ = CArray.make t 1 in - stubs_random (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; -let random_ self = - let out__ = CArray.make t 1 in - stubs_random_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let random self = stubs_random self |> with_tensor_gc +let random_ self = stubs_random_ self |> with_tensor_gc let random_from self ~from ~to_ = - let out__ = CArray.make t 1 in stubs_random_from - (CArray.start out__) self (Int64.of_int from) (match to_ with @@ -25422,16 +16509,12 @@ let random_from self ~from ~to_ = | Some v -> Int64.of_int v) (match to_ with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let random_from_ self ~from ~to_ = - let out__ = CArray.make t 1 in stubs_random_from_ - (CArray.start out__) self (Int64.of_int from) (match to_ with @@ -25439,16 +16522,12 @@ let random_from_ self ~from ~to_ = | Some v -> Int64.of_int v) (match to_ with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let random_from_out ~out self ~from ~to_ = - let out__ = CArray.make t 1 in stubs_random_from_out - (CArray.start out__) out self (Int64.of_int from) @@ -25457,462 +16536,220 @@ let random_from_out ~out self ~from ~to_ = | Some v -> Int64.of_int v) (match to_ with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let random_out ~out self = - let out__ = CArray.make t 1 in - stubs_random_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let random_to self ~to_ = - let out__ = CArray.make t 1 in - stubs_random_to (CArray.start out__) self (Int64.of_int to_); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; -let random_to_ self ~to_ = - let out__ = CArray.make t 1 in - stubs_random_to_ (CArray.start out__) self (Int64.of_int to_); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let random_out ~out self = stubs_random_out out self |> with_tensor_gc +let random_to self ~to_ = stubs_random_to self (Int64.of_int to_) |> with_tensor_gc +let random_to_ self ~to_ = stubs_random_to_ self (Int64.of_int to_) |> with_tensor_gc let random_to_out ~out self ~to_ = - let out__ = CArray.make t 1 in - stubs_random_to_out (CArray.start out__) out self (Int64.of_int to_); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_random_to_out out self (Int64.of_int to_) |> with_tensor_gc ;; let randperm ~n ~options = - let out__ = CArray.make t 1 in stubs_randperm - (CArray.start out__) (Int64.of_int n) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; -let randperm_out ~out ~n = - let out__ = CArray.make t 1 in - 
stubs_randperm_out (CArray.start out__) out (Int64.of_int n); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let randperm_out ~out ~n = stubs_randperm_out out (Int64.of_int n) |> with_tensor_gc let range ~start ~end_ ~options = - let out__ = CArray.make t 1 in - stubs_range - (CArray.start out__) - start - end_ - (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let range_out ~out ~start ~end_ = - let out__ = CArray.make t 1 in - stubs_range_out (CArray.start out__) out start end_; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_range start end_ (Kind.packed_to_int (fst options)) (Device.to_int (snd options)) + |> with_tensor_gc ;; -let range_out_ ~out ~start ~end_ = - let out__ = CArray.make t 1 in - stubs_range_out_ (CArray.start out__) out start end_; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let range_out ~out ~start ~end_ = stubs_range_out out start end_ |> with_tensor_gc +let range_out_ ~out ~start ~end_ = stubs_range_out_ out start end_ |> with_tensor_gc let range_step ~start ~end_ ~options = - let out__ = CArray.make t 1 in stubs_range_step - (CArray.start out__) start end_ (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let ravel self = - let out__ = CArray.make t 1 in - stubs_ravel (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let real self = - let out__ = CArray.make t 1 in - stubs_real (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let reciprocal self = - let out__ = CArray.make t 1 in - stubs_reciprocal (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let reciprocal_ self = - let out__ = CArray.make t 1 in - stubs_reciprocal_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; -let reciprocal_out ~out self = - let out__ = CArray.make t 1 in - stubs_reciprocal_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let ravel self = stubs_ravel self |> with_tensor_gc +let real self = stubs_real self |> with_tensor_gc +let reciprocal self = stubs_reciprocal self |> with_tensor_gc +let reciprocal_ self = stubs_reciprocal_ self |> with_tensor_gc +let reciprocal_out ~out self = stubs_reciprocal_out out self |> with_tensor_gc let reflection_pad1d self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad1d - (CArray.start out__) self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad1d_backward ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad1d_backward - (CArray.start out__) grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad1d_backward_grad_input ~grad_input ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad1d_backward_grad_input - 
(CArray.start out__) grad_input grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad1d_out ~out self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad1d_out - (CArray.start out__) out self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad2d self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad2d - (CArray.start out__) self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad2d_backward ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad2d_backward - (CArray.start out__) grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad2d_backward_grad_input ~grad_input ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad2d_backward_grad_input - (CArray.start out__) grad_input grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad2d_out ~out self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad2d_out - (CArray.start out__) out self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad3d self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad3d - (CArray.start out__) self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad3d_backward ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad3d_backward - (CArray.start out__) grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad3d_backward_grad_input ~grad_input ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad3d_backward_grad_input - (CArray.start out__) grad_input grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let reflection_pad3d_out ~out self ~padding = - let out__ = CArray.make t 1 in stubs_reflection_pad3d_out - (CArray.start out__) out self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free 
t0; - t0 -;; - -let relu self = - let out__ = CArray.make t 1 in - stubs_relu (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let relu6 self = - let out__ = CArray.make t 1 in - stubs_relu6 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let relu6_ self = - let out__ = CArray.make t 1 in - stubs_relu6_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let relu_ self = - let out__ = CArray.make t 1 in - stubs_relu_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let relu_out ~out self = - let out__ = CArray.make t 1 in - stubs_relu_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let remainder self other = - let out__ = CArray.make t 1 in - stubs_remainder (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; -let remainder_ self other = - let out__ = CArray.make t 1 in - stubs_remainder_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let relu self = stubs_relu self |> with_tensor_gc +let relu6 self = stubs_relu6 self |> with_tensor_gc +let relu6_ self = stubs_relu6_ self |> with_tensor_gc +let relu_ self = stubs_relu_ self |> with_tensor_gc +let relu_out ~out self = stubs_relu_out out self |> with_tensor_gc +let remainder self other = stubs_remainder self other |> with_tensor_gc +let remainder_ self other = stubs_remainder_ self other |> with_tensor_gc let remainder_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_remainder_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_remainder_scalar_out out self other |> with_tensor_gc ;; let remainder_scalar_tensor self other = - let out__ = CArray.make t 1 in - stubs_remainder_scalar_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_remainder_scalar_tensor self other |> with_tensor_gc ;; let remainder_scalar_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_remainder_scalar_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let remainder_tensor self other = - let out__ = CArray.make t 1 in - stubs_remainder_tensor (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_remainder_scalar_tensor_out out self other |> with_tensor_gc ;; -let remainder_tensor_ self other = - let out__ = CArray.make t 1 in - stubs_remainder_tensor_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let remainder_tensor self other = stubs_remainder_tensor self other |> with_tensor_gc +let remainder_tensor_ self other = stubs_remainder_tensor_ self other |> with_tensor_gc let remainder_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_remainder_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_remainder_tensor_out out self other |> with_tensor_gc ;; let renorm self ~p ~dim ~maxnorm = - let out__ = CArray.make t 1 in - stubs_renorm (CArray.start out__) self p (Int64.of_int dim) 
maxnorm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_renorm self p (Int64.of_int dim) maxnorm |> with_tensor_gc ;; let renorm_ self ~p ~dim ~maxnorm = - let out__ = CArray.make t 1 in - stubs_renorm_ (CArray.start out__) self p (Int64.of_int dim) maxnorm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_renorm_ self p (Int64.of_int dim) maxnorm |> with_tensor_gc ;; let renorm_out ~out self ~p ~dim ~maxnorm = - let out__ = CArray.make t 1 in - stubs_renorm_out (CArray.start out__) out self p (Int64.of_int dim) maxnorm; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_renorm_out out self p (Int64.of_int dim) maxnorm |> with_tensor_gc ;; let repeat self ~repeats = - let out__ = CArray.make t 1 in stubs_repeat - (CArray.start out__) self (List.map Int64.of_int repeats |> CArray.of_list int64_t |> CArray.start) - (List.length repeats); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length repeats) + |> with_tensor_gc ;; let repeat_interleave ~repeats ~output_size = - let out__ = CArray.make t 1 in stubs_repeat_interleave - (CArray.start out__) repeats (match output_size with | None -> Int64.zero | Some v -> Int64.of_int v) (match output_size with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let repeat_interleave_self_int self ~repeats ~dim ~output_size = - let out__ = CArray.make t 1 in stubs_repeat_interleave_self_int - (CArray.start out__) self (Int64.of_int repeats) (match dim with @@ -25926,16 +16763,12 @@ let repeat_interleave_self_int self ~repeats ~dim ~output_size = | Some v -> Int64.of_int v) (match output_size with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let repeat_interleave_self_tensor self ~repeats ~dim ~output_size = - let out__ = CArray.make t 1 in stubs_repeat_interleave_self_tensor - (CArray.start out__) self repeats (match dim with @@ -25949,16 +16782,12 @@ let repeat_interleave_self_tensor self ~repeats ~dim ~output_size = | Some v -> Int64.of_int v) (match output_size with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let repeat_interleave_tensor_out ~out ~repeats ~output_size = - let out__ = CArray.make t 1 in stubs_repeat_interleave_tensor_out - (CArray.start out__) out repeats (match output_size with @@ -25966,310 +16795,187 @@ let repeat_interleave_tensor_out ~out ~repeats ~output_size = | Some v -> Int64.of_int v) (match output_size with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let repeat_out ~out self ~repeats = - let out__ = CArray.make t 1 in stubs_repeat_out - (CArray.start out__) out self (List.map Int64.of_int repeats |> CArray.of_list int64_t |> CArray.start) - (List.length repeats); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length repeats) + |> with_tensor_gc ;; let replication_pad1d self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad1d - (CArray.start out__) self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad1d_backward 
~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad1d_backward - (CArray.start out__) grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad1d_backward_grad_input ~grad_input ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad1d_backward_grad_input - (CArray.start out__) grad_input grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad1d_out ~out self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad1d_out - (CArray.start out__) out self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad2d self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad2d - (CArray.start out__) self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad2d_backward ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad2d_backward - (CArray.start out__) grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad2d_backward_grad_input ~grad_input ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad2d_backward_grad_input - (CArray.start out__) grad_input grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad2d_out ~out self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad2d_out - (CArray.start out__) out self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad3d self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad3d - (CArray.start out__) self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad3d_backward ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad3d_backward - (CArray.start out__) grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad3d_backward_grad_input ~grad_input ~grad_output self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad3d_backward_grad_input - (CArray.start out__) grad_input 
grad_output self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let replication_pad3d_out ~out self ~padding = - let out__ = CArray.make t 1 in stubs_replication_pad3d_out - (CArray.start out__) out self (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let requires_grad_ self ~requires_grad = - let out__ = CArray.make t 1 in - stubs_requires_grad_ (CArray.start out__) self (if requires_grad then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_requires_grad_ self (if requires_grad then 1 else 0) |> with_tensor_gc ;; let reshape self ~shape = - let out__ = CArray.make t 1 in stubs_reshape - (CArray.start out__) self (List.map Int64.of_int shape |> CArray.of_list int64_t |> CArray.start) - (List.length shape); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length shape) + |> with_tensor_gc ;; -let reshape_as self other = - let out__ = CArray.make t 1 in - stubs_reshape_as (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let reshape_as self other = stubs_reshape_as self other |> with_tensor_gc let resize self ~size = - let out__ = CArray.make t 1 in stubs_resize - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let resize_ self ~size = - let out__ = CArray.make t 1 in stubs_resize_ - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let resize_as self ~the_template = - let out__ = CArray.make t 1 in - stubs_resize_as (CArray.start out__) self the_template; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; -let resize_as_ self ~the_template = - let out__ = CArray.make t 1 in - stubs_resize_as_ (CArray.start out__) self the_template; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let resize_as self ~the_template = stubs_resize_as self the_template |> with_tensor_gc +let resize_as_ self ~the_template = stubs_resize_as_ self the_template |> with_tensor_gc let resize_as_out ~out self ~the_template = - let out__ = CArray.make t 1 in - stubs_resize_as_out (CArray.start out__) out self the_template; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_resize_as_out out self the_template |> with_tensor_gc ;; let resize_as_sparse self ~the_template = - let out__ = CArray.make t 1 in - stubs_resize_as_sparse (CArray.start out__) self the_template; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_resize_as_sparse self the_template |> with_tensor_gc ;; let resize_as_sparse_ self ~the_template = - let out__ = CArray.make t 1 in - stubs_resize_as_sparse_ (CArray.start out__) self the_template; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_resize_as_sparse_ self the_template |> with_tensor_gc ;; let resize_as_sparse_out ~out self ~the_template = - let 
out__ = CArray.make t 1 in - stubs_resize_as_sparse_out (CArray.start out__) out self the_template; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_resize_as_sparse_out out self the_template |> with_tensor_gc ;; let resize_out ~out self ~size = - let out__ = CArray.make t 1 in stubs_resize_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let resolve_conj self = - let out__ = CArray.make t 1 in - stubs_resolve_conj (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let resolve_neg self = - let out__ = CArray.make t 1 in - stubs_resolve_neg (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; +let resolve_conj self = stubs_resolve_conj self |> with_tensor_gc +let resolve_neg self = stubs_resolve_neg self |> with_tensor_gc let retains_grad self = stubs_retains_grad self let rnn_relu @@ -26283,12 +16989,12 @@ let rnn_relu ~bidirectional ~batch_first = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_rnn_relu (CArray.start out__) input hx - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) @@ -26296,30 +17002,24 @@ let rnn_relu (if train then 1 else 0) (if bidirectional then 1 else 0) (if batch_first then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let rnn_relu_cell input ~hx ~w_ih ~w_hh ~b_ih ~b_hh = - let out__ = CArray.make t 1 in stubs_rnn_relu_cell - (CArray.start out__) input hx w_ih w_hh (match b_ih with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match b_hh with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let rnn_relu_data @@ -26333,23 +17033,21 @@ let rnn_relu_data ~train ~bidirectional = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_rnn_relu_data (CArray.start out__) data batch_sizes hx - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) dropout (if train then 1 else 0) (if bidirectional then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; @@ -26364,12 +17062,12 @@ let rnn_tanh ~bidirectional ~batch_first = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_rnn_tanh (CArray.start out__) input hx - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) @@ -26377,30 +17075,24 @@ let rnn_tanh (if train then 1 else 0) (if bidirectional then 1 else 0) (if batch_first then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - 
Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let rnn_tanh_cell input ~hx ~w_ih ~w_hh ~b_ih ~b_hh = - let out__ = CArray.make t 1 in stubs_rnn_tanh_cell - (CArray.start out__) input hx w_ih w_hh (match b_ih with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match b_hh with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let rnn_tanh_data @@ -26414,207 +17106,109 @@ let rnn_tanh_data ~train ~bidirectional = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_rnn_tanh_data (CArray.start out__) data batch_sizes hx - (CArray.of_list t params |> CArray.start) + (CArray.of_list gc_tensor params |> CArray.start) (List.length params) (if has_biases then 1 else 0) (Int64.of_int num_layers) dropout (if train then 1 else 0) (if bidirectional then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let roll self ~shifts ~dims = - let out__ = CArray.make t 1 in stubs_roll - (CArray.start out__) self (List.map Int64.of_int shifts |> CArray.of_list int64_t |> CArray.start) (List.length shifts) (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start) - (List.length dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dims) + |> with_tensor_gc ;; let roll_out ~out self ~shifts ~dims = - let out__ = CArray.make t 1 in stubs_roll_out - (CArray.start out__) out self (List.map Int64.of_int shifts |> CArray.of_list int64_t |> CArray.start) (List.length shifts) (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start) - (List.length dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dims) + |> with_tensor_gc ;; let rot90 self ~k ~dims = - let out__ = CArray.make t 1 in stubs_rot90 - (CArray.start out__) self (Int64.of_int k) (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start) - (List.length dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dims) + |> with_tensor_gc ;; let rot90_out ~out self ~k ~dims = - let out__ = CArray.make t 1 in stubs_rot90_out - (CArray.start out__) out self (Int64.of_int k) (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start) - (List.length dims); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let round self = - let out__ = CArray.make t 1 in - stubs_round (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dims) + |> with_tensor_gc ;; -let round_ self = - let out__ = CArray.make t 1 in - stubs_round_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let round self = stubs_round self |> with_tensor_gc +let round_ self = stubs_round_ self |> with_tensor_gc let round_decimals self ~decimals = - let out__ = CArray.make t 1 in - stubs_round_decimals (CArray.start out__) self (Int64.of_int decimals); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_round_decimals self (Int64.of_int decimals) |> with_tensor_gc ;; let round_decimals_ self ~decimals = - let out__ = CArray.make t 1 in - stubs_round_decimals_ 
(CArray.start out__) self (Int64.of_int decimals); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_round_decimals_ self (Int64.of_int decimals) |> with_tensor_gc ;; let round_decimals_out ~out self ~decimals = - let out__ = CArray.make t 1 in - stubs_round_decimals_out (CArray.start out__) out self (Int64.of_int decimals); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let round_out ~out self = - let out__ = CArray.make t 1 in - stubs_round_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let row_indices self = - let out__ = CArray.make t 1 in - stubs_row_indices (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let row_indices_copy self = - let out__ = CArray.make t 1 in - stubs_row_indices_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_round_decimals_out out self (Int64.of_int decimals) |> with_tensor_gc ;; -let row_indices_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs_row_indices_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let round_out ~out self = stubs_round_out out self |> with_tensor_gc +let row_indices self = stubs_row_indices self |> with_tensor_gc +let row_indices_copy self = stubs_row_indices_copy self |> with_tensor_gc +let row_indices_copy_out ~out self = stubs_row_indices_copy_out out self |> with_tensor_gc let row_stack tensors = - let out__ = CArray.make t 1 in - stubs_row_stack - (CArray.start out__) - (CArray.of_list t tensors |> CArray.start) - (List.length tensors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_row_stack (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) + |> with_tensor_gc ;; let row_stack_out ~out tensors = - let out__ = CArray.make t 1 in stubs_row_stack_out - (CArray.start out__) out - (CArray.of_list t tensors |> CArray.start) - (List.length tensors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (CArray.of_list gc_tensor tensors |> CArray.start) + (List.length tensors) + |> with_tensor_gc ;; -let rrelu self ~training = - let out__ = CArray.make t 1 in - stubs_rrelu (CArray.start out__) self (if training then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let rrelu self ~training = stubs_rrelu self (if training then 1 else 0) |> with_tensor_gc let rrelu_ self ~training = - let out__ = CArray.make t 1 in - stubs_rrelu_ (CArray.start out__) self (if training then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_rrelu_ self (if training then 1 else 0) |> with_tensor_gc ;; let rrelu_with_noise self ~noise ~training = - let out__ = CArray.make t 1 in - stubs_rrelu_with_noise (CArray.start out__) self noise (if training then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_rrelu_with_noise self noise (if training then 1 else 0) |> with_tensor_gc ;; let rrelu_with_noise_ self ~noise ~training = - let out__ = CArray.make t 1 in - stubs_rrelu_with_noise_ (CArray.start out__) self noise (if training then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_rrelu_with_noise_ self noise (if training then 1 else 0) |> with_tensor_gc ;; let rrelu_with_noise_backward @@ -26626,19 +17220,15 @@ let 
rrelu_with_noise_backward ~training ~self_is_result = - let out__ = CArray.make t 1 in stubs_rrelu_with_noise_backward - (CArray.start out__) grad_output self noise lower upper (if training then 1 else 0) - (if self_is_result then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if self_is_result then 1 else 0) + |> with_tensor_gc ;; let rrelu_with_noise_backward_out @@ -26651,9 +17241,7 @@ let rrelu_with_noise_backward_out ~training ~self_is_result = - let out__ = CArray.make t 1 in stubs_rrelu_with_noise_backward_out - (CArray.start out__) out grad_output self @@ -26661,100 +17249,34 @@ let rrelu_with_noise_backward_out lower upper (if training then 1 else 0) - (if self_is_result then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if self_is_result then 1 else 0) + |> with_tensor_gc ;; let rrelu_with_noise_out ~out self ~noise ~training = - let out__ = CArray.make t 1 in - stubs_rrelu_with_noise_out - (CArray.start out__) - out - self - noise - (if training then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let rsqrt self = - let out__ = CArray.make t 1 in - stubs_rsqrt (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let rsqrt_ self = - let out__ = CArray.make t 1 in - stubs_rsqrt_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let rsqrt_out ~out self = - let out__ = CArray.make t 1 in - stubs_rsqrt_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let rsub self other = - let out__ = CArray.make t 1 in - stubs_rsub (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_rrelu_with_noise_out out self noise (if training then 1 else 0) |> with_tensor_gc ;; -let rsub_scalar self other = - let out__ = CArray.make t 1 in - stubs_rsub_scalar (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let rsqrt self = stubs_rsqrt self |> with_tensor_gc +let rsqrt_ self = stubs_rsqrt_ self |> with_tensor_gc +let rsqrt_out ~out self = stubs_rsqrt_out out self |> with_tensor_gc +let rsub self other = stubs_rsub self other |> with_tensor_gc +let rsub_scalar self other = stubs_rsub_scalar self other |> with_tensor_gc let rsub_scalar_out ~out self other = - let out__ = CArray.make t 1 in - stubs_rsub_scalar_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_rsub_scalar_out out self other |> with_tensor_gc ;; let rsub_tensor_out ~out self other = - let out__ = CArray.make t 1 in - stubs_rsub_tensor_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_rsub_tensor_out out self other |> with_tensor_gc ;; let scalar_tensor ~s ~options = - let out__ = CArray.make t 1 in - stubs_scalar_tensor - (CArray.start out__) - s - (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scalar_tensor s (Kind.packed_to_int (fst options)) (Device.to_int (snd options)) + |> with_tensor_gc ;; -let scalar_tensor_out ~out ~s = - let out__ = CArray.make t 1 in - stubs_scalar_tensor_out (CArray.start out__) out s; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let scalar_tensor_out ~out ~s = 
stubs_scalar_tensor_out out s |> with_tensor_gc let scaled_dot_product_attention ~query @@ -26765,176 +17287,85 @@ let scaled_dot_product_attention ~is_causal ~scale = - let out__ = CArray.make t 1 in stubs_scaled_dot_product_attention - (CArray.start out__) query key value (match attn_mask with | Some v -> v - | None -> null) + | None -> none_gc_tensor) dropout_p (if is_causal then 1 else 0) (Option.value scale ~default:0.0) (match scale with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let scatter self ~dim ~index ~src = - let out__ = CArray.make t 1 in - stubs_scatter (CArray.start out__) self (Int64.of_int dim) index src; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter self (Int64.of_int dim) index src |> with_tensor_gc ;; let scatter_ self ~dim ~index ~src = - let out__ = CArray.make t 1 in - stubs_scatter_ (CArray.start out__) self (Int64.of_int dim) index src; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_ self (Int64.of_int dim) index src |> with_tensor_gc ;; let scatter_add self ~dim ~index ~src = - let out__ = CArray.make t 1 in - stubs_scatter_add (CArray.start out__) self (Int64.of_int dim) index src; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_add self (Int64.of_int dim) index src |> with_tensor_gc ;; let scatter_add_ self ~dim ~index ~src = - let out__ = CArray.make t 1 in - stubs_scatter_add_ (CArray.start out__) self (Int64.of_int dim) index src; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_add_ self (Int64.of_int dim) index src |> with_tensor_gc ;; let scatter_add_out ~out self ~dim ~index ~src = - let out__ = CArray.make t 1 in - stubs_scatter_add_out (CArray.start out__) out self (Int64.of_int dim) index src; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_add_out out self (Int64.of_int dim) index src |> with_tensor_gc ;; let scatter_reduce self ~dim ~index ~src ~reduce = - let out__ = CArray.make t 1 in - stubs_scatter_reduce (CArray.start out__) self (Int64.of_int dim) index src reduce; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_reduce self (Int64.of_int dim) index src reduce |> with_tensor_gc ;; let scatter_reduce_ self ~dim ~index ~src ~reduce = - let out__ = CArray.make t 1 in - stubs_scatter_reduce_ (CArray.start out__) self (Int64.of_int dim) index src reduce; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_reduce_ self (Int64.of_int dim) index src reduce |> with_tensor_gc ;; let scatter_reduce_out ~out self ~dim ~index ~src ~reduce = - let out__ = CArray.make t 1 in - stubs_scatter_reduce_out - (CArray.start out__) - out - self - (Int64.of_int dim) - index - src - reduce; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_reduce_out out self (Int64.of_int dim) index src reduce |> with_tensor_gc ;; let scatter_src_out ~out self ~dim ~index ~src = - let out__ = CArray.make t 1 in - stubs_scatter_src_out (CArray.start out__) out self (Int64.of_int dim) index src; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_src_out out self (Int64.of_int dim) index src |> with_tensor_gc ;; let scatter_value self ~dim ~index ~value = - let out__ = CArray.make t 1 in - stubs_scatter_value (CArray.start out__) self (Int64.of_int dim) index 
value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_value self (Int64.of_int dim) index value |> with_tensor_gc ;; let scatter_value_ self ~dim ~index ~value = - let out__ = CArray.make t 1 in - stubs_scatter_value_ (CArray.start out__) self (Int64.of_int dim) index value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_value_ self (Int64.of_int dim) index value |> with_tensor_gc ;; let scatter_value_out ~out self ~dim ~index ~value = - let out__ = CArray.make t 1 in - stubs_scatter_value_out (CArray.start out__) out self (Int64.of_int dim) index value; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_value_out out self (Int64.of_int dim) index value |> with_tensor_gc ;; let scatter_value_reduce self ~dim ~index ~value ~reduce = - let out__ = CArray.make t 1 in - stubs_scatter_value_reduce - (CArray.start out__) - self - (Int64.of_int dim) - index - value - reduce; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_value_reduce self (Int64.of_int dim) index value reduce |> with_tensor_gc ;; let scatter_value_reduce_ self ~dim ~index ~value ~reduce = - let out__ = CArray.make t 1 in - stubs_scatter_value_reduce_ - (CArray.start out__) - self - (Int64.of_int dim) - index - value - reduce; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_value_reduce_ self (Int64.of_int dim) index value reduce |> with_tensor_gc ;; let scatter_value_reduce_out ~out self ~dim ~index ~value ~reduce = - let out__ = CArray.make t 1 in - stubs_scatter_value_reduce_out - (CArray.start out__) - out - self - (Int64.of_int dim) - index - value - reduce; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_scatter_value_reduce_out out self (Int64.of_int dim) index value reduce + |> with_tensor_gc ;; let searchsorted ~sorted_sequence self ~out_int32 ~right ~side ~sorter = - let out__ = CArray.make t 1 in stubs_searchsorted - (CArray.start out__) sorted_sequence self (if out_int32 then 1 else 0) @@ -26942,16 +17373,12 @@ let searchsorted ~sorted_sequence self ~out_int32 ~right ~side ~sorter = side (match sorter with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let searchsorted_scalar ~sorted_sequence self ~out_int32 ~right ~side ~sorter = - let out__ = CArray.make t 1 in stubs_searchsorted_scalar - (CArray.start out__) sorted_sequence self (if out_int32 then 1 else 0) @@ -26959,16 +17386,12 @@ let searchsorted_scalar ~sorted_sequence self ~out_int32 ~right ~side ~sorter = side (match sorter with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let searchsorted_scalar_out ~out ~sorted_sequence self ~out_int32 ~right ~side ~sorter = - let out__ = CArray.make t 1 in stubs_searchsorted_scalar_out - (CArray.start out__) out sorted_sequence self @@ -26977,16 +17400,12 @@ let searchsorted_scalar_out ~out ~sorted_sequence self ~out_int32 ~right ~side ~ side (match sorter with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let searchsorted_tensor_out ~out ~sorted_sequence self ~out_int32 ~right ~side ~sorter = - let out__ = CArray.make t 1 in stubs_searchsorted_tensor_out - (CArray.start out__) out 
sorted_sequence self @@ -26995,33 +17414,27 @@ let searchsorted_tensor_out ~out ~sorted_sequence self ~out_int32 ~right ~side ~ side (match sorter with | Some v -> v - | None -> null); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> none_gc_tensor) + |> with_tensor_gc ;; let segment_reduce ~data ~reduce ~lengths ~indices ~offsets ~axis ~unsafe ~initial = - let out__ = CArray.make t 1 in stubs_segment_reduce - (CArray.start out__) data reduce (match lengths with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match indices with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match offsets with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int axis) (if unsafe then 1 else 0) - initial; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + initial + |> with_tensor_gc ;; let segment_reduce_out @@ -27035,424 +17448,145 @@ let segment_reduce_out ~unsafe ~initial = - let out__ = CArray.make t 1 in stubs_segment_reduce_out - (CArray.start out__) out data reduce (match lengths with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match indices with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (match offsets with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (Int64.of_int axis) (if unsafe then 1 else 0) - initial; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + initial + |> with_tensor_gc ;; let select self ~dim ~index = - let out__ = CArray.make t 1 in - stubs_select (CArray.start out__) self (Int64.of_int dim) (Int64.of_int index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_select self (Int64.of_int dim) (Int64.of_int index) |> with_tensor_gc ;; let select_backward ~grad_output ~input_sizes ~dim ~index = - let out__ = CArray.make t 1 in stubs_select_backward - (CArray.start out__) grad_output (List.map Int64.of_int input_sizes |> CArray.of_list int64_t |> CArray.start) (List.length input_sizes) (Int64.of_int dim) - (Int64.of_int index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int index) + |> with_tensor_gc ;; let select_backward_out ~out ~grad_output ~input_sizes ~dim ~index = - let out__ = CArray.make t 1 in stubs_select_backward_out - (CArray.start out__) out grad_output (List.map Int64.of_int input_sizes |> CArray.of_list int64_t |> CArray.start) (List.length input_sizes) (Int64.of_int dim) - (Int64.of_int index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int index) + |> with_tensor_gc ;; let select_copy self ~dim ~index = - let out__ = CArray.make t 1 in - stubs_select_copy (CArray.start out__) self (Int64.of_int dim) (Int64.of_int index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_select_copy self (Int64.of_int dim) (Int64.of_int index) |> with_tensor_gc ;; let select_copy_int_out ~out self ~dim ~index = - let out__ = CArray.make t 1 in - stubs_select_copy_int_out - (CArray.start out__) - out - self - (Int64.of_int dim) - (Int64.of_int index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_select_copy_int_out out self (Int64.of_int dim) (Int64.of_int index) + |> with_tensor_gc ;; let select_scatter self ~src ~dim ~index = - let out__ = CArray.make t 1 in - stubs_select_scatter - (CArray.start out__) - self - src - (Int64.of_int dim) - (Int64.of_int index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_select_scatter 
self src (Int64.of_int dim) (Int64.of_int index) |> with_tensor_gc ;; let select_scatter_out ~out self ~src ~dim ~index = - let out__ = CArray.make t 1 in - stubs_select_scatter_out - (CArray.start out__) - out - self - src - (Int64.of_int dim) - (Int64.of_int index); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let selu self = - let out__ = CArray.make t 1 in - stubs_selu (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_select_scatter_out out self src (Int64.of_int dim) (Int64.of_int index) + |> with_tensor_gc ;; -let selu_ self = - let out__ = CArray.make t 1 in - stubs_selu_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let set self = - let out__ = CArray.make t 1 in - stubs_set (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let set_ self = - let out__ = CArray.make t 1 in - stubs_set_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let set_out ~out self = - let out__ = CArray.make t 1 in - stubs_set_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let selu self = stubs_selu self |> with_tensor_gc +let selu_ self = stubs_selu_ self |> with_tensor_gc +let set self = stubs_set self |> with_tensor_gc +let set_ self = stubs_set_ self |> with_tensor_gc +let set_out ~out self = stubs_set_out out self |> with_tensor_gc let set_requires_grad self ~r = - let out__ = CArray.make t 1 in - stubs_set_requires_grad (CArray.start out__) self (if r then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_set_requires_grad self (if r then 1 else 0) |> with_tensor_gc ;; -let set_source_tensor self ~source = - let out__ = CArray.make t 1 in - stubs_set_source_tensor (CArray.start out__) self source; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let set_source_tensor self ~source = stubs_set_source_tensor self source |> with_tensor_gc let set_source_tensor_ self ~source = - let out__ = CArray.make t 1 in - stubs_set_source_tensor_ (CArray.start out__) self source; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_set_source_tensor_ self source |> with_tensor_gc ;; let set_source_tensor_out ~out self ~source = - let out__ = CArray.make t 1 in - stubs_set_source_tensor_out (CArray.start out__) out self source; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_set_source_tensor_out out self source |> with_tensor_gc ;; let set_source_tensor_storage_offset_ self ~source ~storage_offset ~size ~stride = - let out__ = CArray.make t 1 in stubs_set_source_tensor_storage_offset_ - (CArray.start out__) self source (Int64.of_int storage_offset) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) - (List.length stride); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sgn self = - let out__ = CArray.make t 1 in - stubs_sgn (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sgn_ self = - let out__ = CArray.make t 1 in - stubs_sgn_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sgn_out ~out self = - let out__ = CArray.make t 1 in - 
stubs_sgn_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sigmoid self = - let out__ = CArray.make t 1 in - stubs_sigmoid (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length stride) + |> with_tensor_gc ;; -let sigmoid_ self = - let out__ = CArray.make t 1 in - stubs_sigmoid_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let sgn self = stubs_sgn self |> with_tensor_gc +let sgn_ self = stubs_sgn_ self |> with_tensor_gc +let sgn_out ~out self = stubs_sgn_out out self |> with_tensor_gc +let sigmoid self = stubs_sigmoid self |> with_tensor_gc +let sigmoid_ self = stubs_sigmoid_ self |> with_tensor_gc let sigmoid_backward ~grad_output ~output = - let out__ = CArray.make t 1 in - stubs_sigmoid_backward (CArray.start out__) grad_output output; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_sigmoid_backward grad_output output |> with_tensor_gc ;; let sigmoid_backward_grad_input ~grad_input ~grad_output ~output = - let out__ = CArray.make t 1 in - stubs_sigmoid_backward_grad_input (CArray.start out__) grad_input grad_output output; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sigmoid_out ~out self = - let out__ = CArray.make t 1 in - stubs_sigmoid_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sign self = - let out__ = CArray.make t 1 in - stubs_sign (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sign_ self = - let out__ = CArray.make t 1 in - stubs_sign_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sign_out ~out self = - let out__ = CArray.make t 1 in - stubs_sign_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let signbit self = - let out__ = CArray.make t 1 in - stubs_signbit (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let signbit_out ~out self = - let out__ = CArray.make t 1 in - stubs_signbit_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let silu self = - let out__ = CArray.make t 1 in - stubs_silu (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_sigmoid_backward_grad_input grad_input grad_output output |> with_tensor_gc ;; -let silu_ self = - let out__ = CArray.make t 1 in - stubs_silu_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let sigmoid_out ~out self = stubs_sigmoid_out out self |> with_tensor_gc +let sign self = stubs_sign self |> with_tensor_gc +let sign_ self = stubs_sign_ self |> with_tensor_gc +let sign_out ~out self = stubs_sign_out out self |> with_tensor_gc +let signbit self = stubs_signbit self |> with_tensor_gc +let signbit_out ~out self = stubs_signbit_out out self |> with_tensor_gc +let silu self = stubs_silu self |> with_tensor_gc +let silu_ self = stubs_silu_ self |> with_tensor_gc let silu_backward ~grad_output self = - let out__ = CArray.make t 1 in - stubs_silu_backward (CArray.start out__) grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_silu_backward grad_output self |> with_tensor_gc ;; 
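(* A minimal sketch, not part of the generated file: the hunks above and below
   all apply one mechanical rewrite, shown here on a hypothetical stub
   [stubs_foo]. The names [raw_tensor], [gc_tensor], [with_tensor_gc],
   [none_gc_tensor] and [CArray] are the ones already used by the surrounding
   generated code; [foo], [foo_pair] and [foo_opt] are illustrative only. *)

(* Old shape: allocate a one-element out-array, let C fill it in, then attach
   a finalizer to the returned tensor by hand. *)
let foo_old self =
  let out__ = CArray.make t 1 in
  stubs_foo (CArray.start out__) self;
  let t0 = CArray.get out__ 0 in
  Gc.finalise C.Tensor.free t0;
  t0
;;

(* New shape: the stub returns a [raw_tensor] directly, and [with_tensor_gc]
   converts it into a [gc_tensor] whose custom block carries the finalizer. *)
let foo self = stubs_foo self |> with_tensor_gc

(* Multi-result stubs (e.g. the rnn_* functions nearby) keep the out-array,
   but its element type becomes [raw_tensor], and each element is converted
   before being surfaced. *)
let foo_pair input =
  let out__ = CArray.make raw_tensor 2 in
  stubs_foo_pair (CArray.start out__) input;
  let t0 = CArray.get out__ 0 |> with_tensor_gc in
  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
;;

(* Optional tensor arguments now default to [none_gc_tensor] rather than
   [null], since only GC tensors are valid FFI inputs; optional scalars keep
   the existing value-plus-null-flag encoding seen throughout these hunks. *)
let foo_opt self ~bias =
  stubs_foo_opt
    self
    (match bias with
     | Some v -> v
     | None -> none_gc_tensor)
  |> with_tensor_gc
;;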
let silu_backward_grad_input ~grad_input ~grad_output self = - let out__ = CArray.make t 1 in - stubs_silu_backward_grad_input (CArray.start out__) grad_input grad_output self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let silu_out ~out self = - let out__ = CArray.make t 1 in - stubs_silu_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sin self = - let out__ = CArray.make t 1 in - stubs_sin (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sin_ self = - let out__ = CArray.make t 1 in - stubs_sin_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sin_out ~out self = - let out__ = CArray.make t 1 in - stubs_sin_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sinc self = - let out__ = CArray.make t 1 in - stubs_sinc (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sinc_ self = - let out__ = CArray.make t 1 in - stubs_sinc_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sinc_out ~out self = - let out__ = CArray.make t 1 in - stubs_sinc_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sinh self = - let out__ = CArray.make t 1 in - stubs_sinh (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sinh_ self = - let out__ = CArray.make t 1 in - stubs_sinh_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let sinh_out ~out self = - let out__ = CArray.make t 1 in - stubs_sinh_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - + stubs_silu_backward_grad_input grad_input grad_output self |> with_tensor_gc +;; + +let silu_out ~out self = stubs_silu_out out self |> with_tensor_gc +let sin self = stubs_sin self |> with_tensor_gc +let sin_ self = stubs_sin_ self |> with_tensor_gc +let sin_out ~out self = stubs_sin_out out self |> with_tensor_gc +let sinc self = stubs_sinc self |> with_tensor_gc +let sinc_ self = stubs_sinc_ self |> with_tensor_gc +let sinc_out ~out self = stubs_sinc_out out self |> with_tensor_gc +let sinh self = stubs_sinh self |> with_tensor_gc +let sinh_ self = stubs_sinh_ self |> with_tensor_gc +let sinh_out ~out self = stubs_sinh_out out self |> with_tensor_gc let size self ~dim = stubs_size self (Int64.of_int dim) let slice self ~dim ~start ~end_ ~step = - let out__ = CArray.make t 1 in stubs_slice - (CArray.start out__) self (Int64.of_int dim) (match start with @@ -27467,32 +17601,24 @@ let slice self ~dim ~start ~end_ ~step = (match end_ with | Some _ -> 0 | None -> 1) - (Int64.of_int step); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int step) + |> with_tensor_gc ;; let slice_backward ~grad_output ~input_sizes ~dim ~start ~end_ ~step = - let out__ = CArray.make t 1 in stubs_slice_backward - (CArray.start out__) grad_output (List.map Int64.of_int input_sizes |> CArray.of_list int64_t |> CArray.start) (List.length input_sizes) (Int64.of_int dim) (Int64.of_int start) (Int64.of_int end_) - (Int64.of_int step); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int step) + |> 
with_tensor_gc ;; let slice_backward_out ~out ~grad_output ~input_sizes ~dim ~start ~end_ ~step = - let out__ = CArray.make t 1 in stubs_slice_backward_out - (CArray.start out__) out grad_output (List.map Int64.of_int input_sizes |> CArray.of_list int64_t |> CArray.start) @@ -27500,16 +17626,12 @@ let slice_backward_out ~out ~grad_output ~input_sizes ~dim ~start ~end_ ~step = (Int64.of_int dim) (Int64.of_int start) (Int64.of_int end_) - (Int64.of_int step); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int step) + |> with_tensor_gc ;; let slice_copy self ~dim ~start ~end_ ~step = - let out__ = CArray.make t 1 in stubs_slice_copy - (CArray.start out__) self (Int64.of_int dim) (match start with @@ -27524,16 +17646,12 @@ let slice_copy self ~dim ~start ~end_ ~step = (match end_ with | Some _ -> 0 | None -> 1) - (Int64.of_int step); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int step) + |> with_tensor_gc ;; let slice_copy_tensor_out ~out self ~dim ~start ~end_ ~step = - let out__ = CArray.make t 1 in stubs_slice_copy_tensor_out - (CArray.start out__) out self (Int64.of_int dim) @@ -27549,16 +17667,12 @@ let slice_copy_tensor_out ~out self ~dim ~start ~end_ ~step = (match end_ with | Some _ -> 0 | None -> 1) - (Int64.of_int step); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int step) + |> with_tensor_gc ;; let slice_scatter self ~src ~dim ~start ~end_ ~step = - let out__ = CArray.make t 1 in stubs_slice_scatter - (CArray.start out__) self src (Int64.of_int dim) @@ -27574,16 +17688,12 @@ let slice_scatter self ~src ~dim ~start ~end_ ~step = (match end_ with | Some _ -> 0 | None -> 1) - (Int64.of_int step); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int step) + |> with_tensor_gc ;; let slice_scatter_out ~out self ~src ~dim ~start ~end_ ~step = - let out__ = CArray.make t 1 in stubs_slice_scatter_out - (CArray.start out__) out self src @@ -27600,56 +17710,44 @@ let slice_scatter_out ~out self ~src ~dim ~start ~end_ ~step = (match end_ with | Some _ -> 0 | None -> 1) - (Int64.of_int step); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int step) + |> with_tensor_gc ;; let slogdet self = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_slogdet (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let slogdet_out ~sign ~logabsdet self = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_slogdet_out (CArray.start out__) sign logabsdet self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let slow_conv3d self ~weight ~kernel_size ~bias ~stride ~padding = - let out__ = CArray.make t 1 in stubs_slow_conv3d - (CArray.start out__) self weight (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list 
int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let slow_conv3d_out ~out self ~weight ~kernel_size ~bias ~stride ~padding = - let out__ = CArray.make t 1 in stubs_slow_conv3d_out - (CArray.start out__) out self weight @@ -27657,36 +17755,30 @@ let slow_conv3d_out ~out self ~weight ~kernel_size ~bias ~stride ~padding = (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) - (List.length padding); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length padding) + |> with_tensor_gc ;; let slow_conv_dilated2d self ~weight ~kernel_size ~bias ~stride ~padding ~dilation = - let out__ = CArray.make t 1 in stubs_slow_conv_dilated2d - (CArray.start out__) self weight (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; let slow_conv_dilated2d_out @@ -27699,9 +17791,7 @@ let slow_conv_dilated2d_out ~padding ~dilation = - let out__ = CArray.make t 1 in stubs_slow_conv_dilated2d_out - (CArray.start out__) out self weight @@ -27709,38 +17799,32 @@ let slow_conv_dilated2d_out (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; let slow_conv_dilated3d self ~weight ~kernel_size ~bias ~stride ~padding ~dilation = - let out__ = CArray.make t 1 in stubs_slow_conv_dilated3d - (CArray.start out__) self weight (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; let slow_conv_dilated3d_out @@ -27753,9 +17837,7 @@ let slow_conv_dilated3d_out ~padding ~dilation = - let out__ = CArray.make t 1 in stubs_slow_conv_dilated3d_out - (CArray.start out__) out self weight @@ -27763,16 +17845,14 @@ let slow_conv_dilated3d_out (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> 
CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) (List.length padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; let slow_conv_transpose2d @@ -27785,16 +17865,14 @@ let slow_conv_transpose2d ~output_padding ~dilation = - let out__ = CArray.make t 1 in stubs_slow_conv_transpose2d - (CArray.start out__) self weight (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -27802,10 +17880,8 @@ let slow_conv_transpose2d (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start) (List.length output_padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; let slow_conv_transpose2d_out @@ -27819,9 +17895,7 @@ let slow_conv_transpose2d_out ~output_padding ~dilation = - let out__ = CArray.make t 1 in stubs_slow_conv_transpose2d_out - (CArray.start out__) out self weight @@ -27829,7 +17903,7 @@ let slow_conv_transpose2d_out (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -27837,10 +17911,8 @@ let slow_conv_transpose2d_out (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start) (List.length output_padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; let slow_conv_transpose3d @@ -27853,16 +17925,14 @@ let slow_conv_transpose3d ~output_padding ~dilation = - let out__ = CArray.make t 1 in stubs_slow_conv_transpose3d - (CArray.start out__) self weight (List.map Int64.of_int kernel_size |> CArray.of_list int64_t |> CArray.start) (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -27870,10 +17940,8 @@ let slow_conv_transpose3d (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start) (List.length output_padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; let slow_conv_transpose3d_out @@ -27887,9 +17955,7 @@ let slow_conv_transpose3d_out ~output_padding ~dilation = - let out__ = CArray.make t 1 in stubs_slow_conv_transpose3d_out - (CArray.start out__) out self weight @@ -27897,7 +17963,7 @@ let slow_conv_transpose3d_out (List.length kernel_size) (match bias with | Some v -> v - | None -> null) + | None -> none_gc_tensor) (List.map 
Int64.of_int stride |> CArray.of_list int64_t |> CArray.start) (List.length stride) (List.map Int64.of_int padding |> CArray.of_list int64_t |> CArray.start) @@ -27905,45 +17971,25 @@ let slow_conv_transpose3d_out (List.map Int64.of_int output_padding |> CArray.of_list int64_t |> CArray.start) (List.length output_padding) (List.map Int64.of_int dilation |> CArray.of_list int64_t |> CArray.start) - (List.length dilation); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length dilation) + |> with_tensor_gc ;; -let smm self ~mat2 = - let out__ = CArray.make t 1 in - stubs_smm (CArray.start out__) self mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let smm self ~mat2 = stubs_smm self mat2 |> with_tensor_gc let smooth_l1_loss self ~target ~reduction ~beta = - let out__ = CArray.make t 1 in - stubs_smooth_l1_loss - (CArray.start out__) - self - target - (Reduction.to_int reduction |> Int64.of_int) - beta; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_smooth_l1_loss self target (Reduction.to_int reduction |> Int64.of_int) beta + |> with_tensor_gc ;; let smooth_l1_loss_backward ~grad_output self ~target ~reduction ~beta = - let out__ = CArray.make t 1 in stubs_smooth_l1_loss_backward - (CArray.start out__) grad_output self target (Reduction.to_int reduction |> Int64.of_int) - beta; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + beta + |> with_tensor_gc ;; let smooth_l1_loss_backward_grad_input @@ -27954,209 +18000,111 @@ let smooth_l1_loss_backward_grad_input ~reduction ~beta = - let out__ = CArray.make t 1 in stubs_smooth_l1_loss_backward_grad_input - (CArray.start out__) grad_input grad_output self target (Reduction.to_int reduction |> Int64.of_int) - beta; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + beta + |> with_tensor_gc ;; let smooth_l1_loss_out ~out self ~target ~reduction ~beta = - let out__ = CArray.make t 1 in stubs_smooth_l1_loss_out - (CArray.start out__) out self target (Reduction.to_int reduction |> Int64.of_int) - beta; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + beta + |> with_tensor_gc ;; let soft_margin_loss self ~target ~reduction = - let out__ = CArray.make t 1 in - stubs_soft_margin_loss - (CArray.start out__) - self - target - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_soft_margin_loss self target (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let soft_margin_loss_backward ~grad_output self ~target ~reduction = - let out__ = CArray.make t 1 in stubs_soft_margin_loss_backward - (CArray.start out__) grad_output self target - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let soft_margin_loss_backward_grad_input ~grad_input ~grad_output self ~target ~reduction = - let out__ = CArray.make t 1 in stubs_soft_margin_loss_backward_grad_input - (CArray.start out__) grad_input grad_output self target - (Reduction.to_int reduction |> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let soft_margin_loss_out ~out self ~target ~reduction = - let out__ = CArray.make t 1 in - stubs_soft_margin_loss_out - (CArray.start out__) - out - self - target - (Reduction.to_int reduction 
|> Int64.of_int); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_soft_margin_loss_out out self target (Reduction.to_int reduction |> Int64.of_int) + |> with_tensor_gc ;; let softmax self ~dim ~dtype = - let out__ = CArray.make t 1 in - stubs_softmax (CArray.start out__) self (Int64.of_int dim) (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let softmax_int_out ~out self ~dim ~dtype = - let out__ = CArray.make t 1 in - stubs_softmax_int_out - (CArray.start out__) - out - self - (Int64.of_int dim) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let softplus self = - let out__ = CArray.make t 1 in - stubs_softplus (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let softplus_backward ~grad_output self ~beta ~threshold = - let out__ = CArray.make t 1 in - stubs_softplus_backward (CArray.start out__) grad_output self beta threshold; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let softplus_backward_grad_input ~grad_input ~grad_output self ~beta ~threshold = - let out__ = CArray.make t 1 in - stubs_softplus_backward_grad_input - (CArray.start out__) - grad_input - grad_output - self - beta - threshold; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_softmax self (Int64.of_int dim) (Kind.packed_to_int dtype) |> with_tensor_gc ;; -let softplus_out ~out self = - let out__ = CArray.make t 1 in - stubs_softplus_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 +let softmax_int_out ~out self ~dim ~dtype = + stubs_softmax_int_out out self (Int64.of_int dim) (Kind.packed_to_int dtype) + |> with_tensor_gc +;; + +let softplus self = stubs_softplus self |> with_tensor_gc + +let softplus_backward ~grad_output self ~beta ~threshold = + stubs_softplus_backward grad_output self beta threshold |> with_tensor_gc ;; -let softshrink self = - let out__ = CArray.make t 1 in - stubs_softshrink (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 +let softplus_backward_grad_input ~grad_input ~grad_output self ~beta ~threshold = + stubs_softplus_backward_grad_input grad_input grad_output self beta threshold + |> with_tensor_gc ;; +let softplus_out ~out self = stubs_softplus_out out self |> with_tensor_gc +let softshrink self = stubs_softshrink self |> with_tensor_gc + let softshrink_backward ~grad_output self ~lambd = - let out__ = CArray.make t 1 in - stubs_softshrink_backward (CArray.start out__) grad_output self lambd; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_softshrink_backward grad_output self lambd |> with_tensor_gc ;; let softshrink_backward_grad_input ~grad_input ~grad_output self ~lambd = - let out__ = CArray.make t 1 in - stubs_softshrink_backward_grad_input - (CArray.start out__) - grad_input - grad_output - self - lambd; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_softshrink_backward_grad_input grad_input grad_output self lambd |> with_tensor_gc ;; -let softshrink_out ~out self = - let out__ = CArray.make t 1 in - stubs_softshrink_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let softshrink_out ~out self = stubs_softshrink_out out self |> with_tensor_gc let sort self ~dim ~descending = - let out__ = CArray.make t 2 in 
+ let out__ = CArray.make raw_tensor 2 in stubs_sort (CArray.start out__) self (Int64.of_int dim) (if descending then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let sort_stable self ~stable ~dim ~descending = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_sort_stable (CArray.start out__) self (if stable then 1 else 0) (Int64.of_int dim) (if descending then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let sort_values ~values ~indices self ~dim ~descending = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_sort_values (CArray.start out__) values @@ -28164,15 +18112,13 @@ let sort_values ~values ~indices self ~dim ~descending = self (Int64.of_int dim) (if descending then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let sort_values_stable ~values ~indices self ~stable ~dim ~descending = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_sort_values_stable (CArray.start out__) values @@ -28181,25 +18127,19 @@ let sort_values_stable ~values ~indices self ~stable ~dim ~descending = (if stable then 1 else 0) (Int64.of_int dim) (if descending then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let sparse_bsc_tensor ~ccol_indices ~row_indices ~values ~options = - let out__ = CArray.make t 1 in stubs_sparse_bsc_tensor - (CArray.start out__) ccol_indices row_indices values (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_bsc_tensor_ccol_row_value_size @@ -28209,33 +18149,25 @@ let sparse_bsc_tensor_ccol_row_value_size ~size ~options = - let out__ = CArray.make t 1 in stubs_sparse_bsc_tensor_ccol_row_value_size - (CArray.start out__) ccol_indices row_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_bsr_tensor ~crow_indices ~col_indices ~values ~options = - let out__ = CArray.make t 1 in stubs_sparse_bsr_tensor - (CArray.start out__) crow_indices col_indices values (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_bsr_tensor_crow_col_value_size @@ -28245,33 +18177,25 @@ let sparse_bsr_tensor_crow_col_value_size ~size ~options = - let out__ = CArray.make t 1 in stubs_sparse_bsr_tensor_crow_col_value_size - (CArray.start out__) 
crow_indices col_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_compressed_tensor ~compressed_indices ~plain_indices ~values ~options = - let out__ = CArray.make t 1 in stubs_sparse_compressed_tensor - (CArray.start out__) compressed_indices plain_indices values (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_compressed_tensor_comp_plain_value_size @@ -28281,88 +18205,64 @@ let sparse_compressed_tensor_comp_plain_value_size ~size ~options = - let out__ = CArray.make t 1 in stubs_sparse_compressed_tensor_comp_plain_value_size - (CArray.start out__) compressed_indices plain_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_coo_tensor ~size ~options = - let out__ = CArray.make t 1 in stubs_sparse_coo_tensor - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_coo_tensor_indices ~indices ~values ~options ~is_coalesced = - let out__ = CArray.make t 1 in stubs_sparse_coo_tensor_indices - (CArray.start out__) indices values (Kind.packed_to_int (fst options)) (Device.to_int (snd options)) - (if is_coalesced then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if is_coalesced then 1 else 0) + |> with_tensor_gc ;; let sparse_coo_tensor_indices_size ~indices ~values ~size ~options ~is_coalesced = - let out__ = CArray.make t 1 in stubs_sparse_coo_tensor_indices_size - (CArray.start out__) indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) (Device.to_int (snd options)) - (if is_coalesced then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if is_coalesced then 1 else 0) + |> with_tensor_gc ;; let sparse_coo_tensor_size_out ~out ~size = - let out__ = CArray.make t 1 in stubs_sparse_coo_tensor_size_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let sparse_csc_tensor ~ccol_indices ~row_indices ~values ~options = - let out__ = CArray.make t 1 in stubs_sparse_csc_tensor - (CArray.start out__) ccol_indices row_indices values (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_csc_tensor_ccol_row_value_size @@ -28372,33 +18272,25 @@ let sparse_csc_tensor_ccol_row_value_size ~size ~options = - let out__ = CArray.make t 1 in stubs_sparse_csc_tensor_ccol_row_value_size - (CArray.start out__) 
ccol_indices row_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_csr_tensor ~crow_indices ~col_indices ~values ~options = - let out__ = CArray.make t 1 in stubs_sparse_csr_tensor - (CArray.start out__) crow_indices col_indices values (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_csr_tensor_crow_col_value_size @@ -28408,1531 +18300,658 @@ let sparse_csr_tensor_crow_col_value_size ~size ~options = - let out__ = CArray.make t 1 in stubs_sparse_csr_tensor_crow_col_value_size - (CArray.start out__) crow_indices col_indices values (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; let sparse_dim self = stubs_sparse_dim self - -let sparse_mask self ~mask = - let out__ = CArray.make t 1 in - stubs_sparse_mask (CArray.start out__) self mask; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let sparse_mask self ~mask = stubs_sparse_mask self mask |> with_tensor_gc let sparse_mask_out ~out self ~mask = - let out__ = CArray.make t 1 in - stubs_sparse_mask_out (CArray.start out__) out self mask; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_sparse_mask_out out self mask |> with_tensor_gc ;; let sparse_resize self ~size ~sparse_dim ~dense_dim = - let out__ = CArray.make t 1 in stubs_sparse_resize - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Int64.of_int sparse_dim) - (Int64.of_int dense_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dense_dim) + |> with_tensor_gc ;; let sparse_resize_ self ~size ~sparse_dim ~dense_dim = - let out__ = CArray.make t 1 in stubs_sparse_resize_ - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Int64.of_int sparse_dim) - (Int64.of_int dense_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dense_dim) + |> with_tensor_gc ;; let sparse_resize_and_clear self ~size ~sparse_dim ~dense_dim = - let out__ = CArray.make t 1 in stubs_sparse_resize_and_clear - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Int64.of_int sparse_dim) - (Int64.of_int dense_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dense_dim) + |> with_tensor_gc ;; let sparse_resize_and_clear_ self ~size ~sparse_dim ~dense_dim = - let out__ = CArray.make t 1 in stubs_sparse_resize_and_clear_ - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Int64.of_int sparse_dim) - (Int64.of_int dense_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dense_dim) + |> with_tensor_gc ;; let sparse_resize_and_clear_out ~out self ~size ~sparse_dim ~dense_dim = - let out__ = CArray.make t 1 in 
stubs_sparse_resize_and_clear_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Int64.of_int sparse_dim) - (Int64.of_int dense_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dense_dim) + |> with_tensor_gc ;; let sparse_resize_out ~out self ~size ~sparse_dim ~dense_dim = - let out__ = CArray.make t 1 in stubs_sparse_resize_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Int64.of_int sparse_dim) - (Int64.of_int dense_dim); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Int64.of_int dense_dim) + |> with_tensor_gc ;; let sparse_sampled_addmm self ~mat1 ~mat2 = - let out__ = CArray.make t 1 in - stubs_sparse_sampled_addmm (CArray.start out__) self mat1 mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_sparse_sampled_addmm self mat1 mat2 |> with_tensor_gc ;; let sparse_sampled_addmm_out ~out self ~mat1 ~mat2 = - let out__ = CArray.make t 1 in - stubs_sparse_sampled_addmm_out (CArray.start out__) out self mat1 mat2; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_airy_ai ~x = - let out__ = CArray.make t 1 in - stubs_special_airy_ai (CArray.start out__) x; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_airy_ai_out ~out ~x = - let out__ = CArray.make t 1 in - stubs_special_airy_ai_out (CArray.start out__) out x; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_sparse_sampled_addmm_out out self mat1 mat2 |> with_tensor_gc ;; -let special_bessel_j0 self = - let out__ = CArray.make t 1 in - stubs_special_bessel_j0 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let special_airy_ai ~x = stubs_special_airy_ai x |> with_tensor_gc +let special_airy_ai_out ~out ~x = stubs_special_airy_ai_out out x |> with_tensor_gc +let special_bessel_j0 self = stubs_special_bessel_j0 self |> with_tensor_gc let special_bessel_j0_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_bessel_j0_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_bessel_j0_out out self |> with_tensor_gc ;; -let special_bessel_j1 self = - let out__ = CArray.make t 1 in - stubs_special_bessel_j1 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let special_bessel_j1 self = stubs_special_bessel_j1 self |> with_tensor_gc let special_bessel_j1_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_bessel_j1_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_bessel_j1_out out self |> with_tensor_gc ;; -let special_bessel_y0 self = - let out__ = CArray.make t 1 in - stubs_special_bessel_y0 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let special_bessel_y0 self = stubs_special_bessel_y0 self |> with_tensor_gc let special_bessel_y0_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_bessel_y0_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_bessel_y0_out out self |> with_tensor_gc ;; -let special_bessel_y1 self = - let out__ = CArray.make t 1 in - stubs_special_bessel_y1 (CArray.start 
out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let special_bessel_y1 self = stubs_special_bessel_y1 self |> with_tensor_gc let special_bessel_y1_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_bessel_y1_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_bessel_y1_out out self |> with_tensor_gc ;; let special_chebyshev_polynomial_t ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_t (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_t x n |> with_tensor_gc ;; let special_chebyshev_polynomial_t_n_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_t_n_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_t_n_scalar x n |> with_tensor_gc ;; let special_chebyshev_polynomial_t_n_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_t_n_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_t_n_scalar_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_t_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_t_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_t_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_t_x_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_t_x_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_t_x_scalar x n |> with_tensor_gc ;; let special_chebyshev_polynomial_t_x_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_t_x_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_t_x_scalar_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_u ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_u (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_u x n |> with_tensor_gc ;; let special_chebyshev_polynomial_u_n_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_u_n_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_u_n_scalar x n |> with_tensor_gc ;; let special_chebyshev_polynomial_u_n_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_u_n_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_u_n_scalar_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_u_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_u_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_u_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_u_x_scalar ~x ~n = - let out__ = CArray.make t 1 in - 
stubs_special_chebyshev_polynomial_u_x_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_u_x_scalar x n |> with_tensor_gc ;; let special_chebyshev_polynomial_u_x_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_u_x_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_u_x_scalar_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_v ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_v (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_v x n |> with_tensor_gc ;; let special_chebyshev_polynomial_v_n_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_v_n_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_v_n_scalar x n |> with_tensor_gc ;; let special_chebyshev_polynomial_v_n_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_v_n_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_v_n_scalar_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_v_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_v_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_v_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_v_x_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_v_x_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_v_x_scalar x n |> with_tensor_gc ;; let special_chebyshev_polynomial_v_x_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_v_x_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_v_x_scalar_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_w ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_w (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_w x n |> with_tensor_gc ;; let special_chebyshev_polynomial_w_n_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_w_n_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_w_n_scalar x n |> with_tensor_gc ;; let special_chebyshev_polynomial_w_n_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_w_n_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_w_n_scalar_out out x n |> with_tensor_gc ;; let special_chebyshev_polynomial_w_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_w_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_w_out out 
x n |> with_tensor_gc ;; let special_chebyshev_polynomial_w_x_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_w_x_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_chebyshev_polynomial_w_x_scalar x n |> with_tensor_gc ;; let special_chebyshev_polynomial_w_x_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_chebyshev_polynomial_w_x_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_digamma self = - let out__ = CArray.make t 1 in - stubs_special_digamma (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_digamma_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_digamma_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_entr self = - let out__ = CArray.make t 1 in - stubs_special_entr (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_entr_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_entr_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_erf self = - let out__ = CArray.make t 1 in - stubs_special_erf (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_erf_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_erf_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_erfc self = - let out__ = CArray.make t 1 in - stubs_special_erfc (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_erfc_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_erfc_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_erfcx self = - let out__ = CArray.make t 1 in - stubs_special_erfcx (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_erfcx_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_erfcx_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_erfinv self = - let out__ = CArray.make t 1 in - stubs_special_erfinv (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_erfinv_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_erfinv_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_exp2 self = - let out__ = CArray.make t 1 in - stubs_special_exp2 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_exp2_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_exp2_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_expit self = - let out__ = CArray.make t 1 in - stubs_special_expit (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_expit_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_expit_out 
(CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_expm1 self = - let out__ = CArray.make t 1 in - stubs_special_expm1 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_expm1_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_expm1_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_gammainc self other = - let out__ = CArray.make t 1 in - stubs_special_gammainc (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; + stubs_special_chebyshev_polynomial_w_x_scalar_out out x n |> with_tensor_gc +;; + +let special_digamma self = stubs_special_digamma self |> with_tensor_gc +let special_digamma_out ~out self = stubs_special_digamma_out out self |> with_tensor_gc +let special_entr self = stubs_special_entr self |> with_tensor_gc +let special_entr_out ~out self = stubs_special_entr_out out self |> with_tensor_gc +let special_erf self = stubs_special_erf self |> with_tensor_gc +let special_erf_out ~out self = stubs_special_erf_out out self |> with_tensor_gc +let special_erfc self = stubs_special_erfc self |> with_tensor_gc +let special_erfc_out ~out self = stubs_special_erfc_out out self |> with_tensor_gc +let special_erfcx self = stubs_special_erfcx self |> with_tensor_gc +let special_erfcx_out ~out self = stubs_special_erfcx_out out self |> with_tensor_gc +let special_erfinv self = stubs_special_erfinv self |> with_tensor_gc +let special_erfinv_out ~out self = stubs_special_erfinv_out out self |> with_tensor_gc +let special_exp2 self = stubs_special_exp2 self |> with_tensor_gc +let special_exp2_out ~out self = stubs_special_exp2_out out self |> with_tensor_gc +let special_expit self = stubs_special_expit self |> with_tensor_gc +let special_expit_out ~out self = stubs_special_expit_out out self |> with_tensor_gc +let special_expm1 self = stubs_special_expm1 self |> with_tensor_gc +let special_expm1_out ~out self = stubs_special_expm1_out out self |> with_tensor_gc +let special_gammainc self other = stubs_special_gammainc self other |> with_tensor_gc let special_gammainc_out ~out self other = - let out__ = CArray.make t 1 in - stubs_special_gammainc_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_gammainc_out out self other |> with_tensor_gc ;; -let special_gammaincc self other = - let out__ = CArray.make t 1 in - stubs_special_gammaincc (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let special_gammaincc self other = stubs_special_gammaincc self other |> with_tensor_gc let special_gammaincc_out ~out self other = - let out__ = CArray.make t 1 in - stubs_special_gammaincc_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_gammaincc_out out self other |> with_tensor_gc ;; -let special_gammaln self = - let out__ = CArray.make t 1 in - stubs_special_gammaln (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_gammaln_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_gammaln_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let special_gammaln self = stubs_special_gammaln self |> 
with_tensor_gc +let special_gammaln_out ~out self = stubs_special_gammaln_out out self |> with_tensor_gc let special_hermite_polynomial_h ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_h (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_h x n |> with_tensor_gc ;; let special_hermite_polynomial_h_n_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_h_n_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_h_n_scalar x n |> with_tensor_gc ;; let special_hermite_polynomial_h_n_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_h_n_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_h_n_scalar_out out x n |> with_tensor_gc ;; let special_hermite_polynomial_h_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_h_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_h_out out x n |> with_tensor_gc ;; let special_hermite_polynomial_h_x_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_h_x_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_h_x_scalar x n |> with_tensor_gc ;; let special_hermite_polynomial_h_x_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_h_x_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_h_x_scalar_out out x n |> with_tensor_gc ;; let special_hermite_polynomial_he ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_he (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_he x n |> with_tensor_gc ;; let special_hermite_polynomial_he_n_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_he_n_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_he_n_scalar x n |> with_tensor_gc ;; let special_hermite_polynomial_he_n_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_he_n_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_he_n_scalar_out out x n |> with_tensor_gc ;; let special_hermite_polynomial_he_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_he_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_he_out out x n |> with_tensor_gc ;; let special_hermite_polynomial_he_x_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_he_x_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_he_x_scalar x n |> with_tensor_gc ;; let special_hermite_polynomial_he_x_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_hermite_polynomial_he_x_scalar_out (CArray.start 
out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_i0 self = - let out__ = CArray.make t 1 in - stubs_special_i0 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_i0_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_i0_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_i0e self = - let out__ = CArray.make t 1 in - stubs_special_i0e (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_i0e_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_i0e_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_i1 self = - let out__ = CArray.make t 1 in - stubs_special_i1 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_i1_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_i1_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_i1e self = - let out__ = CArray.make t 1 in - stubs_special_i1e (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_hermite_polynomial_he_x_scalar_out out x n |> with_tensor_gc ;; -let special_i1e_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_i1e_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let special_i0 self = stubs_special_i0 self |> with_tensor_gc +let special_i0_out ~out self = stubs_special_i0_out out self |> with_tensor_gc +let special_i0e self = stubs_special_i0e self |> with_tensor_gc +let special_i0e_out ~out self = stubs_special_i0e_out out self |> with_tensor_gc +let special_i1 self = stubs_special_i1 self |> with_tensor_gc +let special_i1_out ~out self = stubs_special_i1_out out self |> with_tensor_gc +let special_i1e self = stubs_special_i1e self |> with_tensor_gc +let special_i1e_out ~out self = stubs_special_i1e_out out self |> with_tensor_gc let special_laguerre_polynomial_l ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_laguerre_polynomial_l (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_laguerre_polynomial_l x n |> with_tensor_gc ;; let special_laguerre_polynomial_l_n_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_laguerre_polynomial_l_n_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_laguerre_polynomial_l_n_scalar x n |> with_tensor_gc ;; let special_laguerre_polynomial_l_n_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_laguerre_polynomial_l_n_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_laguerre_polynomial_l_n_scalar_out out x n |> with_tensor_gc ;; let special_laguerre_polynomial_l_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_laguerre_polynomial_l_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_laguerre_polynomial_l_out out x n |> with_tensor_gc ;; let special_laguerre_polynomial_l_x_scalar ~x ~n = - let out__ = CArray.make t 1 in - 
stubs_special_laguerre_polynomial_l_x_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_laguerre_polynomial_l_x_scalar x n |> with_tensor_gc ;; let special_laguerre_polynomial_l_x_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_laguerre_polynomial_l_x_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_laguerre_polynomial_l_x_scalar_out out x n |> with_tensor_gc ;; let special_legendre_polynomial_p ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_legendre_polynomial_p (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_legendre_polynomial_p x n |> with_tensor_gc ;; let special_legendre_polynomial_p_n_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_legendre_polynomial_p_n_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_legendre_polynomial_p_n_scalar x n |> with_tensor_gc ;; let special_legendre_polynomial_p_n_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_legendre_polynomial_p_n_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_legendre_polynomial_p_n_scalar_out out x n |> with_tensor_gc ;; let special_legendre_polynomial_p_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_legendre_polynomial_p_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_legendre_polynomial_p_out out x n |> with_tensor_gc ;; let special_legendre_polynomial_p_x_scalar ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_legendre_polynomial_p_x_scalar (CArray.start out__) x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_legendre_polynomial_p_x_scalar x n |> with_tensor_gc ;; let special_legendre_polynomial_p_x_scalar_out ~out ~x ~n = - let out__ = CArray.make t 1 in - stubs_special_legendre_polynomial_p_x_scalar_out (CArray.start out__) out x n; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_log1p self = - let out__ = CArray.make t 1 in - stubs_special_log1p (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_legendre_polynomial_p_x_scalar_out out x n |> with_tensor_gc ;; -let special_log1p_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_log1p_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_log_ndtr self = - let out__ = CArray.make t 1 in - stubs_special_log_ndtr (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let special_log_ndtr_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_log_ndtr_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let special_log1p self = stubs_special_log1p self |> with_tensor_gc +let special_log1p_out ~out self = stubs_special_log1p_out out self |> with_tensor_gc +let special_log_ndtr self = stubs_special_log_ndtr self |> with_tensor_gc +let special_log_ndtr_out ~out self = stubs_special_log_ndtr_out out self |> with_tensor_gc let special_log_softmax self ~dim ~dtype = - let out__ = CArray.make t 1 in - 
stubs_special_log_softmax - (CArray.start out__) - self - (Int64.of_int dim) - (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_log_softmax self (Int64.of_int dim) (Kind.packed_to_int dtype) + |> with_tensor_gc ;; let special_logit self ~eps = - let out__ = CArray.make t 1 in stubs_special_logit - (CArray.start out__) self (Option.value eps ~default:0.0) (match eps with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let special_logit_out ~out self ~eps = - let out__ = CArray.make t 1 in stubs_special_logit_out - (CArray.start out__) out self (Option.value eps ~default:0.0) (match eps with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let special_logsumexp self ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_special_logsumexp - (CArray.start out__) self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let special_logsumexp_out ~out self ~dim ~keepdim = - let out__ = CArray.make t 1 in stubs_special_logsumexp_out - (CArray.start out__) out self (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start) (List.length dim) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let special_modified_bessel_i0 self = - let out__ = CArray.make t 1 in - stubs_special_modified_bessel_i0 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_modified_bessel_i0 self |> with_tensor_gc ;; let special_modified_bessel_i0_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_modified_bessel_i0_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_modified_bessel_i0_out out self |> with_tensor_gc ;; let special_modified_bessel_i1 self = - let out__ = CArray.make t 1 in - stubs_special_modified_bessel_i1 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_modified_bessel_i1 self |> with_tensor_gc ;; let special_modified_bessel_i1_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_modified_bessel_i1_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_modified_bessel_i1_out out self |> with_tensor_gc ;; let special_modified_bessel_k0 self = - let out__ = CArray.make t 1 in - stubs_special_modified_bessel_k0 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_modified_bessel_k0 self |> with_tensor_gc ;; let special_modified_bessel_k0_out ~out self = - let out__ = CArray.make t 1 in - stubs_special_modified_bessel_k0_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_modified_bessel_k0_out out self |> with_tensor_gc ;; let special_modified_bessel_k1 self = - let out__ = CArray.make t 1 in - stubs_special_modified_bessel_k1 (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_special_modified_bessel_k1 self |> 
with_tensor_gc
 ;;

 let special_modified_bessel_k1_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_special_modified_bessel_k1_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_modified_bessel_k1_out out self |> with_tensor_gc
 ;;

 let special_multigammaln self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_special_multigammaln (CArray.start out__) self (Int64.of_int p);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_multigammaln self (Int64.of_int p) |> with_tensor_gc
 ;;

 let special_multigammaln_out ~out self ~p =
-  let out__ = CArray.make t 1 in
-  stubs_special_multigammaln_out (CArray.start out__) out self (Int64.of_int p);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let special_ndtr self =
-  let out__ = CArray.make t 1 in
-  stubs_special_ndtr (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let special_ndtr_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_special_ndtr_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let special_ndtri self =
-  let out__ = CArray.make t 1 in
-  stubs_special_ndtri (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_multigammaln_out out self (Int64.of_int p) |> with_tensor_gc
 ;;

-let special_ndtri_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_special_ndtri_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let special_ndtr self = stubs_special_ndtr self |> with_tensor_gc
+let special_ndtr_out ~out self = stubs_special_ndtr_out out self |> with_tensor_gc
+let special_ndtri self = stubs_special_ndtri self |> with_tensor_gc
+let special_ndtri_out ~out self = stubs_special_ndtri_out out self |> with_tensor_gc

 let special_polygamma ~n self =
-  let out__ = CArray.make t 1 in
-  stubs_special_polygamma (CArray.start out__) (Int64.of_int n) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_polygamma (Int64.of_int n) self |> with_tensor_gc
 ;;

 let special_polygamma_out ~out ~n self =
-  let out__ = CArray.make t 1 in
-  stubs_special_polygamma_out (CArray.start out__) out (Int64.of_int n) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_polygamma_out out (Int64.of_int n) self |> with_tensor_gc
 ;;

-let special_psi self =
-  let out__ = CArray.make t 1 in
-  stubs_special_psi (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let special_psi_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_special_psi_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let special_psi self = stubs_special_psi self |> with_tensor_gc
+let special_psi_out ~out self = stubs_special_psi_out out self |> with_tensor_gc

 let special_round self ~decimals =
-  let out__ = CArray.make t 1 in
-  stubs_special_round (CArray.start out__) self (Int64.of_int decimals);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_round self (Int64.of_int decimals) |> with_tensor_gc
 ;;

 let special_round_out ~out self ~decimals =
-  let out__ = CArray.make t 1 in
-  stubs_special_round_out (CArray.start out__) out self (Int64.of_int decimals);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_round_out out self (Int64.of_int decimals) |> with_tensor_gc
 ;;

 let special_scaled_modified_bessel_k0 ~x =
-  let out__ = CArray.make t 1 in
-  stubs_special_scaled_modified_bessel_k0 (CArray.start out__) x;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_scaled_modified_bessel_k0 x |> with_tensor_gc
 ;;

 let special_scaled_modified_bessel_k0_out ~out ~x =
-  let out__ = CArray.make t 1 in
-  stubs_special_scaled_modified_bessel_k0_out (CArray.start out__) out x;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_scaled_modified_bessel_k0_out out x |> with_tensor_gc
 ;;

 let special_scaled_modified_bessel_k1 ~x =
-  let out__ = CArray.make t 1 in
-  stubs_special_scaled_modified_bessel_k1 (CArray.start out__) x;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_scaled_modified_bessel_k1 x |> with_tensor_gc
 ;;

 let special_scaled_modified_bessel_k1_out ~out ~x =
-  let out__ = CArray.make t 1 in
-  stubs_special_scaled_modified_bessel_k1_out (CArray.start out__) out x;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_scaled_modified_bessel_k1_out out x |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_t ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_t (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_t x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_t_n_scalar ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_t_n_scalar (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_t_n_scalar x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_t_n_scalar_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_t_n_scalar_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_t_n_scalar_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_t_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_t_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_t_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_t_x_scalar ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_t_x_scalar (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_t_x_scalar x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_t_x_scalar_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_t_x_scalar_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_t_x_scalar_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_u ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_u (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_u x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_u_n_scalar ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_u_n_scalar (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_u_n_scalar x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_u_n_scalar_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_u_n_scalar_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_u_n_scalar_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_u_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_u_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_u_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_u_x_scalar ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_u_x_scalar (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_u_x_scalar x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_u_x_scalar_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_u_x_scalar_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_u_x_scalar_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_v ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_v (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_v x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_v_n_scalar ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_v_n_scalar (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_v_n_scalar x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_v_n_scalar_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_v_n_scalar_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_v_n_scalar_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_v_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_v_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_v_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_v_x_scalar ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_v_x_scalar (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_v_x_scalar x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_v_x_scalar_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_v_x_scalar_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_v_x_scalar_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_w ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_w (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_w x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_w_n_scalar ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_w_n_scalar (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_w_n_scalar x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_w_n_scalar_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_w_n_scalar_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_w_n_scalar_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_w_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_w_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_w_out out x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_w_x_scalar ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_w_x_scalar (CArray.start out__) x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_w_x_scalar x n |> with_tensor_gc
 ;;

 let special_shifted_chebyshev_polynomial_w_x_scalar_out ~out ~x ~n =
-  let out__ = CArray.make t 1 in
-  stubs_special_shifted_chebyshev_polynomial_w_x_scalar_out (CArray.start out__) out x n;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let special_sinc self =
-  let out__ = CArray.make t 1 in
-  stubs_special_sinc (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_shifted_chebyshev_polynomial_w_x_scalar_out out x n |> with_tensor_gc
 ;;

-let special_sinc_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_special_sinc_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let special_sinc self = stubs_special_sinc self |> with_tensor_gc
+let special_sinc_out ~out self = stubs_special_sinc_out out self |> with_tensor_gc

 let special_softmax self ~dim ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_special_softmax
-    (CArray.start out__)
-    self
-    (Int64.of_int dim)
-    (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_softmax self (Int64.of_int dim) (Kind.packed_to_int dtype)
+  |> with_tensor_gc
 ;;

-let special_spherical_bessel_j0 ~x =
-  let out__ = CArray.make t 1 in
-  stubs_special_spherical_bessel_j0 (CArray.start out__) x;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let special_spherical_bessel_j0 ~x = stubs_special_spherical_bessel_j0 x |> with_tensor_gc

 let special_spherical_bessel_j0_out ~out ~x =
-  let out__ = CArray.make t 1 in
-  stubs_special_spherical_bessel_j0_out (CArray.start out__) out x;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_spherical_bessel_j0_out out x |> with_tensor_gc
 ;;

-let special_xlog1py self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlog1py (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let special_xlog1py self other = stubs_special_xlog1py self other |> with_tensor_gc

 let special_xlog1py_other_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlog1py_other_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlog1py_other_scalar self other |> with_tensor_gc
 ;;

 let special_xlog1py_other_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlog1py_other_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlog1py_other_scalar_out out self other |> with_tensor_gc
 ;;

 let special_xlog1py_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlog1py_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlog1py_out out self other |> with_tensor_gc
 ;;

 let special_xlog1py_self_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlog1py_self_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlog1py_self_scalar self other |> with_tensor_gc
 ;;

 let special_xlog1py_self_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlog1py_self_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlog1py_self_scalar_out out self other |> with_tensor_gc
 ;;

-let special_xlogy self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlogy (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let special_xlogy self other = stubs_special_xlogy self other |> with_tensor_gc

 let special_xlogy_other_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlogy_other_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlogy_other_scalar self other |> with_tensor_gc
 ;;

 let special_xlogy_other_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlogy_other_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlogy_other_scalar_out out self other |> with_tensor_gc
 ;;

 let special_xlogy_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlogy_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlogy_out out self other |> with_tensor_gc
 ;;

 let special_xlogy_self_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlogy_self_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlogy_self_scalar self other |> with_tensor_gc
 ;;

 let special_xlogy_self_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_xlogy_self_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_xlogy_self_scalar_out out self other |> with_tensor_gc
 ;;

-let special_zeta self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_zeta (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let special_zeta self other = stubs_special_zeta self other |> with_tensor_gc

 let special_zeta_other_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_zeta_other_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_zeta_other_scalar self other |> with_tensor_gc
 ;;

 let special_zeta_other_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_zeta_other_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_zeta_other_scalar_out out self other |> with_tensor_gc
 ;;

 let special_zeta_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_zeta_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_zeta_out out self other |> with_tensor_gc
 ;;

 let special_zeta_self_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_zeta_self_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_zeta_self_scalar self other |> with_tensor_gc
 ;;

 let special_zeta_self_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_special_zeta_self_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_special_zeta_self_scalar_out out self other |> with_tensor_gc
 ;;

 let split self ~split_size ~dim =
@@ -29945,7 +18964,7 @@ let split_copy self ~split_size ~dim =

 let split_copy_tensor_out ~out self ~split_size ~dim =
   stubs_split_copy_tensor_out
-    (CArray.of_list t out |> CArray.start)
+    (CArray.of_list gc_tensor out |> CArray.start)
     (List.length out)
     self
     (Int64.of_int split_size)
@@ -29981,7 +19000,7 @@ let split_with_sizes_copy self ~split_sizes ~dim =

 let split_with_sizes_copy_out ~out self ~split_sizes ~dim =
   stubs_split_with_sizes_copy_out
-    (CArray.of_list t out |> CArray.start)
+    (CArray.of_list gc_tensor out |> CArray.start)
     (List.length out)
     self
     (List.map Int64.of_int split_sizes |> CArray.of_list int64_t |> CArray.start)
@@ -29989,220 +19008,88 @@ let split_with_sizes_copy_out ~out self ~split_sizes ~dim =
     (Int64.of_int dim)
 ;;

-let sqrt self =
-  let out__ = CArray.make t 1 in
-  stubs_sqrt (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let sqrt_ self =
-  let out__ = CArray.make t 1 in
-  stubs_sqrt_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let sqrt_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_sqrt_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let square self =
-  let out__ = CArray.make t 1 in
-  stubs_square (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let square_ self =
-  let out__ = CArray.make t 1 in
-  stubs_square_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let square_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_square_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let squeeze self =
-  let out__ = CArray.make t 1 in
-  stubs_squeeze (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let squeeze_ self =
-  let out__ = CArray.make t 1 in
-  stubs_squeeze_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let squeeze_copy self =
-  let out__ = CArray.make t 1 in
-  stubs_squeeze_copy (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let sqrt self = stubs_sqrt self |> with_tensor_gc
+let sqrt_ self = stubs_sqrt_ self |> with_tensor_gc
+let sqrt_out ~out self = stubs_sqrt_out out self |> with_tensor_gc
+let square self = stubs_square self |> with_tensor_gc
+let square_ self = stubs_square_ self |> with_tensor_gc
+let square_out ~out self = stubs_square_out out self |> with_tensor_gc
+let squeeze self = stubs_squeeze self |> with_tensor_gc
+let squeeze_ self = stubs_squeeze_ self |> with_tensor_gc
+let squeeze_copy self = stubs_squeeze_copy self |> with_tensor_gc

 let squeeze_copy_dim self ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_squeeze_copy_dim (CArray.start out__) self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_squeeze_copy_dim self (Int64.of_int dim) |> with_tensor_gc
 ;;

 let squeeze_copy_dim_out ~out self ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_squeeze_copy_dim_out (CArray.start out__) out self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_squeeze_copy_dim_out out self (Int64.of_int dim) |> with_tensor_gc
 ;;

 let squeeze_copy_dims self ~dim =
-  let out__ = CArray.make t 1 in
   stubs_squeeze_copy_dims
-    (CArray.start out__)
     self
     (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start)
-    (List.length dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dim)
+  |> with_tensor_gc
 ;;

 let squeeze_copy_dims_out ~out self ~dim =
-  let out__ = CArray.make t 1 in
   stubs_squeeze_copy_dims_out
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start)
-    (List.length dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let squeeze_copy_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_squeeze_copy_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let squeeze_dim self ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_squeeze_dim (CArray.start out__) self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dim)
+  |> with_tensor_gc
 ;;

-let squeeze_dim_ self ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_squeeze_dim_ (CArray.start out__) self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let squeeze_copy_out ~out self = stubs_squeeze_copy_out out self |> with_tensor_gc
+let squeeze_dim self ~dim = stubs_squeeze_dim self (Int64.of_int dim) |> with_tensor_gc
+let squeeze_dim_ self ~dim = stubs_squeeze_dim_ self (Int64.of_int dim) |> with_tensor_gc

 let squeeze_dims self ~dim =
-  let out__ = CArray.make t 1 in
   stubs_squeeze_dims
-    (CArray.start out__)
     self
     (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start)
-    (List.length dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dim)
+  |> with_tensor_gc
 ;;

 let squeeze_dims_ self ~dim =
-  let out__ = CArray.make t 1 in
   stubs_squeeze_dims_
-    (CArray.start out__)
     self
     (List.map Int64.of_int dim |> CArray.of_list int64_t |> CArray.start)
-    (List.length dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dim)
+  |> with_tensor_gc
 ;;

-let sspaddmm self ~mat1 ~mat2 =
-  let out__ = CArray.make t 1 in
-  stubs_sspaddmm (CArray.start out__) self mat1 mat2;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let sspaddmm self ~mat1 ~mat2 = stubs_sspaddmm self mat1 mat2 |> with_tensor_gc

 let sspaddmm_out ~out self ~mat1 ~mat2 =
-  let out__ = CArray.make t 1 in
-  stubs_sspaddmm_out (CArray.start out__) out self mat1 mat2;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_sspaddmm_out out self mat1 mat2 |> with_tensor_gc
 ;;

 let stack tensors ~dim =
-  let out__ = CArray.make t 1 in
   stubs_stack
-    (CArray.start out__)
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
-    (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int dim)
+  |> with_tensor_gc
 ;;

 let stack_out ~out tensors ~dim =
-  let out__ = CArray.make t 1 in
   stubs_stack_out
-    (CArray.start out__)
     out
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
    (List.length tensors)
-    (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int dim)
+  |> with_tensor_gc
 ;;

-let std self ~unbiased =
-  let out__ = CArray.make t 1 in
-  stubs_std (CArray.start out__) self (if unbiased then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let std self ~unbiased = stubs_std self (if unbiased then 1 else 0) |> with_tensor_gc

 let std_correction self ~dim ~correction ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_std_correction
-    (CArray.start out__)
     self
     (match dim with
      | None -> from_voidp int64_t null
@@ -30211,16 +19098,12 @@ let std_correction self ~dim ~correction ~keepdim =
      | None -> -1
      | Some v -> List.length v)
     correction
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let std_correction_out ~out self ~dim ~correction ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_std_correction_out
-    (CArray.start out__)
     out
     self
     (match dim with
@@ -30230,16 +19113,12 @@ let std_correction_out ~out self ~dim ~correction ~keepdim =
      | None -> -1
      | Some v -> List.length v)
     correction
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let std_dim self ~dim ~unbiased ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_std_dim
-    (CArray.start out__)
     self
     (match dim with
      | None -> from_voidp int64_t null
@@ -30248,24 +19127,20 @@ let std_dim self ~dim ~unbiased ~keepdim =
      | None -> -1
      | Some v -> List.length v)
     (if unbiased then 1 else 0)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let std_mean self ~unbiased =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_std_mean (CArray.start out__) self (if unbiased then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let std_mean_correction self ~dim ~correction ~keepdim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_std_mean_correction
     (CArray.start out__)
     self
@@ -30277,15 +19152,13 @@ let std_mean_correction self ~dim ~correction ~keepdim =
      | Some v -> List.length v)
     correction
     (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let std_mean_correction_out ~out0 ~out1 self ~dim ~correction ~keepdim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_std_mean_correction_out
     (CArray.start out__)
     out0
@@ -30299,15 +19172,13 @@ let std_mean_correction_out ~out0 ~out1 self ~dim ~correction ~keepdim =
     | Some v -> List.length v)
     correction
     (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let std_mean_dim self ~dim ~unbiased ~keepdim =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_std_mean_dim
     (CArray.start out__)
     self
@@ -30319,17 +19190,13 @@ let std_mean_dim self ~dim ~unbiased ~keepdim =
     | Some v -> List.length v)
     (if unbiased then 1 else 0)
     (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let std_out ~out self ~dim ~unbiased ~keepdim =
-  let out__ = CArray.make t 1 in
   stubs_std_out
-    (CArray.start out__)
     out
     self
     (match dim with
@@ -30339,17 +19206,13 @@ let std_out ~out self ~dim ~unbiased ~keepdim =
      | None -> -1
      | Some v -> List.length v)
     (if unbiased then 1 else 0)
-    (if keepdim then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if keepdim then 1 else 0)
+  |> with_tensor_gc
 ;;

 let stft self ~n_fft ~hop_length ~win_length ~window ~normalized ~onesided ~return_complex
   =
-  let out__ = CArray.make t 1 in
   stubs_stft
-    (CArray.start out__)
     self
     (Int64.of_int n_fft)
     (match hop_length with
@@ -30366,13 +19229,11 @@ let stft self ~n_fft ~hop_length ~win_length ~window ~normalized ~onesided ~retu
      | None -> 1)
     (match window with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (if normalized then 1 else 0)
     (if onesided then 1 else 0)
-    (if return_complex then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if return_complex then 1 else 0)
+  |> with_tensor_gc
 ;;

 let stft_center
@@ -30387,9 +19248,7 @@ let stft_center
   ~onesided
   ~return_complex
   =
-  let out__ = CArray.make t 1 in
   stubs_stft_center
-    (CArray.start out__)
     self
     (Int64.of_int n_fft)
     (match hop_length with
@@ -30406,119 +19265,31 @@
      | None -> 1)
     (match window with
      | Some v -> v
-     | None -> null)
+     | None -> none_gc_tensor)
     (if center then 1 else 0)
     pad_mode
     (if normalized then 1 else 0)
     (if onesided then 1 else 0)
-    (if return_complex then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if return_complex then 1 else 0)
+  |> with_tensor_gc
 ;;

 let stride self ~dim = stubs_stride self (Int64.of_int dim)
-
-let sub self other =
-  let out__ = CArray.make t 1 in
-  stubs_sub (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let sub_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_sub_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let sub_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_sub_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let sub_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_sub_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let sub_scalar_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_sub_scalar_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let sub_scalar_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_sub_scalar_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let subtract self other =
-  let out__ = CArray.make t 1 in
-  stubs_subtract (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let subtract_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_subtract_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let subtract_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_subtract_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let subtract_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_subtract_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let subtract_scalar_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_subtract_scalar_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let sum self ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_sum (CArray.start out__) self (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let sub self other = stubs_sub self other |> with_tensor_gc
+let sub_ self other = stubs_sub_ self other |> with_tensor_gc
+let sub_out ~out self other = stubs_sub_out out self other |> with_tensor_gc
+let sub_scalar self other = stubs_sub_scalar self other |> with_tensor_gc
+let sub_scalar_ self other = stubs_sub_scalar_ self other |> with_tensor_gc
+let sub_scalar_out ~out self other = stubs_sub_scalar_out out self other |> with_tensor_gc
+let subtract self other = stubs_subtract self other |> with_tensor_gc
+let subtract_ self other = stubs_subtract_ self other |> with_tensor_gc
+let subtract_out ~out self other = stubs_subtract_out out self other |> with_tensor_gc
+let subtract_scalar self other = stubs_subtract_scalar self other |> with_tensor_gc
+let subtract_scalar_ self other = stubs_subtract_scalar_ self other |> with_tensor_gc
+let sum self ~dtype = stubs_sum self (Kind.packed_to_int dtype) |> with_tensor_gc

 let sum_dim_intlist self ~dim ~keepdim ~dtype =
-  let out__ = CArray.make t 1 in
   stubs_sum_dim_intlist
-    (CArray.start out__)
     self
     (match dim with
      | None -> from_voidp int64_t null
@@ -30527,16 +19298,12 @@ let sum_dim_intlist self ~dim ~keepdim ~dtype =
      | None -> -1
      | Some v -> List.length v)
     (if keepdim then 1 else 0)
-    (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Kind.packed_to_int dtype)
+  |> with_tensor_gc
 ;;

 let sum_intlist_out ~out self ~dim ~keepdim ~dtype =
-  let out__ = CArray.make t 1 in
   stubs_sum_intlist_out
-    (CArray.start out__)
     out
     self
     (match dim with
@@ -30546,50 +19313,37 @@ let sum_intlist_out ~out self ~dim ~keepdim ~dtype =
      | None -> -1
      | Some v -> List.length v)
     (if keepdim then 1 else 0)
-    (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Kind.packed_to_int dtype)
+  |> with_tensor_gc
 ;;

 let sum_out ~out self ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_sum_out (CArray.start out__) out self (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_sum_out out self (Kind.packed_to_int dtype) |> with_tensor_gc
 ;;

 let sum_to_size self ~size =
-  let out__ = CArray.make t 1 in
   stubs_sum_to_size
-    (CArray.start out__)
     self
     (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start)
-    (List.length size);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length size)
+  |> with_tensor_gc
 ;;

 let svd self ~some ~compute_uv =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_svd
     (CArray.start out__)
     self
     (if some then 1 else 0)
     (if compute_uv then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let svd_u ~u ~s ~v self ~some ~compute_uv =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_svd_u
     (CArray.start out__)
     u
@@ -30598,91 +19352,36 @@ let svd_u ~u ~s ~v self ~some ~compute_uv =
    self
    (if some then 1 else 0)
    (if compute_uv then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let swapaxes self ~axis0 ~axis1 =
-  let out__ = CArray.make t 1 in
-  stubs_swapaxes (CArray.start out__) self (Int64.of_int axis0) (Int64.of_int axis1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_swapaxes self (Int64.of_int axis0) (Int64.of_int axis1) |> with_tensor_gc
 ;;

 let swapaxes_ self ~axis0 ~axis1 =
-  let out__ = CArray.make t 1 in
-  stubs_swapaxes_ (CArray.start out__) self (Int64.of_int axis0) (Int64.of_int axis1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_swapaxes_ self (Int64.of_int axis0) (Int64.of_int axis1) |> with_tensor_gc
 ;;

 let swapdims self ~dim0 ~dim1 =
-  let out__ = CArray.make t 1 in
-  stubs_swapdims (CArray.start out__) self (Int64.of_int dim0) (Int64.of_int dim1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_swapdims self (Int64.of_int dim0) (Int64.of_int dim1) |> with_tensor_gc
 ;;

 let swapdims_ self ~dim0 ~dim1 =
-  let out__ = CArray.make t 1 in
-  stubs_swapdims_ (CArray.start out__) self (Int64.of_int dim0) (Int64.of_int dim1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let tr self =
-  let out__ = CArray.make t 1 in
-  stubs_tr (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_swapdims_ self (Int64.of_int dim0) (Int64.of_int dim1) |> with_tensor_gc
 ;;

-let t_ self =
-  let out__ = CArray.make t 1 in
-  stubs_t_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let t_copy self =
-  let out__ = CArray.make t 1 in
-  stubs_t_copy (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let t_copy_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_t_copy_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let take self ~index =
-  let out__ = CArray.make t 1 in
-  stubs_take (CArray.start out__) self index;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let tr self = stubs_tr self |> with_tensor_gc
+let t_ self = stubs_t_ self |> with_tensor_gc
+let t_copy self = stubs_t_copy self |> with_tensor_gc
+let t_copy_out ~out self = stubs_t_copy_out out self |> with_tensor_gc
+let take self ~index = stubs_take self index |> with_tensor_gc

 let take_along_dim self ~indices ~dim =
-  let out__ = CArray.make t 1 in
   stubs_take_along_dim
-    (CArray.start out__)
     self
     indices
     (match dim with
@@ -30690,16 +19389,12 @@ let take_along_dim self ~indices ~dim =
      | Some v -> Int64.of_int v)
     (match dim with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

 let take_along_dim_out ~out self ~indices ~dim =
-  let out__ = CArray.make t 1 in
   stubs_take_along_dim_out
-    (CArray.start out__)
     out
     self
     indices
@@ -30708,83 +19403,26 @@ let take_along_dim_out ~out self ~indices ~dim =
      | Some v -> Int64.of_int v)
     (match dim with
      | Some _ -> 0
-     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let take_out ~out self ~index =
-  let out__ = CArray.make t 1 in
-  stubs_take_out (CArray.start out__) out self index;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let tan self =
-  let out__ = CArray.make t 1 in
-  stubs_tan (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let tan_ self =
-  let out__ = CArray.make t 1 in
-  stubs_tan_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let tan_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_tan_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let tanh self =
-  let out__ = CArray.make t 1 in
-  stubs_tanh (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | None -> 1)
+  |> with_tensor_gc
 ;;

-let tanh_ self =
-  let out__ = CArray.make t 1 in
-  stubs_tanh_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let take_out ~out self ~index = stubs_take_out out self index |> with_tensor_gc
+let tan self = stubs_tan self |> with_tensor_gc
+let tan_ self = stubs_tan_ self |> with_tensor_gc
+let tan_out ~out self = stubs_tan_out out self |> with_tensor_gc
+let tanh self = stubs_tanh self |> with_tensor_gc
+let tanh_ self = stubs_tanh_ self |> with_tensor_gc

 let tanh_backward ~grad_output ~output =
-  let out__ = CArray.make t 1 in
-  stubs_tanh_backward (CArray.start out__) grad_output output;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_tanh_backward grad_output output |> with_tensor_gc
 ;;

 let tanh_backward_grad_input ~grad_input ~grad_output ~output =
-  let out__ = CArray.make t 1 in
-  stubs_tanh_backward_grad_input (CArray.start out__) grad_input grad_output output;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_tanh_backward_grad_input grad_input grad_output output |> with_tensor_gc
 ;;

-let tanh_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_tanh_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let tanh_out ~out self = stubs_tanh_out out self |> with_tensor_gc

 let tensor_split self ~sections ~dim =
   stubs_tensor_split self (Int64.of_int sections) (Int64.of_int dim) |> to_tensor_list
@@ -30808,203 +19446,114 @@ let tensor_split_tensor_indices_or_sections self ~tensor_indices_or_sections ~di
 ;;

 let tensordot self other ~dims_self ~dims_other =
-  let out__ = CArray.make t 1 in
   stubs_tensordot
-    (CArray.start out__)
     self
     other
     (List.map Int64.of_int dims_self |> CArray.of_list int64_t |> CArray.start)
     (List.length dims_self)
     (List.map Int64.of_int dims_other |> CArray.of_list int64_t |> CArray.start)
-    (List.length dims_other);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dims_other)
+  |> with_tensor_gc
 ;;

 let tensordot_out ~out self other ~dims_self ~dims_other =
-  let out__ = CArray.make t 1 in
   stubs_tensordot_out
-    (CArray.start out__)
     out
     self
     other
     (List.map Int64.of_int dims_self |> CArray.of_list int64_t |> CArray.start)
     (List.length dims_self)
     (List.map Int64.of_int dims_other |> CArray.of_list int64_t |> CArray.start)
-    (List.length dims_other);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dims_other)
+  |> with_tensor_gc
 ;;

 let threshold self ~threshold ~value =
-  let out__ = CArray.make t 1 in
-  stubs_threshold (CArray.start out__) self threshold value;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_threshold self threshold value |> with_tensor_gc
 ;;

 let threshold_ self ~threshold ~value =
-  let out__ = CArray.make t 1 in
-  stubs_threshold_ (CArray.start out__) self threshold value;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_threshold_ self threshold value |> with_tensor_gc
 ;;

 let threshold_backward ~grad_output self ~threshold =
-  let out__ = CArray.make t 1 in
-  stubs_threshold_backward (CArray.start out__) grad_output self threshold;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_threshold_backward grad_output self threshold |> with_tensor_gc
 ;;

 let threshold_backward_grad_input ~grad_input ~grad_output self ~threshold =
-  let out__ = CArray.make t 1 in
-  stubs_threshold_backward_grad_input
-    (CArray.start out__)
-    grad_input
-    grad_output
-    self
-    threshold;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_threshold_backward_grad_input grad_input grad_output self threshold
+  |> with_tensor_gc
 ;;

 let threshold_out ~out self ~threshold ~value =
-  let out__ = CArray.make t 1 in
-  stubs_threshold_out (CArray.start out__) out self threshold value;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_threshold_out out self threshold value |> with_tensor_gc
 ;;

 let tile self ~dims =
-  let out__ = CArray.make t 1 in
   stubs_tile
-    (CArray.start out__)
     self
     (List.map Int64.of_int dims |> CArray.of_list int64_t |> CArray.start)
-    (List.length dims);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length dims)
+  |> with_tensor_gc
 ;;

-let to_ self ~device =
-  let out__ = CArray.make t 1 in
-  stubs_to_ (CArray.start out__) self (Device.to_int device);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let to_ self ~device = stubs_to_ self (Device.to_int device) |> with_tensor_gc

 let to_dense self ~dtype ~masked_grad =
-  let out__ = CArray.make t 1 in
-  stubs_to_dense
-    (CArray.start out__)
-    self
-    (Kind.packed_to_int dtype)
-    (if masked_grad then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_to_dense self (Kind.packed_to_int dtype) (if masked_grad then 1 else 0)
+  |> with_tensor_gc
 ;;

 let to_dense_backward ~grad input ~masked_grad =
-  let out__ = CArray.make t 1 in
-  stubs_to_dense_backward (CArray.start out__) grad input (if masked_grad then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_to_dense_backward grad input (if masked_grad then 1 else 0) |> with_tensor_gc
 ;;

 let to_device self ~device ~dtype ~non_blocking ~copy =
-  let out__ = CArray.make t 1 in
   stubs_to_device
-    (CArray.start out__)
     self
     (Device.to_int device)
     (Kind.packed_to_int dtype)
     (if non_blocking then 1 else 0)
-    (if copy then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if copy then 1 else 0)
+  |> with_tensor_gc
 ;;

 let to_dtype self ~dtype ~non_blocking ~copy =
-  let out__ = CArray.make t 1 in
   stubs_to_dtype
-    (CArray.start out__)
     self
     (Kind.packed_to_int dtype)
     (if non_blocking then 1 else 0)
-    (if copy then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if copy then 1 else 0)
+  |> with_tensor_gc
 ;;

 let to_dtype_layout self ~options ~non_blocking ~copy =
-  let out__ = CArray.make t 1 in
   stubs_to_dtype_layout
-    (CArray.start out__)
     self
     (Kind.packed_to_int (fst options))
     (Device.to_int (snd options))
     (if non_blocking then 1 else 0)
-    (if copy then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (if copy then 1 else 0)
+  |> with_tensor_gc
 ;;

 let to_mkldnn self ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_to_mkldnn (CArray.start out__) self (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_to_mkldnn self (Kind.packed_to_int dtype) |> with_tensor_gc
 ;;

-let to_mkldnn_backward ~grad input =
-  let out__ = CArray.make t 1 in
-  stubs_to_mkldnn_backward (CArray.start out__) grad input;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let to_mkldnn_backward ~grad input = stubs_to_mkldnn_backward grad input |> with_tensor_gc

 let to_mkldnn_out ~out self ~dtype =
-  let out__ = CArray.make t 1 in
-  stubs_to_mkldnn_out (CArray.start out__) out self (Kind.packed_to_int dtype);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_to_mkldnn_out out self (Kind.packed_to_int dtype) |> with_tensor_gc
 ;;

 let to_other self other ~non_blocking ~copy =
-  let out__ = CArray.make t 1 in
-  stubs_to_other
-    (CArray.start out__)
-    self
-    other
-    (if non_blocking then 1 else 0)
-    (if copy then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_to_other self other (if non_blocking then 1 else 0) (if copy then 1 else 0)
+  |> with_tensor_gc
 ;;

 let to_padded_tensor self ~padding ~output_size =
-  let out__ = CArray.make t 1 in
   stubs_to_padded_tensor
-    (CArray.start out__)
     self
     padding
     (match output_size with
@@ -31012,16 +19561,12 @@ let to_padded_tensor self ~padding ~output_size =
      | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start)
     (match output_size with
      | None -> -1
-     | Some v -> List.length v);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+     | Some v -> List.length v)
+  |> with_tensor_gc
 ;;

 let to_padded_tensor_out ~out self ~padding ~output_size =
-  let out__ = CArray.make t 1 in
   stubs_to_padded_tensor_out
-    (CArray.start out__)
     out
     self
     padding
@@ -31030,14 +19575,12 @@ let to_padded_tensor_out ~out self ~padding ~output_size =
     | Some v -> List.map Int64.of_int v |> CArray.of_list int64_t |> CArray.start)
    (match output_size with
     | None -> -1
-    | Some v -> List.length v);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    | Some v -> List.length v)
+  |> with_tensor_gc
 ;;

 let topk self ~k ~dim ~largest ~sorted =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_topk
     (CArray.start out__)
     self
@@ -31045,15 +19588,13 @@ let topk self ~k ~dim ~largest ~sorted =
    (Int64.of_int dim)
    (if largest then 1 else 0)
    (if sorted then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let topk_values ~values ~indices self ~k ~dim ~largest ~sorted =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_topk_values
     (CArray.start out__)
     values
@@ -31063,120 +19604,51 @@ let topk_values ~values ~indices self ~k ~dim ~largest ~sorted =
    (Int64.of_int dim)
    (if largest then 1 else 0)
    (if sorted then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let totype self ~scalar_type =
-  let out__ = CArray.make t 1 in
-  stubs_totype (CArray.start out__) self (Kind.packed_to_int scalar_type);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_totype self (Kind.packed_to_int scalar_type) |> with_tensor_gc
 ;;

-let trace self =
-  let out__ = CArray.make t 1 in
-  stubs_trace (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let trace self = stubs_trace self |> with_tensor_gc

 let trace_backward ~grad ~sizes =
-  let out__ = CArray.make t 1 in
   stubs_trace_backward
-    (CArray.start out__)
     grad
     (List.map Int64.of_int sizes |> CArray.of_list int64_t |> CArray.start)
-    (List.length sizes);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length sizes)
+  |> with_tensor_gc
 ;;

-let trace_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_trace_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let trace_out ~out self = stubs_trace_out out self |> with_tensor_gc

 let transpose self ~dim0 ~dim1 =
-  let out__ = CArray.make t 1 in
-  stubs_transpose (CArray.start out__) self (Int64.of_int dim0) (Int64.of_int dim1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_transpose self (Int64.of_int dim0) (Int64.of_int dim1) |> with_tensor_gc
 ;;

 let transpose_ self ~dim0 ~dim1 =
-  let out__ = CArray.make t 1 in
-  stubs_transpose_ (CArray.start out__) self (Int64.of_int dim0) (Int64.of_int dim1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_transpose_ self (Int64.of_int dim0) (Int64.of_int dim1) |> with_tensor_gc
 ;;

 let transpose_copy self ~dim0 ~dim1 =
-  let out__ = CArray.make t 1 in
-  stubs_transpose_copy (CArray.start out__) self (Int64.of_int dim0) (Int64.of_int dim1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_transpose_copy self (Int64.of_int dim0) (Int64.of_int dim1) |> with_tensor_gc
 ;;

 let transpose_copy_int_out ~out self ~dim0 ~dim1 =
-  let out__ = CArray.make t 1 in
-  stubs_transpose_copy_int_out
-    (CArray.start out__)
-    out
-    self
-    (Int64.of_int dim0)
-    (Int64.of_int dim1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let trapezoid ~y ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_trapezoid (CArray.start out__) y (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let trapezoid_x ~y ~x ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_trapezoid_x (CArray.start out__) y x (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let trapz ~y ~x ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_trapz (CArray.start out__) y x (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_transpose_copy_int_out out self (Int64.of_int dim0) (Int64.of_int dim1)
+  |> with_tensor_gc
 ;;

-let trapz_dx ~y ~dx ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_trapz_dx (CArray.start out__) y dx (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let trapezoid ~y ~dim = stubs_trapezoid y (Int64.of_int dim) |> with_tensor_gc
+let trapezoid_x ~y ~x ~dim = stubs_trapezoid_x y x (Int64.of_int dim) |> with_tensor_gc
+let trapz ~y ~x ~dim = stubs_trapz y x (Int64.of_int dim) |> with_tensor_gc
+let trapz_dx ~y ~dx ~dim = stubs_trapz_dx y dx (Int64.of_int dim) |> with_tensor_gc

 let triangular_solve self ~a ~upper ~transpose ~unitriangular =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_triangular_solve
     (CArray.start out__)
     self
@@ -31184,15 +19656,13 @@ let triangular_solve self ~a ~upper ~transpose ~unitriangular =
    (if upper then 1 else 0)
    (if transpose then 1 else 0)
    (if unitriangular then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

 let triangular_solve_x ~x ~m self ~a ~upper ~transpose ~unitriangular =
-  let out__ = CArray.make t 2 in
+  let out__ = CArray.make raw_tensor 2 in
   stubs_triangular_solve_x
     (CArray.start out__)
     x
@@ -31202,68 +19672,35 @@ let triangular_solve_x ~x ~m self ~a ~upper ~transpose ~unitriangular =
    (if upper then 1 else 0)
    (if transpose then 1 else 0)
    (if unitriangular then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
   t0, t1
 ;;

-let tril self ~diagonal =
-  let out__ = CArray.make t 1 in
-  stubs_tril (CArray.start out__) self (Int64.of_int diagonal);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let tril_ self ~diagonal =
-  let out__ = CArray.make t 1 in
-  stubs_tril_ (CArray.start out__) self (Int64.of_int diagonal);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let tril self ~diagonal = stubs_tril self (Int64.of_int diagonal) |> with_tensor_gc
+let tril_ self ~diagonal = stubs_tril_ self (Int64.of_int diagonal) |> with_tensor_gc

 let tril_indices ~row ~col ~offset ~options =
-  let out__ = CArray.make t 1 in
   stubs_tril_indices
-    (CArray.start out__)
     (Int64.of_int row)
     (Int64.of_int col)
     (Int64.of_int offset)
     (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;

 let tril_indices_out ~out ~row ~col ~offset =
-  let out__ = CArray.make t 1 in
-  stubs_tril_indices_out
-    (CArray.start out__)
-    out
-    (Int64.of_int row)
-    (Int64.of_int col)
-    (Int64.of_int offset);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_tril_indices_out out (Int64.of_int row) (Int64.of_int col) (Int64.of_int offset)
+  |> with_tensor_gc
 ;;

 let tril_out ~out self ~diagonal =
-  let out__ = CArray.make t 1 in
-  stubs_tril_out (CArray.start out__) out self (Int64.of_int diagonal);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_tril_out out self (Int64.of_int diagonal) |> with_tensor_gc
 ;;

 let triplet_margin_loss ~anchor ~positive ~negative ~margin ~p ~eps ~swap ~reduction =
-  let out__ = CArray.make t 1 in
   stubs_triplet_margin_loss
-    (CArray.start out__)
     anchor
     positive
     negative
@@ -31271,264 +19708,129 @@ let triplet_margin_loss ~anchor ~positive ~negative ~margin ~p ~eps ~swap ~reduc
    p
    eps
    (if swap then 1 else 0)
-    (Reduction.to_int reduction |> Int64.of_int);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let triu self ~diagonal =
-  let out__ = CArray.make t 1 in
-  stubs_triu (CArray.start out__) self (Int64.of_int diagonal);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Reduction.to_int reduction |> Int64.of_int)
+  |> with_tensor_gc
 ;;

-let triu_ self ~diagonal =
-  let out__ = CArray.make t 1 in
-  stubs_triu_ (CArray.start out__) self (Int64.of_int diagonal);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let triu self ~diagonal = stubs_triu self (Int64.of_int diagonal) |> with_tensor_gc
+let triu_ self ~diagonal = stubs_triu_ self (Int64.of_int diagonal) |> with_tensor_gc

 let triu_indices ~row ~col ~offset ~options =
-  let out__ = CArray.make t 1 in
   stubs_triu_indices
-    (CArray.start out__)
     (Int64.of_int row)
     (Int64.of_int col)
     (Int64.of_int offset)
     (Kind.packed_to_int (fst options))
-    (Device.to_int (snd options));
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Device.to_int (snd options))
+  |> with_tensor_gc
 ;;

 let triu_indices_out ~out ~row ~col ~offset =
-  let out__ = CArray.make t 1 in
-  stubs_triu_indices_out
-    (CArray.start out__)
-    out
-    (Int64.of_int row)
-    (Int64.of_int col)
-    (Int64.of_int offset);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_triu_indices_out out (Int64.of_int row) (Int64.of_int col) (Int64.of_int offset)
+  |> with_tensor_gc
 ;;

 let triu_out ~out self ~diagonal =
-  let out__ = CArray.make t 1 in
-  stubs_triu_out (CArray.start out__) out self (Int64.of_int diagonal);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let true_divide self other =
-  let out__ = CArray.make t 1 in
-  stubs_true_divide (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_triu_out out self (Int64.of_int diagonal) |> with_tensor_gc
 ;;

-let true_divide_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_true_divide_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let true_divide self other = stubs_true_divide self other |> with_tensor_gc
+let true_divide_ self other = stubs_true_divide_ self other |> with_tensor_gc

 let true_divide_out ~out self other =
-  let out__ = CArray.make t 1 in
-  stubs_true_divide_out (CArray.start out__) out self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_true_divide_out out self other |> with_tensor_gc
 ;;

-let true_divide_scalar self other =
-  let out__ = CArray.make t 1 in
-  stubs_true_divide_scalar (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let true_divide_scalar self other = stubs_true_divide_scalar self other |> with_tensor_gc

 let true_divide_scalar_ self other =
-  let out__ = CArray.make t 1 in
-  stubs_true_divide_scalar_ (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let trunc self =
-  let out__ = CArray.make t 1 in
-  stubs_trunc (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let trunc_ self =
-  let out__ = CArray.make t 1 in
-  stubs_trunc_ (CArray.start out__) self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let trunc_out ~out self =
-  let out__ = CArray.make t 1 in
-  stubs_trunc_out (CArray.start out__) out self;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let type_as self other =
-  let out__ = CArray.make t 1 in
-  stubs_type_as (CArray.start out__) self other;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_true_divide_scalar_ self other |> with_tensor_gc
 ;;

+let trunc self = stubs_trunc self |> with_tensor_gc
+let trunc_ self = stubs_trunc_ self |> with_tensor_gc
+let trunc_out ~out self = stubs_trunc_out out self |> with_tensor_gc
+let type_as self other = stubs_type_as self other |> with_tensor_gc
 let unbind self ~dim = stubs_unbind self (Int64.of_int dim) |> to_tensor_list
 let unbind_copy self ~dim = stubs_unbind_copy self (Int64.of_int dim) |> to_tensor_list

 let unbind_copy_int_out ~out self ~dim =
   stubs_unbind_copy_int_out
-    (CArray.of_list t out |> CArray.start)
+    (CArray.of_list gc_tensor out |> CArray.start)
     (List.length out)
     self
     (Int64.of_int dim)
 ;;

 let unflatten self ~dim ~sizes =
-  let out__ = CArray.make t 1 in
   stubs_unflatten
-    (CArray.start out__)
     self
     (Int64.of_int dim)
     (List.map Int64.of_int sizes |> CArray.of_list int64_t |> CArray.start)
-    (List.length sizes);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (List.length sizes)
+  |> with_tensor_gc
 ;;

 let unflatten_dense_tensors ~flat tensors =
   stubs_unflatten_dense_tensors
     flat
-    (CArray.of_list t tensors |> CArray.start)
+    (CArray.of_list gc_tensor tensors |> CArray.start)
     (List.length tensors)
   |> to_tensor_list
 ;;

 let unfold self ~dimension ~size ~step =
-  let out__ = CArray.make t 1 in
-  stubs_unfold
-    (CArray.start out__)
-    self
-    (Int64.of_int dimension)
-    (Int64.of_int size)
-    (Int64.of_int step);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_unfold self (Int64.of_int dimension) (Int64.of_int size) (Int64.of_int step)
+  |> with_tensor_gc
 ;;

 let unfold_backward ~grad_in ~input_sizes ~dim ~size ~step =
-  let out__ = CArray.make t 1 in
   stubs_unfold_backward
-    (CArray.start out__)
     grad_in
     (List.map Int64.of_int input_sizes |> CArray.of_list int64_t |> CArray.start)
     (List.length input_sizes)
     (Int64.of_int dim)
     (Int64.of_int size)
-    (Int64.of_int step);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int step)
+  |> with_tensor_gc
 ;;

 let unfold_backward_out ~out ~grad_in ~input_sizes ~dim ~size ~step =
-  let out__ = CArray.make t 1 in
   stubs_unfold_backward_out
-    (CArray.start out__)
     out
     grad_in
     (List.map Int64.of_int input_sizes |> CArray.of_list int64_t |> CArray.start)
     (List.length input_sizes)
     (Int64.of_int dim)
     (Int64.of_int size)
-    (Int64.of_int step);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int step)
+  |> with_tensor_gc
 ;;

 let unfold_copy self ~dimension ~size ~step =
-  let out__ = CArray.make t 1 in
-  stubs_unfold_copy
-    (CArray.start out__)
-    self
-    (Int64.of_int dimension)
-    (Int64.of_int size)
-    (Int64.of_int step);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_unfold_copy self (Int64.of_int dimension) (Int64.of_int size) (Int64.of_int step)
+  |> with_tensor_gc
 ;;

 let unfold_copy_out ~out self ~dimension ~size ~step =
-  let out__ = CArray.make t 1 in
   stubs_unfold_copy_out
-    (CArray.start out__)
     out
     self
     (Int64.of_int dimension)
     (Int64.of_int size)
-    (Int64.of_int step);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let uniform self ~from ~to_ =
-  let out__ = CArray.make t 1 in
-  stubs_uniform (CArray.start out__) self from to_;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    (Int64.of_int step)
+  |> with_tensor_gc
 ;;

-let uniform_ self ~from ~to_ =
-  let out__ = CArray.make t 1 in
-  stubs_uniform_ (CArray.start out__) self from to_;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let uniform self ~from ~to_ = stubs_uniform self from to_ |> with_tensor_gc
+let uniform_ self ~from ~to_ = stubs_uniform_ self from to_ |> with_tensor_gc

 let uniform_out ~out self ~from ~to_ =
-  let out__ = CArray.make t 1 in
-  stubs_uniform_out (CArray.start out__) out self from to_;
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_uniform_out out self from to_ |> with_tensor_gc
 ;;

 let unique_consecutive self ~return_inverse ~return_counts ~dim =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_unique_consecutive
     (CArray.start out__)
     self
@@ -31540,17 +19842,14 @@ let unique_consecutive self ~return_inverse ~return_counts ~dim =
    (match dim with
     | Some _ -> 0
     | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let unique_consecutive_out ~out0 ~out1 ~out2 self ~return_inverse ~return_counts ~dim =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_unique_consecutive_out
     (CArray.start out__)
     out0
@@ -31565,17 +19864,14 @@ let unique_consecutive_out ~out0 ~out1 ~out2 self ~return_inverse ~return_counts
    (match dim with
    | Some _ -> 0
    | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let unique_dim self ~dim ~sorted ~return_inverse ~return_counts =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_unique_dim
     (CArray.start out__)
     self
@@ -31583,34 +19879,28 @@ let unique_dim self ~dim ~sorted ~return_inverse ~return_counts =
    (if sorted then 1 else 0)
    (if return_inverse then 1 else 0)
    (if return_counts then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let unique_dim_consecutive self ~dim ~return_inverse ~return_counts =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_unique_dim_consecutive
     (CArray.start out__)
     self
     (Int64.of_int dim)
     (if return_inverse then 1 else 0)
     (if return_counts then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let unique_dim_consecutive_out ~out0 ~out1 ~out2 self ~dim ~return_inverse ~return_counts
   =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_unique_dim_consecutive_out
     (CArray.start out__)
     out0
@@ -31620,17 +19910,14 @@ let unique_dim_consecutive_out ~out0 ~out1 ~out2 self ~dim ~return_inverse ~retu
    (Int64.of_int dim)
    (if return_inverse then 1 else 0)
    (if return_counts then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

 let unique_dim_out ~out0 ~out1 ~out2 self ~dim ~sorted ~return_inverse ~return_counts =
-  let out__ = CArray.make t 3 in
+  let out__ = CArray.make raw_tensor 3 in
   stubs_unique_dim_out
     (CArray.start out__)
     out0
@@ -31641,12 +19928,9 @@ let unique_dim_out ~out0 ~out1 ~out2 self ~dim ~sorted ~return_inverse ~return_c
    (if sorted then 1 else 0)
    (if return_inverse then 1 else 0)
    (if return_counts then 1 else 0);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  let t1 = CArray.get out__ 1 in
-  Gc.finalise C.Tensor.free t1;
-  let t2 = CArray.get out__ 2 in
-  Gc.finalise C.Tensor.free t2;
+  let t0 = CArray.get out__ 0 |> with_tensor_gc in
+  let t1 = CArray.get out__ 1 |> with_tensor_gc in
+  let t2 = CArray.get out__ 2 |> with_tensor_gc in
   t0, t1, t2
 ;;

@@ -31660,7 +19944,7 @@ let unsafe_split self ~split_size ~dim =

 let unsafe_split_tensor_out ~out self ~split_size ~dim =
   stubs_unsafe_split_tensor_out
-    (CArray.of_list t out |> CArray.start)
+    (CArray.of_list gc_tensor out |> CArray.start)
     (List.length out)
     self
     (Int64.of_int split_size)
@@ -31678,7 +19962,7 @@ let unsafe_split_with_sizes self ~split_sizes ~dim =

 let unsafe_split_with_sizes_out ~out self ~split_sizes ~dim =
   stubs_unsafe_split_with_sizes_out
-    (CArray.of_list t out |> CArray.start)
+    (CArray.of_list gc_tensor out |> CArray.start)
     (List.length out)
     self
     (List.map Int64.of_int split_sizes |> CArray.of_list int64_t |> CArray.start)
@@ -31686,42 +19970,19 @@ let unsafe_split_with_sizes_out ~out self ~split_sizes ~dim =
     (Int64.of_int dim)
 ;;

-let unsqueeze self ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_unsqueeze (CArray.start out__) self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
-
-let unsqueeze_ self ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_unsqueeze_ (CArray.start out__) self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
-;;
+let unsqueeze self ~dim = stubs_unsqueeze self (Int64.of_int dim) |> with_tensor_gc
+let unsqueeze_ self ~dim = stubs_unsqueeze_ self (Int64.of_int dim) |> with_tensor_gc

 let unsqueeze_copy self ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_unsqueeze_copy (CArray.start out__) self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_unsqueeze_copy self (Int64.of_int dim) |> with_tensor_gc
 ;;

 let unsqueeze_copy_out ~out self ~dim =
-  let out__ = CArray.make t 1 in
-  stubs_unsqueeze_copy_out (CArray.start out__) out self (Int64.of_int dim);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+  stubs_unsqueeze_copy_out out self (Int64.of_int dim) |> with_tensor_gc
 ;;

 let upsample_bicubic2d self ~output_size ~align_corners ~scales_h ~scales_w =
-  let out__ = CArray.make t 1 in
   stubs_upsample_bicubic2d
-    (CArray.start out__)
     self
     (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start)
     (List.length output_size)
@@ -31733,10 +19994,8 @@ let upsample_bicubic2d self ~output_size ~align_corners ~scales_h ~scales_w =
    (Option.value scales_w ~default:0.0)
    (match scales_w with
     | Some _ -> 0
-    | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    | None -> 1)
+  |> with_tensor_gc
 ;;

 let upsample_bicubic2d_backward
@@ -31747,9 +20006,7 @@ let upsample_bicubic2d_backward
   ~scales_h
   ~scales_w
   =
-  let out__ = CArray.make t 1 in
   stubs_upsample_bicubic2d_backward
-    (CArray.start out__)
     grad_output
     (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start)
     (List.length output_size)
@@ -31763,10 +20020,8 @@
    (Option.value scales_w ~default:0.0)
    (match scales_w with
    | Some _ -> 0
-    | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    | None -> 1)
+  |> with_tensor_gc
 ;;

 let upsample_bicubic2d_backward_grad_input
@@ -31778,9 +20033,7 @@ let upsample_bicubic2d_backward_grad_input
   ~scales_h
   ~scales_w
   =
-  let out__ = CArray.make t 1 in
   stubs_upsample_bicubic2d_backward_grad_input
-    (CArray.start out__)
     grad_input
     grad_output
     (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start)
@@ -31795,16 +20048,12 @@
    (Option.value scales_w ~default:0.0)
    (match scales_w with
    | Some _ -> 0
-    | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free t0;
-  t0
+    | None -> 1)
+  |> with_tensor_gc
 ;;

 let upsample_bicubic2d_out ~out self ~output_size ~align_corners ~scales_h ~scales_w =
-  let out__ = CArray.make t 1 in
   stubs_upsample_bicubic2d_out
-    (CArray.start out__)
     out
     self
     (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start)
@@ -31817,16 +20066,12 @@ let upsample_bicubic2d_out ~out self ~output_size ~align_corners ~scales_h ~scal
    (Option.value scales_w ~default:0.0)
    (match scales_w with
    | Some _ -> 0
-    | None -> 1);
-  let t0 = CArray.get out__ 0 in
-  Gc.finalise C.Tensor.free
t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_bicubic2d_vec input ~output_size ~align_corners ~scale_factors = - let out__ = CArray.make t 1 in stubs_upsample_bicubic2d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -31836,16 +20081,12 @@ let upsample_bicubic2d_vec input ~output_size ~align_corners ~scale_factors = | Some v -> List.length v) (if align_corners then 1 else 0) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let upsample_bilinear2d self ~output_size ~align_corners ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_bilinear2d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -31857,10 +20098,8 @@ let upsample_bilinear2d self ~output_size ~align_corners ~scales_h ~scales_w = (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_bilinear2d_backward @@ -31871,9 +20110,7 @@ let upsample_bilinear2d_backward ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_bilinear2d_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -31887,10 +20124,8 @@ let upsample_bilinear2d_backward (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_bilinear2d_backward_grad_input @@ -31902,9 +20137,7 @@ let upsample_bilinear2d_backward_grad_input ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_bilinear2d_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -31919,16 +20152,12 @@ let upsample_bilinear2d_backward_grad_input (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_bilinear2d_out ~out self ~output_size ~align_corners ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_bilinear2d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -31941,16 +20170,12 @@ let upsample_bilinear2d_out ~out self ~output_size ~align_corners ~scales_h ~sca (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_bilinear2d_vec input ~output_size ~align_corners ~scale_factors = - let out__ = CArray.make t 1 in stubs_upsample_bilinear2d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -31960,16 +20185,12 @@ let upsample_bilinear2d_vec input ~output_size ~align_corners ~scale_factors = | Some v -> List.length v) (if align_corners then 1 else 0) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> 
with_tensor_gc ;; let upsample_linear1d self ~output_size ~align_corners ~scales = - let out__ = CArray.make t 1 in stubs_upsample_linear1d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -31977,10 +20198,8 @@ let upsample_linear1d self ~output_size ~align_corners ~scales = (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_linear1d_backward @@ -31990,9 +20209,7 @@ let upsample_linear1d_backward ~align_corners ~scales = - let out__ = CArray.make t 1 in stubs_upsample_linear1d_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -32002,10 +20219,8 @@ let upsample_linear1d_backward (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_linear1d_backward_grad_input @@ -32016,9 +20231,7 @@ let upsample_linear1d_backward_grad_input ~align_corners ~scales = - let out__ = CArray.make t 1 in stubs_upsample_linear1d_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32029,16 +20242,12 @@ let upsample_linear1d_backward_grad_input (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_linear1d_out ~out self ~output_size ~align_corners ~scales = - let out__ = CArray.make t 1 in stubs_upsample_linear1d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32047,16 +20256,12 @@ let upsample_linear1d_out ~out self ~output_size ~align_corners ~scales = (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_linear1d_vec input ~output_size ~align_corners ~scale_factors = - let out__ = CArray.make t 1 in stubs_upsample_linear1d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -32066,32 +20271,24 @@ let upsample_linear1d_vec input ~output_size ~align_corners ~scale_factors = | Some v -> List.length v) (if align_corners then 1 else 0) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let upsample_nearest1d self ~output_size ~scales = - let out__ = CArray.make t 1 in stubs_upsample_nearest1d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest1d_backward ~grad_output ~output_size ~input_size ~scales = - let out__ = CArray.make t 1 in stubs_upsample_nearest1d_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length 
output_size) @@ -32100,10 +20297,8 @@ let upsample_nearest1d_backward ~grad_output ~output_size ~input_size ~scales = (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest1d_backward_grad_input @@ -32113,9 +20308,7 @@ let upsample_nearest1d_backward_grad_input ~input_size ~scales = - let out__ = CArray.make t 1 in stubs_upsample_nearest1d_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32125,16 +20318,12 @@ let upsample_nearest1d_backward_grad_input (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest1d_out ~out self ~output_size ~scales = - let out__ = CArray.make t 1 in stubs_upsample_nearest1d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32142,16 +20331,12 @@ let upsample_nearest1d_out ~out self ~output_size ~scales = (Option.value scales ~default:0.0) (match scales with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest1d_vec input ~output_size ~scale_factors = - let out__ = CArray.make t 1 in stubs_upsample_nearest1d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -32160,16 +20345,12 @@ let upsample_nearest1d_vec input ~output_size ~scale_factors = | None -> -1 | Some v -> List.length v) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let upsample_nearest2d self ~output_size ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_nearest2d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -32180,16 +20361,12 @@ let upsample_nearest2d self ~output_size ~scales_h ~scales_w = (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest2d_backward ~grad_output ~output_size ~input_size ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_nearest2d_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -32202,10 +20379,8 @@ let upsample_nearest2d_backward ~grad_output ~output_size ~input_size ~scales_h (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest2d_backward_grad_input @@ -32216,9 +20391,7 @@ let upsample_nearest2d_backward_grad_input ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_nearest2d_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32232,16 +20405,12 @@ let upsample_nearest2d_backward_grad_input (Option.value scales_w ~default:0.0) (match 
scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest2d_out ~out self ~output_size ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_nearest2d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32253,16 +20422,12 @@ let upsample_nearest2d_out ~out self ~output_size ~scales_h ~scales_w = (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest2d_vec input ~output_size ~scale_factors = - let out__ = CArray.make t 1 in stubs_upsample_nearest2d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -32271,16 +20436,12 @@ let upsample_nearest2d_vec input ~output_size ~scale_factors = | None -> -1 | Some v -> List.length v) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let upsample_nearest3d self ~output_size ~scales_d ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_nearest3d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -32295,10 +20456,8 @@ let upsample_nearest3d self ~output_size ~scales_d ~scales_h ~scales_w = (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest3d_backward @@ -32309,9 +20468,7 @@ let upsample_nearest3d_backward ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_nearest3d_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -32328,10 +20485,8 @@ let upsample_nearest3d_backward (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest3d_backward_grad_input @@ -32343,9 +20498,7 @@ let upsample_nearest3d_backward_grad_input ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_nearest3d_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32363,16 +20516,12 @@ let upsample_nearest3d_backward_grad_input (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest3d_out ~out self ~output_size ~scales_d ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_nearest3d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32388,16 +20537,12 @@ let upsample_nearest3d_out ~out self ~output_size ~scales_d ~scales_h ~scales_w (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_nearest3d_vec 
input ~output_size ~scale_factors = - let out__ = CArray.make t 1 in stubs_upsample_nearest3d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -32406,16 +20551,12 @@ let upsample_nearest3d_vec input ~output_size ~scale_factors = | None -> -1 | Some v -> List.length v) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let upsample_trilinear3d self ~output_size ~align_corners ~scales_d ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_trilinear3d - (CArray.start out__) self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -32431,10 +20572,8 @@ let upsample_trilinear3d self ~output_size ~align_corners ~scales_d ~scales_h ~s (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_trilinear3d_backward @@ -32446,9 +20585,7 @@ let upsample_trilinear3d_backward ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_trilinear3d_backward - (CArray.start out__) grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) (List.length output_size) @@ -32466,10 +20603,8 @@ let upsample_trilinear3d_backward (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_trilinear3d_backward_grad_input @@ -32482,9 +20617,7 @@ let upsample_trilinear3d_backward_grad_input ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_trilinear3d_backward_grad_input - (CArray.start out__) grad_input grad_output (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32503,10 +20636,8 @@ let upsample_trilinear3d_backward_grad_input (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_trilinear3d_out @@ -32518,9 +20649,7 @@ let upsample_trilinear3d_out ~scales_h ~scales_w = - let out__ = CArray.make t 1 in stubs_upsample_trilinear3d_out - (CArray.start out__) out self (List.map Int64.of_int output_size |> CArray.of_list int64_t |> CArray.start) @@ -32537,16 +20666,12 @@ let upsample_trilinear3d_out (Option.value scales_w ~default:0.0) (match scales_w with | Some _ -> 0 - | None -> 1); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + | None -> 1) + |> with_tensor_gc ;; let upsample_trilinear3d_vec input ~output_size ~align_corners ~scale_factors = - let out__ = CArray.make t 1 in stubs_upsample_trilinear3d_vec - (CArray.start out__) input (match output_size with | None -> from_voidp int64_t null @@ -32556,55 +20681,27 @@ let upsample_trilinear3d_vec input ~output_size ~align_corners ~scale_factors = | Some v -> List.length v) (if align_corners then 1 else 0) (scale_factors |> CArray.of_list double |> CArray.start) - (List.length scale_factors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length scale_factors) + |> with_tensor_gc ;; let value_selecting_reduction_backward ~grad ~dim ~indices ~sizes ~keepdim = - let out__ = CArray.make t 1 in 
stubs_value_selecting_reduction_backward - (CArray.start out__) grad (Int64.of_int dim) indices (List.map Int64.of_int sizes |> CArray.of_list int64_t |> CArray.start) (List.length sizes) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let values self = - let out__ = CArray.make t 1 in - stubs_values (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let values_copy self = - let out__ = CArray.make t 1 in - stubs_values_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; -let values_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs_values_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let values self = stubs_values self |> with_tensor_gc +let values_copy self = stubs_values_copy self |> with_tensor_gc +let values_copy_out ~out self = stubs_values_copy_out out self |> with_tensor_gc let vander ~x ~n ~increasing = - let out__ = CArray.make t 1 in stubs_vander - (CArray.start out__) x (match n with | None -> Int64.zero @@ -32612,24 +20709,14 @@ let vander ~x ~n ~increasing = (match n with | Some _ -> 0 | None -> 1) - (if increasing then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if increasing then 1 else 0) + |> with_tensor_gc ;; -let var self ~unbiased = - let out__ = CArray.make t 1 in - stubs_var (CArray.start out__) self (if unbiased then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let var self ~unbiased = stubs_var self (if unbiased then 1 else 0) |> with_tensor_gc let var_correction self ~dim ~correction ~keepdim = - let out__ = CArray.make t 1 in stubs_var_correction - (CArray.start out__) self (match dim with | None -> from_voidp int64_t null @@ -32638,16 +20725,12 @@ let var_correction self ~dim ~correction ~keepdim = | None -> -1 | Some v -> List.length v) correction - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let var_correction_out ~out self ~dim ~correction ~keepdim = - let out__ = CArray.make t 1 in stubs_var_correction_out - (CArray.start out__) out self (match dim with @@ -32657,16 +20740,12 @@ let var_correction_out ~out self ~dim ~correction ~keepdim = | None -> -1 | Some v -> List.length v) correction - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let var_dim self ~dim ~unbiased ~keepdim = - let out__ = CArray.make t 1 in stubs_var_dim - (CArray.start out__) self (match dim with | None -> from_voidp int64_t null @@ -32675,24 +20754,20 @@ let var_dim self ~dim ~unbiased ~keepdim = | None -> -1 | Some v -> List.length v) (if unbiased then 1 else 0) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; let var_mean self ~unbiased = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_var_mean (CArray.start out__) self (if unbiased then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in 
t0, t1 ;; let var_mean_correction self ~dim ~correction ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_var_mean_correction (CArray.start out__) self @@ -32704,15 +20779,13 @@ let var_mean_correction self ~dim ~correction ~keepdim = | Some v -> List.length v) correction (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let var_mean_correction_out ~out0 ~out1 self ~dim ~correction ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_var_mean_correction_out (CArray.start out__) out0 @@ -32726,15 +20799,13 @@ let var_mean_correction_out ~out0 ~out1 self ~dim ~correction ~keepdim = | Some v -> List.length v) correction (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let var_mean_dim self ~dim ~unbiased ~keepdim = - let out__ = CArray.make t 2 in + let out__ = CArray.make raw_tensor 2 in stubs_var_mean_dim (CArray.start out__) self @@ -32746,17 +20817,13 @@ let var_mean_dim self ~dim ~unbiased ~keepdim = | Some v -> List.length v) (if unbiased then 1 else 0) (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - let t1 = CArray.get out__ 1 in - Gc.finalise C.Tensor.free t1; + let t0 = CArray.get out__ 0 |> with_tensor_gc in + let t1 = CArray.get out__ 1 |> with_tensor_gc in t0, t1 ;; let var_out ~out self ~dim ~unbiased ~keepdim = - let out__ = CArray.make t 1 in stubs_var_out - (CArray.start out__) out self (match dim with @@ -32766,143 +20833,63 @@ let var_out ~out self ~dim ~unbiased ~keepdim = | None -> -1 | Some v -> List.length v) (if unbiased then 1 else 0) - (if keepdim then 1 else 0); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let vdot self other = - let out__ = CArray.make t 1 in - stubs_vdot (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (if keepdim then 1 else 0) + |> with_tensor_gc ;; -let vdot_out ~out self other = - let out__ = CArray.make t 1 in - stubs_vdot_out (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let vdot self other = stubs_vdot self other |> with_tensor_gc +let vdot_out ~out self other = stubs_vdot_out out self other |> with_tensor_gc let view self ~size = - let out__ = CArray.make t 1 in stubs_view - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let view_as self other = - let out__ = CArray.make t 1 in - stubs_view_as (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let view_as_complex self = - let out__ = CArray.make t 1 in - stubs_view_as_complex (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; -let view_as_complex_copy self = - let out__ = CArray.make t 1 in - stubs_view_as_complex_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - 
Gc.finalise C.Tensor.free t0; - t0 -;; +let view_as self other = stubs_view_as self other |> with_tensor_gc +let view_as_complex self = stubs_view_as_complex self |> with_tensor_gc +let view_as_complex_copy self = stubs_view_as_complex_copy self |> with_tensor_gc let view_as_complex_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs_view_as_complex_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_view_as_complex_copy_out out self |> with_tensor_gc ;; -let view_as_real self = - let out__ = CArray.make t 1 in - stubs_view_as_real (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let view_as_real_copy self = - let out__ = CArray.make t 1 in - stubs_view_as_real_copy (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let view_as_real self = stubs_view_as_real self |> with_tensor_gc +let view_as_real_copy self = stubs_view_as_real_copy self |> with_tensor_gc let view_as_real_copy_out ~out self = - let out__ = CArray.make t 1 in - stubs_view_as_real_copy_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_view_as_real_copy_out out self |> with_tensor_gc ;; let view_copy self ~size = - let out__ = CArray.make t 1 in stubs_view_copy - (CArray.start out__) self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let view_copy_dtype self ~dtype = - let out__ = CArray.make t 1 in - stubs_view_copy_dtype (CArray.start out__) self (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_view_copy_dtype self (Kind.packed_to_int dtype) |> with_tensor_gc ;; let view_copy_dtype_out ~out self ~dtype = - let out__ = CArray.make t 1 in - stubs_view_copy_dtype_out (CArray.start out__) out self (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_view_copy_dtype_out out self (Kind.packed_to_int dtype) |> with_tensor_gc ;; let view_copy_out ~out self ~size = - let out__ = CArray.make t 1 in stubs_view_copy_out - (CArray.start out__) out self (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; let view_dtype self ~dtype = - let out__ = CArray.make t 1 in - stubs_view_dtype (CArray.start out__) self (Kind.packed_to_int dtype); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_view_dtype self (Kind.packed_to_int dtype) |> with_tensor_gc ;; let vsplit self ~sections = stubs_vsplit self (Int64.of_int sections) |> to_tensor_list @@ -32916,195 +20903,82 @@ let vsplit_array self ~indices = ;; let vstack tensors = - let out__ = CArray.make t 1 in - stubs_vstack - (CArray.start out__) - (CArray.of_list t tensors |> CArray.start) - (List.length tensors); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_vstack (CArray.of_list gc_tensor tensors |> CArray.start) (List.length tensors) + |> with_tensor_gc ;; let vstack_out ~out tensors = - let out__ = CArray.make t 1 in stubs_vstack_out - (CArray.start out__) out - (CArray.of_list t tensors |> CArray.start) - (List.length tensors); - let t0 = CArray.get out__ 0 in - 
Gc.finalise C.Tensor.free t0; - t0 + (CArray.of_list gc_tensor tensors |> CArray.start) + (List.length tensors) + |> with_tensor_gc ;; let where ~condition = stubs_where condition |> to_tensor_list let where_scalar ~condition self other = - let out__ = CArray.make t 1 in - stubs_where_scalar (CArray.start out__) condition self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_where_scalar condition self other |> with_tensor_gc ;; let where_scalarother ~condition self other = - let out__ = CArray.make t 1 in - stubs_where_scalarother (CArray.start out__) condition self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_where_scalarother condition self other |> with_tensor_gc ;; let where_scalarself ~condition self other = - let out__ = CArray.make t 1 in - stubs_where_scalarself (CArray.start out__) condition self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_where_scalarself condition self other |> with_tensor_gc ;; let where_self ~condition self other = - let out__ = CArray.make t 1 in - stubs_where_self (CArray.start out__) condition self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_where_self condition self other |> with_tensor_gc ;; let where_self_out ~out ~condition self other = - let out__ = CArray.make t 1 in - stubs_where_self_out (CArray.start out__) out condition self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let xlogy self other = - let out__ = CArray.make t 1 in - stubs_xlogy (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_where_self_out out condition self other |> with_tensor_gc ;; -let xlogy_ self other = - let out__ = CArray.make t 1 in - stubs_xlogy_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let xlogy self other = stubs_xlogy self other |> with_tensor_gc +let xlogy_ self other = stubs_xlogy_ self other |> with_tensor_gc let xlogy_outscalar_other ~out self other = - let out__ = CArray.make t 1 in - stubs_xlogy_outscalar_other (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_xlogy_outscalar_other out self other |> with_tensor_gc ;; let xlogy_outscalar_self ~out self other = - let out__ = CArray.make t 1 in - stubs_xlogy_outscalar_self (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_xlogy_outscalar_self out self other |> with_tensor_gc ;; let xlogy_outtensor ~out self other = - let out__ = CArray.make t 1 in - stubs_xlogy_outtensor (CArray.start out__) out self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_xlogy_outtensor out self other |> with_tensor_gc ;; -let xlogy_scalar_other self other = - let out__ = CArray.make t 1 in - stubs_xlogy_scalar_other (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let xlogy_scalar_other self other = stubs_xlogy_scalar_other self other |> with_tensor_gc let xlogy_scalar_other_ self other = - let out__ = CArray.make t 1 in - stubs_xlogy_scalar_other_ (CArray.start out__) self other; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let xlogy_scalar_self self other = - let out__ = CArray.make t 1 in - stubs_xlogy_scalar_self (CArray.start out__) self other; - let t0 = 
CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let zero self = - let out__ = CArray.make t 1 in - stubs_zero (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let zero_ self = - let out__ = CArray.make t 1 in - stubs_zero_ (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + stubs_xlogy_scalar_other_ self other |> with_tensor_gc ;; -let zero_out ~out self = - let out__ = CArray.make t 1 in - stubs_zero_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let xlogy_scalar_self self other = stubs_xlogy_scalar_self self other |> with_tensor_gc +let zero self = stubs_zero self |> with_tensor_gc +let zero_ self = stubs_zero_ self |> with_tensor_gc +let zero_out ~out self = stubs_zero_out out self |> with_tensor_gc let zeros ~size ~options = - let out__ = CArray.make t 1 in stubs_zeros - (CArray.start out__) (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) (List.length size) (Kind.packed_to_int (fst options)) - (Device.to_int (snd options)); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; - -let zeros_like self = - let out__ = CArray.make t 1 in - stubs_zeros_like (CArray.start out__) self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (Device.to_int (snd options)) + |> with_tensor_gc ;; -let zeros_like_out ~out self = - let out__ = CArray.make t 1 in - stubs_zeros_like_out (CArray.start out__) out self; - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 -;; +let zeros_like self = stubs_zeros_like self |> with_tensor_gc +let zeros_like_out ~out self = stubs_zeros_like_out out self |> with_tensor_gc let zeros_out ~out ~size = - let out__ = CArray.make t 1 in stubs_zeros_out - (CArray.start out__) out (List.map Int64.of_int size |> CArray.of_list int64_t |> CArray.start) - (List.length size); - let t0 = CArray.get out__ 0 in - Gc.finalise C.Tensor.free t0; - t0 + (List.length size) + |> with_tensor_gc ;; diff --git a/third_party/pytorch/dune b/third_party/pytorch/dune new file mode 100644 index 0000000..e69de29 diff --git a/torch.opam b/torch.opam index 7ae43ef..d62c920 100644 --- a/torch.opam +++ b/torch.opam @@ -16,6 +16,7 @@ depends: [ "ppx_bench" "ppx_inline_test" "ppx_jane" + "ppx_string" "stdio" "ctypes" {>= "0.18.0"} "ctypes-foreign"
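
For readers skimming the generated-bindings diff above: every hunk applies the same mechanical rewrite. The sketch below restates two wrapper bodies taken verbatim from the hunks (it introduces no new API); `with_tensor_gc`, `raw_tensor`, `gc_tensor`, and the `stubs_*` functions are the ones already used throughout the wrapper file.

```ocaml
(* Before: the wrapper allocated a one-slot out-array, had the C++ side write
   a raw tensor pointer into it, then attached a finalizer by hand. *)
let trunc self =
  let out__ = CArray.make t 1 in
  stubs_trunc (CArray.start out__) self;
  let t0 = CArray.get out__ 0 in
  Gc.finalise C.Tensor.free t0;
  t0
;;

(* After: the stub returns a raw tensor directly, and with_tensor_gc rewraps
   it in a custom block whose finalizer frees the underlying tensor. *)
let trunc self = stubs_trunc self |> with_tensor_gc

(* Multi-output wrappers keep the out-array, but type it as raw_tensor and
   convert each slot with with_tensor_gc before returning. *)
let var_mean self ~unbiased =
  let out__ = CArray.make raw_tensor 2 in
  stubs_var_mean (CArray.start out__) self (if unbiased then 1 else 0);
  let t0 = CArray.get out__ 0 |> with_tensor_gc in
  let t1 = CArray.get out__ 1 |> with_tensor_gc in
  t0, t1
;;
```

Tensor-valued inputs move in the opposite direction: argument arrays switch from `CArray.of_list t` to `CArray.of_list gc_tensor`, matching the raw-tensor/GC-tensor split that `with_tensor_gc` mediates.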