Commit

Merge branch 'main' into 099_gpu_quant
svekars authored Jan 30, 2024
2 parents 2a865a6 + cfe484c commit 0b7f773
Showing 3 changed files with 32 additions and 40 deletions.
8 changes: 4 additions & 4 deletions .jenkins/build.sh
@@ -24,10 +24,10 @@ pip install --progress-bar off -r $DIR/../requirements.txt
 
 #Install PyTorch Nightly for test.
 # Nightly - pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html
-# Install 2.2 for testing
-pip uninstall -y torch torchvision torchaudio torchtext torchdata
-pip3 install torch==2.2.0 torchvision torchaudio --no-cache-dir --index-url https://download.pytorch.org/whl/test/cu121
-pip3 install torchdata torchtext --index-url https://download.pytorch.org/whl/test/cpu
+# Install 2.2 for testing - uncomment to install nightly binaries (update the version as needed).
+# pip uninstall -y torch torchvision torchaudio torchtext torchdata
+# pip3 install torch==2.2.0 torchvision torchaudio --no-cache-dir --index-url https://download.pytorch.org/whl/test/cu121
+# pip3 install torchdata torchtext --index-url https://download.pytorch.org/whl/test/cpu
 
 # Install two language tokenizers for Translation with TorchText tutorial
 python -m spacy download en_core_web_sm
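The commit above switches the CI install from the `test` wheel index back to commented-out lines. As a minimal sketch of how those channels relate, here is a hypothetical helper (not part of the repo) that maps a PyTorch release channel to its wheel index URL; the `cu121` CUDA suffix is an assumption carried over from the pinned command in the diff.

```shell
# Hypothetical helper: pick the PyTorch wheel index URL for a release channel.
# Mirrors the --index-url values used in the install lines above (cu121 assumed).
torch_index_url() {
  case "$1" in
    nightly) echo "https://download.pytorch.org/whl/nightly/cu121" ;;
    test)    echo "https://download.pytorch.org/whl/test/cu121" ;;
    stable)  echo "https://download.pytorch.org/whl/cu121" ;;
    *)       echo "unknown channel: $1" >&2; return 1 ;;
  esac
}

# e.g. pip3 install torch --index-url "$(torch_index_url test)"
torch_index_url test
```

One could source such a helper in build.sh so that switching between nightly, release-candidate, and stable testing is a one-word change rather than commenting lines in and out.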
14 changes: 5 additions & 9 deletions index.rst
@@ -3,15 +3,11 @@ Welcome to PyTorch Tutorials
 
 What's new in PyTorch tutorials?
 
-* `Getting Started with Distributed Checkpoint (DCP) <https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html>`__
-* `torch.export Tutorial <https://pytorch.org/tutorials/intermediate/torch_export_tutorial.html>`__
-* `Facilitating New Backend Integration by PrivateUse1 <https://pytorch.org/tutorials/advanced/privateuseone.html>`__
-* `(prototype) Accelerating BERT with semi-structured (2:4) sparsity <https://pytorch.org/tutorials/prototype/semi_structured_sparse.html>`__
-* `(prototype) PyTorch 2 Export Quantization-Aware Training (QAT) <https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html>`__
-* `(prototype) PyTorch 2 Export Post Training Quantization with X86 Backend through Inductor <https://pytorch.org/tutorials/prototype/pt2e_quant_ptq_x86_inductor.html>`__
-* `(prototype) Inductor C++ Wrapper Tutorial <https://pytorch.org/tutorials/prototype/inductor_cpp_wrapper_tutorial.html>`__
-* `How to save memory by fusing the optimizer step into the backward pass <https://pytorch.org/tutorials/intermediate/optimizer_step_in_backward_tutorial.html>`__
-* `Tips for Loading an nn.Module from a Checkpoint <https://pytorch.org/tutorials/recipes/recipes/module_load_state_dict_tips.html>`__
+* `PyTorch Inference Performance Tuning on AWS Graviton Processors <https://pytorch.org/tutorials/recipes/inference_tuning_on_aws_graviton.html>`__
+* `Using TORCH_LOGS python API with torch.compile <https://pytorch.org/tutorials/recipes/torch_logs.html>`__
+* `PyTorch 2 Export Quantization with X86 Backend through Inductor <https://pytorch.org/tutorials/prototype/pt2e_quant_x86_inductor.html>`__
+* `Getting Started with DeviceMesh <https://pytorch.org/tutorials/recipes/distributed_device_mesh.html>`__
+* `Compiling the optimizer with torch.compile <https://pytorch.org/tutorials/recipes/compiling_optimizer.html>`__
 
 
 .. raw:: html
50 changes: 23 additions & 27 deletions recipes_source/torch_logs.py
@@ -34,53 +34,49 @@
 
 # exit cleanly if we are on a device that doesn't support torch.compile
 if torch.cuda.get_device_capability() < (7, 0):
-    print("Exiting because torch.compile is not supported on this device.")
-    import sys
+    print("Skipping because torch.compile is not supported on this device.")
+else:
+    @torch.compile()
+    def fn(x, y):
+        z = x + y
+        return z + 2
-    sys.exit(0)
-
-
-@torch.compile()
-def fn(x, y):
-    z = x + y
-    return z + 2
-
 
-inputs = (torch.ones(2, 2, device="cuda"), torch.zeros(2, 2, device="cuda"))
+    inputs = (torch.ones(2, 2, device="cuda"), torch.zeros(2, 2, device="cuda"))
 
 
 # print separator and reset dynamo
 # between each example
-def separator(name):
-    print(f"==================={name}=========================")
-    torch._dynamo.reset()
+    def separator(name):
+        print(f"==================={name}=========================")
+        torch._dynamo.reset()
 
 
-separator("Dynamo Tracing")
+    separator("Dynamo Tracing")
 # View dynamo tracing
 # TORCH_LOGS="+dynamo"
-torch._logging.set_logs(dynamo=logging.DEBUG)
-fn(*inputs)
+    torch._logging.set_logs(dynamo=logging.DEBUG)
+    fn(*inputs)
 
-separator("Traced Graph")
+    separator("Traced Graph")
 # View traced graph
 # TORCH_LOGS="graph"
-torch._logging.set_logs(graph=True)
-fn(*inputs)
+    torch._logging.set_logs(graph=True)
+    fn(*inputs)
 
-separator("Fusion Decisions")
+    separator("Fusion Decisions")
 # View fusion decisions
 # TORCH_LOGS="fusion"
-torch._logging.set_logs(fusion=True)
-fn(*inputs)
+    torch._logging.set_logs(fusion=True)
+    fn(*inputs)
 
-separator("Output Code")
+    separator("Output Code")
 # View output code generated by inductor
 # TORCH_LOGS="output_code"
-torch._logging.set_logs(output_code=True)
-fn(*inputs)
+    torch._logging.set_logs(output_code=True)
+    fn(*inputs)
 
-separator("")
+    separator("")
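The torch_logs.py change replaces a mid-script `sys.exit(0)` with an if/else guard, so the tutorial file runs top to bottom on any device instead of terminating the interpreter partway through. A torch-free sketch of that pattern (all names here are hypothetical, not from the repo):

```python
# Sketch of the guard refactor in this diff: the unsupported case records a
# notice and the demo body lives under `else`, so the script always completes
# instead of calling sys.exit(0) partway through.
def run_demo(device_capability):
    messages = []
    if device_capability < (7, 0):
        messages.append("Skipping because torch.compile is not supported on this device.")
    else:
        def separator(name):
            messages.append(f"==================={name}=========================")

        separator("Dynamo Tracing")
        messages.append("ran fn")  # stands in for the compiled-function calls
    return messages

print(run_demo((6, 1)))  # unsupported device: just the skip notice
print(run_demo((8, 0)))  # supported device: separator plus demo steps
```

Avoiding `sys.exit` also matters for doc builds: a tutorial script that exits mid-file can abort the process that is executing it to render output, whereas the else-branch form degrades to a printed notice.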

######################################################################
# Conclusion
