Commit 8d94525: Automated tutorials push

pytorchbot committed Jan 16, 2024
1 parent 7bf73af commit 8d94525

Showing 353 changed files with 11,998 additions and 14,228 deletions.
Binary file modified _images/sphx_glr_coding_ddpg_001.png
Binary file modified _images/sphx_glr_dqn_with_rnn_tutorial_001.png
Binary file modified _images/sphx_glr_neural_style_tutorial_004.png
Binary file modified _images/sphx_glr_reinforcement_ppo_001.png
Binary file modified _images/sphx_glr_reinforcement_q_learning_001.png
Binary file modified _images/sphx_glr_spatial_transformer_tutorial_001.png
Binary file modified _images/sphx_glr_torchvision_tutorial_002.png
42 changes: 21 additions & 21 deletions _sources/advanced/coding_ddpg.rst.txt
@@ -1649,26 +1649,26 @@ modules we need.
0%| | 0/10000 [00:00<?, ?it/s]
- 8%|8 | 800/10000 [00:00<00:04, 1934.56it/s]
- 16%|#6 | 1600/10000 [00:03<00:23, 361.50it/s]
- 24%|##4 | 2400/10000 [00:04<00:13, 552.48it/s]
- 32%|###2 | 3200/10000 [00:04<00:09, 737.59it/s]
- 40%|#### | 4000/10000 [00:05<00:06, 902.90it/s]
- 48%|####8 | 4800/10000 [00:06<00:04, 1046.48it/s]
- 56%|#####6 | 5600/10000 [00:06<00:03, 1164.30it/s]
- reward: -2.01 (r0 = -1.23), reward eval: reward: 0.00, reward normalized=-1.76/6.66, grad norm= 92.03, loss_value= 410.87, loss_actor= 12.45, target value: -9.78: 56%|#####6 | 5600/10000 [00:07<00:03, 1164.30it/s]
- reward: -2.01 (r0 = -1.23), reward eval: reward: 0.00, reward normalized=-1.76/6.66, grad norm= 92.03, loss_value= 410.87, loss_actor= 12.45, target value: -9.78: 64%|######4 | 6400/10000 [00:08<00:04, 720.45it/s]
- reward: -0.13 (r0 = -1.23), reward eval: reward: 0.00, reward normalized=-2.14/5.47, grad norm= 45.95, loss_value= 225.14, loss_actor= 14.70, target value: -14.54: 64%|######4 | 6400/10000 [00:09<00:04, 720.45it/s]
- reward: -0.13 (r0 = -1.23), reward eval: reward: 0.00, reward normalized=-2.14/5.47, grad norm= 45.95, loss_value= 225.14, loss_actor= 14.70, target value: -14.54: 72%|#######2 | 7200/10000 [00:10<00:05, 543.61it/s]
- reward: -0.99 (r0 = -1.23), reward eval: reward: 0.00, reward normalized=-2.03/5.05, grad norm= 94.39, loss_value= 213.34, loss_actor= 9.58, target value: -12.79: 72%|#######2 | 7200/10000 [00:11<00:05, 543.61it/s]
- reward: -0.99 (r0 = -1.23), reward eval: reward: 0.00, reward normalized=-2.03/5.05, grad norm= 94.39, loss_value= 213.34, loss_actor= 9.58, target value: -12.79: 80%|######## | 8000/10000 [00:13<00:04, 444.47it/s]
- reward: -3.82 (r0 = -1.23), reward eval: reward: 0.00, reward normalized=-2.02/4.56, grad norm= 83.18, loss_value= 166.50, loss_actor= 14.78, target value: -12.34: 80%|######## | 8000/10000 [00:14<00:04, 444.47it/s]
- reward: -3.82 (r0 = -1.23), reward eval: reward: 0.00, reward normalized=-2.02/4.56, grad norm= 83.18, loss_value= 166.50, loss_actor= 14.78, target value: -12.34: 88%|########8 | 8800/10000 [00:15<00:03, 394.85it/s]
- reward: -5.18 (r0 = -1.23), reward eval: reward: -5.55, reward normalized=-2.37/4.85, grad norm= 175.48, loss_value= 184.63, loss_actor= 15.14, target value: -16.55: 88%|########8 | 8800/10000 [00:19<00:03, 394.85it/s]
- reward: -5.18 (r0 = -1.23), reward eval: reward: -5.55, reward normalized=-2.37/4.85, grad norm= 175.48, loss_value= 184.63, loss_actor= 15.14, target value: -16.55: 96%|#########6| 9600/10000 [00:21<00:01, 265.13it/s]
- reward: -4.71 (r0 = -1.23), reward eval: reward: -5.55, reward normalized=-3.04/4.39, grad norm= 233.96, loss_value= 179.48, loss_actor= 14.56, target value: -20.80: 96%|#########6| 9600/10000 [00:22<00:01, 265.13it/s]
- reward: -4.71 (r0 = -1.23), reward eval: reward: -5.55, reward normalized=-3.04/4.39, grad norm= 233.96, loss_value= 179.48, loss_actor= 14.56, target value: -20.80: : 10400it [00:23, 279.67it/s]
- reward: -8.14 (r0 = -1.23), reward eval: reward: -5.55, reward normalized=-2.47/4.66, grad norm= 210.40, loss_value= 150.93, loss_actor= 18.18, target value: -17.40: : 10400it [00:24, 279.67it/s]
+ 8%|8 | 800/10000 [00:00<00:04, 2010.28it/s]
+ 16%|#6 | 1600/10000 [00:03<00:22, 377.14it/s]
+ 24%|##4 | 2400/10000 [00:04<00:13, 575.71it/s]
+ 32%|###2 | 3200/10000 [00:04<00:08, 767.23it/s]
+ 40%|#### | 4000/10000 [00:05<00:06, 938.15it/s]
+ 48%|####8 | 4800/10000 [00:05<00:04, 1087.10it/s]
+ 56%|#####6 | 5600/10000 [00:06<00:03, 1208.92it/s]
+ reward: -2.28 (r0 = -1.60), reward eval: reward: -0.00, reward normalized=-2.03/5.99, grad norm= 83.44, loss_value= 240.10, loss_actor= 13.62, target value: -11.69: 56%|#####6 | 5600/10000 [00:07<00:03, 1208.92it/s]
+ reward: -2.28 (r0 = -1.60), reward eval: reward: -0.00, reward normalized=-2.03/5.99, grad norm= 83.44, loss_value= 240.10, loss_actor= 13.62, target value: -11.69: 64%|######4 | 6400/10000 [00:08<00:04, 739.40it/s]
+ reward: -0.13 (r0 = -1.60), reward eval: reward: -0.00, reward normalized=-2.16/5.29, grad norm= 67.95, loss_value= 159.86, loss_actor= 15.35, target value: -14.24: 64%|######4 | 6400/10000 [00:09<00:04, 739.40it/s]
+ reward: -0.13 (r0 = -1.60), reward eval: reward: -0.00, reward normalized=-2.16/5.29, grad norm= 67.95, loss_value= 159.86, loss_actor= 15.35, target value: -14.24: 72%|#######2 | 7200/10000 [00:10<00:04, 596.07it/s]
+ reward: -1.26 (r0 = -1.60), reward eval: reward: -0.00, reward normalized=-2.12/5.09, grad norm= 97.58, loss_value= 193.69, loss_actor= 11.58, target value: -13.67: 72%|#######2 | 7200/10000 [00:11<00:04, 596.07it/s]
+ reward: -1.26 (r0 = -1.60), reward eval: reward: -0.00, reward normalized=-2.12/5.09, grad norm= 97.58, loss_value= 193.69, loss_actor= 11.58, target value: -13.67: 80%|######## | 8000/10000 [00:12<00:04, 472.35it/s]
+ reward: -4.21 (r0 = -1.60), reward eval: reward: -0.00, reward normalized=-2.57/4.50, grad norm= 121.68, loss_value= 167.76, loss_actor= 17.89, target value: -16.55: 80%|######## | 8000/10000 [00:13<00:04, 472.35it/s]
+ reward: -4.21 (r0 = -1.60), reward eval: reward: -0.00, reward normalized=-2.57/4.50, grad norm= 121.68, loss_value= 167.76, loss_actor= 17.89, target value: -16.55: 88%|########8 | 8800/10000 [00:15<00:02, 414.04it/s]
+ reward: -4.99 (r0 = -1.60), reward eval: reward: -5.16, reward normalized=-2.41/4.91, grad norm= 61.43, loss_value= 156.82, loss_actor= 14.72, target value: -16.67: 88%|########8 | 8800/10000 [00:19<00:02, 414.04it/s]
+ reward: -4.99 (r0 = -1.60), reward eval: reward: -5.16, reward normalized=-2.41/4.91, grad norm= 61.43, loss_value= 156.82, loss_actor= 14.72, target value: -16.67: 96%|#########6| 9600/10000 [00:20<00:01, 265.19it/s]
+ reward: -4.54 (r0 = -1.60), reward eval: reward: -5.16, reward normalized=-2.93/4.50, grad norm= 95.65, loss_value= 195.73, loss_actor= 14.44, target value: -20.43: 96%|#########6| 9600/10000 [00:21<00:01, 265.19it/s]
+ reward: -4.54 (r0 = -1.60), reward eval: reward: -5.16, reward normalized=-2.93/4.50, grad norm= 95.65, loss_value= 195.73, loss_actor= 14.44, target value: -20.43: : 10400it [00:23, 279.89it/s]
+ reward: -9.13 (r0 = -1.60), reward eval: reward: -5.16, reward normalized=-3.18/5.50, grad norm= 141.52, loss_value= 258.12, loss_actor= 21.03, target value: -22.15: : 10400it [00:24, 279.89it/s]
@@ -1738,7 +1738,7 @@ To iterate further on this loss module we might consider:

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 31.452 seconds)
+ **Total running time of the script:** ( 0 minutes 30.685 seconds)


.. _sphx_glr_download_advanced_coding_ddpg.py:
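
For readers skimming the logs above: ``loss_value``, ``loss_actor``, and ``target value`` are the standard DDPG objectives, a critic regression against a bootstrapped target plus a policy loss that maximizes the critic. A minimal sketch in plain PyTorch, assuming generic ``actor``/``q_net`` callables and target copies of each (illustrative only, not TorchRL's actual ``DDPGLoss`` implementation):

.. code-block:: python

    import torch
    import torch.nn.functional as F

    def ddpg_losses(batch, actor, q_net, actor_target, q_target, gamma=0.99):
        # batch holds (state, action, reward, next_state, done) tensors
        s, a, r, s_next, done = batch
        with torch.no_grad():
            a_next = actor_target(s_next)  # target policy's next action
            target_value = r + gamma * (1.0 - done) * q_target(s_next, a_next)
        loss_value = F.mse_loss(q_net(s, a), target_value)  # critic ("loss_value")
        loss_actor = -q_net(s, actor(s)).mean()             # policy ("loss_actor")
        return loss_value, loss_actor
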
6 changes: 3 additions & 3 deletions _sources/advanced/dynamic_quantization_tutorial.rst.txt
@@ -516,9 +516,9 @@ models run single threaded.
.. code-block:: none
loss: 5.167
- elapsed time (seconds): 203.8
+ elapsed time (seconds): 213.2
loss: 5.168
- elapsed time (seconds): 116.0
+ elapsed time (seconds): 115.7
@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 5 minutes 28.618 seconds)
+ **Total running time of the script:** ( 5 minutes 38.073 seconds)


.. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
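
The two elapsed-time figures in this hunk compare the fp32 model against its dynamically quantized counterpart. A sketch of how such a benchmark is typically set up with ``torch.ao.quantization.quantize_dynamic``; the toy model and tensor sizes here are assumptions, not the tutorial's word-language LSTM:

.. code-block:: python

    import time

    import torch
    import torch.nn as nn

    def time_model(model, x):
        start = time.time()
        with torch.no_grad():
            model(x)
        return time.time() - start

    # Toy stand-in for the tutorial's model.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

    # Convert nn.Linear weights to int8 on the fly; activations stay fp32.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(64, 512)
    print("fp32 elapsed:", time_model(model, x))
    print("int8 elapsed:", time_model(quantized, x))
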
92 changes: 46 additions & 46 deletions _sources/advanced/neural_style_tutorial.rst.txt
@@ -410,45 +410,45 @@ network to evaluation mode using ``.eval()``.
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/jenkins/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
0%| | 0.00/548M [00:00<?, ?B/s]
- 2%|2 | 12.8M/548M [00:00<00:04, 134MB/s]
- 5%|4 | 26.0M/548M [00:00<00:04, 137MB/s]
- 7%|7 | 39.7M/548M [00:00<00:03, 140MB/s]
- 10%|9 | 54.0M/548M [00:00<00:03, 144MB/s]
- 12%|#2 | 68.4M/548M [00:00<00:03, 146MB/s]
- 15%|#5 | 82.7M/548M [00:00<00:03, 148MB/s]
- 18%|#7 | 97.0M/548M [00:00<00:03, 148MB/s]
- 20%|## | 111M/548M [00:00<00:03, 149MB/s]
- 23%|##2 | 126M/548M [00:00<00:03, 147MB/s]
- 25%|##5 | 140M/548M [00:01<00:02, 148MB/s]
- 28%|##8 | 154M/548M [00:01<00:02, 149MB/s]
- 31%|### | 169M/548M [00:01<00:02, 149MB/s]
- 33%|###3 | 183M/548M [00:01<00:02, 150MB/s]
- 36%|###6 | 197M/548M [00:01<00:02, 150MB/s]
- 39%|###8 | 212M/548M [00:01<00:02, 151MB/s]
- 41%|####1 | 226M/548M [00:01<00:02, 151MB/s]
- 44%|####3 | 240M/548M [00:01<00:02, 150MB/s]
- 47%|####6 | 255M/548M [00:01<00:02, 150MB/s]
- 49%|####9 | 269M/548M [00:01<00:01, 151MB/s]
- 52%|#####1 | 284M/548M [00:02<00:01, 151MB/s]
- 54%|#####4 | 298M/548M [00:02<00:01, 151MB/s]
- 57%|#####7 | 313M/548M [00:02<00:01, 151MB/s]
- 60%|#####9 | 327M/548M [00:02<00:01, 151MB/s]
- 62%|######2 | 341M/548M [00:02<00:01, 151MB/s]
- 65%|######4 | 356M/548M [00:02<00:01, 151MB/s]
- 68%|######7 | 370M/548M [00:02<00:01, 150MB/s]
- 70%|####### | 385M/548M [00:02<00:01, 150MB/s]
- 73%|#######2 | 399M/548M [00:02<00:01, 150MB/s]
- 75%|#######5 | 413M/548M [00:02<00:00, 150MB/s]
- 78%|#######8 | 428M/548M [00:03<00:00, 151MB/s]
- 81%|######## | 442M/548M [00:03<00:00, 150MB/s]
- 83%|########3 | 456M/548M [00:03<00:00, 150MB/s]
- 86%|########5 | 471M/548M [00:03<00:00, 150MB/s]
- 89%|########8 | 485M/548M [00:03<00:00, 150MB/s]
- 91%|#########1| 499M/548M [00:03<00:00, 150MB/s]
- 94%|#########3| 514M/548M [00:03<00:00, 150MB/s]
- 96%|#########6| 528M/548M [00:03<00:00, 150MB/s]
- 99%|#########8| 542M/548M [00:03<00:00, 150MB/s]
- 100%|##########| 548M/548M [00:03<00:00, 149MB/s]
+ 2%|2 | 12.1M/548M [00:00<00:04, 127MB/s]
+ 5%|4 | 24.8M/548M [00:00<00:04, 130MB/s]
+ 7%|7 | 38.4M/548M [00:00<00:03, 136MB/s]
+ 10%|9 | 52.6M/548M [00:00<00:03, 141MB/s]
+ 12%|#2 | 66.7M/548M [00:00<00:03, 144MB/s]
+ 15%|#4 | 80.9M/548M [00:00<00:03, 145MB/s]
+ 17%|#7 | 95.0M/548M [00:00<00:03, 146MB/s]
+ 20%|#9 | 109M/548M [00:00<00:03, 147MB/s]
+ 23%|##2 | 124M/548M [00:00<00:03, 148MB/s]
+ 25%|##5 | 138M/548M [00:01<00:02, 149MB/s]
+ 28%|##7 | 152M/548M [00:01<00:02, 149MB/s]
+ 30%|### | 167M/548M [00:01<00:02, 150MB/s]
+ 33%|###3 | 181M/548M [00:01<00:02, 150MB/s]
+ 36%|###5 | 195M/548M [00:01<00:02, 150MB/s]
+ 38%|###8 | 210M/548M [00:01<00:02, 150MB/s]
+ 41%|#### | 224M/548M [00:01<00:02, 150MB/s]
+ 43%|####3 | 238M/548M [00:01<00:02, 150MB/s]
+ 46%|####6 | 253M/548M [00:01<00:02, 150MB/s]
+ 49%|####8 | 267M/548M [00:01<00:01, 150MB/s]
+ 51%|#####1 | 281M/548M [00:02<00:01, 150MB/s]
+ 54%|#####3 | 296M/548M [00:02<00:01, 150MB/s]
+ 57%|#####6 | 310M/548M [00:02<00:01, 150MB/s]
+ 59%|#####9 | 324M/548M [00:02<00:01, 150MB/s]
+ 62%|######1 | 339M/548M [00:02<00:01, 150MB/s]
+ 64%|######4 | 353M/548M [00:02<00:01, 150MB/s]
+ 67%|######6 | 367M/548M [00:02<00:01, 150MB/s]
+ 70%|######9 | 381M/548M [00:02<00:01, 150MB/s]
+ 72%|#######2 | 396M/548M [00:02<00:01, 150MB/s]
+ 75%|#######4 | 410M/548M [00:02<00:00, 150MB/s]
+ 77%|#######7 | 424M/548M [00:03<00:00, 150MB/s]
+ 80%|######## | 439M/548M [00:03<00:00, 150MB/s]
+ 83%|########2 | 453M/548M [00:03<00:00, 150MB/s]
+ 85%|########5 | 467M/548M [00:03<00:00, 150MB/s]
+ 88%|########7 | 482M/548M [00:03<00:00, 149MB/s]
+ 90%|######### | 496M/548M [00:03<00:00, 150MB/s]
+ 93%|#########3| 510M/548M [00:03<00:00, 150MB/s]
+ 96%|#########5| 525M/548M [00:03<00:00, 150MB/s]
+ 98%|#########8| 539M/548M [00:03<00:00, 150MB/s]
+ 100%|##########| 548M/548M [00:03<00:00, 148MB/s]
@@ -769,22 +769,22 @@ Finally, we can run the algorithm.
Optimizing..
run [50]:
- Style Loss : 4.205052 Content Loss: 4.115046
+ Style Loss : 3.941625 Content Loss: 4.078473
run [100]:
- Style Loss : 1.159382 Content Loss: 3.048042
+ Style Loss : 1.139907 Content Loss: 3.019051
run [150]:
- Style Loss : 0.725685 Content Loss: 2.664803
+ Style Loss : 0.716969 Content Loss: 2.652961
run [200]:
- Style Loss : 0.484354 Content Loss: 2.496711
+ Style Loss : 0.476380 Content Loss: 2.486228
run [250]:
- Style Loss : 0.350218 Content Loss: 2.408220
+ Style Loss : 0.344488 Content Loss: 2.401075
run [300]:
- Style Loss : 0.267636 Content Loss: 2.352090
+ Style Loss : 0.262781 Content Loss: 2.347618
@@ -793,7 +793,7 @@ Finally, we can run the algorithm.
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 41.206 seconds)
+ **Total running time of the script:** ( 0 minutes 41.227 seconds)


.. _sphx_glr_download_advanced_neural_style_tutorial.py:
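
The ``run [N]`` blocks above are printed from the tutorial's L-BFGS closure, which re-evaluates the style and content losses on every optimizer step. A condensed sketch of that loop, assuming the ``model``, ``style_losses``, and ``content_losses`` built earlier in the tutorial:

.. code-block:: python

    import torch
    import torch.optim as optim

    def run_style_transfer(model, style_losses, content_losses, input_img,
                           num_steps=300, style_weight=1_000_000, content_weight=1):
        optimizer = optim.LBFGS([input_img.requires_grad_(True)])
        run = [0]
        while run[0] <= num_steps:
            def closure():
                with torch.no_grad():
                    input_img.clamp_(0, 1)  # keep pixels in a valid range
                optimizer.zero_grad()
                model(input_img)  # loss modules record their losses as a side effect
                style_score = style_weight * sum(sl.loss for sl in style_losses)
                content_score = content_weight * sum(cl.loss for cl in content_losses)
                loss = style_score + content_score
                loss.backward()
                run[0] += 1
                return loss
            optimizer.step(closure)
        with torch.no_grad():
            input_img.clamp_(0, 1)
        return input_img
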
2 changes: 1 addition & 1 deletion _sources/advanced/numpy_extensions_tutorial.rst.txt
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt``
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 0.556 seconds)
+ **Total running time of the script:** ( 0 minutes 0.574 seconds)


.. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
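
The hunk header above references the tutorial's custom ``torch.autograd.Function`` that round-trips through NumPy, whose backward returns a gradient for each input. A toy sketch of that pattern; the elementwise op and names here are assumptions for brevity, not the tutorial's actual example:

.. code-block:: python

    import torch
    from torch.autograd import Function

    class ScaleByNumpy(Function):
        """Toy NumPy-backed op y = x * w (illustrative stand-in)."""

        @staticmethod
        def forward(ctx, x, w):
            ctx.save_for_backward(x, w)
            return torch.from_numpy(x.detach().numpy() * w.detach().numpy())

        @staticmethod
        def backward(ctx, grad_output):
            x, w = ctx.saved_tensors
            g = grad_output.detach().numpy()
            # gradient w.r.t. the input, then gradient w.r.t. the weight
            grad_x = torch.from_numpy(g * w.detach().numpy())
            grad_w = torch.from_numpy(g * x.detach().numpy())
            return grad_x, grad_w

    x = torch.randn(4, requires_grad=True)
    w = torch.randn(4, requires_grad=True)
    ScaleByNumpy.apply(x, w).sum().backward()
    print(x.grad, w.grad)
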