{"id":941,"date":"2024-11-07T17:00:38","date_gmt":"2024-11-07T09:00:38","guid":{"rendered":"https:\/\/fwq.ai\/blog\/941\/"},"modified":"2024-11-07T17:00:38","modified_gmt":"2024-11-07T09:00:38","slug":"%e5%9c%a8-pytorch-%e4%b8%ad%e5%b1%95%e5%b9%b3","status":"publish","type":"post","link":"https:\/\/fwq.ai\/blog\/941\/","title":{"rendered":"Flatten in PyTorch"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-968\" src=\"https:\/\/fwq.ai\/blog\/wp-content\/uploads\/2024\/11\/173085991954531.jpg\" width=\"800\" height=\"320\" srcset=\"https:\/\/fwq.ai\/blog\/wp-content\/uploads\/2024\/11\/173085991954531.jpg 800w, https:\/\/fwq.ai\/blog\/wp-content\/uploads\/2024\/11\/173085991954531-300x120.jpg 300w, https:\/\/fwq.ai\/blog\/wp-content\/uploads\/2024\/11\/173085991954531-768x307.jpg 768w, https:\/\/fwq.ai\/blog\/wp-content\/uploads\/2024\/11\/173085991954531-670x268.jpg 670w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" title=\"Flatten in PyTorch\" alt=\"Flatten in PyTorch\" \/><\/p>\n<p>*Memos:<\/p>\n<ul>\n<li>My post explains flatten() and ravel().<\/li>\n<li>My post explains unflatten().<\/li>\n<\/ul>\n<p>Flatten() can flatten zero or more selected dimensions of a 0D-or-higher tensor of zero or more elements, producing a 1D-or-higher tensor of zero or more elements, as shown below:<\/p>\n<p>*Memos:<\/p>\n<ul>\n<li>The 1st argument for initialization is start_dim (optional, default: 1, type: int).<\/li>\n<li>The 2nd argument for initialization is 
end_dim (optional, default: -1, type: int).<\/li>\n<li>The 1st argument is input (required, type: tensor of int, float, complex or bool).<\/li>\n<li>Flatten() can change a 0D tensor to a 1D tensor.<\/li>\n<li>Flatten() has no effect on a 1D tensor.<\/li>\n<li>The difference between Flatten() and flatten() is:\n<ul>\n<li>Flatten()'s start_dim defaults to 1, while flatten()'s start_dim defaults to 0.<\/li>\n<li>Basically, Flatten() is used to define a model, while flatten() is not.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<pre>import torch\nfrom torch import nn\n\nflatten = nn.Flatten()\nflatten\n# Flatten(start_dim=1, end_dim=-1)\n\nflatten.start_dim\n# 1\n\nflatten.end_dim\n# -1\n\nmy_tensor = torch.tensor(7)\n\nflatten = nn.Flatten(start_dim=0, end_dim=0)\nflatten = nn.Flatten(start_dim=0, end_dim=-1)\nflatten = nn.Flatten(start_dim=-1, end_dim=0)\nflatten = nn.Flatten(start_dim=-1, end_dim=-1)\nflatten(input=my_tensor)\n# tensor([7])\n\nmy_tensor = torch.tensor([7, 1, -8, 3, -6, 0])\n\nflatten = nn.Flatten(start_dim=0, end_dim=0)\nflatten = nn.Flatten(start_dim=0, end_dim=-1)\nflatten = nn.Flatten(start_dim=-1, end_dim=0)\nflatten = nn.Flatten(start_dim=-1, end_dim=-1)\nflatten(input=my_tensor)\n# tensor([7, 1, -8, 3, -6, 0])\n\nmy_tensor = torch.tensor([[7, 1, -8], [3, -6, 0]])\n\nflatten = nn.Flatten(start_dim=0, end_dim=1)\nflatten = nn.Flatten(start_dim=0, end_dim=-1)\nflatten = nn.Flatten(start_dim=-2, end_dim=1)\nflatten = nn.Flatten(start_dim=-2, end_dim=-1)\nflatten(input=my_tensor)\n# tensor([7, 1, -8, 3, -6, 0])\n\nflatten = nn.Flatten()\nflatten = nn.Flatten(start_dim=0, end_dim=0)\nflatten = nn.Flatten(start_dim=-1, end_dim=-1)\nflatten = 
nn.Flatten(start_dim=0, end_dim=-2)\nflatten = nn.Flatten(start_dim=1, end_dim=1)\nflatten = nn.Flatten(start_dim=1, end_dim=-1)\nflatten = nn.Flatten(start_dim=-1, end_dim=1)\nflatten = nn.Flatten(start_dim=-1, end_dim=-1)\nflatten = nn.Flatten(start_dim=-2, end_dim=0)\nflatten = nn.Flatten(start_dim=-2, end_dim=-2)\nflatten(input=my_tensor)\n# tensor([[7, 1, -8], [3, -6, 0]])\n\nmy_tensor = torch.tensor([[[7], [1], [-8]], [[3], [-6], [0]]])\n\nflatten = nn.Flatten(start_dim=0, end_dim=2)\nflatten = nn.Flatten(start_dim=0, end_dim=-1)\nflatten = nn.Flatten(start_dim=-3, end_dim=2)\nflatten = nn.Flatten(start_dim=-3, end_dim=-1)\nflatten(input=my_tensor)\n# tensor([7, 1, -8, 3, -6, 0])\n\nflatten = nn.Flatten(start_dim=0, end_dim=0)\nflatten = nn.Flatten(start_dim=0, end_dim=-3)\nflatten = nn.Flatten(start_dim=1, end_dim=1)\nflatten = nn.Flatten(start_dim=1, end_dim=-2)\nflatten = nn.Flatten(start_dim=2, end_dim=2)\nflatten = nn.Flatten(start_dim=2, end_dim=-1)\nflatten = nn.Flatten(start_dim=-1, end_dim=2)\nflatten = nn.Flatten(start_dim=-1, end_dim=-1)\nflatten = nn.Flatten(start_dim=-2, end_dim=1)\nflatten = nn.Flatten(start_dim=-2, end_dim=-2)\nflatten = nn.Flatten(start_dim=-3, end_dim=0)\nflatten = nn.Flatten(start_dim=-3, end_dim=-3)\nflatten(input=my_tensor)\n# tensor([[[7], [1], [-8]], [[3], [-6], [0]]])\n\nflatten = nn.Flatten(start_dim=0, end_dim=1)\nflatten = nn.Flatten(start_dim=0, end_dim=-2)\nflatten = nn.Flatten(start_dim=-3, end_dim=1)\nflatten = nn.Flatten(start_dim=-3, end_dim=-2)\nflatten(input=my_tensor)\n# tensor([[7], [1], [-8], [3], [-6], [0]])\n\nflatten = nn.Flatten()\nflatten = nn.Flatten(start_dim=1, end_dim=2)\nflatten = nn.Flatten(start_dim=1, end_dim=-1)\nflatten = nn.Flatten(start_dim=-2, end_dim=2)\nflatten = nn.Flatten(start_dim=-2, end_dim=-1)\nflatten(input=my_tensor)\n# tensor([[7, 1, -8], [3, -6, 0]])\n\nmy_tensor = torch.tensor([[[7.], [1.], [-8.]], [[3.], [-6.], [0.]]])\n\nflatten = nn.Flatten()\nflatten(input=my_tensor)\n# 
tensor([[7., 1., -8.], [3., -6., 0.]])\n\nmy_tensor = torch.tensor([[[7.+0.j], [1.+0.j], [-8.+0.j]],\n                          [[3.+0.j], [-6.+0.j], [0.+0.j]]])\nflatten = nn.Flatten()\nflatten(input=my_tensor)\n# tensor([[7.+0.j, 1.+0.j, -8.+0.j],\n#         [3.+0.j, -6.+0.j, 0.+0.j]])\n\nmy_tensor = torch.tensor([[[True], [False], [True]],\n                          [[False], [True], [False]]])\nflatten = nn.Flatten()\nflatten(input=my_tensor)\n# tensor([[True, False, True],\n#         [False, True, False]])\n<\/pre>\n<p>The above is the detailed content of Flatten in PyTorch. For more, please follow other related articles on \u7c73\u4e91!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>*Memos: My post explains flatten() and ravel(). My post explains unflatten(). Flatten() can flatten zero or more selected dimensions of a 0D-or-higher tensor of zero or more elements, producing a 1D-or-higher tensor of zero or more elements, as shown below: *Memos: The 1st argument for initialization is start_dim (optional, default: 1, type: int). The 2nd argument for initialization is end_dim (optional, default: -1, type: int). The 1st argument is input (required, type: tensor of int, float, complex or bool). Flatten() can change a 0D tensor to a 1D tensor. Flatten() has no effect on a 1D tensor. The difference between Flatten() and flatten() is: Flatten()'s start_dim defaults to 1, while flatten()'s start_dim defaults to 0. Basically, Flatten() is used to define a model, while flatten() is not. import torch [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[],"class_list":["post-941","post","type-post","status-publish","format-standard","hentry","category-16"],"_links":{"self":[{"href":"https:\/\/fwq.ai\/blog\/wp-json\/wp\/v2\/posts\/941","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fwq.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fwq.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fwq.ai\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/fwq.ai\/blog\/wp-json\/wp\/v2\/comments?post=941"}],"version-history":[{"count":0,"href":"https:\/\/fwq.ai\/blog\/wp-json\/wp\/v2\/posts\/941\/revisions"}],"wp:attachment":[{"href":"https:\/\/fwq.ai\/blog\/wp-json\/wp\/v2\/media?parent=941"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fwq.ai\/blog\/wp-json\/wp\/v2\/categories?post=941"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fwq.ai\/blog\/wp-json\/wp\/v2\/tags?post=941"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}