This article walks through some basic PyTorch operations. PyTorch is a Python-based scientific computing package that provides a deep learning research platform with maximum flexibility and speed.


What is PyTorch?

PyTorch is a Python-based scientific computing package and a deep learning research platform that offers maximum flexibility and speed.

Tensors

A tensor is similar to a NumPy n-dimensional array; in addition, tensors can be used on a GPU to accelerate computation.

Let's construct a simple tensor and inspect the output. First, here is how to build a 5×3 uninitialized matrix:

    import torch
    x = torch.empty(5, 3)
    print(x)

The output looks like this:

    tensor([[2.7298e+32, 4.5650e-41, 2.7298e+32],
            [4.5650e-41, 0.0000e+00, 0.0000e+00],
            [0.0000e+00, 0.0000e+00, 0.0000e+00],
            [0.0000e+00, 0.0000e+00, 0.0000e+00],
            [0.0000e+00, 0.0000e+00, 0.0000e+00]])

Now let's construct a randomly initialized matrix:

    x = torch.rand(5, 3)
    print(x)

Output:

    tensor([[1.1608e-01, 9.8966e-01, 1.2705e-01],
            [2.8599e-01, 5.4429e-01, 3.7764e-01],
            [5.8646e-01, 1.0449e-02, 4.2655e-01],
            [2.2087e-01, 6.6702e-01, 5.1910e-01],
            [1.8414e-01, 2.0611e-01, 9.4652e-04]])

Construct a tensor directly from data:

    x = torch.tensor([5.5, 3])
    print(x)

Output:

    tensor([5.5000, 3.0000])

Create a LongTensor (64-bit integers); like torch.empty, it is allocated without being initialized:

    x = torch.LongTensor(3, 4)
    x
    tensor([[94006673833344,   210453397554,   206158430253,   193273528374],
            [  214748364849,   210453397588,   249108103216,   223338299441],
            [  210453397562,   197568495665,   206158430257,   240518168626]])

「A float tensor (torch.FloatTensor).」

    x = torch.FloatTensor(3, 4)
    x
    tensor([[-3.1152e-18,  3.0670e-41,  3.5032e-44,  0.0000e+00],
            [        nan,  3.0670e-41,  1.7753e+28,  1.0795e+27],
            [ 1.0899e+27,  2.6223e+20,  1.7465e+19,  1.8888e+31]])

「Create a tensor over a range」

    torch.arange(10, dtype=torch.float)
    tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])

「Reshape a tensor」

    x = torch.arange(10, dtype=torch.float)
    x
    tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])

Reshape the tensor with .view:

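A minimal sketch, assuming the 10-element tensor above is reshaped into a 2×5 matrix (the exact target shape is an assumption):

    x.view(2, 5)
    # tensor([[0., 1., 2., 3., 4.],
    #         [5., 6., 7., 8., 9.]])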

Passing -1 lets PyTorch infer that dimension automatically from the tensor's size.

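A minimal sketch of the -1 inference, again assuming the same 10-element tensor:

    x.view(-1, 5)   # -1 is inferred as 2, because 10 elements / 5 columns = 2 rows
    # tensor([[0., 1., 2., 3., 4.],
    #         [5., 6., 7., 8., 9.]])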

「Changing tensor axes」

There are two ways to change the axes of a tensor: view and permute.

view reshapes the tensor, refilling the new shape in the original element order, while permute only swaps the axes.

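A minimal sketch contrasting the two (the example tensor is an assumption):

    x = torch.arange(6, dtype=torch.float).view(2, 3)   # [[0., 1., 2.], [3., 4., 5.]]
    x.view(3, 2)      # reshape: elements refilled row by row -> [[0., 1.], [2., 3.], [4., 5.]]
    x.permute(1, 0)   # axis swap (a transpose)           -> [[0., 3.], [1., 4.], [2., 5.]]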

Tensor operations

In the example below, we look at the addition operation:

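A minimal sketch of tensor addition (the operand shapes are an assumption); PyTorch offers an operator form, a function form, and an in-place form:

    x = torch.rand(5, 3)
    y = torch.rand(5, 3)
    print(x + y)            # operator form
    print(torch.add(x, y))  # function form, same result
    y.add_(x)               # in-place form: the trailing underscore means y itself is modified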

Resizing: if you want to reshape a tensor, you can use torch.view:

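A minimal sketch of common torch.view usage (the 4×4 starting shape is an assumption):

    x = torch.randn(4, 4)
    y = x.view(16)      # flatten into a 1-D tensor with 16 elements
    z = x.view(-1, 8)   # -1 is inferred from the other dimension: shape (2, 8)
    print(x.size(), y.size(), z.size())
    # torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])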

Converting between PyTorch and NumPy

NumPy is a library for the Python programming language that adds support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions for operating on these arrays.

Converting a Torch Tensor to a NumPy array, and vice versa, is a breeze!


A Torch Tensor and the NumPy array converted from it share their underlying memory locations (as long as the tensor lives on the CPU), so changing one changes the other.

「Converting a Torch tensor to a NumPy array:」

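A minimal snippet consistent with the output shown below:

    a = torch.ones(5)
    print(a)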

Output: tensor([1., 1., 1., 1., 1.])

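A minimal snippet consistent with the output shown below, using the tensor's .numpy() method:

    b = a.numpy()
    print(b)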

Output: [1., 1., 1., 1., 1.]

Let's perform an addition and check how the values change:

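A minimal sketch of the in-place add_ demonstration this section describes (the increment of 1 is an assumption); because a and b share memory, the NumPy array changes too:

    a.add_(1)   # in-place add on the tensor
    print(a)    # tensor([2., 2., 2., 2., 2.])
    print(b)    # [2. 2. 2. 2. 2.]  <- the NumPy array reflects the same change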

「Converting a NumPy array to a Torch tensor:」

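A minimal sketch using torch.from_numpy (the use of np.add with out= is an assumption); the resulting tensor shares memory with the array, so modifying the array in place also changes the tensor:

    import numpy as np

    a = np.ones(5)
    b = torch.from_numpy(a)   # b shares memory with a
    np.add(a, 1, out=a)       # modify the array in place
    print(a)                  # [2. 2. 2. 2. 2.]
    print(b)                  # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)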

So, as you can see, it really is that simple!

Next in this PyTorch tutorial, let's look at PyTorch's AutoGrad module.

AutoGrad

The autograd package provides automatic differentiation for all operations on tensors.


It is a define-by-run framework, which means that your backpropagation is defined by how your code runs, and every single iteration can be different.

  • torch.autograd.function (backward pass of a Function)
  • torch.autograd.functional (backward pass over a computation graph)
  • torch.autograd.gradcheck (numerical gradient checking)
  • torch.autograd.anomaly_mode (detect the path that produced an error during automatic differentiation)
  • torch.autograd.grad_mode (set whether gradients are required)
  • model.eval() versus torch.no_grad() (see the short sketch after this list)
  • torch.autograd.profiler (provides function-level statistics)
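To illustrate the grad_mode / no_grad entries above, a minimal sketch (the model and input here are assumptions): inside torch.no_grad() no computation graph is built, which saves memory during evaluation:

    model = torch.nn.Linear(3, 1)   # a tiny example model
    model.eval()                    # switch layers such as dropout/batchnorm to eval behaviour
    with torch.no_grad():           # disable gradient tracking for this block
        out = model(torch.rand(2, 3))
    print(out.requires_grad)        # False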

「Below, we use Autograd for backpropagation.」

If requires_grad=True, a Tensor object keeps track of how it was created.

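A minimal sketch (the specific values are an assumption):

    x = torch.tensor([1., 2., 3.], requires_grad=True)
    y = torch.tensor([10., 20., 30.], requires_grad=True)
    z = x + y
    print(z)   # tensor([11., 22., 33.], grad_fn=<AddBackward0>)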

Because requires_grad=True, z knows that it was created by adding two tensors: z = x + y.

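Continuing the sketch, we reduce z to a scalar s by summing its entries:

    s = z.sum()
    print(s)   # tensor(66., grad_fn=<SumBackward0>)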

s was created from the sum of z's entries. When we call .backward(), backpropagation runs starting from s, and the gradients can then be computed.

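Continuing the sketch: since s = sum(x + y), the gradient of s with respect to every entry of x is 1:

    s.backward()
    print(x.grad)   # tensor([1., 1., 1.])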

The example below computes the derivative of log(x), which is 1/x:

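A minimal sketch (the input value 2.0 is an assumption); the computed gradient matches 1/x = 0.5:

    x = torch.tensor([2.0], requires_grad=True)
    y = torch.log(x)
    y.backward()
    print(x.grad)   # tensor([0.5000]) == 1 / x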

