.. automodule:: torch
.. autofunction:: is_tensor
.. autofunction:: is_storage
.. autofunction:: is_floating_point
.. autofunction:: set_default_dtype
.. autofunction:: get_default_dtype
.. autofunction:: set_default_tensor_type
.. autofunction:: numel
.. autofunction:: set_printoptions
.. autofunction:: set_flush_denormal
.. note::
    Random sampling creation ops are listed under :ref:`random-sampling` and
    include:
    :func:`torch.rand`,
    :func:`torch.rand_like`,
    :func:`torch.randn`,
    :func:`torch.randn_like`,
    :func:`torch.randint`,
    :func:`torch.randint_like`, and
    :func:`torch.randperm`.
    You may also use :func:`torch.empty` with the :ref:`inplace-random-sampling`
    methods to create :class:`torch.Tensor` s with values sampled from a broader
    range of distributions.
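For example, a minimal sketch of that pattern (shapes and distribution parameters chosen
arbitrarily), pairing :func:`torch.empty` with two of the in-place sampling methods::

    import torch

    # allocate uninitialized tensors, then fill them in place with samples
    x = torch.empty(3, 4).cauchy_(median=0.0, sigma=1.0)   # Cauchy-distributed values
    y = torch.empty(3, 4).exponential_(lambd=1.0)          # exponentially distributed values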
.. autofunction:: tensor
.. autofunction:: sparse_coo_tensor
.. autofunction:: as_tensor
.. autofunction:: as_strided
.. autofunction:: from_numpy
.. autofunction:: zeros
.. autofunction:: zeros_like
.. autofunction:: ones
.. autofunction:: ones_like
.. autofunction:: arange
.. autofunction:: range
.. autofunction:: linspace
.. autofunction:: logspace
.. autofunction:: eye
.. autofunction:: empty
.. autofunction:: empty_like
.. autofunction:: empty_strided
.. autofunction:: full
.. autofunction:: full_like
.. autofunction:: cat
.. autofunction:: chunk
.. autofunction:: gather
.. autofunction:: index_select
.. autofunction:: masked_select
.. autofunction:: narrow
.. autofunction:: nonzero
.. autofunction:: reshape
.. autofunction:: split
.. autofunction:: squeeze
.. autofunction:: stack
.. autofunction:: t
.. autofunction:: take
.. autofunction:: transpose
.. autofunction:: unbind
.. autofunction:: unsqueeze
.. autofunction:: where
.. autoclass:: torch._C.Generator
    :members:
.. autofunction:: seed
.. autofunction:: manual_seed
.. autofunction:: initial_seed
.. autofunction:: get_rng_state
.. autofunction:: set_rng_state
.. autoattribute:: torch.default_generator
    :annotation: Returns the default CPU torch.Generator

.. autoattribute:: torch.cuda.default_generators
    :annotation: If CUDA is available, returns a tuple of default CUDA torch.Generator-s.
        The number of CUDA torch.Generator-s returned is equal to the number of GPUs
        available in the system.
.. autofunction:: bernoulli
.. autofunction:: multinomial
.. autofunction:: normal
.. autofunction:: rand
.. autofunction:: rand_like
.. autofunction:: randint
.. autofunction:: randint_like
.. autofunction:: randn
.. autofunction:: randn_like
.. autofunction:: randperm
There are a few more in-place random sampling functions defined on Tensors as well. Click through to refer to their documentation (a short usage sketch follows the list):
- :func:`torch.Tensor.bernoulli_` - in-place version of :func:`torch.bernoulli`
- :func:`torch.Tensor.cauchy_` - numbers drawn from the Cauchy distribution
- :func:`torch.Tensor.exponential_` - numbers drawn from the exponential distribution
- :func:`torch.Tensor.geometric_` - elements drawn from the geometric distribution
- :func:`torch.Tensor.log_normal_` - samples from the log-normal distribution
- :func:`torch.Tensor.normal_` - in-place version of :func:`torch.normal`
- :func:`torch.Tensor.random_` - numbers sampled from the discrete uniform distribution
- :func:`torch.Tensor.uniform_` - numbers sampled from the continuous uniform distribution
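A minimal sketch of the in-place pattern (the shape and parameter values are arbitrary)::

    import torch

    t = torch.empty(2, 3)
    t.normal_(mean=0.0, std=1.0)   # fill in place with standard normal samples
    t.uniform_(0.0, 1.0)           # overwrite with samples from U(0, 1)
    t.random_(0, 10)               # overwrite with integers drawn uniformly from [0, 10)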
.. autoclass:: torch.quasirandom.SobolEngine
    :members:
    :exclude-members: MAXBIT, MAXDIM
    :undoc-members:
.. autofunction:: save
.. autofunction:: load
.. autofunction:: get_num_threads
.. autofunction:: set_num_threads
.. autofunction:: get_num_interop_threads
.. autofunction:: set_num_interop_threads
The context managers :func:`torch.no_grad`, :func:`torch.enable_grad`, and :func:`torch.set_grad_enabled` are helpful for locally disabling and enabling gradient computation. See :ref:`locally-disable-grad` for more details on their usage. These context managers are thread-local, so they won't work if you send work to another thread using the :mod:`threading` module, etc.
Examples::

    >>> x = torch.zeros(1, requires_grad=True)
    >>> with torch.no_grad():
    ...     y = x * 2
    >>> y.requires_grad
    False

    >>> is_train = False
    >>> with torch.set_grad_enabled(is_train):
    ...     y = x * 2
    >>> y.requires_grad
    False

    >>> torch.set_grad_enabled(True)  # this can also be used as a function
    >>> y = x * 2
    >>> y.requires_grad
    True

    >>> torch.set_grad_enabled(False)
    >>> y = x * 2
    >>> y.requires_grad
    False
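To illustrate the thread-local note above, a minimal sketch: per that note, a worker thread
is expected to start with gradient computation enabled even though the main thread has
entered :func:`torch.no_grad`::

    import threading

    import torch

    x = torch.ones(1, requires_grad=True)
    results = {}

    def worker():
        # runs in a separate thread: the main thread's no_grad() does not apply here
        results["worker"] = (x * 2).requires_grad

    with torch.no_grad():
        results["main"] = (x * 2).requires_grad  # False: grad disabled in this thread
        t = threading.Thread(target=worker)
        t.start()
        t.join()

    print(results)  # expected: {'main': False, 'worker': True}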
.. autofunction:: abs
.. autofunction:: acos
.. autofunction:: add
.. autofunction:: addcdiv
.. autofunction:: addcmul
.. autofunction:: asin
.. autofunction:: atan
.. autofunction:: atan2
.. autofunction:: bitwise_not
.. autofunction:: ceil
.. autofunction:: clamp
.. autofunction:: cos
.. autofunction:: cosh
.. autofunction:: div
.. autofunction:: digamma
.. autofunction:: erf
.. autofunction:: erfc
.. autofunction:: erfinv
.. autofunction:: exp
.. autofunction:: expm1
.. autofunction:: floor
.. autofunction:: fmod
.. autofunction:: frac
.. autofunction:: lerp
.. autofunction:: log
.. autofunction:: log10
.. autofunction:: log1p
.. autofunction:: log2
.. autofunction:: logical_not
.. autofunction:: logical_xor
.. autofunction:: mul
.. autofunction:: mvlgamma
.. autofunction:: neg
.. autofunction:: pow
.. autofunction:: reciprocal
.. autofunction:: remainder
.. autofunction:: round
.. autofunction:: rsqrt
.. autofunction:: sigmoid
.. autofunction:: sign
.. autofunction:: sin
.. autofunction:: sinh
.. autofunction:: sqrt
.. autofunction:: tan
.. autofunction:: tanh
.. autofunction:: trunc
.. autofunction:: argmax
.. autofunction:: argmin
.. autofunction:: cumprod
.. autofunction:: cumsum
.. autofunction:: dist
.. autofunction:: logsumexp
.. autofunction:: mean
.. autofunction:: median
.. autofunction:: mode
.. autofunction:: norm
.. autofunction:: prod
.. autofunction:: std
.. autofunction:: std_mean
.. autofunction:: sum
.. autofunction:: unique
.. autofunction:: unique_consecutive
.. autofunction:: var
.. autofunction:: var_mean
.. autofunction:: allclose
.. autofunction:: argsort
.. autofunction:: eq
.. autofunction:: equal
.. autofunction:: ge
.. autofunction:: gt
.. autofunction:: isfinite
.. autofunction:: isinf
.. autofunction:: isnan
.. autofunction:: kthvalue
.. autofunction:: le
.. autofunction:: lt
.. autofunction:: max
.. autofunction:: min
.. autofunction:: ne
.. autofunction:: sort
.. autofunction:: topk
.. autofunction:: fft
.. autofunction:: ifft
.. autofunction:: rfft
.. autofunction:: irfft
.. autofunction:: stft
.. autofunction:: bartlett_window
.. autofunction:: blackman_window
.. autofunction:: hamming_window
.. autofunction:: hann_window
.. autofunction:: bincount
.. autofunction:: broadcast_tensors
.. autofunction:: cartesian_prod
.. autofunction:: cdist
.. autofunction:: combinations
.. autofunction:: cross
.. autofunction:: diag
.. autofunction:: diag_embed
.. autofunction:: diagflat
.. autofunction:: diagonal
.. autofunction:: einsum
.. autofunction:: flatten
.. autofunction:: flip
.. autofunction:: rot90
.. autofunction:: histc
.. autofunction:: meshgrid
.. autofunction:: renorm
.. autofunction:: repeat_interleave
.. autofunction:: roll
.. autofunction:: tensordot
.. autofunction:: trace
.. autofunction:: tril
.. autofunction:: tril_indices
.. autofunction:: triu
.. autofunction:: triu_indices
.. autofunction:: addbmm
.. autofunction:: addmm
.. autofunction:: addmv
.. autofunction:: addr
.. autofunction:: baddbmm
.. autofunction:: bmm
.. autofunction:: chain_matmul
.. autofunction:: cholesky
.. autofunction:: cholesky_inverse
.. autofunction:: cholesky_solve
.. autofunction:: dot
.. autofunction:: eig
.. autofunction:: geqrf
.. autofunction:: ger
.. autofunction:: inverse
.. autofunction:: det
.. autofunction:: logdet
.. autofunction:: slogdet
.. autofunction:: lstsq
.. autofunction:: lu
.. autofunction:: lu_solve
.. autofunction:: lu_unpack
.. autofunction:: matmul
.. autofunction:: matrix_power
.. autofunction:: matrix_rank
.. autofunction:: mm
.. autofunction:: mv
.. autofunction:: orgqr
.. autofunction:: ormqr
.. autofunction:: pinverse
.. autofunction:: qr
.. autofunction:: solve
.. autofunction:: svd
.. autofunction:: symeig
.. autofunction:: trapz
.. autofunction:: triangular_solve
.. autofunction:: compiled_with_cxx11_abi
.. autofunction:: result_type
.. autofunction:: can_cast
.. autofunction:: promote_types