
Add jsdoc comments with Typedoc #33

Open · vtempest opened this issue Sep 8, 2024 · 0 comments
Labels: enhancement (New feature or request)

Comments


vtempest commented Sep 8, 2024

Describe the Feature

I highly suggest it, and I want to collaborate on https://airesearch.wiki.
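For reference, wiring TypeDoc into the repo should only take a dev dependency and one command. A minimal sketch, assuming the entry point is src/index.ts (a hypothetical path; adjust to the package's actual entry file):

npm install --save-dev typedoc
npx typedoc --out docs src/index.ts

Below is a draft of the top-level doc comment covering the current API: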


/**
 * JS-PyTorch is a Deep Learning library with GPU.js acceleration in PyTorch API syntax.
 * @author [JS-PyTorch](https://eduardoleao052.github.io/js-pytorch/site/index.html)
 * @module torch
 * 
 * Tensor Creation and Manipulation:
 * @function tensor(data, requires_grad = false, device = 'cpu') Creates a new Tensor filled with the given data
 * @function zeros(*shape, requires_grad = false, device = 'cpu') Creates a new Tensor filled with zeros
 * @function ones(*shape, requires_grad = false, device = 'cpu') Creates a new Tensor filled with ones
 * @function tril(*shape, requires_grad = false, device = 'cpu') Creates a new 2D lower triangular Tensor
 * @function randn(*shape, requires_grad = false, device = 'cpu', xavier = false) Creates a new Tensor filled with random values from a normal distribution
 * @function rand(*shape, requires_grad = false, device = 'cpu') Creates a new Tensor filled with random values from a uniform distribution
 * @function randint(low, high, *shape, requires_grad = false, device = 'cpu') Creates a new Tensor filled with random integers
 * 
 * Tensor Methods:
 * @method tensor.backward() Performs backpropagation from this tensor backwards
 * @method tensor.zero_grad() Clears the gradients stored in this tensor
 * @method tensor.zero_grad_graph() Clears the gradients stored in this tensor and all tensors that led to it
 * @method tensor.tolist() Returns the tensor's data as a JavaScript Array
 * @property tensor.data Returns the tensor's data as a JavaScript Array
 * @property tensor.length Returns the tensor's length (size of first dimension)
 * @property tensor.ndims Returns the number of dimensions in the Tensor
 * @property tensor.grad Returns the gradients currently stored in the Tensor
 * 
 * Tensor Operations:
 * @function add(a, b) Performs element-wise addition of two tensors
 * @function sub(a, b) Performs element-wise subtraction of two tensors
 * @function neg(a) Returns the element-wise opposite of the given Tensor
 * @function mul(a, b) Performs element-wise multiplication of two tensors
 * @function div(a, b) Performs element-wise division of two tensors
 * @function matmul(a, b) Performs matrix multiplication between two tensors
 * @function sum(a, dim, keepdims = false) Gets the sum of the Tensor over a specified dimension
 * @function mean(a, dim, keepdims = false) Gets the mean of the Tensor over a specified dimension
 * @function variance(a, dim, keepdims = false) Gets the variance of the Tensor over a specified dimension
 * @function transpose(a, dim1, dim2) Transposes the tensor along two consecutive dimensions
 * @function at(a, index1, index2) Returns elements from the tensor based on given indices
 * @function masked_fill(a, condition, value) Fills elements in the tensor based on a condition
 * @function pow(a, n) Returns tensor raised to element-wise power
 * @function sqrt(a) Returns element-wise square root of the tensor
 * @function exp(a) Returns element-wise exponentiation of the tensor
 * @function log(a) Returns element-wise natural log of the tensor
 * 
 * Neural Network Layers:
 * @class nn.Linear(in_size, out_size, device, bias, xavier) Applies a linear transformation to the input tensor
 * @class nn.MultiHeadSelfAttention(in_size, out_size, n_heads, n_timesteps, dropout_prob, device) Applies a self-attention layer on the input tensor
 * @class nn.FullyConnected(in_size, out_size, dropout_prob, device, bias) Applies a fully-connected layer on the input tensor
 * @class nn.Block(in_size, out_size, n_heads, n_timesteps, dropout_prob, device) Applies a transformer Block layer on the input tensor
 * @class nn.Embedding(in_size, embed_size) Creates an embedding table for vocabulary
 * @class nn.PositionalEmbedding(input_size, embed_size) Creates a positional embedding table
 * @class nn.ReLU() Applies Rectified Linear Unit activation function
 * @class nn.Softmax() Applies Softmax activation function
 * @class nn.Dropout(drop_prob) Applies dropout to input tensor
 * @class nn.LayerNorm(n_embed) Applies Layer Normalization to input tensor
 * @class nn.CrossEntropyLoss() Computes Cross Entropy Loss between target and input tensor
 * 
 * Optimization:
 * @class optim.Adam(params, lr, reg, betas, eps) Adam optimizer for updating model parameters
 * 
 * Utility Functions:
 * @function save(model, file) Saves the model, returning a data blob (for you to save)
 * @function load(model, loadedData) Loads the model from saved data
 */
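To make the generated docs concrete, here is a short usage sketch exercising the API listed above; it could double as an @example block in the doc comment. This is a minimal sketch based on the signatures in the draft: the import style, whether shapes are passed as arrays or variadically, and the parameters() / step() / zero_grad() methods should all be verified against the JS-PyTorch README.

const { torch } = require("js-pytorch");
const nn = torch.nn;
const optim = torch.optim;

// Layers and optimizer from the draft above:
const fc = new nn.Linear(4, 2);                 // in_size = 4, out_size = 2
const lossFn = new nn.CrossEntropyLoss();
const optimizer = new optim.Adam(fc.parameters(), 1e-3);

// Tensor creation (shape convention assumed to be an array):
const x = torch.randn([8, 4]);                  // batch of 8, 4 features each
const y = torch.randint(0, 2, [8]);             // integer class targets

// One training step: forward, loss, backward, update:
const out = fc.forward(x);
const loss = lossFn.forward(out, y);
loss.backward();                                // backprop through the graph
optimizer.step();                               // update fc's parameters
optimizer.zero_grad();                          // clear gradients for the next step

console.log(loss.data);                         // loss as a plain JavaScript value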

Motivation [optional]

No response

Sample Code

No response

Would you like to work on this issue?

None

vtempest added the enhancement (New feature or request) label on Sep 8, 2024