Typos #38

Open · wants to merge 2 commits into master
4 changes: 2 additions & 2 deletions matlab/src/bits/im2col_gpu.cu
@@ -232,9 +232,9 @@ __global__ void col2im_gpu_kernel(T* data,
 u(x) = x_data - (x * strideX - padLeft)
 v(y) = y_data - (y * strideY - padRight)

-Now we can comptute the indeces of the elements of stacked[] to accumulate:
+Now we can compute the indices of the elements of stacked[] to accumulate:

-stackedIndex(x,y) =
+stackedIndex(x,y) =
 (y * numPatchesX + x) + // column offset
 ((z * windowHeight + v(y)) * windowWidth + u(x)) * // within patch offset
 (numPatchesX*numPatchesY)
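For reference, the index formula in the comment above can be spelled out as a small, hypothetical MATLAB sketch (0-based indices as in the CUDA kernel; the variable names are taken from the quoted comment, and this is not the library code):

    % Hypothetical sketch of stackedIndex(x,y), mirroring the quoted comment
    u = x_data - (x * strideX - padLeft) ;   % horizontal offset inside the patch
    v = y_data - (y * strideY - padRight) ;  % vertical offset inside the patch
    stackedIndex = (y * numPatchesX + x) ...              % column offset
      + ((z * windowHeight + v) * windowWidth + u) ...    % within-patch offset
      * (numPatchesX * numPatchesY) ;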
13 changes: 7 additions & 6 deletions matlab/vl_nnconv.m
@@ -6,10 +6,10 @@
 % biases as well as performing downsampling and padding as explained
 % below.
 %
-% [DXDY, DXDF, DXDB] = VL_NNCONV(X, F, B, DZDY) computes the
-% derivatives of the nework output Z w.r.t. the data X and
+% [DZDX, DZDF, DZDB] = VL_NNCONV(X, F, B, DZDY) computes the
+% derivatives of the network output Z w.r.t. the data X and
 % parameters F, B given the derivative w.r.t the output Y. If B is
-% the empty matrix, then DXDB is also empty.
+% the empty matrix, then DZDB is also empty.
 %
 % X is a SINGLE array of dimension H x W x D x N where (H,W) are
 % the height and width of the map stack, D is the image depth
@@ -44,14 +44,15 @@
 % 1 <= FW <= W + 2*(PADLEFT+PADRIGHT).
 %
 % The output a is a SINGLE array of dimension YH x YW x K x N of
-% N images with K challens and size:
+% N images with K channels and size:
 %
 % YH = floor((H + (PADTOP+PADBOTTOM) - FH)/STRIDEY) + 1,
 % YW = floor((W + (PADLEFT+PADRIGHT) - FW)/STRIDEX) + 1.
 %
 % The derivative DZDY has the same dimension of the output Y,
-% the derivative DZDX has the same dimension as the input X, and
-% the derivative DZDF has the the same dimenson as F.
+% the derivative DZDX has the same dimension as the input X,
+% the derivative DZDF has the the same dimenson as F, and
+% the derivative DZDB has the the same dimenson as B.

 % Copyright (C) 2014 Andrea Vedaldi and Max Jaderberg.
 % All rights reserved.
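The quoted output-size formulas can be checked numerically; here is a small, hypothetical example (the values FH = 11, stride = 4, zero padding are illustrative and not taken from the file):

    % Hypothetical check of the quoted output-size formulas
    H = 227 ; W = 227 ;          % input height and width
    FH = 11 ; FW = 11 ;          % filter height and width
    strideY = 4 ; strideX = 4 ;  % vertical and horizontal stride
    padTop = 0 ; padBottom = 0 ; padLeft = 0 ; padRight = 0 ;
    YH = floor((H + (padTop+padBottom) - FH)/strideY) + 1    % = 55
    YW = floor((W + (padLeft+padRight) - FW)/strideX) + 1    % = 55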
2 changes: 1 addition & 1 deletion matlab/vl_nnloss.m
@@ -42,7 +42,7 @@
 assert(isequal(sz_, [sz(1) sz(2) 1 sz(4)])) ;
 end

-% convert to indeces
+% convert to indices
 c_ = 0:numel(c)-1 ;
 c_ = 1 + ...
 mod(c_, sz(1)*sz(2)) + ...
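This snippet builds linear indices so that the loss can pick out, for each spatial location of each image, the score of the ground-truth class. As a hedged illustration of the same index arithmetic (scalar case with hypothetical names, not the vectorized library code):

    % Hypothetical: linear index of class label c at location (i,j) of image n
    % in an H x W x D x N score array, using MATLAB's column-major layout.
    idx = i + (j-1)*H + (c-1)*H*W + (n-1)*H*W*D ;
    % equivalently: idx = sub2ind([H W D N], i, j, c, n) ;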
6 changes: 3 additions & 3 deletions matlab/vl_nnpool.m
@@ -1,4 +1,4 @@
-% VL_NNPOOL CNN poolinng
+% VL_NNPOOL CNN pooling
 % Y = VL_NNPOOL(X, POOL) applies the pooling operator to all
 % channels of the data X using a square filter of size POOL. X is a
 % SINGLE array of dimension H x W x D x N where (H,W) are the
@@ -9,7 +9,7 @@
 % height POOLY and width POOLX.
 %
 % DZDX = VL_NNPOOL(X, POOL, DZDY) computes the derivatives of
-% the nework output Z w.r.t. the data X given the derivative DZDY
+% the network output Z w.r.t. the data X given the derivative DZDY
 % w.r.t the max-pooling output Y.
 %
 % VL_NNCONV(..., 'option', value, ...) takes the following options:
@@ -37,7 +37,7 @@
 % 1 <= POOLX <= WIDTH + (PADLEFT + PADRIGHT).
 %
 % The output a is a SINGLE array of dimension YH x YW x K x N of N
-% images with K challens and size:
+% images with K channels and size:
 %
 % YH = floor((H + (PADTOP+PADBOTTOM) - POOLY)/STRIDEY) + 1,
 % YW = floor((W + (PADLEFT+PADRIGHT) - POOLX)/STRIDEX) + 1.
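A minimal, hypothetical usage sketch of the two calls described in the quoted help text (input sizes are illustrative):

    % Hypothetical usage of vl_nnpool as documented above
    x    = randn(16, 16, 8, 2, 'single') ;   % H x W x D x N input
    pool = 2 ;                               % square pooling window
    y    = vl_nnpool(x, pool) ;              % forward pass
    dzdy = randn(size(y), 'single') ;        % derivative w.r.t. the output Y
    dzdx = vl_nnpool(x, pool, dzdy) ;        % backward pass, same size as x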
2 changes: 1 addition & 1 deletion matlab/vl_nnsoftmax.m
@@ -10,7 +10,7 @@
 % convolutionally at all spatial locations.
 %
 % DZDX = VL_NNSOFTMAX(X, DZDY) computes the derivative DZDX of the
-% CNN otuoutwith respect to the input X given the derivative DZDY
+% CNN output Z with respect to the input X given the derivative DZDY
 % with respect to the block output Y. DZDX has the same dimension
 % as X.

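The help text above only states the sizes involved; as a hedged sketch of the standard softmax forward and backward passes over the channel (third) dimension, not necessarily the exact code in vl_nnsoftmax.m:

    % Standard softmax over channels and its backward pass (hedged sketch)
    e    = exp(bsxfun(@minus, x, max(x, [], 3))) ;   % subtract max for stability
    y    = bsxfun(@rdivide, e, sum(e, 3)) ;          % softmax along dimension 3
    dzdx = y .* bsxfun(@minus, dzdy, sum(y .* dzdy, 3)) ;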
4 changes: 2 additions & 2 deletions matlab/vl_nnsoftmaxloss.m
@@ -16,7 +16,7 @@
 % convolutionally at all spatial locations.
 %
 % DZDX = VL_NNSOFTMAXLOSS(X, C, DZDY) computes the derivative DZDX
-% of the CNN with respect to the input X given the derivative DZDY
+% of the CNN output Z with respect to the input X given the derivative DZDY
 % with respect to the block output Y. DZDX has the same dimension
 % as X.

@@ -42,7 +42,7 @@
 assert(isequal(sz_, [sz(1) sz(2) 1 sz(4)])) ;
 end

-% convert to indeces
+% convert to indices
 c_ = 0:numel(c)-1 ;
 c_ = 1 + ...
 mod(c_, sz(1)*sz(2)) + ...
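For context, the well-known gradient of the softmax log-loss with respect to the input scores is the softmax output minus the one-hot encoding of the labels; a hedged sketch of that standard result, not necessarily the exact implementation in this file:

    % Hedged sketch of the standard softmax log-loss gradient
    e    = exp(bsxfun(@minus, x, max(x, [], 3))) ;
    y    = bsxfun(@rdivide, e, sum(e, 3)) ;   % softmax along channels
    t    = zeros(size(x), 'single') ;
    t(c_) = 1 ;                               % one-hot targets at the linear indices c_
    dzdx = (y - t) * dzdy ;                   % dzdy: scalar derivative w.r.t. the loss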