Minor fixes #1154

Merged 1 commit on Aug 27, 2023

2 changes: 1 addition & 1 deletion README.md
@@ -61,7 +61,7 @@ Or you can even run it straight in the browser: [talk.wasm](examples/talk.wasm)
- Various other examples are available in the [examples](examples) folder

The tensor operators are optimized heavily for Apple silicon CPUs. Depending on the computation size, Arm Neon SIMD
instrisics or CBLAS Accelerate framework routines are used. The latter are especially effective for bigger sizes since
intrinsics or CBLAS Accelerate framework routines are used. The latter are especially effective for bigger sizes since
the Accelerate framework utilizes the special-purpose AMX coprocessor available in modern Apple products.
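
For illustration only (a sketch, not code from this repository), a size-dependent dispatch between NEON intrinsics and CBLAS routines might look like the following; the 1024-element cutoff is an arbitrary placeholder:

```c
#include <arm_neon.h>              // Arm NEON SIMD intrinsics
#include <Accelerate/Accelerate.h> // CBLAS via Apple's Accelerate framework

// dot product: NEON intrinsics for small sizes, Accelerate CBLAS for larger ones
static float dot_f32(const float * x, const float * y, int n) {
    if (n < 1024) {                          // illustrative threshold
        float32x4_t acc = vdupq_n_f32(0.0f);
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            acc = vfmaq_f32(acc, vld1q_f32(x + i), vld1q_f32(y + i));
        }
        float sum = vaddvq_f32(acc);         // horizontal add of the 4 lanes
        for (; i < n; ++i) {
            sum += x[i] * y[i];              // scalar tail
        }
        return sum;
    }
    return cblas_sdot(n, x, 1, y, 1);        // Accelerate handles the large sizes
}
```

(On macOS this would be built with `-framework Accelerate`.)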

## Quick start
4 changes: 2 additions & 2 deletions whisper.cpp
@@ -82,7 +82,7 @@ static void byteswap_tensor(ggml_tensor * tensor) {
} while (0)
#define BYTESWAP_TENSOR(t) \
do { \
byteswap_tensor(tensor); \
byteswap_tensor(t); \

Collaborator

Could you clarify what issues this is causing? And why do we need to switch from `tensor` to `t`?


Contributor Author

  1. The current implementation is misleading: the macro accepts an argument, but that argument is never used.

  2. It has not caused errors so far only because a variable named `tensor` happens to exist at the call site where the macro is used. Had that variable been renamed, the code would have broken. The sketch below illustrates this.
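
As a minimal self-contained illustration (hypothetical names and a stand-in struct, not the actual whisper.cpp sources):

```c
// stand-in for the real ggml_tensor, just for this sketch
typedef struct { int dummy; } ggml_tensor;

static void byteswap_tensor(ggml_tensor * t) { (void) t; /* byteswap fields here */ }

// old, misleading definition: the argument `t` is ignored
#define BYTESWAP_TENSOR_OLD(t) do { byteswap_tensor(tensor); } while (0)
// fixed definition: the argument is actually used
#define BYTESWAP_TENSOR_NEW(t) do { byteswap_tensor(t); } while (0)

int main(void) {
    ggml_tensor obj = {0};
    ggml_tensor * cur = &obj;     // note: the local is not named `tensor`
    // BYTESWAP_TENSOR_OLD(cur);  // would not compile: `tensor` is undeclared in this scope
    BYTESWAP_TENSOR_NEW(cur);     // works regardless of the variable name
    return 0;
}
```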

} while (0)
#else
#define BYTESWAP_VALUE(d) do {} while (0)
@@ -589,7 +589,7 @@ struct whisper_model {
struct whisper_sequence {
std::vector<whisper_token_data> tokens;

// the accumulated transcription in the current interation (used to truncate the tokens array)
// the accumulated transcription in the current iteration (used to truncate the tokens array)
int result_len;

double sum_logprobs_all; // the sum of the log probabilities of the tokens
2 changes: 1 addition & 1 deletion whisper.h
@@ -346,7 +346,7 @@ extern "C" {
void * user_data);

// Parameters for the whisper_full() function
// If you chnage the order or add new parameters, make sure to update the default values in whisper.cpp:
// If you change the order or add new parameters, make sure to update the default values in whisper.cpp:
// whisper_full_default_params()
struct whisper_full_params {
enum whisper_sampling_strategy strategy;
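
For reference, a minimal usage sketch (not part of this PR's diff) of the pattern that comment describes: obtain the defaults from `whisper_full_default_params()` and override individual fields; the specific overrides below are illustrative.

```c
#include "whisper.h"

// sketch: run full transcription with mostly-default parameters
int transcribe(struct whisper_context * ctx, const float * pcm, int n_samples) {
    struct whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);

    params.n_threads      = 4;      // example override
    params.translate      = false;  // example override
    params.print_progress = false;  // example override

    return whisper_full(ctx, params, pcm, n_samples);
}
```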