
Fixed a typo in the manual file. #1

Closed
geophree wants to merge 1 commit

Conversation

@geophree commented Feb 6, 2011

No description provided.

@evmar (Collaborator) commented Feb 8, 2011

Thanks! Merged.

asankah pushed a commit to asankah/ninja that referenced this pull request Jul 27, 2012
revert some unnecessary changes
maximuska referenced this pull request in maximuska/ninja Sep 5, 2013
'restat_mtime' is used to verify whether the edge was rebuilt
after the 'newest' of the immediate inputs (see RecomputeOutputDirty()).

Since the depfile timestamp is not considered in RecomputeOutputDirty(),
there is no real need to consider it when calculating the 'newest input' mtime.
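
A minimal sketch of the check described above, assuming a hypothetical OutputDirtyAfterRestat helper and a plain TimeStamp alias rather than ninja's actual types: the output counts as dirty only if the recorded restat_mtime is older than the newest mtime among the immediate (non-depfile) inputs.

    // Hedged sketch, not ninja's implementation: illustrates comparing the
    // recorded restat_mtime against the newest immediate-input mtime.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    using TimeStamp = int64_t;  // hypothetical alias for an mtime value

    // Returns true if the output needs rebuilding, given the restat_mtime
    // recorded when the edge last ran and the mtimes of its immediate inputs.
    bool OutputDirtyAfterRestat(TimeStamp restat_mtime,
                                const std::vector<TimeStamp>& input_mtimes) {
      TimeStamp newest_input = 0;
      for (TimeStamp t : input_mtimes)
        newest_input = std::max(newest_input, t);
      // Depfile-discovered inputs are intentionally excluded from
      // newest_input, mirroring the reasoning in the commit message above.
      return restat_mtime < newest_input;
    }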
okuoku pushed a commit to okuoku/ninja that referenced this pull request Feb 15, 2014
add smiley to hacking at html/rst
mydongistiny pushed a commit to mydongistiny/ninja that referenced this pull request Oct 1, 2019
There were two bugs in DepsLog::Load's main parsing pass:

 * Previously, with an invalid log file, the main pass could initialize
   dep_index[output_id] with the index of a record after the point where
   the log is truncated, e.g.:

    - Chunk 1: path record for node #0
    - Chunk 1: invalid record
    - Chunk 2: path record for node #1
    - Chunk 2: deps record outputting node #0, needs node #1

   The result of the parse could depend on chunk boundaries (e.g. how many
   threads the machine has), and the parser could crash if the later deps
   record has source IDs that were also truncated.

   Fix the problem by moving dep_index initialization to a later pass. The
   validation and truncation work is factored out into a ValidateDepsLog
   function.

 * Fix node ID validation of deps record inputs. The existing code to do
   this had no effect:

        if (output_id < 0 || output_id >= next_node_id) break;  // outer break: exits the record loop
        for (size_t i = 4; i < size; ++i) {
          int input_id = log.table[index + i];
          if (input_id < 0 || input_id >= next_node_id) break;  // nested break: only exits the input loop

   The outer break exited the for-each-record loop early, signaling that
   parsing had failed. The nested break exited the for-each-input loop,
   which merely prevented validation of later node IDs. Replace the break
   statements with "return false" in IsValidRecord.
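
A minimal sketch of the corrected validation, assuming a hypothetical IsValidDepsRecord helper over a flat record of int words (output ID at index 3, input IDs after it) rather than ninja's actual IsValidRecord and on-disk layout: any out-of-range node ID now rejects the whole record with "return false" instead of breaking out of a loop.

    // Hedged sketch, not the actual ninja code: shows the corrected control
    // flow, where an out-of-range node ID invalidates the entire record.
    #include <cstddef>
    #include <vector>

    static bool IsValidDepsRecord(const std::vector<int>& words,
                                  int next_node_id) {
      if (words.size() < 4)
        return false;                       // too short to be a deps record
      int output_id = words[3];             // hypothetical layout: output ID at index 3
      if (output_id < 0 || output_id >= next_node_id)
        return false;                       // reject the whole record
      for (std::size_t i = 4; i < words.size(); ++i) {
        int input_id = words[i];
        if (input_id < 0 || input_id >= next_node_id)
          return false;                     // reject the whole record, not just this input
      }
      return true;
    }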

These two changes regressed ".ninja_deps load" run-time by about 10ms
on a 580MB .ninja_deps file (e.g. about 140ms -> 150ms). I suspect the
compiler may have been optimizing out the source ID checking.

Test: ninja_test
Change-Id: I13c3a314cfa7d2bf15c724962a9ec35f55176779
mayank-ramnani referenced this pull request in nyuoss/ninja-shadowdash-nyu Oct 1, 2024
This pull request was closed.