I have been reading your NEUZZ paper recently, and it is really well written. I have some questions about the details in this paper.
"Furthermore, we only consider the edges that have been activated at least once in the training data."
"Intuitively, in our setting, the goal of gradient-based guidance is to find inputs that will change the output of the final layer neurons corresponding to different edges from 0 to 1"
The goal of NEUZZ is to find as many new edges in the target program as possible. However, when you build the NN model, you only consider the edges that have been activated at least once in the training data, then select some output neurons and compute gradients to guide future mutations, and the final goal is to "change the output of the final layer neurons corresponding to different edges from 0 to 1". Since the final-layer output neurons represent edges that have already been found by the training data, what is the point of trying to flip a specific output neuron from 0 to 1? (I mean, the edge represented by that neuron has already been found by some input in the training data, so why does NEUZZ try to find it again?) Why not also consider the edges that have *not* been activated in the training data, and try to change the output of the final-layer neurons corresponding to those edges from 0 to 1? Wouldn't that mean we successfully found inputs that trigger new edges not covered by the training data?
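To make sure I understand the gradient-based guidance correctly, here is my mental model as a toy sketch in plain NumPy. This is not your implementation: the network sizes, the random weights, the mutation step size, and the choice of `edge=2` are all invented for illustration; I only mean to capture "compute the gradient of one edge neuron w.r.t. the input bytes, then mutate the most influential bytes":

```python
import numpy as np

# Toy surrogate f(x): input bytes -> per-edge activation scores.
# All sizes and weights below are made up; this is NOT the NEUZZ model.
rng = np.random.default_rng(0)
n_in, n_hidden, n_edges = 16, 8, 4
W1 = rng.normal(size=(n_hidden, n_in))
W2 = rng.normal(size=(n_edges, n_hidden))

def forward(x):
    h = np.maximum(0.0, W1 @ x)                  # ReLU hidden layer
    y = 1.0 / (1.0 + np.exp(-(W2 @ h)))          # sigmoid per edge neuron
    return y, h

def input_gradient(x, edge):
    """Gradient of output neuron `edge` w.r.t. the input bytes."""
    y, h = forward(x)
    dy = y[edge] * (1.0 - y[edge])               # sigmoid derivative
    dh = W2[edge] * (h > 0.0)                    # backprop through ReLU
    return dy * (W1.T @ dh)                      # chain rule down to the input

x = rng.random(n_in)                             # a seed input, bytes scaled to [0, 1]
g = input_gradient(x, edge=2)

# Mutate the bytes with the largest gradient magnitude, pushing the
# chosen edge neuron's output toward 1.
top = np.argsort(-np.abs(g))[:4]
x_mut = x.copy()
x_mut[top] += 0.1 * np.sign(g[top])
```

My question above is about which `edge` indices it makes sense to target with this kind of step.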
" Next, we randomly choose 100 output neurons representing 100 unexplored edges in the target program "
What does "unexplored edges" mean here? In the source code these edges are randomly chosen at every iteration, so how is it ensured that the chosen edges are actually "unexplored edges"?
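For reference, the selection step I am asking about looks to me roughly like the following (a hypothetical sketch, not your code; `n_edge_neurons`, `k`, and the seed are made up), which is why I don't see where "unexplored" is enforced:

```python
import random

def pick_target_neurons(n_edge_neurons, k=100, seed=None):
    """Pick k output-neuron indices uniformly at random.

    Each index names one edge neuron in the final layer; nothing here
    checks whether the corresponding edge is explored or not.
    """
    rng = random.Random(seed)
    return rng.sample(range(n_edge_neurons), k)  # sample without replacement

targets = pick_target_neurons(4096, k=100, seed=1)
```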
Thanks a lot!
Hi, Dongdong!