Commit

andreas-zeller committed Nov 9, 2024
2 parents c334a65 + 6d449a2 commit 7118e5a
Showing 15 changed files with 29 additions and 27 deletions.
10 changes: 5 additions & 5 deletions docs/notebooks/ConfigurationFuzzer.ipynb
@@ -6954,12 +6954,12 @@
 },
 "source": [
  "When copying, expansions in the copy should also refer to symbols in the copy. Hence, when expanding `<int>` in\n",
  "\n",
- "```<int> ::= <int><digit>```\n",
+ "```\n",
+ "<int> ::= <int><digit>\n",
+ "```\n",
  "\n",
  "make that\n",
  "\n",
- "```<int> ::= <int><digit>\n",
+ "```\n",
+ "<int> ::= <int><digit>\n",
  "<int'> ::= <int'><digit'>\n",
  "```\n",
  "\n",
2 changes: 2 additions & 0 deletions notebooks/05_Domain-Specific_Fuzzing.ipynb
@@ -22,6 +22,8 @@
  "\n",
  "* [Carving](Carver.ipynb) takes a system test and automatically extracts a set of _unit tests_ that replicate the calls seen during the unit test. The key idea is to _record_ such calls such that we can _replay_ them later – as a whole or selectively.\n",
  "\n",
+ "* [Testing Compilers](PythonFuzzer.ipynb) shows how to systematically explore the behavior of a compiler or an interpreter using _grammars_ and _grammar-based testing_ to systematically generate program code.\n",
+ "\n",
  "* [Testing Web Applications](WebFuzzer.ipynb) shows how to systematically explore the behavior of a Web application – first with handwritten grammars, then with grammars automatically inferred from the user interface. We also show how to conduct systematic attacks on these servers, notably with code and SQL injection.\n",
  "\n",
  "* [Testing Graphical User Interfaces](GUIFuzzer.ipynb) explores how to generate tests for Graphical User Interfaces (GUIs), generalizing from rich Web applications to mobile apps, and systematically exploring user interfaces through forms and navigation elements."
4 changes: 2 additions & 2 deletions notebooks/ConcolicFuzzer.ipynb
@@ -5813,7 +5813,7 @@
  "source": [
  "## Concolic Grammar Fuzzing\n",
  "\n",
- "The concolic framework can be used directly in grammar-based fuzzing. We implement a class `ConcolicGrammarFuzzer` wihich does this."
+ "The concolic framework can be used directly in grammar-based fuzzing. We implement a class `ConcolicGrammarFuzzer` which does this."
  ]
 },
 {
@@ -6814,7 +6814,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "### Exercise 1: Implment a Concolic Float Proxy Class\n"
+ "### Exercise 1: Implement a Concolic Float Proxy Class\n"
  ]
 },
 {
2 changes: 1 addition & 1 deletion notebooks/ControlFlow.ipynb
@@ -712,7 +712,7 @@
  " myparents[0].add_calls(mid)\n",
  "\n",
  " # these need to be unlinked later if our module actually defines these\n",
- " # functions. Otherwsise we may leave them around.\n",
+ " # functions. Otherwise we may leave them around.\n",
  " # during a call, the direct child is not the next\n",
  " # statement in text.\n",
  " for c in p:\n",
4 changes: 2 additions & 2 deletions notebooks/DynamicInvariants.ipynb
@@ -1586,7 +1586,7 @@
  "source": [
  "### All-in-one Annotation\n",
  "\n",
- "Let us bring all of this together in a single class `TypeAnnotator` that first tracks calls of functions and then allows accesing the AST (and the source code form) of the tracked functions annotated with types. The method `typed_functions()` returns the annotated functions as a string; `typed_functions_ast()` returns their AST."
+ "Let us bring all of this together in a single class `TypeAnnotator` that first tracks calls of functions and then allows accessing the AST (and the source code form) of the tracked functions annotated with types. The method `typed_functions()` returns the annotated functions as a string; `typed_functions_ast()` returns their AST."
  ]
 },
 {
@@ -2853,7 +2853,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "Now for the actual annotation. `preconditions()` returns the preconditions from the mined invariants (i.e., those propertes that do not depend on the return value) as a string with annotations:"
+ "Now for the actual annotation. `preconditions()` returns the preconditions from the mined invariants (i.e., those properties that do not depend on the return value) as a string with annotations:"
  ]
 },
 {
4 changes: 2 additions & 2 deletions notebooks/Fuzzer.ipynb
@@ -895,7 +895,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "The above code block precludes the possiblity of removing `~` (your home directory), this is because the probability of generating the character '~' is not 1/32; it is 0/32. The characters are created by calling chr(random.randrange(char_start, char_start + char_range)), where the default value of char_start is 32 and the default value of char_range is 32. The documentation for chr reads, \"[r]eturn the string representing a character whose Unicode code point is the integer i.\" The Unicode code point for '~' is 126 and therefore, not in the interval [32, 64). \n",
+ "The above code block precludes the possibility of removing `~` (your home directory), this is because the probability of generating the character '~' is not 1/32; it is 0/32. The characters are created by calling chr(random.randrange(char_start, char_start + char_range)), where the default value of char_start is 32 and the default value of char_range is 32. The documentation for chr reads, \"[r]eturn the string representing a character whose Unicode code point is the integer i.\" The Unicode code point for '~' is 126 and therefore, not in the interval [32, 64). \n",
  "\n",
  "If the code were to be changed so that char_range = 95 then the probability of obtaining the character '~' would be 1/94 , thus resulting in the probability of the event of deleting all files being equal to 0.000332\n",
  "\n",
@@ -920,7 +920,7 @@
  "\n",
  "For the space the probability is 1 out of 32.\n",
  "\n",
- "We have to include the term for the probability of obtaining at least 2 characters which is required for the scenario of obtaining a space as the second character. This probability is 99/101 because it is calculated as (1 - probabilty of obtaining a single character or no character at all), so it is equal to 1-(2/101).\n",
+ "We have to include the term for the probability of obtaining at least 2 characters which is required for the scenario of obtaining a space as the second character. This probability is 99/101 because it is calculated as (1 - probability of obtaining a single character or no character at all), so it is equal to 1-(2/101).\n",
  "\n",
  "Therefore, the probability calculation for the event of deleting all files in the case of having a space for the second character is:\n",
  "\n",
4 changes: 2 additions & 2 deletions notebooks/FuzzingWithConstraints.ipynb
@@ -420,7 +420,7 @@
  "One very general solution to this problem would be to use _unrestricted_ grammars rather than the _context-free_ grammars we have used so far.\n",
  "In an unrestricted grammar, one can have multiple symbols also on the left-hand side of an expansion rule, making them very flexible.\n",
  "In fact, unrestricted grammars are _Turing-universal_, meaning that they can express any feature that could also be expressed in program code; and they could thus check and produce arbitrary strings with arbitrary features. (If they finish, that is – unrestricted grammars also suffer from the halting problem.)\n",
- "The downside is that there is literally no programming support for unrestricted grammars – we'd have to implement all arithmetics, strings, and other functionality from scratch in a grammar, which is - well - not fun."
+ "The downside is that there is literally no programming support for unrestricted grammars – we'd have to implement all arithmetic, strings, and other functionality from scratch in a grammar, which is - well - not fun."
  ]
 },
 {
@@ -445,7 +445,7 @@
  "metadata": {},
  "source": [
  "In recent work, _Dominic Steinhöfel_ and _Andreas Zeller_ (one of the authors of this book) have presented an infrastructure that allows producing inputs with _arbitrary properties_, but without having to go through the trouble of implementing producers or checkers.\n",
- "Instead, they suggest a dedicated _language_ for specifiying inputs, named [ISLa](https://rindphi.github.io/isla/) (for input specification language).\n",
+ "Instead, they suggest a dedicated _language_ for specifying inputs, named [ISLa](https://rindphi.github.io/isla/) (for input specification language).\n",
  "_ISLa_ combines a standard context-free _grammar_ with _constraints_ that express _semantic_ properties of the inputs and their elements.\n",
  "ISLa can be used as a _fuzzer_ (producing inputs that satisfy the constraints) as well as a _checker_ (checking inputs whether they satisfy the given constraints)."
  ]
2 changes: 1 addition & 1 deletion notebooks/Guide_for_Authors.ipynb
@@ -153,7 +153,7 @@
  "The derived material for the book ends up in the `docs/` folder, from where it is eventually pushed to the [fuzzingbook website](http://www.fuzzingbook.org/). This site allows\n",
  "* reading the chapters online,\n",
  "* launching interactive Jupyter notebooks using the binder service, and\n",
- "* accesssing code and slide formats.\n",
+ "* accessing code and slide formats.\n",
  "\n",
  "Use `make publish` to create and update the site."
  ]
2 changes: 1 addition & 1 deletion notebooks/InformationFlow.ipynb
@@ -2214,7 +2214,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "More complex origining such as *bitmap origins* are possible where a single character may result from multiple origined character indexes (such as *checksum* operations on strings). We do not consider these in this chapter."
+ "More complex originating such as *bitmap origins* are possible where a single character may result from multiple origined character indexes (such as *checksum* operations on strings). We do not consider these in this chapter."
  ]
 },
 {
2 changes: 1 addition & 1 deletion notebooks/Parser.ipynb
@@ -4111,7 +4111,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "Using it is as folows:"
+ "Using it is as follows:"
  ]
 },
 {
2 changes: 1 addition & 1 deletion notebooks/ProbabilisticGrammarFuzzer.ipynb
@@ -1895,7 +1895,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "To have our probabilistic grammar fuzzer focus on _uncommon_ features, we _change the learned probabilities_ such that commonly occuring features (i.e., those with a high learned probability) get a low probability, and vice versa: The last shall be first, and the first last. A particularly simple way to achieve such an _inversion_ of probabilities is to _swap_ them: The alternatives with the highest and lowest probability swaps their probabilities, as so the alternatives with the second-highest and second-lowest probability, the alternatives with the third highest and lowest, and so on."
+ "To have our probabilistic grammar fuzzer focus on _uncommon_ features, we _change the learned probabilities_ such that commonly occurring features (i.e., those with a high learned probability) get a low probability, and vice versa: The last shall be first, and the first last. A particularly simple way to achieve such an _inversion_ of probabilities is to _swap_ them: The alternatives with the highest and lowest probability swaps their probabilities, as so the alternatives with the second-highest and second-lowest probability, the alternatives with the third highest and lowest, and so on."
  ]
 },
 {
4 changes: 2 additions & 2 deletions notebooks/Project_MutationFuzzing.ipynb
@@ -27,7 +27,7 @@
  "\n",
  "While fuzzers can run for days in a row to cover considerable behavior, the goal of this project is to utilize mutation fuzzing to cover as much code as possible during a specified number of generations. \n",
  "\n",
- "Our target is the [svglib](https://pypi.org/project/svglib/) SVG rendering library written in python. For an easier integration with the library we provide a wrapped function __parse_svg(string)__, which receives a string with the SVG content and invokes the parsing library. To ensure that all converted elements are correct, the wrapper function internally converts the parsed SVG into PDF and PNG formats. Finally, the wrapper function returns an _RLG Drawing_ object if the conversion was successfull or None if it wasn't."
+ "Our target is the [svglib](https://pypi.org/project/svglib/) SVG rendering library written in python. For an easier integration with the library we provide a wrapped function __parse_svg(string)__, which receives a string with the SVG content and invokes the parsing library. To ensure that all converted elements are correct, the wrapper function internally converts the parsed SVG into PDF and PNG formats. Finally, the wrapper function returns an _RLG Drawing_ object if the conversion was successful or None if it wasn't."
  ]
 },
 {
@@ -386,7 +386,7 @@
  "source": [
  "## Obtaining the population coverage\n",
  "\n",
- "In order to obtain the overal coverage achieved by the fuzzer's population we will adapt the [population_coverage](Coverage.ipynb) function from the lecture.\n",
+ "In order to obtain the overall coverage achieved by the fuzzer's population we will adapt the [population_coverage](Coverage.ipynb) function from the lecture.\n",
  "\n",
  "The following code calculates the overall coverage from a fuzzer's population:"
  ]
10 changes: 5 additions & 5 deletions notebooks/Project_Search_Based_WebFuzzer.ipynb
@@ -769,7 +769,7 @@
  "## Coverage of Web App \n",
  "In this project, the coverage of the web app is defined as _web reachability_, i.e. the number of reached pages on the site. \n",
  "\n",
- "We reachability is a proxy measurement of the number of validation schemes passed/failed and also a proxy measurement of brach coverage in the `handle_order()` method. "
+ "We reachability is a proxy measurement of the number of validation schemes passed/failed and also a proxy measurement of branch coverage in the `handle_order()` method. "
  ]
 },
 {
@@ -1206,7 +1206,7 @@
  "metadata": {},
  "source": [
  "## Your Tasks\n",
- "For each input, you are epxected to produce a set of urls to reach a targeted web page, i.e. generate inputs that will fulfill the input validation requirements necessary to reach a specific web page. Overall, all web pages will be targets, i.e. both error and normal pages. \n",
+ "For each input, you are expected to produce a set of urls to reach a targeted web page, i.e. generate inputs that will fulfill the input validation requirements necessary to reach a specific web page. Overall, all web pages will be targets, i.e. both error and normal pages. \n",
  "\n",
  "Your task is to implement your own custom selection, fitness and mutation functions for the genetic algorithm, in order to fulfill the input validation and web reachabilty requirements."
  ]
@@ -1223,7 +1223,7 @@
  "* Ensure that your implementation accounts for any arbitrary regex and any random initial population of inputs.\n",
  "* For debugging purposes: unmute the `webbrowser()` to obtain logging information, i.e. set `webbrowser(url, mute=False)` \n",
  "* Do not implement in any other section except the section below. \n",
- "* Gracefully handle exceptions and errors resulting from your impelementation.\n",
+ "* Gracefully handle exceptions and errors resulting from your implementation.\n",
  "* __Remember the input validation regex and initial input population could be arbitrary, do not hard code for a specific regex or input__."
  ]
 },
@@ -1308,7 +1308,7 @@
  "source": [
  "# Evaluation code\n",
  "\n",
- "The code in the following section will be used to evaluate your impelementation."
+ "The code in the following section will be used to evaluate your implementation."
  ]
 },
 {
@@ -1596,7 +1596,7 @@
  "source": [
  "## Scoring\n",
  "\n",
- "For each URL input in the population and its corresponding target, your implementaton should generate a list of test inputs, __maximum of 10 inputs__. These inputs will be executed on the server and graded based on:\n",
+ "For each URL input in the population and its corresponding target, your implementation should generate a list of test inputs, __maximum of 10 inputs__. These inputs will be executed on the server and graded based on:\n",
  "\n",
  "* Number of iterations of your GA algorithm (less is better)\n",
  "* Number of reached target pages , i.e. error pages, confirmation page and page not found\n",
2 changes: 1 addition & 1 deletion notebooks/WhenToStopFuzzing.ipynb
@@ -808,7 +808,7 @@
  "plt.subplot(1, 2, 1)\n",
  "plt.hist(frequencies, range=[1, 21], bins=numpy.arange(1, 21) - 0.5) # type: ignore\n",
  "plt.xticks(range(1, 21)) # type: ignore\n",
- "plt.xlabel('# of occurances (e.g., 1 represents singleton trigrams)')\n",
+ "plt.xlabel('# of occurrences (e.g., 1 represents singleton trigrams)')\n",
  "plt.ylabel('Frequency of occurances')\n",
  "plt.title('Figure 1. Frequency of Rare Trigrams')\n",
  "\n",
2 changes: 1 addition & 1 deletion notebooks/shared/ClassDiagram.ipynb
@@ -235,7 +235,7 @@
  "outputs": [],
  "source": [
  "class D_Class(D_Class):\n",
- " pass # An incremental addiiton that should not impact D's semantics"
+ " pass # An incremental addition that should not impact D's semantics"
  ]
 },
 {
