Practical Considerations

JavaScript is an ever-evolving language. Its development rhythm has had different paces throughout the years, entering a high-velocity phase with the introduction of ES5. Thus far, this book has taught you about dozens of language features and syntax changes introduced in ES6, and a few that came out afterwards, in ES2016 and ES2017.

Reconciling all of these new features with our existing ES5 knowledge may seem like a daunting task: what features should we take advantage of, and how? This chapter aims to rationalize the choices we have to make when considering whether to use specific ES6 features.

We’ll take a look at a few different features, the use cases where they shine, and the situations where we might be better off using features that were already available in the language. Let’s go case by case.

Variable Declarations

When developing software, most of our time is spent reading code, instead of writing it. ES6 offers let and const as new flavors of variable declaration, and part of the value in these statements is that they can signal how a variable is used. When reading a piece of code, others can take cues from these signals in order to better understand what we did. Cues like these are crucial to reducing the amount of time someone spends interpreting what a piece of code does, and as such we should try and leverage them whenever possible.

A let statement indicates that a variable can’t be used before its declaration, due to the Temporal Dead Zone rule. This isn’t a convention, it is a fact: if we tried accessing the variable before its declaration statement was reached, the program would fail. These statements are block-scoped and not function-scoped; this means we need to read less code in order to fully grasp how a let variable is used.

The const statement is block-scoped as well, and it follows TDZ semantics too. The upside is that a const binding can only be assigned during declaration.

Note that this means that the variable binding can’t change, but it doesn’t mean that the value itself is immutable or constant in any way. A const binding that references an object can’t later reference a different value, but the underlying object can indeed mutate.
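The following sketch illustrates that distinction, using a hypothetical people array.

const people = ['Alice', 'Bob']
people.push('Carol') // allowed: we're mutating the underlying array
people = [] // TypeError: Assignment to constant variable.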

In addition to the signals offered by let, the const keyword indicates that a variable binding can’t be reassigned. This is a strong signal. You know what the value is going to be; you know that the binding can’t be accessed outside of its immediately containing block, due to block scoping; and you know that the binding is never accessed before declaration, because of TDZ semantics.

You know all of this just by reading the const declaration statement and without scanning for other references to that variable.

Constraints such as those offered by let and const are a powerful way of making code easier to understand. Try to accrue as many of these constraints as possible in the code you write. The more declarative constraints that limit what a piece of code could mean, the easier and faster it is for humans to read, parse, and understand a piece of code in the future.

Granted, there are more rules to a const declaration than to a var declaration: block-scoped, TDZ, assign at declaration, no reassignment, whereas var statements only signal function scoping. Rule-counting, however, doesn’t offer a lot of insight. It is better to weigh these rules in terms of complexity: does the rule add or subtract complexity? In the case of const, block scoping means a narrower scope than function scoping, TDZ means that we don’t need to scan the scope backward from the declaration in order to spot usage before declaration, and assignment rules mean that the binding will always preserve the same reference.

The more constrained statements are, the simpler a piece of code becomes. As we add constraints to what a statement might mean, code becomes less unpredictable. This is one of the reasons why statically typed programs are, generally speaking, a bit easier to read than their dynamically typed counterparts. Static typing places a big constraint on the program writer, but it also places a big constraint on how the program can be interpreted, making its code easier to understand.

With these arguments in mind, it is recommended that you use const where possible, as it’s the statement that gives us the fewest possibilities to think about.

if (condition) {
  // can't access `isReady` before declaration is reached
  const isReady = true
  // `isReady` binding can't be reassigned
}
// can't access `isReady` outside of its containing block scope

When const isn’t an option, because the variable needs to be reassigned later, we may resort to a let statement. Using let carries all the benefits of const, except that the variable can be reassigned. This may be necessary in order to increment a counter, flip a Boolean flag, or defer initialization.

Consider the following example, where we take a number of megabytes and return a string such as 1.2 GB. We’re using let, as the values need to change if a condition is met.

function prettySize(input) {
  let value = input
  let unit = 'MB'
  if (value >= 1024) {
    value /= 1024
    unit = 'GB'
  }
  if (value >= 1024) {
    value /= 1024
    unit = 'TB'
  }
  return `${ value.toFixed(1) } ${ unit }`
}

Adding support for petabytes would involve a new if branch before the return statement.

if (value >= 1024) {
  value /= 1024
  unit = 'PB'
}

If we were looking to make prettySize easier to extend with new units, we could consider implementing a toLargestUnit function that computes the unit and value for any given input and its current unit. We could then consume toLargestUnit in prettySize to return the formatted string.

The following code snippet implements such a function. It relies on a list of supported units instead of using a new branch for each unit. When the input value is at least 1024 and there are larger units, we divide the input by 1024 and move to the next unit. Then we call toLargestUnit with the updated values, which will continue recursively reducing the value until it’s small enough or we reach the largest unit.

function toLargestUnit(value, unit = 'MB') {
  const units = ['MB', 'GB', 'TB']
  const i = units.indexOf(unit)
  const nextUnit = units[i + 1]
  if (value >= 1024 && nextUnit) {
    return toLargestUnit(value / 1024, nextUnit)
  }
  return { value, unit }
}

Introducing petabyte support used to involve a new if branch and repeating logic, but now it’s only a matter of adding the 'PB' string at the end of the units array.
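For instance, once 'PB' is appended to the units list inside toLargestUnit, petabyte-sized inputs are handled with no other changes. A quick sketch:

// inside toLargestUnit: const units = ['MB', 'GB', 'TB', 'PB']
toLargestUnit(1024 * 1024 * 1024)
// <- { value: 1, unit: 'PB' }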

The prettySize function becomes concerned only with how to display the string, as it can offload its calculations to the toLargestUnit function. This separation of concerns is also instrumental in producing more readable code.

function prettySize(input) {
  const { value, unit } = toLargestUnit(input)
  return `${ value.toFixed(1) } ${ unit }`
}

Whenever a piece of code has variables that need to be reassigned, we should spend a few minutes thinking about whether there’s a better pattern that could resolve the same problem without reassignment. This is not always possible, but it can be accomplished most of the time.

Once you’ve arrived at a different solution, compare it to what you used to have. Make sure that code readability has actually improved and that the implementation is still correct. Unit tests can be instrumental in this regard, as they’ll ensure you don’t run into the same shortcomings twice. If the refactored piece of code seems worse in terms of readability or extensibility, carefully consider going back to the previous solution.

Consider the following contrived example, where we use array concatenation to generate the result array. Here, too, we could change from let to const by making a simple adjustment.

function makeCollection(size) {
  let result = []
  if (size > 0) {
    result = result.concat([1, 2])
  }
  if (size > 1) {
    result = result.concat([3, 4])
  }
  if (size > 2) {
    result = result.concat([5, 6])
  }
  return result
}
makeCollection(0) // <- []
makeCollection(1) // <- [1, 2]
makeCollection(2) // <- [1, 2, 3, 4]
makeCollection(3) // <- [1, 2, 3, 4, 5, 6]

We can replace the reassignment operations with Array#push, which accepts multiple values. If we had a dynamic list, we could use the spread operator to push as many ...items as necessary.

function makeCollection(size) {
  const result = []
  if (size > 0) {
    result.push(1, 2)
  }
  if (size > 1) {
    result.push(3, 4)
  }
  if (size > 2) {
    result.push(5, 6)
  }
  return result
}
makeCollection(0) // <- []
makeCollection(1) // <- [1, 2]
makeCollection(2) // <- [1, 2, 3, 4]
makeCollection(3) // <- [1, 2, 3, 4, 5, 6]

When you do need to use Array#concat, you might prefer to use [...result, 1, 2] instead, to make the code shorter.
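Here is a minimal sketch of both forms, assuming a hypothetical items list we want to append.

const items = [7, 8, 9]
result.push(...items) // spread a dynamic list into a single push call
const extended = [...result, 1, 2] // new array, instead of result.concat([1, 2])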

The last case we’ll cover is one of refactoring. Sometimes, we write code like the next snippet, usually in the context of a larger function.

let completionText = 'in progress'
if (completionPercent >= 85) {
  completionText = 'almost done'
} else if (completionPercent >= 70) {
  completionText = 'reticulating splines'
}

In these cases, it makes sense to extract the logic into a pure function. This way we avoid the initialization complexity near the top of the larger function, while clustering all the logic about computing the completion text in one place.

The following piece of code shows how we could extract the completion text logic into its own function. We can then move getCompletionText out of the way, making the code more linear in terms of readability.

const completionText = getCompletionText(completionPercent)
// …
function getCompletionText(progress) {
  if (progress >= 85) {
    return 'almost done'
  }
  if (progress >= 70) {
    return 'reticulating splines'
  }
  return 'in progress'
}

Template Literals

For the longest time, JavaScript users have resorted to utility libraries to format strings, as that was never a part of the language until now. Creating a multiline string was also a hassle, as was escaping single or double quotes—​depending on which quote style you were using. Template literals are different, and they fix all of these inconveniences.

With a template literal, you can use expression interpolation, which enables you to inline variables, function calls, or any other arbitrary JavaScript expressions in a string without relying on concatenation.

'Hello, ' + name + '!' // before
`Hello, ${ name }!` // after

Multiline strings such as the one shown in the following snippet involve one or more of array concatenation, string concatenation, or explicit \n line feeds. The code is a typical example for writing an HTML string in the pre-ES6 era.

'<div>' +
  '<p>' +
    '<span>Hello</span>' +
    '<span>' + name + '</span>' +
    '<span>!</span>' +
  '</p>' +
'</div>'

Using template literals, we can avoid all of the extra quotes and concatenation, focusing on the content. The interpolation certainly helps in these kinds of templates, making multiline strings one of the most useful aspects of template literals.

`<div>
  <p>
    <span>Hello</span>
    <span>${ name }</span>
    <span>!</span>
  </p>
</div>`

When it comes to quotes, ' and " are more likely to be necessary when writing a string than ` is. For the average English phrase, you’re less likely to require backticks than single or double quotes. This means that backticks lead to less escaping.[1]

'Alfred\'s cat suit is "slick".'
"Alfred's cat suit is \"slick\"."
`Alfred's cat suit is "slick".`

As we discovered in [es6-essentials], there are also other features such as tagged templates, which make it easy to sanitize or otherwise manipulate interpolated expressions. While useful, tagged templates are not as pervasively beneficial as multiline support, expression interpolation, or reduced escaping.
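As a quick refresher, the following sketch shows a hypothetical upper tag that uppercases every interpolated expression before splicing it back into the literal.

function upper(parts, ...values) {
  return parts.reduce((text, part, i) =>
    `${ text }${ String(values[i - 1]).toUpperCase() }${ part }`
  )
}
console.log(upper`Alfred's ${ 'cat' } suit is "slick".`)
// <- Alfred's CAT suit is "slick".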

The combination of all of these features warrants considering template literals as the default string flavor over single- or double-quoted strings. There are a few concerns usually raised when template literals are proposed as the default style. We’ll go over each concern and address each individually. You can then decide for yourself.

Before we begin, let’s set a starting point everyone agrees on: using template literals when an expression has to be interpolated in a string is better than using quoted string concatenation.

Performance is often one of the cited concerns: is using template literals everywhere going to harm my application’s performance? When using a compiler like Babel, template literals are transformed into quoted strings and interpolated expressions are concatenated amid those strings.

Consider the following example using template literals.

const suitKind = `cat`
console.log(`Alfred's ${ suitKind } suit is "slick".`)
// <- Alfred's cat suit is "slick".

A compiler such as Babel would transform our example into code similar to this, relying on quoted strings.

const suitKind = 'cat'
console.log('Alfred\'s ' + suitKind + ' suit is "slick".')
// <- Alfred's cat suit is "slick".

We’ve already settled that interpolated expressions are better than quoted string concatenation, in terms of readability, and the compiler turns those into quoted string concatenation, maximizing browser support.

When it comes to the suitKind variable, a template literal with no interpolation, no newlines, and no tags, the compiler simply turns it into a plain quoted string.

Once we stop compiling template literals down to quoted strings, we can expect optimizing compilers to be able to interpret them as such with negligible slowdown.

Another often-cited concern is syntax: as of this writing, we can’t use backtick strings in JSON, object keys, import declarations, or strict mode directives.

The first statement in the following snippet of code demonstrates that a serialized JSON object couldn’t represent strings using backticks. As shown on the second line, we can certainly declare an object using template literals and then serialize that object as JSON. By the time JSON.stringify is invoked, the template literal has evaluated to a quoted string.

JSON.parse('{ "payload": `message` }')
// <- SyntaxError
JSON.stringify({ payload: `message` })
// <- '{"payload":"message"}'

When it comes to object keys, we’re out of luck. Attempting to use a template literal would result in a syntax error.

const alfred = { `suit kind`: `cat` }

Object property keys accept value types, which are then cast into plain strings, but template literals aren’t value types, and thus it’s not possible to use them as property keys.

As you might recall from [es6-essentials], ES6 introduces computed property names, as seen in the following code snippet. In a computed property key we can use any expression we want to produce the desired property key, including template literals.

const alfred = { [`suit kind`]: `cat` }

The preceding is far from ideal due to its verbosity, though, and in these cases it’s best to use regular quoted strings.

As always, don't take rules such as "template literals are the best option" too literally: use your best judgment, and feel free to break the rules a little bit if they don't quite fit your use cases, conventions, or view of how an application is best structured. Rules are often presented as absolutes, but what may be a rule to someone need not be a rule to everyone. This is the main reason why modern linters make every rule optional: the rules we use should be enforced, but not every rule fits every project.

Perhaps someday we might get a flavor of computed property keys that doesn’t rely on square brackets for template literals, saving us a couple of characters when we need to interpolate a string. For the foreseeable future, the following code snippet will result in a syntax error.

const brand = `Porsche`
const car = {
  `wheels`: 4,
  `has fuel`: true,
  `is ${ brand }`: `you wish`
}

Attempts to import a module using template literals will also result in a syntax error. This is one of those cases where we might expect to be able to use template literals, if we were to adopt them extensively throughout our codebase, but can’t.

import { SayHello } from `./World`

Strict mode directives have to be single- or double-quoted strings. As of this writing, there’s no plan to allow template literals for 'use strict' directives. The following piece of code does not result in a syntax error, but it also does not enable strict mode. This is the biggest caveat when heavily using template literals.

'use strict' // enables strict mode
"use strict" // enables strict mode
`use strict` // nothing happens

Lastly, it could be argued that turning an existing codebase from single-quoted strings to template literals would be error-prone and a waste of time that could be otherwise used to develop features or fix bugs.

Fortunately, we have eslint at our disposal, as discussed in [ecmascript-and-the-future-of-javascript]. To switch our codebase to backticks by default, we can set up an .eslintrc.json configuration similar to the one in the following piece of code. Note how we turn the quotes rule into an error unless the code uses backticks.

{
  "env": {
    "es6": true
  },
  "extends": "eslint:recommended",
  "rules": {
    "quotes": ["error", "backtick"]
  }
}

With that in place, we can add a lint script to our package.json, like the one in the next snippet. The --fix flag ensures that any style errors found by the linter, such as using single quotes over backticks, are autocorrected.

{
  "scripts": {
    "lint": "eslint --fix ."
  }
}

Once we run the following command, we’re ready to start experimenting with a codebase that uses backticks by default!

» npm run lint

In conclusion, there are trade-offs to consider when using template literals. You’re invited to experiment with the backtick-first approach and gauge its merits. Always prefer convenience, over convention, over configuration.

Shorthand Notation and Object Destructuring

[ecmascript-and-the-future-of-javascript] introduced us to the concept of shorthand notation. Whenever we want to introduce a property and there’s a binding by the same name in scope, we can avoid repetition.

const unitPrice = 1.25
const tomato = {
  name: 'Tomato',
  color: 'red',
  unitPrice
}

This feature becomes particularly useful in the context of functions and information hiding. In the following example, we use object destructuring to pull a few pieces of information out of a grocery item, and return a model that also includes the total price for the items.

function getGroceryModel({ name, unitPrice }, units) {
  return {
    name,
    unitPrice,
    units,
    totalPrice: unitPrice * units
  }
}
getGroceryModel(tomato, 4)
/*
{
  name: 'Tomato',
  unitPrice: 1.25,
  units: 4,
  totalPrice: 5
}
*/

Note how well shorthand notation works in tandem with destructuring. If you think of destructuring as a way of pulling properties out of an object, then you can think of shorthand notation as the analog for placing properties onto an object. The following example shows how we can leverage the getGroceryModel function to pull the totalPrice of a grocery item when we know how many the customer is buying.

const { totalPrice } = getGroceryModel(tomato, 4)

While counterintuitive at first, usage of destructuring in function parameters results in a convenient and implicitly contract-based solution, where we know that the first parameter to getGroceryModel is expected to be an object containing name and unitPrice properties.

function getGroceryModel({ name, unitPrice }, units) {
  return {
    name,
    unitPrice,
    units,
    totalPrice: unitPrice * units
  }
}

Conversely, destructuring a function’s output gives the reader an immediate feel for what aspect of that output a particular piece of code is interested in. In the next snippet, we’ll use only the product name and total price so that’s what we destructure out of the output.

const { name, totalPrice } = getGroceryModel(tomato, 4)

Compare the last snippet with the following line of code, where we don’t use destructuring. Instead, we pull the output into a model binding. While subtle, the key difference is that this piece communicates less information explicitly: we need to dig deeper into the code to find out which parts of the model are being used.

const model = getGroceryModel(tomato, 4)

Destructuring can also help avoid repeating references to the host object when it comes to using several properties from the same object.

const summary = `${ model.units }x ${ model.name } ($${ model.unitPrice }) = $${ model.totalPrice }`
// <- '4x Tomato ($1.25) = $5'

However, there’s a trade-off here: we avoid repeating the host object when referencing properties, but at the expense of repeating property names in our destructuring declaration statement.

const { name, units, unitPrice, totalPrice } = model
const summary = `${ units }x ${ name } ($${ unitPrice }) = $${ totalPrice }`

Whenever there are several references to the same property, it becomes clear that we should avoid repeating references to the host object, by destructuring it.

When there’s a single reference to a single property, it’s clear we should avoid destructuring, as it mostly generates noise.

const { name } = model
const summary = `This is a ${ name } summary`

Having a reference to model.name directly in the summary code is less noisy.

const summary = `This is a ${ model.name } summary`

When we have two properties to destructure (or two references to one property), things change a bit.

const summary = `This is a summary for ${ model.units }x ${ model.name }`

Destructuring does help in this case. It reduces the character count in the summary declaration statement, and it explicitly announces the model properties we’re going to be using.

const { name, units } = model
const summary = `This is a summary for ${ units }x ${ name }`

If we have two references to the same property, similar conditions apply. In the next example, we have one less reference to model and one more reference to name than we’d have without destructuring. This case could go either way, although the value in explicitly declaring the future usage of name could be incentive enough to warrant destructuring.

const { name } = model
const summary = `This is a ${ name } summary`
const description = `${ name } is a grocery item`

Destructuring is as valuable as the number of references to host objects it eliminates, but the number of properties being referenced can dilute that value, because of the increased repetition in the destructuring statement. In short, destructuring is a great feature, but it doesn't necessarily lead to more readable code every time. Use it judiciously, especially when there aren't that many host references being removed.

Rest and Spread

Matches for regular expressions are represented as an array. The matched portion of the input is placed in the first position, while each captured group is placed in subsequent elements in the array. Often, we are interested in specific captures such as the first one.

In the following example, array destructuring helps us omit the whole match and place the integer and fractional parts of a number into corresponding variables. This way, we avoid resorting to magic numbers pointing at the indices where captured groups will reside in the match result.

function getNumberParts(number) {
  const rnumber = /(\d+)\.(\d+)/
  const matches = number.match(rnumber)
  if (matches === null) {
    return null
  }
  const [, integer, fractional] = matches
  return { integer, fractional }
}
getNumberParts('1234.56')
// <- { integer: '1234', fractional: '56' }

A rest pattern could be used to pick up every captured group while destructuring the result of .match.

function getNumberParts(number) {
  const rnumber = /(\d+)\.(\d+)/
  const matches = number.match(rnumber)
  if (matches === null) {
    return null
  }
  const [, ...captures] = matches
  return captures
}
getNumberParts('1234.56')
// <- ['1234', '56']

When we need to concatenate lists, we use .concat to create a new array. The spread operator improves code readability by making it immediately obvious that we want to create a new collection comprising each list of inputs, while preserving the ease of adding new elements declaratively in array literals.

administrators.concat(moderators)
[...administrators, ...moderators]
[...administrators, ...moderators, bob]

Similarly, the object spread feature[2] introduced in [extending_objects_with_object_assign] allows us to merge objects onto a new object. Consider the following snippet where we programmatically create a new object comprising base defaults, user-provided options, and some important override property that prevails over previous properties.

Object.assign({}, defaults, options, { important: true })

Compare that to the equivalent snippet using object spread declaratively. We have the object literal, the defaults and options being spread, and the important property. Not using the Object.assign function has greatly improved our code’s readability, even letting us inline the important property in the object literal declaration.

{
  ...defaults,
  ...options,
  important: true
}

Being able to visualize object spread as an Object.assign helps internalize how the feature works. In the following example we’ve replaced the defaults and options variables with object literals. Since object spread relies on the same operation as Object.assign for every property, we can observe how the options literal overrides speed with the number 3, and why important remains true even when the options literal attempts to override it, due to precedence.

{
  ...{ // defaults
    speed: 1,
    type: 'sports'
  },
  ...{ // options
    speed: 3,
    important: false
  },
  important: true
}

Object spread comes in handy when we’re dealing with immutable structures, where we’re supposed to create new objects instead of editing existing ones. Consider the following bit of code where we have a player object and a function call that casts a healing spell and returns a new, healthier, player object.

const player = {
  strength: 4,
  luck: 2,
  mana: 80,
  health: 10
}
castHealingSpell(player) // consumes 40 mana, gains 110 health

The following snippet shows an implementation of castHealingSpell where we create a new player object without mutating the original player parameter. Every property in the original player object is copied over, and we can update individual properties as needed.

const castHealingSpell = player => ({
  ...player,
  mana: player.mana - 40,
  health: player.health + 110
})

As we explained in [classes-symbols-objects-and-decorators], we can use object rest properties while destructuring objects. Among other uses, such as listing unknown properties, object rest can be used to create a shallow copy of an object.

In the next snippet, we’ll look at three of the simplest ways in which we can create a shallow copy of an object in JavaScript. The first one uses Object.assign, assigning every property of source to an empty object that’s then returned; the second example uses object spread and is equivalent to using Object.assign, but a bit more gentle on the eyes; the last example relies on destructuring the rest parameter.

const copy = Object.assign({}, source)
const copy = { ...source }
const { ...copy } = source

Sometimes we need to create a copy of an object, but omit some properties in the resulting copy. For instance, we may want to create a copy of person while omitting their name, so that we only keep their metadata.

One way to achieve that with plain JavaScript would be to destructure the name property while placing other properties in a metadata object, using the rest parameter. Even though we don’t need the name, we’ve effectively "removed" that property from the metadata object, which contains the rest of the properties in person.

const { name, ...metadata } = person

In the following bit of code, we map a list of people to a list of person models, excluding personally identifiable information such as their name and Social Security number, while placing everything else in the person rest parameter.

people.map(({ name, ssn, ...person }) => person)

Savoring Function Flavors

JavaScript already offered a number of ways to declare functions before ES6.

Function declarations are the most prominent kind of JavaScript function. The fact that declarations are hoisted means we can order them in whatever way improves code readability, instead of worrying about sorting them in the exact order in which they are used.

The following snippet displays three function declarations arranged in such a way that the code is more linear to read.

printSum(2, 3)
function printSum(x, y) {
  return print(sum(x, y))
}
function sum(x, y) {
  return x + y
}
function print(message) {
  console.log(`printing: ${ message }`)
}

Function expressions, in contrast, must be assigned to a variable before we can execute them. Keeping with the preceding example, this means we would necessarily need to have all function expressions declared before any code can use them.

The next snippet uses function expressions. Note that if we were to place the printSum function call anywhere other than after all three expression assignments, our code would fail because of a variable that hasn’t been initialized yet.

var printSum = function (x, y) {
  return print(sum(x, y))
}
var sum = function (x, y) {
  return x + y
}
// a `printSum()` call here would fail: `print` is still undefined
var print = function (message) {
  console.log(`printing: ${ message }`)
}
printSum(2, 3)

For this reason, it may be better to sort function expressions as a LIFO (last-in-first-out) stack: placing the last function to be called first, the second to last function to be called second, and so on. The rearranged code is shown in the next snippet.

var sum = function (x, y) {
  return x + y
}
var print = function (message) {
  console.log(`printing: ${ message }`)
}
var printSum = function (x, y) {
  return print(sum(x, y))
}
printSum(2, 3)

While this code is a bit harder to follow, it becomes immediately obvious that we can’t call printSum before the function expression is assigned to that variable. In the previous piece of code this wasn’t obvious because we weren’t following the LIFO rule. This is reason enough to prefer function declarations for the vast majority of our code.

Function expressions can have a name that can be used for recursion, but that name is not accessible in the outer scope. The following example shows a function expression that’s named sum and assigned to a sumMany variable. The sum reference is used for recursion in the inner scope, but we get an error when trying to use it from the outer scope.

var sumMany = function sum(accumulator = 0, ...values) {
  if (values.length === 0) {
    return accumulator
  }
  const [value, ...rest] = values
  return sum(accumulator + value, ...rest)
}
console.log(sumMany(0, 1, 2, 3, 4))
// <- 10
console.log(sum())
// <- ReferenceError: sum is not defined

Arrow functions, introduced in [arrow_functions], are similar to function expressions. The syntax is made shorter by dropping the function keyword. In arrow functions, parentheses around the parameter list are optional when there's a single parameter that is neither destructured nor a rest parameter. It is possible to implicitly return any valid JavaScript expression from an arrow function without declaring a block statement.

The following snippet shows an arrow function explicitly returning an expression in a block statement, one that implicitly returns the expression, one that drops the parentheses around its only parameter, and one that uses a block statement but doesn’t return a value.

const sum = (x, y) => { return x + y }
const multiply = (x, y) => x * y
const double = x => x * 2
const print = x => { console.log(x) }

Arrow functions can return arrays using tiny expressions. The first example in the next snippet implicitly returns an array comprising two elements, while the second example discards the first parameter and returns all other parameters held in the rest operator’s bag.

const makeArray = (first, second) => [first, second]
const makeSlice = (discarded, ...items) => items

Implicitly returning an object literal is a bit tricky because they’re hard to tell apart from block statements, which are also wrapped in curly braces. We’ll have to add parentheses around our object literal, turning it into an expression that evaluates into the object. This bit of indirection is just enough to help us disambiguate and tell JavaScript parsers that they’re dealing with an object literal.

Consider the following example, where we implicitly return an object expression. Without the parentheses, the parser would interpret our code as a block statement containing a label and the literal expression 'Nico'.

const getPerson = name => ({
  name: 'Nico'
})

Explicitly naming arrow functions isn't possible, due to their syntax. However, if an arrow function expression appears on the righthand side of a variable or property declaration, then the variable or property name becomes the name of the arrow function.
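The following sketch shows that inference at work with a hypothetical double function.

const double = value => value * 2
console.log(double.name)
// <- 'double'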

Arrow function expressions need to be assigned before use, and thus suffer from the same ordering ailments as regular function expressions. In addition, since they can’t be named, they must be bound to a variable for us to reference them in recursion scenarios.

Function declarations should be preferred by default. They are less limited in terms of how they can be ordered, referenced, and executed, leading to better code readability and maintainability. In future refactors, we won't have to worry about keeping function declarations in the same order for fear of breaking dependency chains or LIFO arrangements.

That said, arrow functions are a terse and powerful way of declaring functions in short form. The smaller the function, the more valuable using arrow syntax becomes, as it helps avoid a situation where we spend more code on form than we spend on function. As a function grows larger, writing it in arrow form loses its appeal due to the aforementioned ordering and naming issues.

Arrow functions are invaluable in cases where we would’ve otherwise declared an anonymous function expression, such as in test cases, functions passed to new Promise() and setTimeout, or array mapping functions.

Consider the following example, where we use a nonblocking wait promise to print a statement after five seconds. The wait function takes a delay in milliseconds and returns a Promise, which resolves after waiting for the specified time with setTimeout.

wait(5000).then(function () {
  console.log('waited 5 seconds!')
})

function wait(delay) {
  return new Promise(function (resolve) {
    setTimeout(function () {
      resolve()
    }, delay)
  })
}

When switching to arrow functions, we should stick with the top-level wait function declaration so that we don’t need to hoist it to the top of our scope. We can turn every other function into arrows to improve readability, thus removing many function keywords that got in the way of interpreting what those functions do.

The next snippet shows what that code would look like using arrow functions. With all the keywords out of the way after refactoring, it’s easier to understand the relationship between the delay parameter of wait and the second argument to setTimeout.

wait(5000).then(
  () => console.log('waited 5 seconds!')
)

function wait(delay) {
  return new Promise(resolve =>
    setTimeout(() => resolve(), delay)
  )
}

Another large upside in using arrow functions lies in their lexical scoping, where they don’t modify the meaning of this or arguments. If we find ourselves copying this to a temporary variable—​typically named self, context, or _this—we may want to use an arrow function for the inner bit of code instead. Let’s take a look at an example of this.

const pistol = {
  caliber: 50,
  trigger() {
    const self = this
    setTimeout(function () {
      console.log(`Fired caliber ${ self.caliber } pistol`)
    }, 1000)
  }
}
pistol.trigger()

If we tried to use this directly in the previous example, we’d get a caliber of undefined instead. With an arrow function, however, we can avoid the temporary self variable. We not only removed the function keyword but we also gained functional value due to lexical scoping, since we don’t need to work our way around the language’s limitations anymore in this case.

const pistol = {
  caliber: 50,
  trigger() {
    setTimeout(() => {
      console.log(`Fired caliber ${ this.caliber } pistol`)
    }, 1000)
  }
}
pistol.trigger()

As a general rule of thumb, think of every function as a function declaration by default. If the function doesn't need a meaningful name, spans only a few lines of code, and doesn't involve recursion, then consider an arrow function.

Classes and Proxies

Most modern programming languages have classes in one form or another. JavaScript classes are syntactic sugar on top of prototypal inheritance. Using classes makes prototype-based code more idiomatic and easier for tools to statically analyze.

When writing prototype-based solutions the constructor code is the function itself, while declaring instance methods involves quite a bit of boilerplate code, as shown in the following code snippet.

function Player() {
  this.health = 5
}
Player.prototype.damage = function () {
  this.health--
}
Player.prototype.attack = function (player) {
  player.damage()
}

In contrast, classes normalize the constructor as an instance method, thus making it clear that the constructor is executed for every instance. At the same time, methods are built into the class literal and rely on a syntax that’s consistent with methods in object literals.

class Player {
  constructor() {
    this.health = 5
  }
  damage() {
    this.health--
  }
  attack(player) {
    player.damage()
  }
}

Grouping instance methods in the class literal ensures class declarations aren't spread over several files, but are instead unified in a single location that describes their whole API.

Declaring any static methods as part of a class literal, as opposed to dynamically injecting them onto the class, also helps centralize API knowledge. Keeping this knowledge in a central location helps code readability because developers need to go through less code to learn the Player API. At the same time, when we define a convention of declaring instance and static methods on the class literal, coders know not to waste time looking elsewhere for methods defined dynamically. The same applies to getters and setters, which we can also define on the class literal.

class Player {
  constructor() {
    Player.heal(this)
  }
  damage() {
    this.health--
  }
  attack(player) {
    player.damage()
  }
  get alive() {
    return this.health > 0
  }
  static heal(player) {
    player.health = 5
  }
}

Classes also offer extends, simple syntactic sugar on top of prototypal inheritance. This, again, is more convenient than prototype-based solutions. With extends, we don’t have to worry about choosing a library or otherwise dynamic method of inheriting from another class.

class GameMaster extends Player {
  constructor(...rest) {
    super(...rest)
    this.health = Infinity
  }
  kill(player) {
    while (player.alive) {
      player.damage()
    }
  }
}

Using that same syntax, classes can extend native built-ins such as Array or Date without relying on an <iframe> or shallow copying. Consider the List class in the following code snippet, which skips the default Array constructor in order to avoid the often-confusing single number parameter overload. It also illustrates how we could implement our own methods on top of the native Array prototype.

class List extends Array {
  constructor(...items) {
    super()
    this.push(...items)
  }
  get first() {
    return this[0]
  }
  get last() {
    return this[this.length - 1]
  }
}
const number = new List(2)
console.log(number.first)
// <- 2
const items = new List('a', 'few', 'examples')
console.log(items.last)
// <- 'examples'

JavaScript classes are less verbose than their prototype-based equivalents. Class sugar is thus a most welcome improvement over raw prototypal inheritance. As for the merits of using JavaScript classes, it depends. Even though classes may be compelling to use due to their improved syntax, sugar alone doesn’t instantly promote classes to a wider variety of use cases.

Statically typed languages typically offer and enforce the use of classes.[3] In contrast, due to the highly dynamic nature of JavaScript, classes aren’t mandatory. Almost every scenario that would typically demand classes can be addressed using plain objects.

Plain objects are simpler than classes. There’s no need for special constructor methods, their only initialization is the declaration, they’re easy to serialize via JSON, and they’re more interoperable. Inheritance is seldom the right abstraction to use, but when it is desirable we might switch to classes or stick with plain objects and Object.create.
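As a rough sketch, assuming hypothetical player and gameMaster objects, prototype-based inheritance without classes can be as simple as the following.

const player = {
  health: 5,
  damage() {
    this.health--
  }
}
const gameMaster = Object.create(player) // gameMaster inherits damage() from player
gameMaster.health = Infinity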

Proxies empower many previously unavailable use cases, but we need to tread lightly. Solutions that involve a Proxy object may also be implemented using plain objects and functions without resorting to an object that behaves as if by magic.

There may indeed be cases where using a Proxy is warranted, particularly when it comes to developer tooling meant for development environments, where a high degree of code introspection is desirable and complexity is hidden away in the developer tool’s codebase. Using Proxy in application-level codebases is easily avoided, and leads to less enigmatic code.

Readability hinges on code that has a clear purpose. Declarative code is readable: upon reading a piece of code, it becomes clear what it is intended to do. In contrast, using layers of indirection such as a Proxy on top of an object can result in highly complex access rules that may be hard to infer when reading a piece of code. It’s not that a solution involving a Proxy is impossible to understand, but the fact remains that more code needs to be read and carefully considered before we fully understand the nuances of how the proxy layer behaves.

If we’re considering proxies, then maybe objects aren’t the tool for what we’re trying to accomplish. Instead of going straight to a Proxy indirection layer, consider whether a simple function offers just enough indirection without causing an object to behave in a manner that’s inconsistent with how plain objects typically behave in JavaScript.

As such, always prefer boring, static, and declarative code over smart and elegant abstractions. Boring code might be a little more repetitive than using an abstraction, but it will also be simpler, easier to understand, and decidedly a safer bet in the short term.

Abstractions are costly. Once an abstraction is in place, it is often hard to go back and eliminate it. If an abstraction is created too early, it might not cover all common use cases, and we may end up having to handle special cases separately.

When we prefer boring code, patterns flourish gradually and naturally. Once a pattern emerges, then we can decide whether an abstraction is warranted and refactor our code fittingly. A time-honed well-placed abstraction is likely to cover more use cases than it might have covered if we had gone for an abstraction as soon as we had two or three functionally comparable pieces of code.

Asynchronous Code Flows

In [iteration-and-flow-control] we discussed the different ways in which we can manage the complexity of asynchronous operations, how they work, and how we can use them. Callbacks, events, promises, generators, async functions and async iterators, external libraries: the list goes on. You should now be comfortable with how these constructs work, but when should you use them?

Callbacks are the most primitive solution. They require little knowledge beyond basic JavaScript, making callback-based code some of the easiest to read. Callbacks should be approached with care in cases where the flow of operations involves a long dependency chain, as a series of deeply nested asynchronous operations can lead to callback hell.
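The following sketch, built around hypothetical getUser, getOrders, and getTotals tasks, shows the kind of nesting that quickly turns into callback hell.

getUser(userId, (err, user) => {
  if (err) { return done(err) }
  getOrders(user, (err, orders) => {
    if (err) { return done(err) }
    getTotals(orders, (err, totals) => {
      if (err) { return done(err) }
      done(null, totals) // each extra step buries us one level deeper
    })
  })
})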

When it comes to callbacks, libraries like async (a popular flow control library, which you can find on GitHub) can help reduce complexity when we have three or more related tasks that need to be executed asynchronously. Another positive aspect of these libraries is how they unobtrusively interoperate with plain callbacks, which is useful when we have a mix of complex flows that need to be abstracted through the library and simpler flows that we can articulate with plain callbacks.

Events are a cheap way of introducing extensibility into code flows, asynchronous or otherwise. Events don’t lend themselves well to managing the complexity of asynchronous tasks, however.

The following example shows how convoluted our code could become if we wanted to handle asynchronous tasks using events. Half of the lines of code are spent on defining the code flow, and even then the flow is quite hard to understand. This means we probably chose the wrong tool for the job.

const tracker = emitter()
tracker.on('started', multiply)
tracker.on('multiplied', print)
start(256, 512, 1024)
function start(...input) {
  const sum = input.reduce((a, b) => a + b, 0)
  tracker.emit('started', { sum, input })
}
function multiply({ sum, input }) {
  const message = `The sum of ${ input.join('+') } is ${ sum }`
  tracker.emit('multiplied', message)
}
function print(message) {
  console.log(message)
}

Promises were around for a long time, in user libraries, before TC39 decided to bring them into the core JavaScript language. They serve a similar purpose as callback libraries, offering an alternative way of writing asynchronous code flows.

Promises involve a bigger commitment than callbacks: promise chains beget more promises, and those are hard to interleave with plain callbacks. At the same time, we shouldn't want to interleave promises with callback-based code, because that mix leads to convoluted applications. For any given portion of code, it's important to pick one paradigm and stick with it. Relying on a single paradigm produces code that focuses less on the mechanics of flow control and more on the task at hand.

Committing to promises isn't inherently bad; it's merely a cost you need to be aware of. As more and more of the web platform relies on promises as a fundamental building block, they only get better. Promises underlie async functions, async iterators, and async generators. The more we use those constructs, the more synergistic our applications become, and while it could be argued that plain callbacks are already synergistic by nature, they certainly don't compare to the sheer power of async functions and all the promise-based solutions that are now native to the JavaScript language.

Once we commit to promises, the variety of tools at our disposal is comparable to using a library that offers solutions to common flow control problems by relying on callbacks. The difference is that, for the most part, promises don’t require any libraries because they’re native to the language.

We could use iterators to lazily describe sequences that don’t necessarily need to be finite. Further, their asynchronous counterpart could be used to describe sequences that require out-of-band processing, such as GET requests, to produce elements. Those sequences can be consumed by using a for await..of loop, hiding away the complexity of their asynchronous nature.
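As a sketch of that idea, and assuming a fetch-capable environment and hypothetical URLs, an async generator producing such a sequence can be consumed with for await..of as if it were any other sequence.

async function* getPages(...urls) {
  for (const url of urls) {
    const response = await fetch(url)
    yield await response.text()
  }
}
async function printPages() {
  for await (const page of getPages('/news', '/about')) {
    console.log(page)
  }
}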

An iterator is a useful way of describing how an object is iterated to produce a sequence. When there isn't an object to describe, generators offer a way of describing standalone sequences. Implementing an iterator is the ideal way of describing how a Movie object should be iterated, perhaps using Symbol.asyncIterator and fetching information about each credited actor and their roles in the movie. Without the context of a Movie object, however, such an iterator would make more sense as a generator.

Another case where generators are useful is infinite sequences. Consider the following iterator, where we produce an infinite stream of integer numbers.

const integers = value => ({
  value,
  [Symbol.iterator]() {
    return {
      next: () => ({
        value: this.value++
      })
    }
  }
})

You probably remember generators are inherently iterable, meaning they follow the iterator protocol without the need for us to supply an iterator. Now compare the iterable integers object to the equivalent generator function found in the next piece of code.

function* integers(value = 0) {
  while (true) {
    yield value++
  }
}

Not only is the generator code shorter, but it’s also far more readable. The fact that it produces an infinite sequence becomes immediately obvious due to the while loop. The iterable requires us to understand that the sequence is infinite because the code never returns an element with the done: true flag. Setting the seed value is more natural and doesn’t involve wrapping the object in a function that receives the initial parameters.

Promises were originally hailed as a cure for callback hell ailments. Yet programs that rely heavily on promises can still fall into the callback hell trap when we have deeply nested asynchronous series flows. Async functions present an elegant solution to this problem, where we can describe the same promise-based code using await expressions.

Consider the following piece of code.

Promise
  .resolve(2)
  .then(x => x * 2)
  .then(x => x * 2)
  .then(x => x * 2)

When we use an await expression, the expression on its righthand side is coerced into a promise. When an await expression is reached, the async function will pause execution until the promise—​coerced or otherwise—​has been settled. When the promise is fulfilled, then execution in the async function continues, but if the promise is rejected then the rejection will bubble up to the promise returned by the async function call, unless that rejection is suppressed by a catch handler.

async function calculate() {
  let x = 2
  x = await x * 2
  x = await x * 2
  x = await x * 2
  return x
}

The beauty of async/await lies in the fact that it fixes the biggest problem with promises, where you can’t easily mix synchronous code into your flows. At the same time, async functions let you use try/catch, a construct we are unable to leverage when using callbacks. Meanwhile, async/await manages to stay synergistic with promises by using them under the hood, always returning a Promise from every async function and coercing awaited expressions into promises. Moreover, async functions accomplish all of the above while turning asynchronous code into synchronous-looking code.

While using await expressions optimizes toward reducing complexity in serial asynchronous code, it becomes hard to reason about concurrent asynchronous code flows when replacing promises with async/await. This can be mitigated by using await Promise.all(tasks) and firing those tasks concurrently before the await expression is reached. Given, however, that async functions don’t optimize for this use case, reading this kind of code can be confusing, so this is something to look out for. If our code is highly concurrent, we might want to consider a callback-based approach.
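A minimal sketch of that mitigation, assuming hypothetical getUser and getOrders tasks, looks like this.

async function loadProfile(userId) {
  // start both tasks before awaiting, so they run concurrently
  const userTask = getUser(userId)
  const ordersTask = getOrders(userId)
  const [user, orders] = await Promise.all([userTask, ordersTask])
  return { user, orders }
}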

Once again, this leads us to critical thinking. New language features aren’t always necessarily better for all use cases. While sticking to conventions is important so that our code remains consistent and we don’t spend most of our time deciding on how to better represent a small portion of a program, it is also important to have a fine balance.

When we don’t spend at least some of our time figuring out what feature or flow style is the most appropriate for the code we’re writing, we risk treating every problem as a nail because all we have is a hammer. Picking the right tool for the problem at hand is even more important than being a stickler for conventions and hard rules.

Complexity Creep, Abstractions, and Conventions

Picking the right abstractions is hard: we want to reduce complexity in our code flows by introducing complexity that’s hidden away behind the constructs we use. Async functions borrow their foundation from generators. Generator objects are iterable. Async iterators use promises. Iterators are implemented using symbols. Promises use callbacks.

Consistency is an important theme when it comes to maintainable code. An application might mostly use callbacks, or mostly use promises. Individually, both callbacks and promises can be used to reduce complexity in code flows. When mixing them together, however, we need to make sure we don’t introduce context switching where developers reading different pieces of a codebase need to enter different mindsets to understand them.

This is why conventions exist. A strong convention such as "use promises where possible" goes a long way toward augmenting consistency across a codebase. Conventions, more than anything, are what drive readability and maintainability in a codebase. Code is, after all, a communication device used to convey a message. This message is not only relevant to the computers executing the code, but most importantly to developers reading the code, maintaining and improving the application over time.

Without strong conventions, communication breaks down and developers have a hard time understanding how a program works, ultimately leading to reduced productivity.

The vast majority of the time spent working as a software developer is spent reading code. It’s only logical, then, that we pay careful attention to how to write code in such a way that’s optimized for readability.


1. Typography enthusiasts will be quick to point out that straight quotes are typographically incorrect, meaning we should be using “ ” ‘ ’, which don’t lead to escaping. The fact remains that in practice we use straight quotes in code simply because they’re easier to type. Meanwhile, typographic beautification is usually offloaded to utility libraries or a compilation step such as within a Markdown compiler.
2. Currently in stage 3 of the ECMAScript standard development process.
3. An exception should be made for most functional programming languages.