The sixth edition of the language comes with a plethora of non-breaking syntax improvements, most of which we’ll tackle throughout this chapter. Many of these changes are syntactic sugar; that is, they could be represented in ES5, albeit using more complicated pieces of code. There are also changes that aren’t merely syntactic sugar but a completely different way of declaring variables, using let and const, as we’ll see toward the end of the chapter.
Object literals get a few syntax changes in ES6, and they’re a good place to start.
An object literal is any object declaration using the {}
shorthand syntax, such as the following example:
var book = {
title: 'Modular ES6',
author: 'Nicolas',
publisher: 'O\'Reilly'
}
ES6 brings a few improvements to object literal syntax: property value shorthands, computed property names, and method definitions. Let’s go through them and describe their use cases as well.
Sometimes we declare objects with one or more properties whose values are references to variables by the same name. For example, we might have a listeners
collection, and in order to assign it to a property called listeners
of an object literal, we have to repeat its name. The following snippet has a typical example where we have an object literal declaration with a couple of these repetitive properties:
var listeners = []
function listen() {}
var events = {
listeners: listeners,
listen: listen
}
Whenever you find yourself in this situation, you can omit the property value and the colon by taking advantage of the new property value shorthand syntax in ES6. As shown in the following example, the new ES6 syntax makes the assignment implicit:
var listeners = []
function listen() {}
var events = { listeners, listen }
As we’ll further explore in the second part of the book, property value shorthands help de-duplicate the code we write without diluting its meaning. In the following snippet, I reimplemented part of localStorage, a browser API for persistent storage, as an in-memory ponyfill. Like polyfills, ponyfills are user-land implementations of features that aren’t available in every JavaScript runtime. While polyfills try to patch the runtime environment so that it behaves as if the feature were indeed available, ponyfills implement the missing functionality as standalone modules that don’t pollute the runtime environment. This has the benefit of not breaking expectations that third-party libraries (which don’t know about your polyfill) may have about the environment. If it weren’t for the shorthand syntax, the storage object would be more verbose to type out:
var store = {}
var storage = { getItem, setItem, clear }
function getItem(key) {
return key in store ? store[key] : null
}
function setItem(key, value) {
store[key] = value
}
function clear() {
store = {}
}
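For instance, here’s a brief usage sketch of the storage ponyfill we just defined, using an illustrative key and value:
storage.setItem('volume', 11)
console.log(storage.getItem('volume'))
// <- 11
storage.clear()
console.log(storage.getItem('volume'))
// <- null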
That’s the first of many ES6 features that are aimed toward reducing complexity in the code you have to maintain. Once you get used to the syntax, you’ll notice that code readability and developer productivity get boosts as well.
Sometimes you have to declare objects that contain properties with names based on variables or other JavaScript expressions, as shown in the following piece of code written in ES5. For this example, assume that expertise
is provided to you as a function parameter, and is not a value you know beforehand:
var expertise = 'journalism'
var person = {
name: 'Sharon',
age: 27
}
person[expertise] = {
years: 5,
interests: ['international', 'politics', 'internet']
}
Object literals in ES6 aren’t constrained to declarations with static names. With computed property names, you can wrap any expression in square brackets, and use that as the property name. When the declaration is reached, your expression is evaluated and used as the property name. The following example shows how the piece of code we just saw could declare the person object in a single step, without having to resort to a second statement adding the person’s expertise
.
var expertise = 'journalism'
var person = {
name: 'Sharon',
age: 27,
[expertise]: {
years: 5,
interests: ['international', 'politics', 'internet']
}
}
You can’t combine the property value shorthands with computed property names. Value shorthands are simple compile-time syntactic sugar that helps avoid repetition, while computed property names are evaluated at runtime. Given that we’re trying to mix these two incompatible features, the following example would throw a syntax error. In most cases this combination would lead to code that’s hard to interpret for other humans, so it’s probably a good thing that you can’t combine the two.
var expertise = 'journalism'
var journalism = {
years: 5,
interests: ['international', 'politics', 'internet']
}
var person = {
name: 'Sharon',
age: 27,
[expertise] // this is a syntax error!
}
A common scenario for computed property names is when we want to add an entity to an object map that uses the entity.id
field as its keys, as shown next. Instead of having to have a third statement where we add the grocery
to the groceries
map, we can inline that declaration in the groceries
object literal itself.
var grocery = {
id: 'bananas',
name: 'Bananas',
units: 6,
price: 10,
currency: 'USD'
}
var groceries = {
[grocery.id]: grocery
}
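A quick check confirms that the computed key ended up where we expect:
console.log(groceries.bananas === grocery)
// <- true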
Another case may be whenever a function receives a parameter that it should then use to build out an object. In ES5 code, you’d need to allocate a variable declaring an object literal, then add the dynamic property, and then return the object. The following example shows exactly that, when creating an envelope that could later be used for Ajax messages that follow a convention: they have an error
property with a description when something goes wrong, and a success
property when things turn out okay:
function getEnvelope(type, description) {
var envelope = {
data: {}
}
envelope[type] = description
return envelope
}
Computed property names help us write the same function more concisely, using a single statement:
function getEnvelope(type, description) {
return {
data: {},
[type]: description
}
}
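Here’s a hypothetical call following the convention we described, for a case where something went wrong:
console.log(getEnvelope('error', 'Out of bananas'))
// <- { data: {}, error: 'Out of bananas' }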
The last enhancement coming to object literals is about functions.
Typically, you can declare methods on an object by adding properties to it. In the next snippet, we’re creating a small event emitter that supports multiple kinds of events. It comes with an emitter#on
method that can be used to register event listeners, and an emitter#emit
method that can be used to raise events:
var emitter = {
events: {},
on: function (type, fn) {
if (this.events[type] === undefined) {
this.events[type] = []
}
this.events[type].push(fn)
},
emit: function (type, event) {
if (this.events[type] === undefined) {
return
}
this.events[type].forEach(function (fn) {
fn(event)
})
}
}
Starting in ES6, you can declare methods on an object literal using the new method definition syntax. In this case, we can omit the colon and the function
keyword. This is meant as a terse alternative to traditional method declarations where you need to use the function
keyword. The following example shows how our emitter
object looks when using method definitions.
var emitter = {
events: {},
on(type, fn) {
if (this.events[type] === undefined) {
this.events[type] = []
}
this.events[type].push(fn)
},
emit(type, event) {
if (this.events[type] === undefined) {
return
}
this.events[type].forEach(function (fn) {
fn(event)
})
}
}
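Either version of the emitter can be used in the same way. Here’s a quick usage sketch, with an illustrative event type:
emitter.on('greet', word => console.log(word))
emitter.emit('greet', 'hello')
// <- 'hello'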
Arrow functions are another way of declaring functions in ES6, and they come in several flavors. Let’s investigate what arrow functions are, how they can be declared, and how they behave semantically.
In JavaScript you typically declare functions using code like the following, where you have a name, a list of parameters, and a function body.
function name(parameters) {
// function body
}
You could also create anonymous functions, by omitting the name when assigning the function to a variable, a property, or a function call.
var example = function (parameters) {
// function body
}
Starting with ES6, you can use arrow functions as another way of writing anonymous functions. Keep in mind, there are several slightly different ways of writing them. The following piece of code shows an arrow function that’s very similar to the anonymous function we just saw. The only difference seems to be the missing function keyword and the => arrow to the right of the parameter list.
var example = (parameters) => {
// function body
}
While arrow functions look very similar to your typical anonymous function, they are fundamentally different: arrow functions can’t be named explicitly, although modern runtimes can infer a name based on the variable they’re assigned to; they can’t be used as constructors nor do they have a prototype
property, meaning you can’t use new
on an arrow function; and they are bound to their lexical scope, which is the reason why they don’t alter the meaning of this
.
Let’s dig into their semantic differences with traditional functions, the many ways to declare an arrow function, and practical use cases.
In the body of an arrow function, this, arguments, and super refer to the containing scope, since arrow functions don’t get bindings of their own for them. Consider the following example. We have a timer object with a seconds counter and a start method defined using the syntax we learned about earlier. We then start the timer, wait for a few seconds, and log the current amount of elapsed seconds:
var timer = {
seconds: 0,
start() {
setInterval(() => {
this.seconds++
}, 1000)
}
}
timer.start()
setTimeout(function () {
console.log(timer.seconds)
}, 3500)
// <- 3
If we had defined the function passed to setInterval
as a regular anonymous function instead of using an arrow function, this
would’ve been bound to the context of the anonymous function, instead of the context of the start
method. We could have implemented timer
with a declaration like var self = this
at the beginning of the start
method, and then referencing self
instead of this
. With arrow functions, the added complexity of keeping context references around fades away and we can focus on the functionality of our code.
In a similar fashion, lexical binding in ES6 arrow functions also means that function calls won’t be able to change the this
context when using .call
, .apply
, .bind
, etc. That limitation is usually more useful than not, as it ensures that the context will always be preserved and constant.
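Here’s a minimal sketch of that behavior, assuming it runs as a plain script. No matter what context we try to force onto the arrow function with .call or .bind, it answers with the this of its enclosing scope:
var lexicalThis = this
var getThis = () => this
// the provided context object is ignored
console.log(getThis.call({ fake: true }) === lexicalThis)
// <- true
console.log(getThis.bind({ fake: true })() === lexicalThis)
// <- true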
Let’s now shift our attention to the following example. What do you think the console.log
statement will print?
function puzzle() {
return function () {
console.log(arguments)
}
}
puzzle('a', 'b', 'c')(1, 2, 3)
The answer is that arguments
refers to the context of the anonymous function, and thus the arguments passed to that function will be printed. In this case, those arguments are 1, 2, 3
.
What about in the following case, where we use an arrow function instead of the anonymous function in the previous example?
function puzzle() {
return () => console.log(arguments)
}
puzzle('a', 'b', 'c')(1, 2, 3)
In this case, the arguments object refers to the context of the puzzle function, because arrow functions don’t get an arguments object of their own. For this reason, the printed arguments will be 'a', 'b', 'c'.
I’ve mentioned there are several flavors of arrow functions, but so far we’ve only looked at their fully fleshed version. What are the other ways to represent an arrow function?
Let’s look one more time at the arrow function syntax we’ve learned so far:
var example = (parameters) => {
// function body
}
An arrow function with exactly one parameter can omit the parentheses. This is optional. It’s useful when passing the arrow function to another method, as it reduces the amount of parentheses involved, making it easier for some humans to parse the code:
var double = value => {
return value * 2
}
Arrow functions are heavily used for simple functions, such as the double
function we just saw. The following flavor of arrow functions does away with the function body. Instead, you provide an expression such as value * 2
. When the function is called, the expression is evaluated and its result is returned. The return
statement is implicit, and there’s no need for curly braces denoting the function body anymore, as you can only use a single expression:
var double = (value) => value * 2
Note that you can combine implicit parentheses and implicit return, making for concise arrow functions:
var double = value => value * 2
When you need to implicitly return an object literal, you’ll need to wrap that object literal expression in parentheses. Otherwise, the compiler would interpret your curly braces as the start and the end of the function block.
var objectFactory = () => ({ modular: 'es6' })
In the following example, JavaScript interprets the curly braces as the body of our arrow function. Furthermore, number is interpreted as a label. (Labels are a way of identifying instructions: they can be used by break statements, to indicate the sequence we want to break out of; and by continue statements, to indicate the sequence we want to advance.) JavaScript then figures out that we have a value expression that doesn’t do anything. Since we’re in a block and not returning anything, the mapped values will be undefined:
[1, 2, 3].map(value => { number: value })
// <- [undefined, undefined, undefined]
If our attempt at implicitly returning an object literal had more than a single property, then the compiler wouldn’t be able to make sense of the second property, and it’d throw a SyntaxError
:
[1, 2, 3].map(value => { number: value, verified: true })
// <- SyntaxError
Wrapping the expression in parentheses fixes these issues, because the compiler would no longer interpret it as a function block. Instead, the object declaration becomes an expression that evaluates to the object literal we want to return implicitly:
[1, 2, 3].map(value => ({ number: value, verified: true }))
/* <- [
{ number: 1, verified: true },
{ number: 2, verified: true },
{ number: 3, verified: true }]
*/
Now that you understand arrow functions, let’s ponder about their merits and where they might be a good fit.
As a rule of thumb, you shouldn’t blindly adopt ES6 features wherever you can. Instead, it’s best to reason about each case individually and see whether adopting the new feature actually improves code readability and maintainability. ES6 features are not strictly better than what we had all along, and it’s a bad idea to treat them as such.
There are a few situations where arrow functions may not be the best tool. For example, if you have a large function comprised of several lines of code, replacing function with => is hardly going to improve your code. Arrow functions are often most effective for short routines, where the function keyword and syntax boilerplate make up a significant portion of the function expression.
Properly naming a function adds context to make it easier for humans to interpret them. Arrow functions can’t be explicitly named, but they can be named implicitly by assigning them to a variable. In the following example, we assign an arrow function to the throwError
variable. When calling this function results in an error, the stack trace properly identifies the arrow function as throwError
:
var throwError = message => {
throw new Error(message)
}
throwError('this is a warning')
// <- Uncaught Error: this is a warning
//    at throwError
Arrow functions are neat when it comes to defining anonymous functions that should probably be lexically bound anyway, and they can definitely make your code more terse in some situations. They are particularly useful in most functional programming situations, such as when using .map
, .filter
, or .reduce
on collections, as shown in the following example:
[1, 2, 3, 4]
.map(value => value * 2)
.filter(value => value > 2)
.forEach(value => console.log(value))
// <- 4
// <- 6
// <- 8
Destructuring is one of the most flexible and expressive features in ES6, and also one of the simplest. It binds properties to as many variables as you need, and it works with objects, arrays, and even function parameter lists. Let’s go step by step, starting with objects.
Imagine you had a program with some comic book characters, Bruce Wayne being one of them, and you want to refer to properties in the object that describes him. Here’s the example object we’ll be using for Batman:
var character = {
name: 'Bruce',
pseudonym: 'Batman',
metadata: {
age: 34,
gender: 'male'
},
batarang: ['gas pellet', 'bat-mobile control', 'bat-cuffs']
}
If you wanted a pseudonym
variable referencing character.pseudonym
, you could write the following bit of ES5 code. This is commonplace when, for instance, you’ll be referencing pseudonym
in several places in your codebase and you’d prefer to avoid typing out character.pseudonym
each time:
var pseudonym = character.pseudonym
With destructuring in assignment, the syntax becomes a bit more clear. As you can see in the next example, you don’t have to write pseudonym
twice, while still clearly conveying intent. The following statement is equivalent to the previous one written in ES5 code:
var { pseudonym } = character
Just like you could declare multiple comma-separated variables with a single var
statement, you can also declare multiple variables within the curly braces of a destructuring expression:
var { pseudonym, name } = character
In a similar fashion, you could mix and match destructuring with regular variable declarations in the same var
statement. While this might look a bit confusing at first, it’ll be up to any JavaScript coding style guides you follow to determine whether it’s appropriate to declare several variables in a single statement. In any case, it goes to show the flexibility offered by destructuring syntax:
var { pseudonym } = character, two = 2
If you want to extract a property named pseudonym
but would like to declare it as a variable named alias
, you can use the following destructuring syntax, known as aliasing. Note that you can use alias
or any other valid variable name:
var { pseudonym: alias } = character
console.log(alias)
// <- 'Batman'
While aliases don’t look any simpler than the ES5 flavor, alias = character.pseudonym
, they start making sense when you consider the fact that destructuring supports deep structures, as in the following example:
var { metadata: { gender } } = character
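The declared variable holds the nested value:
console.log(gender)
// <- 'male'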
In cases like the previous one, where you have deeply nested properties being destructured, you might be able to convey a property name more clearly if you choose an alias. Consider the next snippet, where a variable named gender wouldn’t have been as indicative of its contents as an alias like characterGender could be:
var { metadata: { gender: characterGender } } = character
The scenario we just saw repeats itself frequently, because properties are often named in the context of their host object. While character.metadata.gender is perfectly descriptive, gender on its own could mean a wide variety of things, and aliases such as characterGender can help you bring context back into the variable name while still using destructuring.
Whenever you access a nonexistent property in ES5 notation, you get a value of undefined
:
console.log(character.boots)
// <- undefined
console.log(character['boots'])
// <- undefined
With destructuring, the same behavior prevails. When declaring a destructured variable for a property that’s missing, you’ll get back undefined
as well.
var { boots } = character
console.log(boots)
// <- undefined
A destructured declaration accessing a nested property of a parent object that’s null or undefined will throw an exception, just like any regular attempt to access a property of null or undefined would.
var { boots: { size } } = character
// <- Exception
var { missing } = null
// <- Exception
When you think of that piece of code as the equivalent ES5 code shown next, it becomes evident why the expression must throw, given that destructuring is mostly syntactic sugar.
var nothing = null
var missing = nothing.missing
// <- Exception
As part of destructuring, you can provide default values for those cases where the value is undefined
. The default value can be anything you can think of: numbers, strings, functions, objects, a reference to another variable, etc.
var { boots = { size: 10 } } = character
console.log(boots)
// <- { size: 10 }
Default values can also be provided in nested property destructuring.
var { metadata: { enemy = 'Satan' } } = character
console.log(enemy)
// <- 'Satan'
For use in combination with aliases, you should place the alias first, and then the default value, as shown next.
var { boots: footwear = { size: 10 } } = character
It’s possible to use the computed property names syntax in destructuring patterns. In this case, however, you’re required to provide an alias to be used as the variable name. That’s because computed property names allow arbitrary expressions and thus the compiler wouldn’t be able to infer a variable name. In the following example we use the value
alias, and a computed property name to extract the boots
property from the character
object.
var { ['boo' + 'ts']: characterBoots } = character
console.log(characterBoots)
// <- undefined
This flavor of destructuring is probably the least useful, as characterBoots = character[type]
is usually simpler than { [type]: characterBoots } = character
, as it’s a more sequential statement. That being said, the feature is useful when you have properties you want to declare in the object literal, as opposed to using subsequent assignment statements.
That’s it, as far as objects go, in terms of destructuring. What about arrays?
The syntax for destructuring arrays is similar to that of objects. The following example shows a coordinates array that’s destructured into two variables: x and y. Note how the notation uses square brackets instead of curly braces; this denotes we’re using array destructuring instead of object destructuring. Instead of having to sprinkle your code with implementation details like x = coordinates[0], destructuring lets you convey your meaning clearly, without explicitly referencing the indices, naming the values instead.
var coordinates = [12, -7]
var [x, y] = coordinates
console.log(x)
// <- 12
When destructuring arrays, you can skip uninteresting properties or those that you otherwise don’t need to reference.
var names = ['James', 'L.', 'Howlett']
var [ firstName, , lastName ] = names
console.log(lastName)
// <- 'Howlett'
Array destructuring allows for default values just like object destructuring.
var names = ['James', 'L.']
var [ firstName = 'John', , lastName = 'Doe' ] = names
console.log(lastName)
// <- 'Doe'
In ES5, when you have to swap the values of two variables, you typically resort to a third, temporary variable, as in the following snippet.
var left = 5
var right = 7
var aux = left
left = right
right = aux
Destructuring helps you avoid the aux declaration and focus on your intent. Once again, destructuring helps us convey intent more tersely and effectively for the use case. Note the leading semicolon in the following snippet: in semicolon-free style it prevents the opening square bracket from being interpreted as an index into the expression on the previous line.
var left = 5
var right = 7
;[left, right] = [right, left]
The last area of destructuring we’ll be covering is function parameters.
Function parameters in ES6 enjoy the ability of specifying default values as well. The following example defines a default exponent
with the most commonly used value.
function powerOf(base, exponent = 2) {
return Math.pow(base, exponent)
}
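For example, omitting exponent falls back to the default value of 2:
console.log(powerOf(4))
// <- 16
console.log(powerOf(2, 3))
// <- 8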
Defaults can be applied to arrow function parameters as well. When we have default values in an arrow function we must wrap the parameter list in parentheses, even when there’s a single parameter.
var double = (input = 0) => input * 2
Default values aren’t limited to the rightmost parameters of a function, as in a few other programming languages. You could provide default values for any parameter, in any position.
function sumOf(a = 1, b = 2, c = 3) {
return a + b + c
}
console.log(sumOf(undefined, undefined, 4))
// <- 1 + 2 + 4 = 7
In JavaScript it’s not uncommon to provide a function with an options
object, containing several properties. You could determine a default options
object if one isn’t provided, as shown in the next snippet.
var defaultOptions = { brand: 'Volkswagen', make: 1999 }
function carFactory(options = defaultOptions) {
console.log(options.brand)
console.log(options.make)
}
carFactory()
// <- 'Volkswagen'
// <- 1999
The problem with this approach is that as soon as the consumer of carFactory
provides an options
object, you lose all of your defaults.
carFactory({ make: 2000 })
// <- undefined
// <- 2000
We can mix function parameter default values with destructuring, and get the best of both worlds.
A better approach than merely providing a default value might be to destructure options
entirely, providing default values for each property, individually, within the destructuring pattern. This approach also lets you reference each option without going through an options
object, but you lose the ability to reference options
directly, which might represent an issue in some situations.
function carFactory({ brand = 'Volkswagen', make = 1999 }) {
console.log(brand)
console.log(make)
}
carFactory({ make: 2000 })
// <- 'Volkswagen'
// <- 2000
In this case, however, we’ve once again lost the default value for the case where the consumer doesn’t provide any options
. Meaning carFactory()
will now throw when an options
object isn’t provided. This can be remedied by using the syntax shown in the following snippet of code, which adds a default options
value of an empty object. The empty object is then filled, property by property, with the default values on the destructuring pattern.
function carFactory({
brand = 'Volkswagen',
make = 1999
} = {}) {
console.log(brand)
console.log(make)
}
carFactory()
// <- 'Volkswagen'
// <- 1999
Besides default values, you can use destructuring in function parameters to describe the shape of objects your function can handle. Consider the following code snippet, where we have a car
object with several properties. The car
object describes its owner, what kind of car it is, who manufactured it, when, and the owner’s preferences when he purchased the car.
var car = {
owner: {
id: 'e2c3503a4181968c',
name: 'Donald Draper'
},
brand: 'Peugeot',
make: 2017,
model: '208',
preferences: {
airbags: true,
airconditioning: false,
color: 'red'
}
}
If we wanted to implement a function that only takes into account certain properties of a parameter, it might be a good idea to reference those properties explicitly by destructuring up front. The upside is that we become aware of every required property upon reading the function’s signature.
When we destructure everything up front, it’s easy to spot when input doesn’t adhere to the contract of a function. The following example shows how every property we need could be specified in the parameter list, laying bare the shape of the objects we can handle in the getCarProductModel
API.
var getCarProductModel = ({ brand, make, model }) => ({
sku: brand + ':' + make + ':' + model,
brand,
make,
model
})
getCarProductModel(car)
Besides default values and filling an options
object, let’s explore what else destructuring is good at.
Whenever there’s a function that returns an object or an array, destructuring makes it much terser to interact with. The following example shows a function that returns an object with some coordinates, where we grab only the ones we’re interested in: x
and y
. We’re avoiding an intermediate point
variable declaration that often gets in the way without adding a lot of value to the readability of your code.
function getCoordinates() {
return { x: 10, y: 22, z: -1, type: '3d' }
}
var { x, y } = getCoordinates()
The case for default option values bears repeating. Imagine you have a random function that produces random integers between a min and a max value, inclusive, and that it should default to values between 1 and 10. This pattern, where you’re able to define default values for options and then let consumers override them individually, offers great flexibility, and makes for an interesting alternative to the named parameters found in languages such as Python and C#.
function random({ min = 1, max = 10 } = {}) {
return Math.floor(Math.random() * (max - min + 1)) + min
}
console.log(random())
// <- 7
console.log(random({ max: 24 }))
// <- 18
Regular expressions are another great fit for destructuring. Destructuring empowers you to name groups from a match without having to resort to index numbers. Here’s an example RegExp that could be used for parsing simple dates, and an example of destructuring those dates into each of their components. The first entry in the resulting array is reserved for the full matched string, and we can discard it.
function splitDate(date) {
var rdate = /(\d+).(\d+).(\d+)/
return rdate.exec(date)
}
var [ , year, month, day] = splitDate('2017-11-06')
You’ll want to be careful when the regular expression doesn’t match, as that returns null. Perhaps a better approach would be to test for the failure case before destructuring. In the following bit of code, the logic is wrapped in a function so that we can bail out early when there’s no match.
function getDateComponents(date) {
  var matches = splitDate(date)
  if (matches === null) {
    return null
  }
  var [, year, month, day] = matches
  return { year, month, day }
}
Let’s turn our attention to spread and rest operators next.
Before ES6, interacting with an arbitrary amount of function parameters was complicated. You had to use arguments, which isn’t an array but has a length property. Usually you’d end up casting the arguments object into an actual array using Array.prototype.slice.call, and going from there, as shown in the following snippet.
function join() {
var list = Array.prototype.slice.call(arguments)
return list.join(', ')
}
join('first', 'second', 'third')
// <- 'first, second, third'
ES6 has a better solution to the problem, and that’s rest parameters.
You can now precede the last parameter in any JavaScript function with three dots, converting it into a special "rest parameter." When the rest parameter is the only parameter in a function, it gets all arguments passed to the function: it works just like the .slice
solution we saw earlier, but you avoid the need for a complicated construct like arguments
, and it’s specified in the parameter list.
function join(...list) {
return list.join(', ')
}
join('first', 'second', 'third')
// <- 'first, second, third'
Named parameters before the rest parameter won’t be included in the list
.
function join(separator, ...list) {
return list.join(separator)
}
join('; ', 'first', 'second', 'third')
// <- 'first; second; third'
Note that arrow functions with a rest parameter must include parentheses, even when it’s the only parameter. Otherwise, a SyntaxError
would be thrown. The following piece of code is a beautiful example of how combining arrow functions and rest parameters can yield concise functional expressions.
var sumAll = (...numbers) => numbers.reduce(
(total, next) => total + next
)
console.log(sumAll(1, 2, 5))
// <- 8
Compare that with the ES5 version of the same function. Granted, it’s all in the complexity. While terse, the sumAll
function can be confusing to readers unused to the .reduce
method, or because it uses two arrow functions. This is a complexity trade-off that we’ll cover in the second part of the book.
function sumAll() {
var numbers = Array.prototype.slice.call(arguments)
return numbers.reduce(function (total, next) {
return total + next
})
}
console.log(sumAll(1, 2, 5))
// <- 8
Next up we have the spread operator. It’s also denoted with three dots, but it serves a slightly different purpose. The spread operator can be used to cast any iterable object into an array. Spreading effectively expands an expression onto a target such as an array literal or a function call. The following example uses ...arguments to cast function parameters into an array literal.
function cast() {
return [...arguments]
}
cast('a', 'b', 'c')
// <- ['a', 'b', 'c']
We could use the spread operator to split a string into an array with each code point that makes up the string.
[...'show me']
// <- ['s', 'h', 'o', 'w', ' ', 'm', 'e']
You can place additional elements to the left and to the right of a spread operation and still get the result you would expect.
function cast() {
return ['left', ...arguments, 'right']
}
cast('a', 'b', 'c')
// <- ['left', 'a', 'b', 'c', 'right']
Spread is a useful way of combining multiple arrays. The following example shows how you can spread arrays anywhere into an array literal, expanding their elements into place.
var all = [1, ...[2, 3], 4, ...[5], 6, 7]
console.log(all)
// <- [1, 2, 3, 4, 5, 6, 7]
Note that the spread operator isn’t limited to arrays and arguments
. The spread operator can be used with any iterable object. Iterable is a protocol in ES6 that allows you to turn any object into something that can be iterated over. We’ll research the iterable protocol in [iteration-and-flow-control].
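As a small taste of that, consider Set, one of the new built-ins covered later in the book. A Set is iterable, so it can be spread onto an array literal just the same, which also makes for a handy way of dropping duplicates:
var unique = [...new Set(['a', 'b', 'a'])]
console.log(unique)
// <- ['a', 'b']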
When you want to extract an element or two from the beginning of an array, the common approach is to use .shift
. While functional, the following snippet of code can be hard to understand at a glance, because it uses .shift
twice to grab a different item from the beginning of the list
each time. The focus is, like in many other pre-ES6 situations, placed on getting the language to do what we want.
var list = ['a', 'b', 'c', 'd', 'e']
var first = list.shift()
var second = list.shift()
console.log(first)
// <- 'a'
In ES6, you can combine spread with array destructuring. The following piece of code is similar to the preceding one, except we’re using a single line of code, and that single line is more descriptive of what we’re doing than repeatedly calling list.shift()
in the previous example.
var [first, second, ...other] = ['a', 'b', 'c', 'd', 'e']
console.log(other)
// <- ['c', 'd', 'e']
Using the spread operator you can focus on implementing the functionality you need while the language stays out of the way. Improving expressiveness and decreasing time spent working around language limitations is a common pattern we can observe in ES6 features.
Before ES6, whenever you had a dynamic list of arguments that needed to be applied to a function call, you’d use .apply
. This is inelegant because .apply
also takes a context for this
, which, in this scenario, you don’t want to concern yourself with.
fn.apply(null, ['a', 'b', 'c'])
Besides spreading onto arrays, you can also spread items onto function calls. The following example shows how you could use the spread operator to pass an arbitrary number of arguments to the multiply
function.
function multiply(left, right) {
return left * right
}
var result = multiply(...[2, 3])
console.log(result)
// <- 6
Spreading arguments onto a function call can be combined with regular arguments as much as necessary, just like with array literals. The next example calls print
with a couple of regular arguments and a couple of arrays being spread over the parameter list. Note how conveniently the rest list
parameter matches all the provided arguments. Spread and rest can help make code intent more clear without diluting your codebase.
function print(...list) {
console.log(list)
}
print(1, ...[2, 3], 4, ...[5])
// <- [1, 2, 3, 4, 5]
Another limitation of .apply
is that combining it with the new
keyword, when instantiating an object, becomes very verbose. Here’s an example of combining new
and .apply
to create a Date
object. Ignore for a moment that months in JavaScript dates are zero-based, turning 11
into December, and consider how much of the following line of code is spent bending the language in our favor, just to instantiate a Date
object.
new (Date.bind.apply(Date, [null, 2017, 11, 31]))
// <- Thu Dec 31 2017
As shown in the next snippet, the spread operator strips away all the complexity and we’re only left with the important bits. It’s a new instance, it uses ... to spread a dynamic list of arguments over the function call, and it’s a Date. That’s it.
new Date(...[2017, 11, 31])
// <- Thu Dec 31 2017
The following table summarizes the use cases we’ve discussed for the spread operator.
Use case | ES5 | ES6
---|---|---
Concatenation | list.concat(more) | [...list, ...more]
Push an array onto list | list.push.apply(list, items) | list.push(...items)
Destructuring | a = list[0], rest = list.slice(1) | [a, ...rest] = list
new and apply | new (Date.bind.apply(Date, [null, 2017, 11, 31])) | new Date(...[2017, 11, 31])
Template literals are a vast improvement upon regular JavaScript strings. Instead of using single or double quotes, template literals are declared using backticks, as shown next.
var text = `This is my first template literal`
Given that template literals are delimited by backticks, you’re now able to declare strings with both '
and "
quotation marks in them without having to escape either, as shown here.
var text = `I'm "amazed" at these opportunities!`
One of the most appealing features of template literals is their ability to interpolate JavaScript expressions.
With template literals, you’re able to interpolate any JavaScript expressions inside your templates. When the template literal expression is reached, it’s evaluated and you get back the compiled result. The following example interpolates a name
variable into a template literal.
var name = 'Shannon'
var text = `Hello, ${ name }!`
console.log(text)
// <- 'Hello, Shannon!'
We’ve already established that you can use any JavaScript expressions, and not just variables. You can think of each expression in a template literal as defining a variable before the template runs, and then concatenating each variable with the rest of the string. However, the code becomes easier to maintain because it doesn’t involve manually concatenating strings and JavaScript expressions. The variables you use in those expressions, the functions you call, and so on, should all be available to the current scope.
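To illustrate, the interpolation we just saw is roughly equivalent to the following ES5 concatenation:
var name = 'Shannon'
var text = 'Hello, ' + name + '!'
console.log(text)
// <- 'Hello, Shannon!'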
It will be up to your coding style guides to decide how much logic you want to cram into the interpolation expressions. The following code snippet, for example, instantiates a Date
object and formats it into a human-readable date inside a template literal.
`The time and date is ${ new Date().toLocaleString() }.`
// <- 'The time and date is 8/26/2017, 3:15:20 PM.'
You could interpolate mathematical operations.
`The result of 2+3 equals ${ 2 + 3 }`
// <- 'The result of 2+3 equals 5'
You could even nest template literals, as they are also valid JavaScript expressions.
`This template literal ${ `is ${ 'nested' }` }!`
// <- 'This template literal is nested!'
Another perk of template literals is their multiline string representation support.
Before template literals, if you wanted to represent strings in multiple lines of JavaScript, you had to resort to escaping, concatenation, arrays, or even elaborate hacks using comments. The following snippet summarizes some of the most common multiline string representations prior to ES6.
var escaped =
'The first line\n\
A second line\n\
Then a third line'
var concatenated =
'The first line\n' +
'A second line\n' +
'Then a third line'
var joined = [
'The first line',
'A second line',
'Then a third line'
].join('\n')
Under ES6, you could use backticks instead. Template literals support multiline strings by default. Note how there are no \n
escapes, no concatenation, and no arrays involved.
var multiline =
`The first line
A second line
Then a third line`
Multiline strings really shine when you have, for instance, a chunk of HTML you want to interpolate some variables into. If you need to display a list within the template, you could iterate the list, mapping its items into the corresponding markup, and then return the joined result from an interpolated expression. This makes it a breeze to declare subcomponents within your templates, as shown in the following piece of code.
var book = {
title: 'Modular ES6',
excerpt: 'Here goes some properly sanitized HTML',
tags: ['es6', 'template-literals', 'es6-in-depth']
}
var html = `<article>
<header>
<h1>${ book.title }</h1>
</header>
<section>${ book.excerpt }</section>
<footer>
<ul>
${
book.tags
.map(tag => `<li>${ tag }</li>`)
.join('\n ')
}
</ul>
</footer>
</article>`
The template we’ve just prepared would produce output like what’s shown in the following snippet of code. Note how spacing was preserved, and how <li> tags are properly indented thanks to how we joined them together using a few spaces.
<article>
<header>
<h1>Modular ES6</h1>
</header>
<section>Here goes some properly sanitized HTML</section>
<footer>
<ul>
<li>es6</li>
<li>template-literals</li>
<li>es6-in-depth</li>
</ul>
</footer>
</article>
A downside when it comes to multiline template literals is indentation. The following example shows a typically indented piece of code with a template literal contained in a function. While we may have expected no indentation, the string has four spaces of indentation.
function getParagraph() {
  return `
    Dear Rod,
    This is a template literal string that's indented
    four spaces. However, you may have expected for it
    to be not indented at all.
    Nico
  `
}
While not ideal, we could get away with a utility function to remove indentation from each line in the resulting string.
function unindent(text) {
return text
.split('\n')
.map(line => line.slice(4))
.join('\n')
.trim()
}
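Applying that utility to the previous example would yield the dedented text, assuming the template is indented with exactly four spaces as shown:
console.log(unindent(getParagraph()))
// <- 'Dear Rod,
// This is a template literal string that's indented
// four spaces. However, you may have expected for it
// to be not indented at all.
// Nico'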
Sometimes, it might be a good idea to pre-process the results of interpolated expressions before inserting them into your templates. For these advanced kinds of use cases, it’s possible to use another feature of template literals called tagged templates.
By default, JavaScript interprets \
as an escape character with special meaning. For example, \n
is interpreted as a newline, \u00f1
is interpreted as ñ
, etc. You could avoid these rules using the String.raw
tagged template. The next snippet shows a template literal using String.raw
, which prevents \n
from being interpreted as a newline.
var text = String.raw`"\n" is taken literally.
It'll be escaped instead of interpreted.`
console.log(text)
// "\n" is taken literally.
// It'll be escaped instead of interpreted.
The String.raw
prefix we’ve added to our template literal is a tagged template. It’s used to parse the template. Tagged templates receive a parameter with an array containing the static parts of the template, as well as the result of evaluating each expression, each in its own parameter.
As an example, consider the tagged template literal in the next code snippet.
tag`Hello, ${ name }. I am ${ emotion } to meet you!`
That tagged template expression would, in practice, be translated into the following function call.
tag(
['Hello, ', '. I am ', ' to meet you!'],
'Maurice',
'thrilled'
)
The resulting string is built by taking each part of the template and placing one of the expressions next to it, until there are no more parts of the template left. It might be hard to interpret the argument list without looking at a potential implementation of the default template literal tag
, so let’s do that.
The following snippet of code shows a possible implementation of the default tag
. It provides the same functionality as a template literal does when a tagged template isn’t explicitly provided. It reduces the parts
array into a single value, the result of evaluating the template literal. The result is initialized with the first part
, and then each other part
of the template is preceded by one of the values
. We’ve used the rest parameter syntax for …values
in order to make it easier to grab the result of evaluating each expression in the template. We’re using an arrow function with an implicit return
statement, given that its expression is relatively simple.
function tag(parts, ...values) {
return parts.reduce(
(all, part, index) => all + values[index - 1] + part
)
}
You can try the tag
template using code like in the following snippet. You’ll notice you get the same output as if you omitted tag
, since we’re copying the default behavior.
var name = 'Maurice'
var emotion = 'thrilled'
var text = tag`Hello, ${ name }. I am ${ emotion } to meet you!`
console.log(text)
// <- 'Hello, Maurice. I am thrilled to meet you!'
Multiple use cases apply to tagged templates. One possible use case might be to make user input uppercase, making the string sound satirical. That’s what the following piece of code does. We’ve modified tag
slightly so that any interpolated strings are uppercased.
function upper(parts, ...values) {
return parts.reduce((all, part, index) =>
all + values[index - 1].toUpperCase() + part
)
}
var name = 'Maurice'
var emotion = 'thrilled'
upper`Hello, ${ name }. I am ${ emotion } to meet you!`
// <- 'Hello, MAURICE. I am THRILLED to meet you!'
A decidedly more useful use case would be to sanitize expressions interpolated into your templates, automatically, using a tagged template. Given a template where all expressions are considered user input, we could use a hypothetical sanitize
library to remove HTML tags and similar hazards, preventing cross-site scripting (XSS) attacks where users might inject malicious HTML into our websites.
function sanitized(parts, ...values) {
return parts.reduce((all, part, index) =>
all + sanitize(values[index - 1]) + part
)
}
var comment = 'Evil comment<iframe src="http://evil.corp"></iframe>'
var html = sanitized`<div>${ comment }</div>`
console.log(html)
// <- '<div>Evil comment</div>'
Phew, that malicious <iframe>
almost got us. Rounding out ES6 syntax changes, we have the let
and const
statements.
The let
statement is one of the most well-known features in ES6. It works like a var
statement, but it has different scoping rules.
JavaScript has always had a complicated ruleset when it comes to scoping, driving many programmers crazy when they were first trying to figure out how variables work in JavaScript. Eventually, you discover hoisting, and JavaScript starts making a bit more sense to you. Hoisting means that variables get pulled from anywhere they were declared in user code to the top of their scope. For example, see the following code.
function isItTwo(value) {
if (value === 2) {
var two = true
}
return two
}
isItTwo(2)
// <- true
isItTwo('two')
// <- undefined
JavaScript code like this works, even though two
was declared in a code branch and then accessed outside of said branch. That behavior is due to the fact that var
bindings are bound to the enclosing scope, be it a function or the global scope. That, coupled with hoisting, means that the code we’ve written earlier will be interpreted as if it were written in a similar way to the next piece of code.
function isItTwo(value) {
var two
if (value === 2) {
two = true
}
return two
}
Whether we like it or not, hoisting is more confusing than having block-scoped variables would be. Block scoping works on the curly braces level, rather than the function level.
Instead of having to declare a new function
if we want a deeper scoping level, block scoping allows you to just leverage existing code branches like those in if
, for
, or while
statements; you could also create new {}
blocks arbitrarily. As you may or may not know, the JavaScript language allows us to create an indiscriminate number of blocks, just because we want to.
{{{{{ var deep = 'This is available from outer scope.'; }}}}}
console.log(deep)
// <- 'This is available from outer scope.'
With var
, because of lexical scoping, one could still access the deep
variable from outside those blocks, and not get an error. Sometimes it can be very useful to get errors in these situations, particularly if one or more of the following is true:
- Accessing the inner variable breaks some sort of encapsulation principle in our code
- The inner variable doesn’t belong in the outer scope at all
- The block in question has many siblings that would also want to use the same variable name
- One of the parent blocks already has a variable with the name we need, but the name is still appropriate to use in the inner block
The let
statement is an alternative to var
. It follows block scoping rules instead of the default lexical scoping rules. With var
, the only way of getting a deeper scope is to create a nested function, but with let
you can just open another pair of curly braces. This means you don’t need entirely new functions to get a new scope; a simple {}
block will do.
let topmost = {}
{
let inner = {}
{
let innermost = {}
}
// attempts to access innermost here would throw
}
// attempts to access inner here would throw
// attempts to access innermost here would throw
One useful aspect of let
statements is that you can use them when declaring a for
loop, and variables will be scoped to the contents of the loop, as shown next.
for (let i = 0; i < 2; i++) {
console.log(i)
// <- 0
// <- 1
}
console.log(i)
// <- i is not defined
Given let
variables declared in a loop are scoped to each step in the loop, the bindings would work as expected in combination with an asynchronous function call, as opposed to what we’re used to with var
. Let’s look at concrete examples.
First, we’ll look at the typical example of how var scoping works. The i binding is scoped to the printNumbers function, and its value increases all the way to 10 as each timeout callback is scheduled. By the time each callback runs, one every 100 milliseconds, i has a value of 10, and thus that’s what’s printed every single time.
function printNumbers() {
for (var i = 0; i < 10; i++) {
setTimeout(function () {
console.log(i)
}, i * 100)
}
}
printNumbers()
Using let
, in contrast, binds the variable to the block’s scope. Indeed, each step in the loop still increases the value of the variable, but a new binding is created each step of the way, meaning that each timeout callback will hold a reference to the binding holding the value of i
at the point when the callback was scheduled, printing every number from 0
through 9
as expected.
function printNumbers() {
for (let i = 0; i < 10; i++) {
setTimeout(function () {
console.log(i)
}, i * 100)
}
}
printNumbers()
One more thing of note about let
is a concept called the "Temporal Dead Zone."
In so many words: if you have code such as the following code snippet, it’ll throw. Once execution enters a scope, and until a let
statement is reached, attempting to access the variable for said let
statement will throw. This is known as the Temporal Dead Zone (TDZ).
{
console.log(name)
// <- ReferenceError: name is not defined
let name = 'Stephen Hawking'
}
If your code tries to access name
in any way before the let name
statement is reached, the program will throw. Declaring a function that references name
before it’s defined is okay, as long as the function doesn’t get executed while name
is in the TDZ, and name
will be in the TDZ until the let name
statement is reached. This snippet won’t throw because return name
isn’t executed until after name
leaves the TDZ.
function readName() {
return name
}
let name = 'Stephen Hawking'
console.log(readName())
// <- 'Stephen Hawking'
But the following snippet will, because access to name
occurs before leaving the TDZ for name
.
function readName() {
return name
}
console.log(readName())
// ReferenceError: name is not defined
let name = 'Stephen Hawking'
Note that the semantics for these examples don’t change when name
isn’t actually assigned a value when initially declared. The next snippet throws as well, as it still tries to access name
before leaving the TDZ.
function readName() {
return name
}
console.log(readName())
// ReferenceError: name is not defined
let name
The following bit of code works because it leaves the TDZ before accessing name
in any way.
function readName() {
return name
}
let name
console.log(readName())
// <- undefined
The only tricky part to remember is that it’s okay to declare functions that access a variable in the TDZ as long as the statements accessing TDZ variables aren’t reached before the let
declaration is reached.
The whole point of the TDZ is to make it easier to catch errors where accessing a variable before it’s declared in user code leads to unexpected behavior. This happened a lot before ES6 due both to hoisting and poor coding conventions. In ES6 it’s easier to avoid. Keep in mind that hoisting still applies for let
as well. That means variables will be created when we enter the scope, and the TDZ will be born, but they will be inaccessible until code execution hits the place where the variable was actually declared, at which point we leave the TDZ and are allowed to access the variable.
We made it through the Temporal Dead Zone! It’s now time to cover const
, a similar statement to let
but with a few major differences.
The const
statement is block scoped like let
, and it follows TDZ semantics as well. In fact, TDZ semantics were implemented because of const
, and then TDZ was also applied to let
for consistency. The reason why const
needed TDZ semantics is that it would otherwise have been possible to assign a value to a hoisted const
variable before reaching the const
declaration, meaning that the declaration itself would throw. The temporal dead zone defines a solution that solves the problem of making const
assignment possible only at declaration time, helps avoid potential issues when using let
, and also makes it easy to eventually implement other features that benefit from TDZ semantics.
The following snippet shows how const
follows block scoping rules exactly like let
.
const pi = 3.1415
{
const pi = 6
console.log(pi)
// <- 6
}
console.log(pi)
// <- 3.1415
We’ve mentioned major differences between let and const. The first one is that const variables must be declared using an initializer; a const declaration without one is a syntax error, as shown in the following snippet.
const pi = 3.1415
const e // SyntaxError, missing initializer
Besides the assignment when initializing a const, variables declared using a const statement can’t be assigned to. Once a const is initialized, you can’t change its value. Attempts to change a const variable will throw a TypeError, as demonstrated by the following piece of code.
const people = ['Tesla', 'Musk']
people = []
// <- TypeError: Assignment to constant variable.
Note that creating a const
variable doesn’t mean that the assigned value becomes immutable. This is a common source of confusion, and it is strongly recommended that you pay attention when reading the following warning.
Using const
only means that the variable will always have a reference to the same object or primitive value, because that reference can’t change. The reference itself is immutable, but the value held by the variable does not become immutable.
The following example shows that even though the people
reference couldn’t be changed, the array itself can indeed be modified. If the array were immutable, this wouldn’t be possible.
const people = ['Tesla', 'Musk']
people.push('Berners-Lee')
console.log(people)
// <- ['Tesla', 'Musk', 'Berners-Lee']
A const
statement only prevents the variable binding from referencing a different value. Another way of representing that difference is the following piece of code, where we create a people
variable using const
, and later assign that variable to a plain var humans
binding. We can reassign the humans
variable to reference something else, because it wasn’t declared using const
. However, we can’t reassign people
to reference something else, because it was created using const
.
const people = ['Tesla', 'Musk']
var humans = people
humans = 'evil'
console.log(humans)
// <- 'evil'
If our goal was to make the value immutable, then we’d have to use a function such as Object.freeze
. Using Object.freeze
prevents extensions to the provided object, as represented in the following code snippet.
const frozen = Object.freeze(
['Ice', 'Icicle', 'Ice cube']
)
frozen.push('Water')
// Uncaught TypeError: Can't add property 3
// object is not extensible
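Note that Object.freeze is shallow: objects nested inside a frozen object can still be modified, as the following small sketch shows.
const menu = Object.freeze({ drinks: ['Water'] })
// the menu binding is frozen, but the nested array is not
menu.drinks.push('Lemonade')
console.log(menu.drinks)
// <- ['Water', 'Lemonade']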
Let’s take a moment to discuss the merits of const
and let
.
New features should never be used for the sake of using new features. ES6 features should be used where they genuinely improve code readability and maintainability. The let
statement is able to, in many cases, simplify pieces of code where you’d otherwise declare var
statements at the top of a function just so that hoisting doesn’t produce unexpected results. Using the let
statement you’d be able to place your declarations at the top of a code block, instead of the top of the whole function, reducing the latency in mental trips to the top of the scope.
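As a sketch of that idea, the following hypothetical function declares a helper binding right where it’s needed, scoped to the branch that uses it:
function getLength(input) {
  if (typeof input === 'string') {
    // trimmed is only relevant, and only exists, within this branch
    let trimmed = input.trim()
    return trimmed.length
  }
  return input.length
}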
Using the const
statement is a great way to prevent accidents. The following piece of code is a plausibly error-prone scenario where we pass a reference to an items
variable off to a checklist
function, which then returns a todo
API that in turn interacts with said items
reference. When the items
variable is changed to reference another list of items, we’re in for a world of hurt—the todo
API still works with the value items
used to have, but items
is referencing something else now.
var items = ['a', 'b', 'c']
var todo = checklist(items)
todo.check()
console.log(items)
// <- ['b', 'c']
items = ['d', 'e']
todo.check()
console.log(items)
// <- ['d', 'e'], would be ['c'] if items had been constant
function checklist(items) {
return {
check: () => items.shift()
}
}
This type of problem is hard to debug because it might take a while until you figure out that the reference was modified. The const statement helps prevent this scenario by producing a runtime error, which should help capture the bug soon after it’s introduced.
A similar benefit of using the const
statement is its ability to visually identify variables that aren’t reassigned. The const
cue signals that the variable binding is read-only and thus we have one less thing to worry about when reading a piece of code.
If we choose to default to using const and use let for variables that need to be reassigned, all variables will follow the same scoping rules, which makes code easier to reason about. The reason why const is sometimes proposed as the "default" variable declaration type is that it’s the one that does the least: const prevents reassignment, follows block scoping, and the declared binding can’t be accessed before the declaration statement is executed. The let statement allows reassignment but otherwise behaves just like const, so it naturally follows to choose let when we’re in need of a reassignable variable.
On the counter side, var
is a more complex declaration because it is hard to use in code branches due to function scoping rules, it allows reassignment, and it can be accessed before the declaration statement is reached. The var
statement is inferior to const
and let
, which do less, and is thus less prominent in modern JavaScript codebases.
Throughout this book, we’ll follow the practice of using const
by default and let
when reassignment is desirable. You can learn more about the rationale behind this choice in [practical-considerations].