Below is an outline of everything included in this release.
- Overview
- 🔦 Highlights
- 📝 Changelog
- 👨‍👩‍👧‍👦 Contributors
Content routing is the process of discovering which peers provide a piece of content. Kubo has traditionally supported only libp2p's implementation of the Kademlia DHT for content routing.

Kubo can now bridge networks by including support for the delegated routing HTTP API. Users can compose content routers using the `Routing.Routers` config to pick content routers with different tradeoffs than a Kademlia DHT (e.g., high-performance and high-capacity centralized endpoints, dedicated Kademlia DHT nodes, routers with unique provider records, privacy-focused content routers).

One example is InterPlanetary Network Indexers (IPNI), which are HTTP endpoints that cache records from both the IPFS network and other sources such as web3.storage and Filecoin. This not only improves content availability by enabling Kubo to transparently fetch content directly from Filecoin storage providers, but also improves IPFS content routing latency by an order of magnitude and decreases resource consumption.
Note: it's possible to retrieve content stored by Filecoin Storage Providers (SPs) from Kubo if the SPs service Bitswap requests. As of this release, some SPs are advertising Bitswap. You can follow the roadmap progress for IPNIs and Bitswap in SPs here.
In this release, the default content router has changed from `dht` to `auto`. The `auto` router includes the IPFS DHT in addition to the cid.contact IPNI instance. In future releases, we plan to expand the functionality of `auto` to encompass automatic discovery of content routers, which will improve performance and content availability (for example, see IPIP-342).

Previous behavior can be restored by setting `Routing.Type` to `dht`.
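For example, one way to switch between the two is the `ipfs config` CLI (restart the daemon afterwards for the change to take effect):

```console
$ ipfs config Routing.Type dht     # restore the pre-0.18, DHT-only behavior
$ ipfs config Routing.Type auto    # switch back to the new default
```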
Alternative routing rules, including alternative IPNI endpoints, can be configured in `Routing.Routers` after setting `Routing.Type` to `custom`.
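As a rough illustration only (the router names `CidContact` and `WanDHT` are arbitrary labels, and the field names are recalled from the 0.18 config schema; the Routing docs referenced below are the authoritative source), a custom setup that delegates provider lookups to cid.contact and keeps everything else on the DHT might look like:

```json
{
  "Routing": {
    "Type": "custom",
    "Routers": {
      "CidContact": {
        "Type": "http",
        "Parameters": { "Endpoint": "https://cid.contact" }
      },
      "WanDHT": {
        "Type": "dht",
        "Parameters": { "Mode": "auto" }
      }
    },
    "Methods": {
      "find-providers": { "RouterName": "CidContact" },
      "find-peers": { "RouterName": "WanDHT" },
      "get-ipns": { "RouterName": "WanDHT" },
      "put-ipns": { "RouterName": "WanDHT" },
      "provide": { "RouterName": "WanDHT" }
    }
  }
}
```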
Learn more in the Routing docs.
The default `Reprovider.Interval` changed from 12h to 22h to match the new default Provider Record Expiration (48h) in go-libp2p-kad-dht v0.20.0.

The rationale for increasing this can be found in RFM 17: Provider Record Liveness Report, kubo#9326, and the upstream DHT specifications at libp2p/specs#451.
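No action is needed to pick up the new default, but if you previously pinned the interval in your config you can check or update it with `ipfs config` (a minimal sketch; restart the daemon for changes to apply):

```console
$ ipfs config Reprovider.Interval        # print the currently configured value
$ ipfs config Reprovider.Interval 22h    # explicitly set it to the new default
```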
Learn more in the Reprovider config.
Implemented IPIP-328, which adds support for DAG-JSON and DAG-CBOR, as well as their non-DAG variants, to the gateway. Now, CIDs that encode JSON, CBOR, DAG-JSON, and DAG-CBOR objects can be retrieved and traversed, thanks to the special meaning of CBOR Tag 42 (CID links).
HTTP clients can request JSON, CBOR, DAG-JSON, and DAG-CBOR responses by either passing the `?format` query parameter or setting the `Accept` HTTP header to one of the following values:
- JSON: `?format=json`, or `Accept: application/json`
- CBOR: `?format=cbor`, or `Accept: application/cbor`
- DAG-JSON: `?format=dag-json`, or `Accept: application/vnd.ipld.dag-json`
- DAG-CBOR: `?format=dag-cbor`, or `Accept: application/vnd.ipld.dag-cbor`
```console
$ export DIR_CID=bafybeigccimv3zqm5g4jt363faybagywkvqbrismoquogimy7kvz2sj7sq
$ curl -H "Accept: application/vnd.ipld.dag-json" "http://127.0.0.1:8080/ipfs/$DIR_CID" | jq
$ curl "http://127.0.0.1:8080/ipfs/$DIR_CID?format=dag-json" | jq
{
"Data": {
"/": {
"bytes": "CAE"
}
},
"Links": [
{
"Hash": {
"/": "Qmc3zqKcwzbbvw3MQm3hXdg8BQoFjGdZiGdAfXAyAGGdLi"
},
"Name": "1 - Barrel - Part 1 - alt.txt",
"Tsize": 21
},
{
"Hash": {
"/": "QmdMxMx29KVYhHnaCc1icWYxQqXwUNCae6t1wS2NqruiHd"
},
"Name": "1 - Barrel - Part 1 - transcript.txt",
"Tsize": 195
},
{
"Hash": {
"/": "QmawceGscqN4o8Y8Fv26UUmB454kn2bnkXV5tEQYc4jBd6"
},
"Name": "1 - Barrel - Part 1.png",
"Tsize": 24862
}
]
}
```
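The binary variants are usually more convenient to save to a file than to pipe through `jq`; for example, fetching the same directory as DAG-CBOR (this mirrors the commands above and assumes only a local gateway on the default port):

```console
$ curl -H "Accept: application/vnd.ipld.dag-cbor" "http://127.0.0.1:8080/ipfs/$DIR_CID" --output block.dag-cbor
```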
Fast listings are now enabled for all UnixFS directories, big and small: there is no longer a linear slowdown caused by reading size metadata from child nodes, and the size of the DAG representing each child item is always present.

As an example, the CID `bafybeiggvykl7skb2ndlmacg2k5modvudocffxjesexlod2pfvg5yhwrqm` represents a UnixFS directory with over 10k files. Listing big directories has been fast since Kubo 0.13, but in this release the listing also includes the size column.
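To see this for yourself (assuming a running daemon with the default local gateway address), fetching the directory path returns the HTML listing with the size column:

```console
$ curl -sL "http://127.0.0.1:8080/ipfs/bafybeiggvykl7skb2ndlmacg2k5modvudocffxjesexlod2pfvg5yhwrqm/" -o listing.html
```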
WebTransport is a new libp2p transport, introduced in Kubo 0.16, that is built on top of QUIC and HTTP/3. It allows browsers to contact Kubo nodes: instead of serving requests only to other system-level nodes, you can now also serve requests directly to a browser. For the full story see connectivity.libp2p.io.

WebTransport is enabled by default in part because go-libp2p now supports running WebTransport and QUIC transports on the same QUIC listener; no additional port needs to be opened. To use this feature, register two listen addresses on the same `/ipX/.../udp/XXX` prefix.
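For example (a sketch of `Addresses.Swarm` entries; port 4001 is just the default, and WebTransport requires the QUICv1 component described below), both transports can share one UDP listener:

```json
"Addresses": {
  "Swarm": [
    "/ip4/0.0.0.0/udp/4001/quic-v1",
    "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport"
  ]
}
```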
go-libp2p now differentiates the first version of QUIC that was originally implemented, Draft-29, from the ratified protocol in RFC 9000, QUICv1. This was done for performance (time to first byte) reasons, as outlined here. It manifests as two different multiaddr components: `/quic` (old Draft-29) and `/quic-v1`. go-libp2p supports listening with both QUIC versions on a single listener. WebTransport supports only QUICv1, so `/webtransport` now needs to be prefixed by a `/quic-v1` component instead of a `/quic` component.

Support for QUIC Draft-29 will be removed at some point in 2023 (tracking issue). As a result, new deployments should use `/quic-v1` instead of `/quic`.
To support QUICv1 and WebTransport by default, a new config migration (`v13`) is run which automatically adds entries in addresses-related fields (see the example after this list):

- Replaces all `/quic/webtransport` listeners with `/quic-v1/webtransport`.
- For all `/quic` listeners, keeps the Draft-29 listener and, on the same IP and port, adds `/quic-v1` and `/quic-v1/webtransport` listeners.
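As an illustration (abridged to a single IPv4 wildcard entry; the migration operates on whatever addresses your config already contains), a node that previously listened only on `/ip4/0.0.0.0/udp/4001/quic` ends up with roughly the following `Addresses.Swarm` entries:

```json
"Addresses": {
  "Swarm": [
    "/ip4/0.0.0.0/udp/4001/quic",
    "/ip4/0.0.0.0/udp/4001/quic-v1",
    "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport"
  ]
}
```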
To help protect nodes from DoS (resource exhaustion) and eclipse attacks, Kubo enabled the go-libp2p Network Resource Manager by default in Kubo 0.17.
Introducing limits like this by default after the fact is tricky, and various improvements have been made to the UX, including:
- Dedicated docs concerning the resource manager integration. This is a great place to go to learn more or get your FAQs answered.
- Increasing the default limits for the resource manager.
- Enabling the `Swarm.ConnMgr` by default and reducing its thresholds so it can intelligently prune connections in many cases before the indiscriminate resource manager kicks in.
- Adjusted log messages and levels to make clear that the resource manager is likely doing your node a favor by bounding resources.
- Other miscellaneous config and command bugs reported by users.
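If you want to see what the resource manager is doing on your node, the swarm inspection commands introduced alongside it in Kubo 0.17 print the active limits and current usage (command names as of Kubo 0.17/0.18; see the resource manager docs for details):

```console
$ ipfs swarm limit all     # show the currently active resource manager limits
$ ipfs swarm stats all     # show current resource usage across scopes
```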