We are a DAO of TON enthusiasts who want to help the community to grow and thrive.
Some of us are developers, some are researchers, some are just interested in the project. We are all united by the desire to make TON a success.
And you can become one of us - read below to find out how to join. It's easy!
Note: **Due to our small team size, we don't support setting up the full stack locally, because of restrictions on developer credentials. Although it is relatively difficult and new territory, you're welcome to set this up yourself. In addition to running TON Metaverse locally, you'll also need to run TON Storage and Dialog locally, because the developer Dialog server is locked down and your local Reticulum will not connect properly.**
| Item | Recommendation |
| --- | --- |
| Polygon count | We recommend your scene use no more than 50,000 triangles for mobile devices. |
| Materials | We recommend using no more than 25 unique materials in your scene to reduce draw calls on mobile devices. |
| Textures | We recommend your textures use no more than 256 MB of video RAM for mobile devices. We also recommend against using textures larger than 2048 x 2048. |
| Lights | While dynamic lights are not enabled on mobile devices, we recommend using no more than 3 lights in your scene (excluding ambient and hemisphere lights) so it also runs on low-end PCs. |
| Metaspace size | We recommend a final file size of no more than 16 MB for low-bandwidth connections. Reducing the file size reduces the time it takes to download your scene. |
| Metaspace limit | 128 MB |
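These budgets can be checked programmatically during export. A minimal sketch - the limits mirror the table above, while the scene-stats dictionary shape and the `check_scene` helper are our own invention, not part of any TON Metaspace tooling:

```python
# Recommended budgets from the table above (mobile-first targets).
BUDGETS = {
    "triangles": 50_000,   # polygon count
    "materials": 25,       # unique materials
    "texture_mb": 256,     # video RAM used by textures, in MB
    "lights": 3,           # lights, excluding ambient/hemisphere
    "file_mb": 16,         # recommended final file size, in MB
}

def check_scene(stats: dict) -> list[str]:
    """Return a list of budget warnings for a scene-stats dict."""
    warnings = []
    for key, limit in BUDGETS.items():
        value = stats.get(key, 0)
        if value > limit:
            warnings.append(f"{key}: {value} exceeds recommended {limit}")
    return warnings

# Example: a scene that is over the triangle budget only.
print(check_scene({"triangles": 72_000, "materials": 10, "texture_mb": 128}))
```

An empty result means the scene fits every recommended budget.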
The App uses Node.js, with Vite on the backend, serving index.js, index.html, and other imports to the end client. We also have Totum, which accepts requests to decode or load various file types and represents them as JavaScript modules, and wsrtc, which handles multiplayer over WebSockets. Users can join rooms and share CRDT (z.js) state data with one another across the network. wsrtc also uses WebCodecs to perform voice encoding and decoding. Once the app is installed, all you need to do is go to localhost:3000 to launch the client. Three.js is used as the renderer, physx-wasm for physics calculations, and VRM models for avatars.
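As an illustration of the kind of conflict-free state sharing wsrtc performs with z.js CRDTs, here is a toy last-write-wins register merge. This is not the z.js API - just the underlying idea that two replicas can merge concurrent updates deterministically:

```python
# Toy last-write-wins (LWW) register merge. Each key maps to a
# (timestamp, value) pair; on merge, the higher timestamp wins.
# This only illustrates the CRDT idea behind wsrtc's shared room state.

def merge(local: dict, remote: dict) -> dict:
    """Merge two replicas; for each key, the newer write wins."""
    merged = dict(local)
    for key, (ts, value) in remote.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

a = {"avatar": (1, "robot"), "pos": (5, (0, 0))}
b = {"avatar": (3, "fox")}
print(merge(a, b))  # avatar comes from b (newer), pos from a
```

Merging is commutative and idempotent here, which is why replicas converge regardless of message order.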
To run the App you'll need Node.js v17 installed. Use NVM to manage your Node version.
git clone --recurse-submodules https://github.com/tonmetaspace/app.git
cd app/
npm install
npm run start
When cloning the App from git, you must include the --recurse-submodules option. The App repo relies upon and imports other Webaverse repos that are vital to a functioning application.
We prefer using VSCode for development, so the notes below reflect that toolset; however, you should be able to adapt this guide to any other IDE.
The App's source tree is organized as follows:
**Root**
│
├───src <--- React application resides here
│   ├───Main.jsx <--- Registers the routes of the React app and loads the DOM
│   └───App.jsx <--- Loads webaverse.js from the root directory
│
├─ index.js <-- This starts the vite server that serves the React App
│
├─ webaverse.js <-- This is the entry point of the Webaverse
│
├─ io-manager.js <-- Controls the input events within the application.
...
The application uses Vite to hot reload itself automatically if there are any changes to any files. To start the App in dev mode, run:
npm run start
Any changes inside the packages folder won't recompile automatically, so they require restarting the entire development server by running npm run dev again.
Take control of your online communities with a fully open source virtual world platform that you can make your own.
Some examples:
- TON CON, which is available at TON CON Meta
- TON CON NFT - This is a collection of 898 unique digital NFTs created by TON CON 2022. Now you can use them in the Web3 era as digital information on the Open Network.
- TON Fingerprints - a collection of 10,000 unique digital fingerprints generated by an algorithm that draws base rings from a noise texture. Like human fingerprints, they can now be used in the Web3 and Metaverse era as digital biometric information on The Open Network.
For contributors
If you are a developer, you can contribute to the projects, or add your own. Also, we are looking for people who can help with documentation.
To receive compensation for open-sourcing a metaspace that is useful for TON, please write to TON Metaspace.
For metaspace builders
After your project becomes a part of the TON Metaspace, you'll receive more support from open-source contributors from the community.
A hybrid game networking and web API server, focused on Social Mixed Reality.
If you are a 3D artist and want to support what we are doing with TON Metaspace, consider creating and releasing content under a Creative Commons Zero license, or using TON Fingerprints NFTs when creating metaspaces and releasing them as remixed media. We especially appreciate low-polygon content optimized to perform well on the web! In particular, we would like to see scenes that reflect a wide range of experiences.
Virtual environments must be open, accessible, and safe for everyone. This is of paramount importance for the emergence of the metaverse on The Open Network.
The term "metaverse" describes a vision of seamlessly connected virtual and augmented realities, and it signals a significant change in how people think about the future of online communication. With today's technological advances and increasingly geographically distributed social circles, the idea of seamlessly connected virtual worlds as part of a metaverse has never been more realistic.
Conferences can reach a new audience, and friends can connect to explore interactive spaces.
Several building blocks of the metaverse and virtual worlds are already familiar from existing networks, including identity systems, communication protocols, social networks, and online economies.
- Build Ecosystem
- Infrastructure Web3 Storage (Market spaces), Costs
- Deploy Assets (3D, nulled scenes, avatars, mp3, jpg) CC0
- Contact, card design and tests + hosting events
- Hackathon: builders of virtual scenes on TON
- Code of Conduct CC0
- Deploy Events Metaspace CC0
Result: 6 main metaspaces are ready to accept guests
The metaverse is already beginning to spread through three-dimensional environments (often created and shared by users), the use of digital "avatars", and the introduction of virtual and augmented reality technologies.
- Generator Avatars CC0
- Market avatars
- Market Metaspaces
- Custom Constructor
- Hackathon: builders of virtual avatars on TON
Result: the avatar constructor works, and well-known avatars of the TON world can be chosen
The shift in computing paradigms makes it possible to promote open standards and projects that encourage the development of decentralized, distributed and interoperable virtual worlds. At the device level, the Khronos Group's OpenXR standard has been widely adopted by headset manufacturers, which makes it possible to focus on a single API with the capabilities of specific devices supported through extensions.
- TON Connect adding and login method development
- Oracle - Stable Coin rate, 1tx per hour x10 Data Provider CC0
- Introduction system, networking
- Hackathon: builders of virtual worlds on TON
Result: Obtaining Governance and Acquiring Management Tools
There are still ongoing debates about how best to reduce risks and strengthen resistance to Sybil attacks. Detecting the various patterns of attack vectors allows us to create simple but rigorous tools to detect these patterns in real and potentially high-funding environments.
Governance in the style of decentralized autonomous organizations is used to make decisions and vote on major issues.
- Environment, infrastructure, hardware SubDAO - Launcher SubDAO Metaspace, DAO Metaspace
- Hackathon CC0
- Protection from Sybil attacks - Human / Bot
Result: the DAO is created within a strict time frame; deploy new multisig
TON Metaspace supports the creation of training courses and materials, as well as other activities, including those conducted in collaboration with the marketing department, in order to attract more people to the TON ecosystem.
- Professions: Virtual Builder / Moderator of Virtual Worlds
- Tool: LOD support for assets / Open standard
- Hackathon: textures, advanced tips, optimizing models
- Gamify (quests, quizzes, LMS, City TON, hyper-casual/strategy, Distributor, Contributor) CC0
Result: an educational quest uploaded to the metaspace and working with open source
- Staking (Reality Show - mainstage)
- Displaying profitability calculations from staking in Metaspaces
- Hackathon: De-Fi deployer, Jetton Metaspaces
Result: a tokenomics system with staking is created and can be used by TON Metaspace guests
Metadata storage platforms provide creators with optimized user interfaces and allow them to store their NFT metadata in decentralized storage solutions such as Arweave, IPFS or Filecoin. TON Storage guarantees almost indefinite data storage, while smart contracts on the blockchain regulate financial incentives. The node operator and the user will be able to create a smart contract in the TON blockchain, which guarantees that the user will pay a fixed amount in TON for storing files for a pre-agreed period of time.
- Storage Provider
- Hackathon: pointing a TON domain to a TON Storage bag of files
- Oracle verification
Result: a prototype node-cube is built and the node works. Any user can become a node operator on the TON network and receive fees from other users for hosting files, even when running only one node. In addition, storage combines effectively with the TON Sites and TON DNS services, allowing you to run TON Metaspace on the TON network without a fixed IP address, a centralized domain, or a centralized certificate authority.
- Fee, mint SBT, NFT CC0 Collections, Fee Storage Providers
- Mint Items Collection, Asset Open Metaverse The Open Network
Result: NFT collections deployed with limited access levels; asset collections placed in TON Storage
TON Metaspace can provide a level of proof of human existence, using verifiable pseudonymous digital biometric identification, so that grant committees can ensure a fair distribution of funds.
- Contributing systems Metaspace
- Organization of placement and presentation of <50 Pitch Deck projects
- Cost reward system
Result: organization of an educational model for creating creative remixes and derivative works
A storage provider is a service that stores files for a fee.
You can download storage-daemon and storage-daemon-cli binaries for Linux/Windows/macOS from TON Auto Builds (the storage/ folder). You can also compile storage-daemon and storage-daemon-cli from sources using this instruction.
It consists of a smart contract that accepts storage requests and manages payment from clients, and an application that uploads and serves the files to clients. Here's how it works:
- The owner of the provider launches the storage-daemon, deploys the main smart contract, and sets up the parameters. The contract's address is shared with potential clients.
- Using the storage-daemon, the client creates a Bag from their files and sends a special internal message to the provider's smart contract.
- The provider's smart contract creates a storage contract to handle this specific Bag.
- The provider, upon finding the request in the blockchain, downloads the Bag and activates the storage contract.
- The client can then transfer payment for storage to the storage contract. To receive the payment, the provider regularly presents the contract with proof that they are still storing the Bag.
- If the funds on the storage contract run out, the contract is considered inactive and the provider is no longer required to store the Bag. The client can either refill the contract or retrieve their files.
:::info The client can also retrieve their files at any time by providing proof of ownership to the storage contract. The contract will then release the files to the client and deactivate itself. :::
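The lifecycle above can be sketched as a tiny state machine. The op codes are the ones quoted in this section; the Python class and method names are purely illustrative and not part of the actual contract code:

```python
# Sketch of the storage-contract lifecycle described above.
# Op codes are taken from this document; the class is illustrative.

OP_CONTRACT_DEPLOYED = 0xBF7BD0C1  # contract created, not yet active
OP_CONTRACT_ACTIVE   = 0xD4CAEDCD  # provider has downloaded the Bag
OP_CONTRACT_CLOSED   = 0xB6236D63  # contract closed, balance returned

class StorageContract:
    def __init__(self):
        self.state = "created"
        self.client_balance = 0     # nanoTON not yet paid to the provider

    def on_message(self, op: int) -> str:
        if op == OP_CONTRACT_ACTIVE:
            self.state = "active"   # storage proofs now earn the provider fees
        elif op == OP_CONTRACT_CLOSED:
            self.state = "closed"   # provider no longer required to store the Bag
        return self.state

    def top_up(self, nanoton: int):
        # Simple transfers from any address refill the client balance.
        self.client_balance += nanoton

c = StorageContract()
c.on_message(OP_CONTRACT_ACTIVE)
c.top_up(5 * 10**9)                 # top up with 5 TON
print(c.state, c.client_balance)    # -> active 5000000000
```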
In order to use a storage provider, you need to know the address of its smart contract. The client can obtain the provider's parameters with the following command in storage-daemon-cli:
get-provider-params <address>
- Whether new storage contracts are accepted.
- Minimum and maximum Bag size (in bytes).
- Rate - the cost of storage. Specified in nanoTON per megabyte per day.
- Max span - how often the provider should provide proofs of Bag storage.
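Given the rate parameter above, a client can estimate storage costs up front. A minimal sketch - the helper name and example numbers are ours, only the nanoTON-per-megabyte-per-day unit comes from the provider parameters:

```python
# Estimate the cost of storing a Bag, given a provider's rate
# (in nanoTON per megabyte per day, as returned by get-provider-params).

MB = 1024 * 1024
NANO_PER_TON = 10**9

def storage_cost_ton(bag_size_bytes: int, days: int, rate: int) -> float:
    """Estimated cost in TON to store a Bag for the given number of days."""
    megabytes = bag_size_bytes / MB
    return megabytes * days * rate / NANO_PER_TON

# 1 GB for 30 days at a rate of 1,000,000 nanoTON per MB per day:
print(storage_cost_ton(1024 * MB, 30, 1_000_000))  # -> 30.72 (TON)
```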
You need to create a Bag and generate a message with the following command:
new-contract-message <BagID> <file> --query-id 0 --provider <address>
Executing this command may take some time for large Bags. The message body will be saved to <file> (not the entire internal message). The query id can be any number from 0 to 2^64 - 1. The message contains the provider's parameters (rate and max span). These parameters will be printed out after executing the command, so they should be double-checked before sending. If the provider's owner changes the parameters, the message will be rejected, so the conditions of the new storage contract will be exactly what the client expects.
The client must then send the message with this body to the provider's address. In case of an error, the message will come back to the sender (bounce). Otherwise, a new storage contract will be created and the client will receive a message from it with op=0xbf7bd0c1 and the same query id.

At this point the contract is not yet active. Once the provider downloads the Bag, it will activate the storage contract and the client will receive a message with op=0xd4caedcd (also from the storage contract).
The storage contract has a "client balance" - the funds that the client has transferred to the contract and which have not yet been paid to the provider. Funds are gradually debited from this balance (at the agreed rate per megabyte per day). The initial balance is what the client transferred along with the request to create the storage contract. The client can then top up the balance by making simple transfers to the storage contract (this can be done from any address). The remaining client balance is returned by the get_storage_contract_data get-method as the second value (balance).
:::info
In case of the storage contract being closed, the client receives a message with the remaining balance and op=0xb6236d63.
:::
The storage contract is closed in the following cases:
- Immediately after creation, before activation, if the provider refuses to accept the contract (the provider's limit is exceeded or other errors).
- The client balance reaches 0.
- The provider voluntarily closes the contract.
- The client voluntarily closes the contract by sending a message with op=0x79f937ea from its own address and any query id.
The Storage Provider is part of the storage-daemon and is managed by storage-daemon-cli. The storage-daemon needs to be started with the -P flag.
You can do this from storage-daemon-cli:
deploy-provider
:::info IMPORTANT!
You will be asked to send a non-bounceable message with 1 TON to the specified address in order to initialize the provider. You can check that the contract has been created using the get-provider-info command.
:::
By default, the contract is set to not accept new storage contracts. Before activating it, you need to configure the provider. The provider's settings consist of a configuration (stored in the storage-daemon) and contract parameters (stored in the blockchain).
- max contracts - the maximum number of storage contracts that can exist at the same time.
- max total size - the maximum total size of Bags in storage contracts.

You can view the configuration values using get-provider-info, and change them with:
set-provider-config --max-contracts 100 --max-total-size 100000000000
- accept - whether to accept new storage contracts.
- max file size, min file size - size limits for one Bag.
- rate - storage cost (specified in nanoTON per megabyte per day).
- max span - how often the provider will have to submit storage proofs.
You can view the parameters using get-provider-info, and change them with:
set-provider-params --accept 1 --rate 1000000000 --max-span 86400 --min-file-size 1024 --max-file-size 1000000000
Note: in the set-provider-params command, you can specify only some of the parameters; the others will be taken from the current parameters. Since data in the blockchain is not updated instantly, several consecutive set-provider-params commands can lead to unexpected results.
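The partial-update semantics can be illustrated with a small sketch - the dict shape and helper name are ours, not the daemon's internal representation:

```python
# set-provider-params only overrides the flags you pass; unspecified
# parameters keep their current on-chain values. Illustrative sketch:

def merged_params(current: dict, **overrides) -> dict:
    """Merge explicit overrides onto the current parameter set."""
    params = dict(current)
    params.update({k: v for k, v in overrides.items() if v is not None})
    return params

current = {"accept": 0, "rate": 1_000_000_000, "max_span": 86400}
print(merged_params(current, accept=1))
# accept changes; rate and max_span are taken from the current parameters
```

This also shows why consecutive partial updates can surprise you: each one merges against whatever the chain state was when it was read.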
It is recommended to initially put more than 1 TON on the provider's balance, so that there are enough funds to cover the commissions for working with storage contracts. However, do not send too many TONs with the first non-bounceable message.
After setting the accept parameter to 1, the smart contract will start accepting requests from clients and creating storage contracts, while the storage daemon will automatically process them: downloading and distributing Bags, and generating storage proofs.
get-provider-info --contracts --balances
Each storage contract lists a Client$ and a Contract$ balance; the difference between them can be withdrawn to the main provider contract with the command withdraw <address>.
The command withdraw-all withdraws funds from all contracts that have at least 1 TON available.
Any storage contract can be closed with the command close-contract <address>. This also transfers the funds to the main contract. The same happens automatically when the client's balance runs out. The Bag files will be deleted in this case (unless there are other contracts using the same Bag).
You can transfer funds from the main smart contract to any address (the amount is specified in nanoTON):
send-coins <address> <amount>
send-coins <address> <amount> --message "Some message"
:::info
All Bags stored by the provider are available with the command list, and can be used as usual. To avoid disrupting the provider's operations, do not delete them or use this storage daemon to work with any other Bags.
:::
1. Upload the Bag of files to the network and get the Bag ID.
2. Open the Google Chrome browser on your computer.
3. Install the TON extension for Google Chrome. You can also use MyTonWallet.
4. Open the extension, click "Import wallet" and import the wallet that owns the domain, using the recovery phrase.
5. Now open your domain at https://dns.ton.org and click "Edit".
6. Copy your Bag ID into the "Storage" field and click "Save".
1. Create the Bag from the folder with the website files, upload it to the network, and get the Bag ID. The folder must contain an index.html file.
2. Open the Google Chrome browser on your computer.
3. Install the TON extension for Google Chrome. You can also use MyTonWallet.
4. Open the extension, click "Import wallet" and import the wallet that owns the domain, using the recovery phrase.
5. Now open your domain at https://dns.ton.org and click "Edit".
6. Copy your Bag ID into the "Site" field, select the "Host in TON Storage" checkbox, and click "Save".
If you used a standard NFT smart contract for your collection, you need to send a message to the collection smart contract from the collection owner's wallet with a new url prefix.
As an example, if the url prefix used to be https://mysite/my_collection/, the new prefix will be tonstorage://my_bag_id/.
You need to assign the following value to the sha256("storage") DNS Record of your TON domain:
dns_storage_address#7473 bag_id:uint256 = DNSRecord;
Create the Bag from the folder with the website files, upload it to the network, and get the Bag ID. The folder must contain an index.html file.
You need to assign the following value to the sha256("site") DNS Record of your TON domain:
dns_storage_address#7473 bag_id:uint256 = DNSRecord;
A storage daemon is a program used to download and share files in the TON network. The storage-daemon-cli console program is used to manage a running storage daemon.
The current version of the storage daemon can be found in the testnet branch.
You can download storage-daemon and storage-daemon-cli binaries for Linux/Windows/macOS from TON Auto Builds (the storage/ folder). You can also compile storage-daemon and storage-daemon-cli from sources using this instruction.
- Bag of files or Bag - a collection of files distributed through TON Storage
- TON Storage's network part is based on technology similar to torrents, so the terms Torrent, Bag of files, and Bag will be used interchangeably. It's important to note some differences, however: TON Storage transfers data over ADNL by RLDP protocol, each Bag is distributed through its own network overlay, the merkle structure can exist in two versions - with large chunks for efficient downloading and small ones for efficient ownership proof, and TON DHT network is used for finding peers.
- A Bag of files consists of torrent info and a data block.
- The data block starts with a torrent header - a structure that contains a list of files with their names and sizes. The files themselves follow in the data block.
- The data block is divided into chunks (128 KB by default), and a merkle tree (made of TVM cells) is built on the SHA256 hashes of these chunks. This allows building and verifying merkle proofs of individual chunks, as well as efficiently updating the Bag by exchanging only the proof of the modified chunk.
- Torrent info contains:
  - the chunk size (of the data block)
  - the list of chunks' sizes
  - the hash merkle tree
  - a description - any text specified by the creator of the torrent
- Torrent info is serialized to a TVM cell. The hash of this cell is called BagID, and it uniquely identifies Bag.
- Bag meta is a file containing torrent info and the torrent header. It is an analog of .torrent files.
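The chunking and hashing described above can be sketched in a few lines. Note that this is only an illustration: the real implementation builds the merkle structure from TVM cells and has its own padding rules, so the root computed here is not a real BagID:

```python
# Illustrative sketch of chunking a data block and building a merkle
# root over SHA256 chunk hashes, per the description above.
# NOT the real TON Storage format (which uses TVM cells).
import hashlib

CHUNK_SIZE = 128 * 1024  # default chunk size of the data block

def merkle_root(data: bytes, chunk_size: int = CHUNK_SIZE) -> bytes:
    chunks = [data[i:i + chunk_size]
              for i in range(0, len(data), chunk_size)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2:            # pad odd levels (padding rule is ours)
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Changing one chunk only changes the hashes on its path to the root,
# which is what makes per-chunk merkle proofs and cheap updates possible.
print(merkle_root(b"x" * 300_000).hex())
```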
storage-daemon -v 3 -C global.config.json -I <ip>:3333 -p 5555 -D storage-db
- -v - verbosity level (INFO)
- -C - global network config (download global config)
- -I - IP address and port for ADNL
- -p - TCP port for the console interface
- -D - directory for the storage daemon database
storage-daemon-cli is started like this:
storage-daemon-cli -I 127.0.0.1:5555 -k storage-db/cli-keys/client -p storage-db/cli-keys/server.pub
- -I - the IP address and port of the daemon (the same port as specified in the daemon's -p parameter above).
- -k and -p - the client's private key and the server's public key (similar to validator-engine-console). These keys are generated on the first run of the daemon and are placed in <db>/cli-keys/.
The list of storage-daemon-cli commands can be obtained with the help command.
Commands have positional parameters and flags. Parameters with spaces should be enclosed in quotes (' or "); alternatively, spaces can be escaped. Other escapes are also available, for example:
create filename\ with\ spaces.txt -d "Description\nSecond line of \"description\"\nBackslash: \\"
All parameters after the -- flag are positional. This can be used to specify filenames that start with a dash:
create -d "Description" -- -filename.txt
storage-daemon-cli can be run in non-interactive mode by passing it commands to execute:
storage-daemon-cli ... -c "add-by-meta m" -c "list --hashes"
To download a Bag of files, you need to know its BagID or have a meta-file. The following commands can be used to add a Bag for download:
add-by-hash <hash> -d directory
add-by-meta <meta-file> -d directory
The Bag will be downloaded to the specified directory. You can omit it, then it will be saved to the storage daemon directory.
:::info The hash is specified in hexadecimal form (length - 64 characters). :::
When adding a Bag by a meta-file, information about the Bag will be immediately available: size, description, list of files. When adding by hash, you will have to wait for this information to be loaded.
The list command outputs a list of Bags; list --hashes outputs the list with full hashes.
In all subsequent commands, <BagID> is either a hash (hexadecimal) or the ordinal number of the Bag within the session (the number shown by the list command). Ordinal numbers of Bags are not preserved between restarts of storage-daemon-cli and are not available in non-interactive mode.
- get <BagID> - outputs detailed information about the Bag: description, size, download speed, list of files.
- get-peers <BagID> - outputs a list of peers.
- download-pause <BagID>, download-resume <BagID> - pauses or resumes downloading.
- upload-pause <BagID>, upload-resume <BagID> - pauses or resumes uploading.
- remove <BagID> - removes the Bag. remove --remove-files also deletes all files of the Bag. Note that if the Bag is saved in the internal storage daemon directory, the files will be deleted in any case.
:::info
When adding a Bag, you can specify which files you want to download from it:
:::
add-by-hash <hash> -d dir --partial file1 file2 file3
add-by-meta <meta-file> -d dir --partial file1 file2 file3
Each file in the Bag of files has a priority, a number from 0 to 255. Priority 0 means the file won't be downloaded. The --partial flag sets the specified files to priority 1 and all the others to 0.
You can change the priorities of an already added Bag with the following commands:
- priority-all <BagID> <priority> - for all files.
- priority-idx <BagID> <idx> <priority> - for one file by number (see the get command).
- priority-name <BagID> <name> <priority> - for one file by name.

Priorities can be set even before the list of files is downloaded.
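The priority semantics can be summarized with a small sketch - the scheduler function is illustrative, not part of the daemon:

```python
# Toy model of Bag file priorities: 0 means "don't download";
# 1-255 are downloaded, higher priority first.

def download_order(priorities: dict[str, int]) -> list[str]:
    """Files to download, highest priority first; priority 0 is skipped."""
    wanted = [(p, name) for name, p in priorities.items() if p > 0]
    return [name for p, name in sorted(wanted, key=lambda t: -t[0])]

# --partial file1 file2 sets those files to 1 and everything else to 0:
prios = {"file1": 1, "file2": 1, "file3": 0}
print(download_order(prios))  # -> ['file1', 'file2']
```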
To create a Bag and start distributing it, use the create command:
create <path>
<path> can point to either a single file or a directory. When creating a Bag, you can specify a description:
create <path> -d "Bag of Files description"
After the Bag is created, the console will display detailed information about it (including the hash, which is the BagID by which the Bag will be identified), and the daemon will start distributing the torrent. Extra options for create:
- --no-upload - the daemon will not distribute files to peers. Upload can be started using upload-resume.
- --copy - files will be copied to an internal directory of the storage daemon.
To download the Bag, other users just need to know its hash. You can also save the torrent meta file:
get-meta <BagID> <meta-file>
Linux:
On Ubuntu, you can use
apt install postgresql
Otherwise, consult your package manager of choice for other Linux distributions
Windows: https://www.postgresql.org/download/windows/
Elixir: https://elixir-lang.org/install.html
Note: On Linux, you may also have to install the erlang-src package for your distribution in order to compile dependencies successfully.
Phoenix: https://hexdocs.pm/phoenix/installation.html
Ansible: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html
Run the following commands at the root of the reticulum directory:
mix deps.get
mix ecto.create
- If step 2 fails, you may need to change the password for the postgres role to match the password configured in dev.exs.
- From within the psql shell, enter ALTER USER postgres WITH PASSWORD 'postgres';
- If you receive an error that the ret_dev database does not exist, enter (using psql again) create database ret_dev;
- From the project directory, run mkdir -p storage/dev
Run scripts/run.sh if you have the hubs secret repo cloned. Otherwise, run iex -S mix phx.server
When running the full stack for Hubs (which includes Reticulum) locally, you need to add a hosts entry pointing hubs.local to your local server's IP. This allows the CSP checks served up by Reticulum to pass so you can test the whole app. Note that you must also load hubs.local over HTTPS.
On MacOS or Linux:
nano /etc/hosts
From there, add a host alias
Example:
127.0.0.1 hubs.local
127.0.0.1 hubs-proxy.local
Clone the Hubs repository and install the npm dependencies.
git clone https://github.com/mozilla/hubs.git
cd hubs
npm ci
Because we are running Hubs against the local Reticulum server, you'll need to use the npm run local command in the root of the hubs folder. This starts the development server on port 8080 but configures it to be accessed through Reticulum on port 4000.
Once the Hubs Webpack dev server and the Reticulum server are both running, you can navigate to the client by opening:
https://hubs.local:4000?skipadmin
The skipadmin query parameter is a temporary measure to bypass being redirected to the admin panel. Once you have logged in you will no longer need it.
To log into Hubs we use magic links that are sent to your email. When you are running Reticulum locally we do not send those emails. Instead, you'll find the contents of that email in the Reticulum console output.
With the Hubs landing page open click the Sign In button at the top of the page. Enter an email address and click send.
Go to the reticulum terminal session and find a url that looks like https://hubs.local:4000/?auth_origin=hubs&auth_payload=XXXXX&auth_token=XXXX
Navigate to that url in your browser to finish signing in.
After you've started Reticulum for the first time you'll likely want to create an admin user. Assuming you want to make the first account the admin, this can be done in the iex console using the following code:
Ret.Account |> Ret.Repo.all() |> Enum.at(0) |> Ecto.Changeset.change(is_admin: true) |> Ret.Repo.update!()
Rooms are created with restricted permissions by default, which means you can't spawn media objects. You can change this setting in the admin panel, or run the following code in the iex console:
Ret.AppConfig.set_config_value("features|permissive_rooms", true)
When running locally, you will need to also run the admin panel, which routes to hubs.local:8989
Using a separate terminal instance, navigate to the hubs/admin folder and run:
npm run local
You can now navigate to https://hubs.local:4000/admin to access the admin control panel
- Follow the steps above to set up Hubs.
- Clone and start Spoke by running ./scripts/run_local_reticulum.sh in the root of the spoke project.
- Navigate to https://hubs.local:4000/spoke
- Update the Janus host in dev.exs:
dev_janus_host = "hubs.local"
- Update the Janus port in dev.exs:
config :ret, Ret.JanusLoadStatus, default_janus_host: dev_janus_host, janus_port: 4443
- Add the Dialog meta endpoint to the CSP rules in add_csp.ex:
default_janus_csp_rule =
if default_janus_host,
do: "wss://#{default_janus_host}:#{janus_port} https://#{default_janus_host}:#{janus_port} https://#{default_janus_host}:#{janus_port}/meta",
else: ""
- Edit the Dialog configuration file turnserver.conf and update the PostgreSQL database connection string to use the coturn schema from the Reticulum database:
psql-userdb="host=hubs.local dbname=ret_dev user=postgres password=postgres options='-c search_path=coturn' connect_timeout=30"
Want to contribute?
Choose a project, fork the repo, make your changes, and submit a pull request.
Maybe you have created a tool that solves some problem?
Get more TON Metaspace by adding your tool to the:
Want to get more contributors?
- And of course you can add a repository to the organization! Just contact us through the Telegram group