Testnet & Node
All your burning questions about Taraxa's testnet.

What errors should I NOT be concerned about?

The blockchain network exists in a constant state of flux, where a lot of things are "going wrong" all the time. But the beauty of a successful blockchain network is that it handles and fixes these inconsistencies and errors gracefully. Many of the errors the node displays shouldn't concern you: they're temporary, and the node understands them and knows how to handle them.
The ERROR entries toward the bottom of this page cover commonly seen "errors" that you should NOT be concerned about.

Is incentivized testnet live?

We have an ongoing incentivized testnet; please check out this step-by-step guide to participate!

How do I run a node?

We recommend everyone who wants to run a node join our Discord server and look for the #node-operations channel.

Is there a testnet?

Yes, you can look at the testnet through our explorer. It is a test network, so occasionally it will go down or get wiped; please join our Discord server for the latest information.

How do I report a problem?

Always try to include the following information when you're reporting a problem,
  • Your node's public address (see how to get your node's public address)
  • Your system resources: CPU (# of cores), RAM, Disk
  • Are you running this on a dedicated or a shared machine
  • If it's in the cloud, the cloud service provider, and your instance's physical location (e.g., Frankfurt - Germany)
  • Screenshot the error message, or better yet the logs (see how to get the logs)
  • System resource consumption screenshot - e.g., a time-series of CPU or RAM utilization
  • Anything out of the ordinary you were doing right before this error occurred, e.g., tried to import a previous state_db.
Thanks for all your feedback!

How do I download the node's logs when reporting a problem?

Here's the command to generate logs from the node,
docker logs taraxa_compose_node_1 > logs
Note that the container is not called taraxa_compose_node_1 in every environment. If this doesn't work, use docker ps to see a list of all your containers and find out exactly what your container is named.
If the node has been running for a while, the log file might be too big, so it's a good idea to grab only the latest log entries - say, the last 50,000. To do that, you can try this,
docker logs --tail 50000 taraxa_compose_node_1 > logs
Now that you have the logs file, just send it to the dev team along with your problem report. Thanks!

How do I tell if my node has been synced?

Either go to the dashboard, located at your node's IP on port :3000, or look through the CLI log output for the ---- tl;dr ---- section; its first line shows the node's sync status.
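As an illustration, here's one way to pull that status line out of a saved log file. The log contents below are made up purely so the example is runnable as-is; only the grep on the tl;dr marker matters:

```shell
# Fabricated sample log, just so the command below has something to run on:
cat > sample.log <<'EOF'
2023-01-01 00:00:00 some other output
---- tl;dr ----
STATUS: GOOD. NODE SYNCED AND PARTICIPATING IN CONSENSUS
peers: 12, level: 45678
EOF

# Print the line immediately after the tl;dr marker (the sync status):
grep -A 1 -- '---- tl;dr ----' sample.log | tail -n 1
```

On a real node you'd run the same grep against the log file you exported with docker logs.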

Why does my node's sync percentage go down?

This is normal.
Your node determines its synchronization progress by asking the nodes it's connected to. Sometimes, and this often happens after a network recovers from a crash, the node is only connected to peers that aren't 100% synced themselves. But when the node connects to a new peer who is either fully synced or has made more sync progress than its existing (or previous) peers, the node adjusts and re-calculates its sync progress.
We recommend comparing your node's synchronization status against the network progress on the explorer to get a better sense of where your node actually is.
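Here's a worked example of that recalculation with made-up numbers:

```shell
# Hypothetical: your node is at block 900, and its best-known peer is at
# height 1000, so reported progress is 900/1000 = 90%.
echo $((900 * 100 / 1000))   # prints 90

# A better-synced peer at height 1200 connects. Progress is recalculated
# against the new best height and drops to 75%, even though your node
# never lost any blocks.
echo $((900 * 100 / 1200))   # prints 75
```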

How do I know if my node is producing blocks?

There are several ways to tell,
  • Go to your node's IP on port :3000 and look for "Synced - Participating in consensus", or check your node's logs for STATUS: GOOD. NODE SYNCED AND PARTICIPATING IN CONSENSUS
  • Go to the explorer's node page and see if your address is listed; note that the list is paginated, so you may not be on the first page
  • Search for your node's public address on the explorer and see how many blocks (if any) it has produced
  • Go to the community site's node list and see if your node is listed active
Several things to note,
  • Sometimes the explorer is reset, which can make your node disappear from the explorer's node list and the community site's node list; the most reliable way to tell is to look at your local node and see if it is participating in consensus
  • You will often see messages like PARTICIPATING IN CONSENSUS BUT NO NEW FINALIZED BLOCKS, PBFT STALLED, POSSIBLY PARTITIONED. NODE HAS NOT RESTARTED SYNCING, or STUCK. NODE HAS NOT RESTARTED SYNCING; these happen from time to time and are not necessarily specific to your node

What if my node is not producing any blocks at all and just says "Synced"?

If your node is 100% synced but has not produced any block, please make sure that your node is properly registered on the community site's node page.
If it has already been registered and is still producing no blocks at all, try deleting your node from the community site and adding it back in.

How do I update / reset my node?

Here are the instructions to update or reset the node.

Why is my node shown as inactive on the community site?

A node is considered active only if it is fully synced and producing blocks. If it isn't, it shows up as inactive on the community site.
A block-producing node should also show up on the node list in the explorer.

I received TARA on my node after registration, what does that mean?

TARA tokens on the testnet are not real tokens, so please don't try to send those out (it won't work), and please do not send any tokens from another chain (e.g., ETH) into the testnet - it won't work and you'll lose your tokens.
The tokens are sent to your node by the faucet to generate some transaction traffic on the network. Later on, we will run community-driven stress tests, which will require everyone to have some testnet tokens to send around.

Why is my node eating up so much CPU / RAM?

Our recommended system setup is 4-core CPU, 16GB RAM, and 200GB disk.
Currently in the network, CPU and RAM consumption is very high during syncing. A faster, less resource-intensive sync is on the roadmap.
Our node currently also seems to eat up much more memory than intended, and node optimization is definitely on the roadmap after the mainnet candidate release.

Why is my node eating up so much disk space?

At the current stage we only have a FULL node implementation, which means the node stores the entirety of the blockchain's history. For a full node, our space consumption is comparable to other blockchain networks such as Ethereum.
What will permanently solve this problem is a light node, which only stores the current state and prunes (deletes) the historical transaction data. This is on the roadmap.
For now, you can mitigate the problem by disabling and deleting some snapshots if you'd like. The node generates and stores many network snapshots (for testing purposes), which can take up a lot of disk space; we're going to update it later on to stop generating them.
If you would like to save disk space, you can do two things.
Step 1: please go to the config file
Inside this file, set,
"db_snapshot_each_n_pbft_block" : 0
IMPORTANT: Now restart the node, or just restart the entire machine. If you don't, this change won't take effect.
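As a sketch, here's one way to make that change from the command line. The config filename is an example (adjust it to your actual config path), and a demo file is created so the command is runnable as-is; editing the file by hand in a text editor works just as well:

```shell
# Example config path -- substitute your node's real config file:
CONFIG=./taraxa_config.json

# Demo config so the sed command below has something to edit:
cat > "$CONFIG" <<'EOF'
{
  "db_snapshot_each_n_pbft_block" : 100
}
EOF

# Set the snapshot interval to 0, disabling snapshot generation
# (a .bak backup of the original file is kept):
sed -i.bak 's/"db_snapshot_each_n_pbft_block"[[:space:]]*:[[:space:]]*[0-9]*/"db_snapshot_each_n_pbft_block" : 0/' "$CONFIG"

cat "$CONFIG"
```

Remember to restart the node afterwards so the change takes effect.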
Step 2: remove the snapshots from your existing node
On a Linux system the db files are located here,
The only files you need to keep are db and state_db. The rest you can just clear out.
If you're on a different system, you can try searching for the file state_db and see where it's located.
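For example, find can search for the directory by name. The directory tree below is fabricated just so the example runs; on a real machine you'd start the search from / or from your node's install directory:

```shell
# Fabricated layout for the sake of a runnable example:
mkdir -p demo/data/db demo/data/state_db demo/data/snapshots

# Search for anything named state_db under a starting directory.
# On a real machine: find / -name state_db 2>/dev/null
# (2>/dev/null hides permission-denied noise)
find demo -name state_db
```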

My node gets killed when it runs out of disk space!

The most common problem we're seeing is that the node runs out of disk space. We're working to update our one-click install scripts to help with this problem - e.g., attaching a disk volume to the machine on VPS.
In the meantime please increase the disk allocation on your own.
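Standard tools can tell you how close you are to running out. The data directory path below is just an example; adjust it to wherever your node actually stores its data:

```shell
# Free space per mounted filesystem, human-readable:
df -h

# Total size of the node's data directory (example path -- adjust):
du -sh ~/.taraxa 2>/dev/null || echo "adjust the path to your node's data dir"
```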

ERROR: No such container: taraxa_compose_node-1

This happens when you're trying to access the node's container (e.g., when trying to produce the prove-you-own-your-node signature), but the container's name is wrong.
Different operating systems name these containers slightly differently. When you see this, the best thing to do is to run
docker container ls
and see what your node's container is actually called.
Of course, if you are running multiple nodes, they're likely sequentially numbered; listing all the containers will help you find the one you're looking for.

ERROR: Vote sortition failed.

Typically speaking you don't need to worry about this error.
This means that your node has received a vote that it deems invalid. This doesn't mean something's wrong with your node; more likely, something's wrong with the node that generated the invalid vote.

ERROR: Received NewBlock xxx has missing pivot or/and tips

Typically speaking you don't need to worry about this error.
It indicates that the node has received a DAG block but is missing its parents - i.e., the other DAG blocks it points to. Since there's no guarantee of the order in which packets arrive over the network, a node can easily see a block before it sees that block's parents. When this happens, the node proactively requests the missing parent blocks from its peers.

ERROR: DagBlockValidation failed

Typically speaking you don't need to worry about this error.
The reason for this happening is the same as the error about missing pivots or tips, and the node should naturally recover.

ERROR: Incorrect node version: 0, our node version xxxxx, host xxxxx will be disconnected

You don't need to worry about this error.
We added a versioning system, and nodes that have different versions will not connect to each other as peers. This message is your node encountering another node that's a different version, so it has decided to disconnect that node from its peers.
This design choice may be revisited later; as we progress towards mainnet, we may have to take backwards compatibility into consideration.

RangeError [ERR_HTTP_INVALID_STATUS_CODE]: Invalid status code: undefined

You don't need to worry about this error.
It's actually from the node status app, and it happens when the app starts before the actual node and can't get data from it.