Handling Kusama RPC disconnects and retries
If you have a long-running process that sends many transactions in a loop to Kusama using the polkadot.js API, you may have encountered one of two issues that can interrupt it.
Transaction Failed: Error: 1014: Priority is too low: (XXX vs XXX)
This usually happens if you send transactions from the same account in a loop without waiting for the previous transaction to be included in a block. It is particularly prone to happening if you have concurrent processes running the same script.
Under the hood, polkadot.js assigns a nonce to your transaction in preparation for including it in the next block, but if you submit multiple transactions in quick succession, the same nonce gets assigned to each of them, and the node cannot tell whether you are trying to replace the pending transaction with the new one.
There's a simple solution described here, but it is not bulletproof, especially if you have concurrent processes.
There are a number of possible solutions. One is to use a separate account for each process, but this is impractical. Another is to manually increment a nonce shared between processes, but that is a lot of bookkeeping for a case like this.
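To make the manual-nonce idea concrete, here is a minimal sketch of an in-process nonce manager. The class name and the `fetchOnChainNonce` callback are our own illustrative names; in polkadot.js the callback could be backed by a call that reads the account's next nonce from the node.

```typescript
// Illustrative sketch: fetch the on-chain nonce once, then hand out
// increments locally so sequential sends from this process don't clash.
// (Sharing this across separate OS processes would still need extra work.)
class NonceManager {
  private next: number | undefined;

  // fetchOnChainNonce is an assumed callback supplied by the caller.
  constructor(private fetchOnChainNonce: () => Promise<number>) {}

  async acquire(): Promise<number> {
    if (this.next === undefined) {
      // First use: seed the counter from the chain.
      this.next = await this.fetchOnChainNonce();
    }
    // Hand out the current value and bump the local counter.
    return this.next++;
  }
}
```

Each transaction would then be signed with `await manager.acquire()` as its explicit nonce, instead of letting the API pick one.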
The easy way out is to simply catch the error and add a retry mechanism.
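A catch-and-retry helper might look like the sketch below. The function name and parameters are ours, not part of polkadot.js; it retries only when the error message contains the `1014: Priority is too low` text, and waits roughly a block time between attempts.

```typescript
// Hypothetical retry helper: retries an async operation when it fails
// with the low-priority nonce clash, rethrowing any other error.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 5,
  delayMs = 6000, // roughly one Kusama block time (assumption)
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Retry only the nonce clash; anything else fails immediately.
      if (!String(err).includes("1014: Priority is too low")) {
        throw err;
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

A send would then look something like `await withRetry(() => tx.signAndSend(account))`, with `tx` and `account` set up as usual.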
API-WS: disconnected from wss://kusama-rpc.polkadot.io: 1006:: Connection dropped by remote peer.
This is quite common: either the remote RPC node drops you to clean up stalled connections, or the websocket drops after a period of inactivity.
polkadot.js and its WS provider do a great job of reconnecting automatically when this happens, but what if the remote RPC node goes down for a prolonged period of time? Or something else in your script times out while you wait for polkadot.js to reconnect?
The answer, again, is a custom retry mechanism, but this time with an RPC endpoint fallback as well.
For RPC disconnect issues you usually don't have to worry, as polkadot.js will keep trying to reconnect, but if your script cannot handle long disconnects, we can write a wrapper that accepts an array of RPC endpoints and checks the current wsProvider connection status every N1 milliseconds, for up to N2 retries, before disconnecting and reconnecting to the next endpoint in the supplied array.
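The polling-and-failover logic could be sketched like this. The `ProviderLike` interface and function names are our simplification; in real code the provider would be polkadot.js's WsProvider (which does expose an `isConnected` flag and a `disconnect()` method), and `connect` would construct a fresh provider for the given endpoint.

```typescript
// Minimal stand-in for the parts of WsProvider this sketch needs.
interface ProviderLike {
  isConnected: boolean;
  disconnect: () => Promise<void>;
}

// Poll the connection every checkIntervalMs (N1) up to maxChecks (N2)
// times; if still down, drop the provider and connect to the next
// endpoint in the array, rotating so we cycle through the whole list.
async function ensureConnected(
  provider: ProviderLike,
  endpoints: string[],
  connect: (endpoint: string) => Promise<ProviderLike>,
  checkIntervalMs = 1000, // N1 (illustrative default)
  maxChecks = 10,         // N2 (illustrative default)
): Promise<ProviderLike> {
  for (let i = 0; i < maxChecks; i++) {
    if (provider.isConnected) return provider;
    await new Promise((resolve) => setTimeout(resolve, checkIntervalMs));
  }
  // Still down after N1 * N2 ms: fail over.
  await provider.disconnect();
  const next = endpoints.shift();
  if (next === undefined) throw new Error("no RPC endpoints left");
  endpoints.push(next); // move it to the back so the list rotates
  return connect(next);
}
```

Calling `ensureConnected` before each batch of transactions gives the script a bounded wait instead of hanging on a dead connection indefinitely.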
For other errors that can happen while sending transactions, we can implement a retry mechanism similar to the above, but in a signAndSend wrapper.
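One possible shape for such a wrapper is sketched below, with our own naming. It retries any failure with exponential backoff; the `send` thunk stands in for something like `() => tx.signAndSend(account)` in polkadot.js.

```typescript
// Hypothetical signAndSend wrapper: retry any failure up to maxRetries
// times, doubling the wait between attempts (500ms, 1s, 2s, ...).
async function signAndSendWithRetry<T>(
  send: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500, // illustrative default
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await send();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

In practice you would likely filter which errors are worth retrying (a bad signature will never succeed, a dropped connection might), but the backoff skeleton stays the same.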
You can now sleep tight knowing that your overnight backend script, sending tens of thousands of system.remark extrinsics in small chunks, can recover on its own.