# Polkadot llms-full.txt Polkadot. Polkadot unites the world's innovators and changemakers, building and using the most transformative apps and blockchains. Access tools, guides, and resources to quickly start building custom chains, deploying smart contracts, and creating dApps. ## Generated automatically. Do not edit directly. Documentation: https://docs.polkadot.com/ ## List of doc pages: Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/interoperability/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/interoperability/intro-to-xcm.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/interoperability/send-messages.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/interoperability/test-and-debug.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/interoperability/xcm-channels.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/interoperability/xcm-config.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/interoperability/xcm-runtime-apis.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/networks.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/customize-parachain/add-existing-pallets.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/customize-parachain/add-pallet-instances.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/customize-parachain/add-smart-contract-functionality.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/customize-parachain/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/customize-parachain/make-custom-pallet.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/customize-parachain/overview.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/deployment/build-deterministic-runtime.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/deployment/coretime-renewal.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/deployment/generate-chain-specs.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/deployment/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/deployment/manage-coretime.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/deployment/obtain-coretime.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/index.md Doc-Page: 
https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/install-polkadot-sdk.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/intro-polkadot-sdk.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/maintenance/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/maintenance/runtime-metrics-monitoring.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/maintenance/runtime-upgrades.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/maintenance/storage-migrations.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/maintenance/unlock-parachain.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/testing/benchmarking.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/testing/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/testing/mock-runtime.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/parachains/testing/pallet-testing.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/block-explorers/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/connect-to-kusama.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/connect-to-polkadot.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/dev-environments/foundry.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/dev-environments/hardhat.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/dev-environments/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/dev-environments/remix.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/faqs.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/json-rpc-apis.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/libraries/ethers-js.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/libraries/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/libraries/viem.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/libraries/wagmi.md Doc-Page: 
https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/libraries/web3-js.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/libraries/web3-py.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/local-development-node.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/overview.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/precompiles/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/precompiles/interact-with-precompiles.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/precompiles/xcm-precompile.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/smart-contracts/wallets.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/api-libraries/dedot.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/api-libraries/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/api-libraries/papi.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/api-libraries/polkadot-js-api.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/api-libraries/py-substrate-interface.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/api-libraries/sidecar.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/api-libraries/subxt.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/integrations/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/integrations/indexers.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/integrations/oracles.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/integrations/wallets.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/interoperability/asset-transfer-api/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/interoperability/asset-transfer-api/overview.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/interoperability/asset-transfer-api/reference.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/interoperability/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/interoperability/xcm-tools.md Doc-Page: 
https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/e2e-testing/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/e2e-testing/moonwall.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/fork-chains/chopsticks/get-started.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/fork-chains/chopsticks/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/fork-chains/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/light-clients.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/polkadot-omni-node.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/quickstart/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/quickstart/pop-cli.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/rpc-calls.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/spawn-chains/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/spawn-chains/zombienet/get-started.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/spawn-chains/zombienet/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/develop/toolkit/parachains/spawn-chains/zombienet/write-tests.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/get-support/ai-ready-docs.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/get-support/explore-resources.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/get-support/get-in-touch.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/get-support/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/images/README.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-node/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-node/setup-bootnode.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-node/setup-full-node.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-node/setup-secure-wss.md Doc-Page: 
https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/onboarding-and-offboarding/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/onboarding-and-offboarding/key-management.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/onboarding-and-offboarding/start-validating.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/operational-tasks/general-management.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/operational-tasks/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/operational-tasks/pause-validating.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/operational-tasks/upgrade-your-node.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/running-a-validator/requirements.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/staking-mechanics/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/staking-mechanics/offenses-and-slashes.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/infrastructure/staking-mechanics/rewards-payout.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/parachains/consensus.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/parachains/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/parachains/overview.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/polkadot-chain/agile-coretime.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/polkadot-chain/elastic-scaling.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/polkadot-chain/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/polkadot-chain/overview.md Doc-Page: 
https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/polkadot-chain/pos-consensus.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/system-chains/asset-hub.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/system-chains/bridge-hub.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/system-chains/collectives.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/system-chains/coretime.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/system-chains/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/system-chains/overview.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/architecture/system-chains/people.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/glossary.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/onchain-governance/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/onchain-governance/origins-tracks.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/onchain-governance/overview.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/accounts.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/blocks-transactions-fees/blocks.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/blocks-transactions-fees/fees.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/blocks-transactions-fees/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/blocks-transactions-fees/transactions.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/chain-data.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/cryptography.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/data-encoding.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/interoperability.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/networks.md Doc-Page: 
https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/node-and-runtime.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/parachain-basics/randomness.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/smart-contract-basics/accounts.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/smart-contract-basics/blocks-transactions-fees.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/smart-contract-basics/evm-vs-polkavm.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/smart-contract-basics/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/smart-contract-basics/networks.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/smart-contract-basics/overview.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/polkadot-protocol/smart-contract-basics/polkavm-design.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/dapps/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/dapps/remark-tutorial.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/interoperability/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/interoperability/xcm-channels/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/interoperability/xcm-channels/para-to-para.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/interoperability/xcm-channels/para-to-system.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/interoperability/xcm-transfers/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/onchain-governance/fast-track-gov-proposal.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/onchain-governance/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/zero-to-hero/add-pallets-to-runtime.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/zero-to-hero/build-custom-pallet.md Doc-Page: 
https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/zero-to-hero/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-benchmarking.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-unit-testing.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/parachains/zero-to-hero/set-up-a-template.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/system-chains/asset-hub/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/system-chains/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/testing/fork-live-chains.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/testing/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/polkadot-sdk/testing/spawn-basic-chain.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/demo-aplications/deploying-uniswap-v2.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/demo-aplications/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/deploy-erc20.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/deploy-nft.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/launch-your-first-project/create-contracts.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/launch-your-first-project/create-dapp-ethers-js.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/launch-your-first-project/create-dapp-viem.md Doc-Page: 
https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/launch-your-first-project/index.md Doc-Page: https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/refs/heads/main/tutorials/smart-contracts/launch-your-first-project/test-and-deploy-with-hardhat.md

## Full content for each doc page

Doc-Content: https://docs.polkadot.com/develop/ --- BEGIN CONTENT ---
---
title: Develop
description: Explore and learn how to build in the Polkadot ecosystem, from a custom parachain to smart contracts, supported by robust integrations and developer tools.
template: index-page.html
---

# Develop with Polkadot

## Introduction

This guide is a starting point for developers who wish to build in the Polkadot ecosystem. To get the most from this section:

1. Identify your development pathway:
    - [**Parachain developers**](#parachain-developers) - build, deploy, and maintain custom parachains with the Polkadot SDK
    - [**Smart contract developers**](#smart-contract-developers) - leverage smart contracts and execute custom logic over existing chains to streamline your development process
    - [**Application developers**](#application-developers) - leverage Polkadot's underlying protocol features to create solutions for your users to interact with the ecosystem
2. Use the sections under your pathway as follows:
    - **Learn** - content to deepen your knowledge and understanding
    - **Build** - connect to goal-oriented guides and step-by-step tutorials
    - **Tools** - tools commonly used in your pathway
    - **Resources** - resources for your pathway, including references, code repositories, and outside documentation

## Development Pathways

Developers can choose from different development pathways to build applications and core blockchain functionality. Each pathway caters to different types of projects and developer skill sets while complementing one another within the broader network.

The Polkadot ecosystem provides multiple development pathways:

```mermaid
graph TD
    A[Development Pathways]
    A --> B[Parachain Development]
    A --> C[Smart Contract Development]
    A --> D[Application Development]
```

All three pathways can leverage Cross-Consensus Messaging (XCM) to create innovative cross-chain workflows and applications. To get started with XCM, see these resources:

- [**Introduction to XCM**](/develop/interoperability/intro-to-xcm/){target=\_blank} - introduces key concepts, core function definitions, and code examples
- [**XCM Tools**](/develop/toolkit/interoperability/xcm-tools/){target=\_blank} - provides an overview of popular XCM tools
- [**Tutorials for Managing XCM Channels**](/tutorials/interoperability/xcm-channels/){target=\_blank} - guides for using the [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank} UI to establish cross-chain messaging channels

### Parachain Developers

Build, deploy, and maintain custom parachains with the Polkadot SDK.

### Smart Contract Developers

Leverage smart contracts and execute custom logic over existing chains to streamline your development process.

The Polkadot smart contract ecosystem is in active development. Please expect frequent changes. To follow progress, or join the discussion, see [Contracts on AssetHub Roadmap](https://forum.polkadot.network/t/contracts-on-assethub-roadmap/9513/57){target=\_blank} on the Polkadot Network Forum.

### Application Developers

Integrate with the Polkadot blockchain's underlying protocol features to create solutions that allow users to interact with the ecosystem.
## In This Section

:::INSERT_IN_THIS_SECTION:::

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/develop/interoperability/ --- BEGIN CONTENT ---
---
title: Interoperability
description: Learn how Polkadot enables blockchain interoperability through Cross-Consensus Messaging (XCM), powering secure cross-chain communication.
template: index-page.html
---

# Interoperability

This section covers everything you need to know about building and implementing [Cross-Consensus Messaging (XCM)](/develop/interoperability/intro-to-xcm/){target=\_blank} solutions in the Polkadot ecosystem. Whether you're working on establishing cross-chain channels, sending and receiving XCM messages, or testing and debugging your cross-chain configurations, you'll find the essential resources and tools here to support your interoperability needs, regardless of your development focus.

- Not sure where to start? Visit the [Interoperability](/polkadot-protocol/parachain-basics/interoperability/){target=\_blank} overview page to explore different options and find the right fit for your project
- Ready to dive in? Head over to [Send XCM Messages](/develop/interoperability/send-messages/){target=\_blank} to learn how to send a message cross-chain via XCM

## In This Section

:::INSERT_IN_THIS_SECTION:::

## Additional Resources

- **Review the Polkadot SDK's XCM Documentation** - dive into the official documentation to learn about the key components for supporting XCM in your parachain and enabling seamless cross-chain communication
- **Follow Step-by-Step Tutorials** - enhance your XCM skills with step-by-step tutorials on building interoperability solutions on Polkadot SDK-based blockchains
- **Familiarize Yourself with the XCM Format** - gain a deeper understanding of the XCM format and structure, including any extra data it may need and what each part of a message means
- **Essential XCM Tools** - explore essential tools for creating and integrating cross-chain solutions within the Polkadot ecosystem

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/develop/interoperability/intro-to-xcm/ --- BEGIN CONTENT ---
---
title: Introduction to XCM
description: Unlock blockchain interoperability with XCM — Polkadot's Cross-Consensus Messaging format for cross-chain interactions.
categories: Basics, Polkadot Protocol
---

# Introduction to XCM

## Introduction

Polkadot’s unique value lies in its ability to enable interoperability between parachains and other blockchain systems. At the core of this capability is XCM (Cross-Consensus Messaging)—a flexible messaging format that facilitates communication and collaboration between independent consensus systems. With XCM, one chain can express intents to another, fostering a more interconnected ecosystem. Although it was developed specifically for Polkadot, XCM is a universal format, usable in any blockchain environment.

This guide provides an overview of XCM’s core principles, design, and functionality, alongside practical examples of its implementation.

## Messaging Format

XCM is not a protocol but a standardized [messaging format](https://github.com/polkadot-fellows/xcm-format){target=\_blank}. It defines the structure and behavior of messages but does not handle their delivery. This separation allows developers to focus on crafting instructions for target systems without worrying about transmission mechanics.

XCM messages are intent-driven: they outline desired actions for the receiving blockchain to consider and potentially apply to its state. These messages do not directly execute changes; instead, they rely on the host chain's environment to interpret and implement them. By utilizing asynchronous composability, XCM facilitates efficient execution where messages can be processed independently of their original order, similar to how RESTful services handle HTTP requests without requiring sequential processing.

## The Four Principles of XCM

XCM adheres to four guiding principles that ensure robust and reliable communication across consensus systems:

- **Asynchronous** - XCM messages operate independently of sender acknowledgment, avoiding delays due to blocked processes
- **Absolute** - XCM messages are guaranteed to be delivered and interpreted accurately, in order, and in a timely manner. Once a message is sent, one can be sure it will be processed as intended
- **Asymmetric** - XCM messages follow the 'fire and forget' paradigm, meaning no automatic feedback is provided to the sender. Any results must be communicated separately to the sender with an additional message back to the origin
- **Agnostic** - XCM operates independently of the specific consensus mechanisms, making it compatible across diverse systems

These principles guarantee that XCM provides a reliable framework for cross-chain communication, even in complex environments.

## The XCM Tech Stack

![Diagram of the XCM tech stack](/images/develop/interoperability/intro-to-xcm/intro-to-xcm-01.webp)

The XCM tech stack is designed to facilitate seamless, interoperable communication between chains that reside within the Polkadot ecosystem. XCM can be used to express the meaning of the messages over each of the communication channels.

## Core Functionalities of XCM

XCM enhances cross-consensus communication by introducing several powerful features:

- **Programmability** - supports dynamic message handling, allowing for more comprehensive use cases.
Includes branching logic, safe dispatches for version checks, and asset operations like NFT management
- **Functional Multichain Decomposition** - enables mechanisms such as remote asset locking, asset namespacing, and inter-chain state referencing, with contextual message identification
- **Bridging** - establishes a universal reference framework for multi-hop setups, connecting disparate systems like Ethereum and Bitcoin, with the Polkadot relay chain acting as a universal location

The standardized format for messages allows parachains to handle tasks like user balances, governance, and staking, freeing the Polkadot relay chain to focus on shared security. These features make XCM indispensable for implementing scalable and interoperable blockchain applications.

## XCM Example

The following is a simplified XCM message demonstrating a token transfer from Alice to Bob on the same chain (ParaA).

```rust
let message = Xcm(vec![
    WithdrawAsset((Here, amount).into()),
    BuyExecution {
        fees: (Here, amount).into(),
        weight_limit: WeightLimit::Unlimited,
    },
    DepositAsset {
        assets: All.into(),
        beneficiary: MultiLocation {
            parents: 0,
            interior: Junction::AccountId32 {
                network: None,
                id: BOB.clone().into(),
            }
            .into(),
        }
        .into(),
    },
]);
```

The message consists of three instructions described as follows:

- [**WithdrawAsset**](https://github.com/polkadot-fellows/xcm-format?tab=readme-ov-file#withdrawasset){target=\_blank} - transfers a specified number of tokens from Alice's account to a holding register

    ```rust
    WithdrawAsset((Here, amount).into()),
    ```

    - `Here` - the native parachain token
    - `amount` - the number of tokens that are transferred

    The first instruction takes as an input the MultiAsset that should be withdrawn. The MultiAsset describes the native parachain token with the `Here` keyword. The `amount` parameter is the number of tokens that are transferred. The withdrawal account depends on the origin of the message. In this example, the origin of the message is Alice. The `WithdrawAsset` instruction moves `amount` number of native tokens from Alice's account into the holding register.

- [**BuyExecution**](https://github.com/polkadot-fellows/xcm-format?tab=readme-ov-file#buyexecution){target=\_blank} - allocates fees to cover the execution [weight](/polkadot-protocol/glossary/#weight){target=\_blank} of the XCM instructions

    ```rust
    BuyExecution {
        fees: (Here, amount).into(),
        weight_limit: WeightLimit::Unlimited,
    },
    ```

    - `fees` - describes the asset in the holding register that should be used to pay for the weight
    - `weight_limit` - defines the maximum fees that can be used to buy weight

- [**DepositAsset**](https://github.com/polkadot-fellows/xcm-format?tab=readme-ov-file#depositasset){target=\_blank} - moves the remaining tokens from the holding register to Bob’s account

    ```rust
    DepositAsset {
        assets: All.into(),
        beneficiary: MultiLocation {
            parents: 0,
            interior: Junction::AccountId32 {
                network: None,
                id: BOB.clone().into(),
            }
            .into(),
        }
        .into(),
    }
    ```

    - `All` - the wildcard for the asset(s) to be deposited. In this case, all assets in the holding register should be deposited

This step-by-step process showcases how XCM enables precise state changes within a blockchain system. You can find a complete XCM message example in the [XCM repository](https://github.com/paritytech/xcm-docs/blob/main/examples/src/0_first_look/mod.rs){target=\_blank}.
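To see the message in action, it can be executed locally through `pallet-xcm` from a mocked test network. The following is a minimal sketch in the spirit of the linked example; the chain and account names (`ParaA`, `ParachainPalletXcm`, `ALICE`) and the weight values are assumptions borrowed from such a mock setup, not a fixed API:

```rust
use frame_support::{assert_ok, weights::Weight};
use xcm::VersionedXcm;

ParaA::execute_with(|| {
    // Execute the three-instruction program above from Alice's signed origin;
    // the withdrawal therefore happens from Alice's account.
    assert_ok!(ParachainPalletXcm::execute(
        parachain::RuntimeOrigin::signed(ALICE),
        Box::new(VersionedXcm::from(message.clone())),
        Weight::from_parts(100_000_000_000, 1024 * 1024),
    ));
    // Bob should now hold `amount` more tokens, less any execution fees
    // consumed by BuyExecution.
});
```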
## Overview

XCM revolutionizes cross-chain communication by enabling use cases such as:

- Token transfers between blockchains
- Asset locking for cross-chain smart contract interactions
- Remote execution of functions on other blockchains

These functionalities empower developers to build innovative, multi-chain applications, leveraging the strengths of various blockchain networks. To stay updated on XCM’s evolving format or contribute, visit the [XCM repository](https://github.com/paritytech/xcm-docs/blob/main/examples/src/0_first_look/mod.rs){target=\_blank}.

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/develop/interoperability/send-messages/ --- BEGIN CONTENT ---
---
title: Send XCM Messages
description: Send cross-chain messages using XCM, Polkadot's Cross-Consensus Messaging format, designed to support secure communication between chains.
categories: Basics, Polkadot Protocol
---

# Send XCM Messages

## Introduction

One of the core FRAME pallets that enables parachains to engage in cross-chain communication using the Cross-Consensus Message (XCM) format is [`pallet-xcm`](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/index.html){target=\_blank}. It facilitates the sending, execution, and management of XCM messages, thereby allowing parachains to interact with other chains within the ecosystem. Additionally, `pallet-xcm`, also referred to as the XCM pallet, supports essential operations like asset transfers, version negotiation, and message routing.

This page provides a detailed overview of the XCM pallet's key features, its primary roles in XCM operations, and the main extrinsics it offers. Whether aiming to execute XCM messages locally or send them to external chains, this guide covers the foundational concepts and practical applications you need to know.

## XCM FRAME Pallet Overview

The [`pallet-xcm`](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/index.html){target=\_blank} provides a set of pre-defined, commonly used [XCVM programs](https://github.com/polkadot-fellows/xcm-format?tab=readme-ov-file#12-the-xcvm){target=\_blank} in the form of a [set of extrinsics](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/dispatchables/index.html){target=\_blank}. This pallet provides some [default implementations](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/struct.Pallet.html#implementations){target=\_blank} for traits required by [`XcmConfig`](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm_benchmarks/trait.Config.html#associatedtype.XcmConfig){target=\_blank}. The [XCM executor](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/struct.XcmExecutor.html){target=\_blank} is also included as an associated type within the pallet's configuration. For further details about the XCM configuration, see the [XCM Configuration](/develop/interoperability/xcm-config/){target=\_blank} page.

Where the [XCM format](https://github.com/polkadot-fellows/xcm-format){target=\_blank} defines a set of instructions used to construct XCVM programs, `pallet-xcm` defines a set of extrinsics that can be utilized to build XCVM programs, either to target the local or external chains.
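Since `pallet-xcm` is a regular FRAME pallet, it is declared in the runtime alongside the transport-related pallets it relies on. The following is a hedged sketch of how such a declaration commonly looks; the exact pallet selection and names are assumptions that vary per runtime:

```rust
use frame_support::construct_runtime;

construct_runtime!(
    pub enum Runtime {
        // Core system pallets.
        System: frame_system,
        Balances: pallet_balances,
        // Parachain plumbing and XCM transport (illustrative selection).
        ParachainSystem: cumulus_pallet_parachain_system,
        XcmpQueue: cumulus_pallet_xcmp_queue,
        MessageQueue: pallet_message_queue,
        // The XCM pallet discussed on this page.
        PolkadotXcm: pallet_xcm,
    }
);
```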
The `pallet-xcm` functionality is divided into three categories:

- **Primitive** - dispatchable functions to execute XCM locally
- **High-level** - functions for asset transfers between chains
- **Version negotiation-specific** - functions for managing XCM version compatibility

### Key Roles of the XCM Pallet

The XCM pallet plays a central role in managing cross-chain messages, with its primary responsibilities including:

- **Execute XCM messages** - interacts with the XCM executor to validate and execute messages, adhering to predefined security and filter criteria
- **Send messages across chains** - allows authorized origins to send XCM messages, enabling controlled cross-chain communication
- **Reserve-based transfers and teleports** - supports asset movement between chains, governed by filters that restrict operations to authorized origins
- **XCM version negotiation** - ensures compatibility by selecting the appropriate XCM version for inter-chain communication
- **Asset trapping and recovery** - manages trapped assets, enabling safe reallocation or recovery when issues occur during cross-chain transfers
- **Support for XCVM operations** - oversees state and configuration requirements necessary for executing cross-consensus programs within the XCVM framework

## Primary Extrinsics of the XCM Pallet

This page will highlight the two **Primary Primitive Calls** responsible for sending and executing XCVM programs as dispatchable functions within the pallet.

### Execute

The [`execute`](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/enum.Call.html#variant.execute){target=\_blank} call directly interacts with the XCM executor, allowing for the execution of XCM messages originating from a locally signed origin. The executor validates the message, ensuring it complies with any configured barriers or filters before executing. Once validated, the message is executed locally, and an event is emitted to indicate the result—whether the message was fully executed or only partially completed. Execution is capped by a maximum weight ([`max_weight`](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/enum.Call.html#variant.execute.field.max_weight){target=\_blank}); if the required weight exceeds this limit, the message will not be executed.

```rust
pub fn execute(
    message: Box<VersionedXcm<<T as SysConfig>::RuntimeCall>>,
    max_weight: Weight,
)
```

For further details about the `execute` extrinsic, see the [`pallet-xcm` documentation](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/struct.Pallet.html){target=\_blank}.

!!!warning
    Partial execution of messages may occur depending on the constraints or barriers applied.

### Send

The [`send`](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/enum.Call.html#variant.send){target=\_blank} call enables XCM messages to be sent to a specified destination. This could be a parachain, smart contract, or any external system governed by consensus. Unlike the `execute` call, the message is not executed locally but is transported to the destination chain for processing. The destination is defined using a [Location](https://paritytech.github.io/polkadot-sdk/master/xcm_docs/glossary/index.html#location){target=\_blank}, which describes the target chain or system. This ensures precise delivery through the configured XCM transport mechanism.
```rust
pub fn send(
    dest: Box<VersionedLocation>,
    message: Box<VersionedXcm<<T as SysConfig>::RuntimeCall>>,
)
```

For further information about the `send` extrinsic, see the [`pallet-xcm` documentation](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/struct.Pallet.html){target=\_blank}.

## XCM Router

The [`XcmRouter`](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm/pallet/trait.Config.html#associatedtype.XcmRouter){target=\_blank} is a critical component that the XCM pallet requires to facilitate sending XCM messages. It defines where messages can be sent and determines the appropriate XCM transport protocol for the operation.

For instance, the Kusama network employs the [`ChildParachainRouter`](https://paritytech.github.io/polkadot-sdk/master/polkadot_runtime_common/xcm_sender/struct.ChildParachainRouter.html){target=\_blank}, which restricts routing to [Downward Message Passing (DMP)](https://wiki.polkadot.network/learn/learn-xcm-transport/#dmp-downward-message-passing){target=\_blank} from the relay chain to parachains, ensuring secure and controlled communication.

```rust
pub type XcmRouter = WithUniqueTopic<(
    // Only one router so far - use DMP to communicate with child parachains.
    ChildParachainRouter<Runtime, XcmPallet, PriceForChildParachainDelivery>,
)>;
```

For more details about XCM transport protocols, see the [XCM Channels](/develop/interoperability/xcm-channels/){target=\_blank} page.

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/develop/interoperability/test-and-debug/ --- BEGIN CONTENT ---
---
title: Testing and Debugging
description: Learn how to test and debug cross-chain communication via the XCM Emulator to ensure interoperability and reliable execution.
categories: Basics, Polkadot Protocol
---

# Testing and Debugging

## Introduction

Cross-Consensus Messaging (XCM) is a core feature of the Polkadot ecosystem, enabling communication between parachains, relay chains, and system chains. To ensure the reliability of XCM-powered blockchains, thorough testing and debugging are essential before production deployment. This guide covers the XCM Emulator, a tool designed to facilitate onboarding and testing for developers. Use the emulator if:

- A live runtime is not yet available
- Extensive configuration adjustments are needed, as emulated chains differ from live networks
- Rust-based tests are preferred for automation and integration

For scenarios where real blockchain state is required, [Chopsticks](/tutorials/polkadot-sdk/testing/fork-live-chains/#xcm-testing){target=\_blank} allows testing with any client compatible with Polkadot SDK-based chains.

## XCM Emulator

Setting up a live network with multiple interconnected parachains for XCM testing can be complex and resource-intensive. The [`xcm-emulator`](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/cumulus/xcm/xcm-emulator){target=\_blank} is a tool designed to simulate the execution of XCM programs using predefined runtime configurations. These configurations include those utilized by live networks like Kusama, Polkadot, and Asset Hub.

This tool enables testing of cross-chain message passing, providing a way to verify outcomes, weights, and side effects efficiently. It achieves this by utilizing mocked runtimes for both the relay chain and connected parachains, enabling developers to focus on message logic and configuration without needing a live network.

The `xcm-emulator` relies on transport layer pallets. However, the messages do not leverage the same messaging infrastructure as live networks since the transport mechanism is mocked. Additionally, consensus-related events, such as disputes and staking events, are not covered; parachains should use end-to-end (E2E) tests to validate these events.
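Before diving into the trade-offs and macros, the following hedged sketch shows the general shape of an emulator-based test; the chain type (`AssetHubWestend`) comes from the declaration macros shown later on this page, and the asserted event is illustrative:

```rust
#[test]
fn incoming_message_is_processed() {
    // Run assertions inside the emulated Asset Hub runtime.
    AssetHubWestend::execute_with(|| {
        type RuntimeEvent = <AssetHubWestend as xcm_emulator::Chain>::RuntimeEvent;
        // Check that the mocked transport delivered a message and the
        // runtime's message queue processed it successfully.
        assert_expected_events!(
            AssetHubWestend,
            vec![
                RuntimeEvent::MessageQueue(
                    pallet_message_queue::Event::Processed { success: true, .. }
                ) => {},
            ]
        );
    });
}
```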
### Advantages and Limitations

The XCM Emulator provides both advantages and limitations when testing cross-chain communication in simulated environments.

- **Advantages**:
    - **Interactive debugging** - offers tracing capabilities similar to EVM, enabling detailed analysis of issues
    - **Runtime composability** - facilitates testing and integration of multiple runtime components
    - **Immediate feedback** - supports Test-Driven Development (TDD) by providing rapid test results
    - **Seamless integration testing** - simplifies the process of testing new runtime versions in an isolated environment
- **Limitations**:
    - **Simplified emulation** - always assumes message delivery, which may not mimic real-world network behavior
    - **Dependency challenges** - requires careful management of dependency versions and patching. Refer to the [Cargo dependency documentation](https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html){target=\_blank}
    - **Compilation overhead** - testing environments can be resource-intensive, requiring frequent compilation updates

### How Does It Work?

The `xcm-emulator` provides macros for defining a mocked testing environment. Check all the existing macros and functionality in the [XCM Emulator source code](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/cumulus/xcm/xcm-emulator/src/lib.rs){target=\_blank}. The most important macros are:

- [**`decl_test_relay_chains`**](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/cumulus/xcm/xcm-emulator/src/lib.rs#L355){target=\_blank} - defines runtime and configuration for the relay chains. Example:

    ```rust
    // Westend declaration
    decl_test_relay_chains! {
        #[api_version(11)]
        pub struct Westend {
            genesis = genesis::genesis(),
            on_init = (),
            runtime = westend_runtime,
            core = {
                SovereignAccountOf: westend_runtime::xcm_config::LocationConverter,
            },
            pallets = {
                XcmPallet: westend_runtime::XcmPallet,
                Sudo: westend_runtime::Sudo,
                Balances: westend_runtime::Balances,
                Treasury: westend_runtime::Treasury,
                AssetRate: westend_runtime::AssetRate,
                Hrmp: westend_runtime::Hrmp,
                Identity: westend_runtime::Identity,
                IdentityMigrator: westend_runtime::IdentityMigrator,
            }
        },
    }
    ```

- [**`decl_test_parachains`**](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/cumulus/xcm/xcm-emulator/src/lib.rs#L590){target=\_blank} - defines runtime and configuration for the parachains. Example:

    ```rust
    // AssetHubWestend Parachain declaration
    decl_test_parachains! {
        pub struct AssetHubWestend {
            genesis = genesis::genesis(),
            on_init = {
                asset_hub_westend_runtime::AuraExt::on_initialize(1);
            },
            runtime = asset_hub_westend_runtime,
            core = {
                XcmpMessageHandler: asset_hub_westend_runtime::XcmpQueue,
                LocationToAccountId: asset_hub_westend_runtime::xcm_config::LocationToAccountId,
                ParachainInfo: asset_hub_westend_runtime::ParachainInfo,
                MessageOrigin: cumulus_primitives_core::AggregateMessageOrigin,
            },
            pallets = {
                PolkadotXcm: asset_hub_westend_runtime::PolkadotXcm,
                Balances: asset_hub_westend_runtime::Balances,
                Assets: asset_hub_westend_runtime::Assets,
                ForeignAssets: asset_hub_westend_runtime::ForeignAssets,
                PoolAssets: asset_hub_westend_runtime::PoolAssets,
                AssetConversion: asset_hub_westend_runtime::AssetConversion,
            }
        },
    }
    ```

- [**`decl_test_bridges`**](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/cumulus/xcm/xcm-emulator/src/lib.rs#L1178){target=\_blank} - creates bridges between chains, specifying the source, target, and message handler. Example:

    ```rust
    decl_test_bridges! {
        pub struct RococoWestendMockBridge {
            source = BridgeHubRococoPara,
            target = BridgeHubWestendPara,
            handler = RococoWestendMessageHandler
        },
        pub struct WestendRococoMockBridge {
            source = BridgeHubWestendPara,
            target = BridgeHubRococoPara,
            handler = WestendRococoMessageHandler
        }
    }
    ```

- [**`decl_test_networks`**](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/cumulus/xcm/xcm-emulator/src/lib.rs#L916){target=\_blank} - defines a testing network with relay chains, parachains, and bridges, implementing message transport and processing logic. Example:

    ```rust
    decl_test_networks! {
        pub struct WestendMockNet {
            relay_chain = Westend,
            parachains = vec![
                AssetHubWestend,
                BridgeHubWestend,
                CollectivesWestend,
                CoretimeWestend,
                PeopleWestend,
                PenpalA,
                PenpalB,
            ],
            bridge = ()
        },
    }
    ```

By leveraging these macros, developers can customize their testing networks by defining relay chains and parachains tailored to their needs. For guidance on implementing a mock runtime for a Polkadot SDK-based chain, refer to the [Pallet Testing](/develop/parachains/testing/pallet-testing/){target=\_blank} article.

This framework enables thorough testing of runtime and cross-chain interactions, allowing developers to effectively design, test, and optimize cross-chain functionality. To see a complete example of implementing and executing tests, refer to the [integration tests](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/cumulus/parachains/integration-tests/emulated){target=\_blank} in the Polkadot SDK repository.

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/develop/interoperability/xcm-channels/ --- BEGIN CONTENT ---
---
title: XCM Channels
description: Learn how Polkadot's cross-consensus messaging (XCM) channels connect parachains, facilitating communication and blockchain interaction.
categories: Basics, Polkadot Protocol
---

# XCM Channels

## Introduction

Polkadot is designed to enable interoperability between its connected parachains. At the core of this interoperability is the [Cross-Consensus Message Format (XCM)](/develop/interoperability/intro-to-xcm/){target=\_blank}, a standard language that allows parachains to communicate and interact with each other. The network-layer protocol responsible for delivering XCM-formatted messages between parachains is the Cross-Chain Message Passing (XCMP) protocol.
XCMP maintains messaging queues on the relay chain, serving as a bridge to facilitate cross-chain interactions. As XCMP is still under development, Polkadot has implemented a temporary alternative called Horizontal Relay-routed Message Passing (HRMP). HRMP offers the same interface and functionality as the planned XCMP, but with a crucial difference: it stores all messages directly in the relay chain’s storage, which is more resource-intensive.

Once XCMP is fully implemented, HRMP will be deprecated in favor of the native XCMP protocol. XCMP will offer a more efficient and scalable solution for cross-chain message passing, as it will not require the relay chain to store all the messages.

## Establishing HRMP Channels

To enable communication between parachains using the HRMP protocol, the parachains must explicitly establish communication channels by registering them on the relay chain. Downward and upward channels from and to the relay chain are implicitly available, meaning they do not need to be explicitly opened.

Opening an HRMP channel requires the parachains involved to make a deposit on the relay chain. This deposit serves a specific purpose: it covers the costs associated with using the relay chain's storage for the message queues linked to the channel. The amount of this deposit varies based on parameters defined by the specific relay chain being used.

### Relay Chain Parameters

Each Polkadot relay chain has a set of configurable parameters that control the behavior of the message channels between parachains. These parameters include `hrmpSenderDeposit`, `hrmpRecipientDeposit`, `hrmpChannelMaxMessageSize`, `hrmpChannelMaxCapacity`, and more. When a parachain wants to open a new channel, it must consider these parameter values to ensure the channel is configured correctly.

To view the current values of these parameters in the Polkadot network:

1. Visit [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fpolkadot.api.onfinality.io%2Fpublic-ws#/explorer){target=\_blank}, navigate to the **Developer** dropdown, and select the **Chain state** option

    ![](/images/develop/interoperability/xcm-channels/xcm-channels-1.webp)

2. Query the chain configuration parameters. The result will display the current settings for all the Polkadot network parameters, including the HRMP channel settings:

    1. Select **`configuration`**
    2. Choose the **`activeConfig()`** call
    3. Click the **+** button to execute the query
    4. Check the chain configuration

    ![](/images/develop/interoperability/xcm-channels/xcm-channels-2.webp)

### Dispatching Extrinsics

Establishing new HRMP channels between parachains requires dispatching specific extrinsic calls on the Polkadot relay chain from the parachain's origin. The most straightforward approach is to implement the channel opening logic off-chain, then use the XCM pallet's `send` extrinsic to submit the necessary instructions to the relay chain. However, the ability to send arbitrary programs through the `Transact` instruction in XCM is typically restricted to privileged origins, such as the `sudo` pallet or governance mechanisms.
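As a rough illustration, the XCM program submitted through `send` typically wraps the relay chain call in a `Transact` instruction. The sketch below uses XCM v4-style field names; `encoded_hrmp_call` stands for the SCALE-encoded `hrmp.hrmpInitOpenChannel` relay chain call, and the amounts and weights are placeholders:

```rust
// Hedged sketch: withdraw funds from the parachain's sovereign account on the
// relay chain, buy execution, then dispatch the pre-encoded HRMP call.
let message = Xcm(vec![
    WithdrawAsset((Here, deposit_and_fees).into()),
    BuyExecution {
        fees: (Here, deposit_and_fees).into(),
        weight_limit: WeightLimit::Unlimited,
    },
    Transact {
        origin_kind: OriginKind::Native,
        require_weight_at_most: Weight::from_parts(1_000_000_000, 200_000),
        call: encoded_hrmp_call.into(),
    },
]);
```

The tutorials linked at the end of this page walk through constructing the actual call data and deposit values.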
Parachain developers have a few options for triggering the required extrinsic calls from their parachain's origin, depending on the configuration and access controls defined:

- **Sudo** - if the parachain has a `sudo` pallet configured, the sudo key holder can use the sudo extrinsic to dispatch the necessary channel opening calls
- **Governance** - the parachain's governance system, such as a council or OpenGov, can be used to authorize the channel opening calls
- **Privileged accounts** - the parachain may have other designated privileged accounts that are allowed to dispatch the HRMP channel opening extrinsics

## Where to Go Next Explore the following tutorials for detailed, step-by-step guidance on setting up cross-chain communication channels in Polkadot:
- Tutorial __Opening HRMP Channels Between Parachains__ --- Learn how to open HRMP channels between parachains on Polkadot. Discover the step-by-step process for establishing unidirectional and bidirectional communication. [:octicons-arrow-right-24: Reference](/tutorials/interoperability/xcm-channels/para-to-para/)
- Tutorial __Opening HRMP Channels with System Parachains__ --- Learn how to open HRMP channels with Polkadot system parachains. Discover the process for establishing bidirectional communication using a single XCM message. [:octicons-arrow-right-24: Reference](/tutorials/interoperability/xcm-channels/para-to-system/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/interoperability/xcm-config/ --- BEGIN CONTENT --- --- title: XCM Config description: Learn how the XCM Executor configuration works for your custom Polkadot SDK-based runtime with detailed guidance and references. categories: Reference, Polkadot Protocol --- # XCM Config ## Introduction The [XCM executor](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/index.html){target=\_blank} is a crucial component responsible for interpreting and executing XCM messages (XCMs) within Polkadot SDK-based chains. It processes and manages XCM instructions, ensuring they are executed correctly and sequentially. Adhering to the [Cross-Consensus Virtual Machine (XCVM) specification](https://paritytech.github.io/xcm-docs/overview/xcvm.html#the-xcvm){target=\_blank}, the XCM executor can be customized or replaced with an alternative that also complies with the [XCVM standards](https://github.com/polkadot-fellows/xcm-format?tab=readme-ov-file#12-the-xcvm){target=\_blank}. The `XcmExecutor` is not a pallet but a struct parameterized by a `Config` trait. The `Config` trait is the inner configuration, parameterizing the outer `XcmExecutor` struct. Both configurations are set up within the runtime. The executor is highly configurable, with the [XCM builder](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/index.html){target=\_blank} offering building blocks to tailor the configuration to specific needs. While these blocks serve as a foundation, users can also create their own building blocks to address unique needs. This article examines the XCM configuration process, explains each configurable item, and provides examples of the tools and types available to help customize these settings. ## XCM Executor Configuration The `Config` trait defines the XCM executor's configuration, which requires several associated types. Each type has specific trait bounds that the concrete implementation must fulfill. Some types, such as `RuntimeCall`, come with a default implementation in most cases, while others use the unit type `()` as the default. For many of these types, carefully selecting the appropriate implementation is crucial. Predefined solutions and building blocks can be adapted to your specific needs. These solutions can be found in the [`xcm-builder`](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/polkadot/xcm/xcm-builder){target=\_blank} folder.
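To ground the inner/outer relationship, the following minimal sketch shows how a runtime might supply the inner `Config` and hand it to the outer `XcmExecutor`. The aliases `XcmRouter`, `LocalAssetTransactor`, and `Barrier` are illustrative placeholders a runtime would define elsewhere, and most associated types are elided:

```rust
// A minimal sketch, assuming placeholder aliases defined elsewhere in the
// runtime; it is not a complete, compiling configuration.
pub struct XcmConfig;
impl xcm_executor::Config for XcmConfig {
    type RuntimeCall = RuntimeCall;              // the runtime's call enum
    type XcmSender = XcmRouter;                  // transport for outgoing XCMs
    type AssetTransactor = LocalAssetTransactor; // moves assets between accounts
    type Barrier = Barrier;                      // firewall for incoming XCMs
    // ... remaining associated types elided ...
}

// The outer executor struct is parameterized by the inner config.
pub type LocalXcmExecutor = xcm_executor::XcmExecutor<XcmConfig>;
```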
Each type is explained below, along with an overview of some of its implementations:

```rust
pub trait Config {
    type RuntimeCall: Parameter + Dispatchable + GetDispatchInfo;
    type XcmSender: SendXcm;
    type AssetTransactor: TransactAsset;
    type OriginConverter: ConvertOrigin<<Self::RuntimeCall as Dispatchable>::RuntimeOrigin>;
    type IsReserve: ContainsPair<MultiAsset, MultiLocation>;
    type IsTeleporter: ContainsPair<MultiAsset, MultiLocation>;
    type Aliasers: ContainsPair<MultiLocation, MultiLocation>;
    type UniversalLocation: Get<InteriorMultiLocation>;
    type Barrier: ShouldExecute;
    type Weigher: WeightBounds<Self::RuntimeCall>;
    type Trader: WeightTrader;
    type ResponseHandler: OnResponse;
    type AssetTrap: DropAssets;
    type AssetClaims: ClaimAssets;
    type AssetLocker: AssetLock;
    type AssetExchanger: AssetExchange;
    type SubscriptionService: VersionChangeNotifier;
    type PalletInstancesInfo: PalletsInfoAccess;
    type MaxAssetsIntoHolding: Get<u32>;
    type FeeManager: FeeManager;
    type MessageExporter: ExportXcm;
    type UniversalAliases: Contains<(MultiLocation, Junction)>;
    type CallDispatcher: CallDispatcher<Self::RuntimeCall>;
    type SafeCallFilter: Contains<Self::RuntimeCall>;
    type TransactionalProcessor: ProcessTransaction;
    type HrmpNewChannelOpenRequestHandler: HandleHrmpNewChannelOpenRequest;
    type HrmpChannelAcceptedHandler: HandleHrmpChannelAccepted;
    type HrmpChannelClosingHandler: HandleHrmpChannelClosing;
    type XcmRecorder: RecordXcm;
}
```

## Config Items Each configuration item is explained below, detailing the associated type's purpose and role in the XCM executor. Many of these types have predefined solutions available in the `xcm-builder`. The available configuration items are: - [**`RuntimeCall`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.RuntimeCall){target=\_blank} - defines the runtime's callable functions, created via the [`frame::runtime`](https://paritytech.github.io/polkadot-sdk/master/frame_support/attr.runtime.html){target=\_blank} macro. It represents an enum listing the callable functions of all implemented pallets ```rust type RuntimeCall: Parameter + Dispatchable + GetDispatchInfo; ``` The associated traits signify: - `Parameter` - ensures the type is encodable, decodable, and usable as a parameter - `Dispatchable` - indicates it can be executed in the runtime - `GetDispatchInfo` - provides weight details, determining how long execution takes - [**`XcmSender`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.XcmSender){target=\_blank} - implements the [`SendXcm`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm/v4/trait.SendXcm.html){target=\_blank} trait, specifying how the executor sends XCMs using transport layers (e.g., UMP for relay chains or XCMP for sibling chains). If a runtime lacks certain transport layers, such as [HRMP](https://wiki.polkadot.network/learn/learn-xcm-transport/#hrmp-xcmp-lite){target=\_blank} (or [XCMP](https://wiki.polkadot.network/learn/learn-xcm-transport/#xcmp-cross-consensus-message-passing-design-summary){target=\_blank}), messages bound for those routes cannot be sent ```rust type XcmSender: SendXcm; ``` - [**`AssetTransactor`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.AssetTransactor){target=\_blank} - implements the [`TransactAsset`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/traits/trait.TransactAsset.html){target=\_blank} trait, handling the conversion and transfer of MultiAssets between accounts or registers.
It can be configured to support native tokens, fungibles, and non-fungibles or multiple tokens using pre-defined adapters like [`FungibleAdapter`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/struct.FungibleAdapter.html){target=\_blank} or custom solutions (a concrete sketch appears after this list) ```rust type AssetTransactor: TransactAsset; ``` - [**`OriginConverter`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.OriginConverter){target=\_blank} - implements the [`ConvertOrigin`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/traits/trait.ConvertOrigin.html){target=\_blank} trait to map `MultiLocation` origins to `RuntimeOrigin`. Multiple implementations can be combined, and [`OriginKind`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/test_utils/enum.OriginKind.html){target=\_blank} is used to resolve conflicts. Pre-defined converters like [`SovereignSignedViaLocation`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/struct.SovereignSignedViaLocation.html){target=\_blank} and [`SignedAccountId32AsNative`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/struct.SignedAccountId32AsNative.html){target=\_blank} handle sovereign and local accounts, respectively ```rust type OriginConverter: ConvertOrigin<<Self::RuntimeCall as Dispatchable>::RuntimeOrigin>; ``` - [**`IsReserve`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.IsReserve){target=\_blank} - specifies trusted `(MultiAsset, MultiLocation)` pairs for depositing reserve assets. Using the unit type `()` blocks reserve deposits. The [`NativeAsset`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/struct.NativeAsset.html){target=\_blank} struct is an example of a reserve implementation ```rust type IsReserve: ContainsPair<MultiAsset, MultiLocation>; ``` - [**`IsTeleporter`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.IsTeleporter){target=\_blank} - defines trusted `(MultiAsset, MultiLocation)` pairs for teleporting assets to the chain. Using `()` blocks the [`ReceiveTeleportedAsset`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/test_utils/enum.Instruction.html#variant.ReceiveTeleportedAsset){target=\_blank} instruction.
The [`NativeAsset`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/struct.NativeAsset.html){target=\_blank} struct can act as an implementation ```rust type IsTeleporter: ContainsPair<MultiAsset, MultiLocation>; ``` - [**`Aliasers`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.Aliasers){target=\_blank} - a list of `(Origin, Target)` pairs enabling each `Origin` to be replaced with its corresponding `Target` ```rust type Aliasers: ContainsPair<MultiLocation, MultiLocation>; ``` - [**`UniversalLocation`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.UniversalLocation){target=\_blank} - specifies the runtime's location in the consensus universe ```rust type UniversalLocation: Get<InteriorMultiLocation>; ``` - Some examples are: - `X1(GlobalConsensus(NetworkId::Polkadot))` for Polkadot - `X1(GlobalConsensus(NetworkId::Kusama))` for Kusama - `X2(GlobalConsensus(NetworkId::Polkadot), Parachain(1000))` for Statemint - [**`Barrier`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.Barrier){target=\_blank} - implements the [`ShouldExecute`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/traits/trait.ShouldExecute.html){target=\_blank} trait, functioning as a firewall for XCM execution. Multiple barriers can be combined in a tuple, where checking stops as soon as one of them approves execution ```rust type Barrier: ShouldExecute; ``` - [**`Weigher`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.Weigher){target=\_blank} - calculates the weight of XCMs and instructions, enforcing limits and refunding unused weight. Common solutions include [`FixedWeightBounds`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/struct.FixedWeightBounds.html){target=\_blank}, which uses a base weight and limits on instructions ```rust type Weigher: WeightBounds<Self::RuntimeCall>; ``` - [**`Trader`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.Trader){target=\_blank} - manages asset-based weight purchases and refunds for `BuyExecution` instructions. The [`UsingComponents`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_builder/struct.UsingComponents.html){target=\_blank} trader is a common implementation ```rust type Trader: WeightTrader; ``` - [**`ResponseHandler`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.ResponseHandler){target=\_blank} - handles `QueryResponse` instructions, implementing the [`OnResponse`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/traits/trait.OnResponse.html){target=\_blank} trait. FRAME systems typically use the pallet-xcm implementation ```rust type ResponseHandler: OnResponse; ``` - [**`AssetTrap`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.AssetTrap){target=\_blank} - handles leftover assets in the holding register after XCM execution, allowing them to be claimed via `ClaimAsset`. If unsupported, assets are burned ```rust type AssetTrap: DropAssets; ``` - [**`AssetClaims`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.AssetClaims){target=\_blank} - facilitates the claiming of trapped assets during the execution of the `ClaimAsset` instruction.
Commonly implemented via pallet-xcm ```rust type AssetClaims: ClaimAssets; ``` - [**`AssetLocker`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.AssetLocker){target=\_blank} - handles the locking and unlocking of assets. Can be omitted using `()` if asset locking is unnecessary ```rust type AssetLocker: AssetLock; ``` - [**`AssetExchanger`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.AssetExchanger){target=\_blank} - implements the [`AssetExchange`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/traits/trait.AssetExchange.html){target=\_blank} trait to manage asset exchanges during the `ExchangeAsset` instruction. The unit type `()` disables this functionality ```rust type AssetExchanger: AssetExchange; ``` - [**`SubscriptionService`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.SubscriptionService){target=\_blank} - manages `(Un)SubscribeVersion` instructions and returns the XCM version via `QueryResponse`. Typically implemented by pallet-xcm ```rust type SubscriptionService: VersionChangeNotifier; ``` - [**`PalletInstancesInfo`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.PalletInstancesInfo){target=\_blank} - provides runtime pallet information for `QueryPallet` and `ExpectPallet` instructions. FRAME-specific systems often use this, or it can be disabled with `()` ```rust type PalletInstancesInfo: PalletsInfoAccess; ``` - [**`MaxAssetsIntoHolding`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.MaxAssetsIntoHolding){target=\_blank} - limits the number of assets in the [Holding register](https://wiki.polkadot.network/learn/learn-xcm/#holding-register){target=\_blank}. At most, twice this limit can be held under worst-case conditions ```rust type MaxAssetsIntoHolding: Get<u32>; ``` - [**`FeeManager`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.FeeManager){target=\_blank} - manages fees for XCM instructions, determining whether fees should be paid, waived, or handled in specific ways. Fees can be waived entirely using `()` ```rust type FeeManager: FeeManager; ``` - [**`MessageExporter`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.MessageExporter){target=\_blank} - implements the [`ExportXcm`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/traits/trait.ExportXcm.html){target=\_blank} trait, enabling the export of XCMs to other consensus systems. It can spoof origins for use in bridges. Use `()` to disable exporting ```rust type MessageExporter: ExportXcm; ``` - [**`UniversalAliases`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.UniversalAliases){target=\_blank} - lists origin locations and universal junctions allowed to elevate themselves in the `UniversalOrigin` instruction. Using `Nothing` prevents origin aliasing ```rust type UniversalAliases: Contains<(MultiLocation, Junction)>; ``` - [**`CallDispatcher`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.CallDispatcher){target=\_blank} - dispatches calls from the `Transact` instruction, adapting the origin or modifying the call as needed.
Can default to `RuntimeCall` ```rust type CallDispatcher: CallDispatcher<Self::RuntimeCall>; ``` - [**`SafeCallFilter`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.SafeCallFilter){target=\_blank} - whitelists calls permitted in the `Transact` instruction. Using `Everything` allows all calls, though this is temporary until proof size weights are accounted for ```rust type SafeCallFilter: Contains<Self::RuntimeCall>; ``` - [**`TransactionalProcessor`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.TransactionalProcessor){target=\_blank} - implements the [`ProcessTransaction`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/traits/trait.ProcessTransaction.html){target=\_blank} trait. It ensures that XCM instructions are executed atomically, meaning they either fully succeed or fully fail without any partial effects. This type allows for non-transactional XCM instruction processing by setting the `()` type ```rust type TransactionalProcessor: ProcessTransaction; ``` - [**`HrmpNewChannelOpenRequestHandler`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.HrmpNewChannelOpenRequestHandler){target=\_blank} - enables optional logic execution in response to the `HrmpNewChannelOpenRequest` XCM notification ```rust type HrmpNewChannelOpenRequestHandler: HandleHrmpNewChannelOpenRequest; ``` - [**`HrmpChannelAcceptedHandler`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.HrmpChannelAcceptedHandler){target=\_blank} - enables optional logic execution in response to the `HrmpChannelAccepted` XCM notification ```rust type HrmpChannelAcceptedHandler: HandleHrmpChannelAccepted; ``` - [**`HrmpChannelClosingHandler`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.HrmpChannelClosingHandler){target=\_blank} - enables optional logic execution in response to the `HrmpChannelClosing` XCM notification ```rust type HrmpChannelClosingHandler: HandleHrmpChannelClosing; ``` - [**`XcmRecorder`**](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/trait.Config.html#associatedtype.XcmRecorder){target=\_blank} - allows tracking of the most recently executed XCM, primarily for use with dry-run runtime APIs ```rust type XcmRecorder: RecordXcm; ```
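As referenced in the `AssetTransactor` entry above, the following sketch shows one possible `AssetTransactor` for a chain's native token. `Balances`, `RelayLocation`, `LocationToAccountId`, and `AccountId` are placeholder names for types the runtime would define elsewhere; only `FungibleAdapter` and `IsConcrete` come from `xcm-builder`:

```rust
use xcm_builder::{FungibleAdapter, IsConcrete};

// A sketch, assuming a runtime with pallet-balances and a
// `LocationToAccountId` converter defined in its XCM config.
pub type LocalAssetTransactor = FungibleAdapter<
    // Transact the native currency through pallet-balances.
    Balances,
    // Match only the asset identified by `RelayLocation`.
    IsConcrete<RelayLocation>,
    // Convert an XCM `MultiLocation` into a local `AccountId`.
    LocationToAccountId,
    // The runtime's account identifier type.
    AccountId,
    // `()` means no checking account is used for teleports.
    (),
>;
```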
### Inner Config The `Config` trait underpins the `XcmExecutor`, defining its core behavior through associated types for asset handling, XCM processing, and permission management. These types are categorized as follows: - **Handlers** - manage XCMs sending, asset transactions, and special notifications - **Filters** - define trusted combinations, origin substitutions, and execution barriers - **Converters** - handle origin conversion for call execution - **Accessors** - provide weight determination and pallet information - **Constants** - specify universal locations and asset limits - **Common Configs** - include shared settings like `RuntimeCall` The following diagram outlines this categorization:

```mermaid
flowchart LR
    A[Inner Config] --> B[Handlers]
    A --> C[Filters]
    A --> D[Converters]
    A --> E[Accessors]
    A --> F[Constants]
    A --> G[Common Configs]
    B --> H[XcmSender]
    B --> I[AssetTransactor]
    B --> J[Trader]
    B --> K[ResponseHandler]
    B --> L[AssetTrap]
    B --> M[AssetLocker]
    B --> N[AssetExchanger]
    B --> O[AssetClaims]
    B --> P[SubscriptionService]
    B --> Q[FeeManager]
    B --> R[MessageExporter]
    B --> S[CallDispatcher]
    B --> T[HrmpNewChannelOpenRequestHandler]
    B --> U[HrmpChannelAcceptedHandler]
    B --> V[HrmpChannelClosingHandler]
    C --> W[IsReserve]
    C --> X[IsTeleporter]
    C --> Y[Aliasers]
    C --> Z[Barrier]
    C --> AA[UniversalAliases]
    C --> AB[SafeCallFilter]
    D --> AC[OriginConverter]
    E --> AD[Weigher]
    E --> AE[PalletInstancesInfo]
    F --> AF[UniversalLocation]
    F --> AG[MaxAssetsIntoHolding]
    G --> AH[RuntimeCall]
```

### Outer Config The `XcmExecutor` struct extends the functionality of the inner config by introducing fields for execution context, asset handling, error tracking, and operational management. For further details, see the documentation for [`XcmExecutor`](https://paritytech.github.io/polkadot-sdk/master/staging_xcm_executor/struct.XcmExecutor.html#impl-XcmExecutor%3CConfig%3E){target=\_blank}. ## Multiple Implementations Some associated types in the `Config` trait are highly configurable and may have multiple implementations (e.g., `Barrier`). These implementations are organized into a tuple `(impl_1, impl_2, ..., impl_n)`, and evaluation follows a sequential order: each item in the tuple is checked in turn, and if an item passes (e.g., returns `Ok` or `true`), evaluation stops and the remaining items are not checked. The following example of the `Barrier` type demonstrates how this grouping operates (understanding each item in the tuple is unnecessary for this explanation). The system will first check the `TakeWeightCredit` type when evaluating the barrier. If it fails, it will check `AllowTopLevelPaidExecutionFrom`, and so on, until one of them returns a positive result. If all checks fail, a `Barrier` error will be triggered.

```rust
// The generic filter parameters shown here (e.g., `Everything`) are
// illustrative placeholders.
pub type Barrier = (
    TakeWeightCredit,
    AllowTopLevelPaidExecutionFrom<Everything>,
    AllowKnownQueryResponses<XcmPallet>,
    AllowSubscriptionsFrom<Everything>,
);

pub struct XcmConfig;
impl xcm_executor::Config for XcmConfig {
    ...
    type Barrier = Barrier;
    ...
}
```

--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/interoperability/xcm-runtime-apis/ --- BEGIN CONTENT --- --- title: XCM Runtime APIs description: Learn about XCM Runtime APIs in Polkadot for cross-chain communication. Explore the APIs to simulate and test XCM messages before execution on the network. categories: Reference, Polkadot Protocol --- # XCM Runtime APIs ## Introduction Runtime APIs allow node-side code to extract information from the runtime state.
While simple storage access retrieves stored values directly, runtime APIs enable arbitrary computation, making them a powerful tool for interacting with the chain's state. Unlike direct storage access, runtime APIs can derive values from storage based on arguments or perform computations that don't require storage access. For example, a runtime API might expose a formula for fee calculation, using only the provided arguments as inputs rather than fetching data from storage. In general, runtime APIs are used for: - Accessing a storage item - Retrieving a bundle of related storage items - Deriving a value from storage based on arguments - Exposing formulas for complex computations This section will teach you about specific runtime APIs that support XCM processing and manipulation. ## Dry Run API The [Dry-run API](https://paritytech.github.io/polkadot-sdk/master/xcm_runtime_apis/dry_run/trait.DryRunApi.html){target=\_blank}, given an extrinsic or an XCM program, returns its effects: - Execution result - Local XCM (in the case of an extrinsic) - Forwarded XCMs - List of events This API can be used independently for dry-running, double-checking, or testing. However, it mainly shines when used with the [Xcm Payment API](#xcm-payment-api), given that fees can only be estimated once you know the specific XCM you want to execute or send. ### Dry Run Call This API allows the dry-run of any extrinsic, obtaining its outcome (success or failure) as well as the local XCM and the remote XCM messages sent to other chains. ```rust fn dry_run_call(origin: OriginCaller, call: Call) -> Result<CallDryRunEffects<Event>, Error>; ``` ??? interface "Input parameters" `origin` ++"OriginCaller"++ ++"required"++ The origin used for executing the transaction. --- `call` ++"Call"++ ++"required"++ The extrinsic to be executed. --- ??? interface "Output parameters" ++"Result<CallDryRunEffects<Event>, Error>"++ Effects of dry-running an extrinsic. If an error occurs, it is returned instead of the effects. ??? child "Type `CallDryRunEffects`" `execution_result` ++"DispatchResultWithPostInfo"++ The result of executing the extrinsic. --- `emitted_events` ++"Vec<Event>"++ The list of events fired by the extrinsic. --- `local_xcm` ++"Option<VersionedXcm<()>>"++ The local XCM that was attempted to be executed, if any. --- `forwarded_xcms` ++"Vec<(VersionedLocation, Vec<VersionedXcm<()>>)>"++ The list of XCMs that were queued for sending. ??? child "Type `Error`" Enum: - **`Unimplemented`** - an API part is unsupported - **`VersionedConversionFailed`** - converting a versioned data structure from one version to another failed --- ??? interface "Example" This example demonstrates how to simulate a cross-chain asset transfer from the Paseo network to the Pop Network using a [reserve transfer](https://wiki.polkadot.network/docs/learn/xcm/journey/transfers-reserve){target=\_blank} mechanism. Instead of executing the actual transfer, the code shows how to test and verify the transaction's behavior through a dry run before performing it on the live network. Replace `INSERT_USER_ADDRESS` with your SS58 address before running the script.
***Usage with PAPI***

```js
import { paseo } from '@polkadot-api/descriptors';
import { createClient } from 'polkadot-api';
import { getWsProvider } from 'polkadot-api/ws-provider/web';
import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat';
import {
  PolkadotRuntimeOriginCaller,
  XcmVersionedLocation,
  XcmVersionedAssets,
  XcmV3Junction,
  XcmV3Junctions,
  XcmV3WeightLimit,
  XcmV3MultiassetFungibility,
  XcmV3MultiassetAssetId,
} from '@polkadot-api/descriptors';
import { DispatchRawOrigin } from '@polkadot-api/descriptors';
import { Binary } from 'polkadot-api';
import { ss58Decode } from '@polkadot-labs/hdkd-helpers';

// Connect to the Paseo relay chain
const client = createClient(
  withPolkadotSdkCompat(getWsProvider('wss://paseo-rpc.dwellir.com')),
);
const paseoApi = client.getTypedApi(paseo);

const popParaID = 4001;
const userAddress = 'INSERT_USER_ADDRESS';
const userPublicKey = ss58Decode(userAddress)[0];
const idBeneficiary = Binary.fromBytes(userPublicKey);

// Define the origin caller
// This is a regular signed account owned by a user
let origin = PolkadotRuntimeOriginCaller.system(
  DispatchRawOrigin.Signed(userAddress),
);

// Define a transaction to transfer assets from Paseo to Pop Network using a reserve transfer
const tx = paseoApi.tx.XcmPallet.limited_reserve_transfer_assets({
  dest: XcmVersionedLocation.V3({
    parents: 0,
    interior: XcmV3Junctions.X1(
      XcmV3Junction.Parachain(popParaID), // Destination is the Pop Network parachain
    ),
  }),
  beneficiary: XcmVersionedLocation.V3({
    parents: 0,
    interior: XcmV3Junctions.X1(
      XcmV3Junction.AccountId32({
        // Beneficiary address on Pop Network
        network: undefined,
        id: idBeneficiary,
      }),
    ),
  }),
  assets: XcmVersionedAssets.V3([
    {
      id: XcmV3MultiassetAssetId.Concrete({
        parents: 0,
        interior: XcmV3Junctions.Here(), // Native asset from the sender. In this case PAS
      }),
      fun: XcmV3MultiassetFungibility.Fungible(120000000000n), // Asset amount to transfer
    },
  ]),
  fee_asset_item: 0, // Asset used to pay transaction fees
  weight_limit: XcmV3WeightLimit.Unlimited(), // No weight limit on transaction
});

// Execute the dry run call to simulate the transaction
const dryRunResult = await paseoApi.apis.DryRunApi.dry_run_call(
  origin,
  tx.decodedCall,
);

// Extract the data from the dry run result
const {
  execution_result: executionResult,
  emitted_events: emittedEvents,
  local_xcm: localXcm,
  forwarded_xcms: forwardedXcms,
} = dryRunResult.value;

// Extract the XCM generated by this call
const xcmsToPop = forwardedXcms.find(
  ([location, _]) =>
    location.type === 'V4' &&
    location.value.parents === 0 &&
    location.value.interior.type === 'X1' &&
    location.value.interior.value.type === 'Parachain' &&
    location.value.interior.value.value === popParaID, // Pop Network's ParaID
);
const destination = xcmsToPop[0];
const remoteXcm = xcmsToPop[1][0];

// Print the results
const resultObject = {
  execution_result: executionResult,
  emitted_events: emittedEvents,
  local_xcm: localXcm,
  destination: destination,
  remote_xcm: remoteXcm,
};
console.dir(resultObject, { depth: null });

client.destroy();
```

***Output***
    {
      execution_result: {
        success: true,
        value: {
          actual_weight: undefined,
          pays_fee: { type: 'Yes', value: undefined }
        }
      },
      emitted_events: [
        {
          type: 'Balances',
          value: {
            type: 'Transfer',
            value: {
              from: '12pGtwHPL4tUAUcyeCoJ783NKRspztpWmXv4uxYRwiEnYNET',
              to: '13YMK2ePPKQeW7ynqLozB65WYjMnNgffQ9uR4AzyGmqnKeLq',
              amount: 120000000000n
            }
          }
        },
        {
          type: 'Balances',
          value: { type: 'Issued', value: { amount: 0n } }
        },
        {
          type: 'XcmPallet',
          value: {
            type: 'Attempted',
            value: {
              outcome: {
                type: 'Complete',
                value: { used: { ref_time: 251861000n, proof_size: 6196n } }
              }
            }
          }
        },
        {
          type: 'Balances',
          value: {
            type: 'Burned',
            value: {
              who: '12pGtwHPL4tUAUcyeCoJ783NKRspztpWmXv4uxYRwiEnYNET',
              amount: 397000000n
            }
          }
        },
        {
          type: 'Balances',
          value: {
            type: 'Minted',
            value: {
              who: '13UVJyLnbVp9RBZYFwFGyDvVd1y27Tt8tkntv6Q7JVPhFsTB',
              amount: 397000000n
            }
          }
        },
        {
          type: 'XcmPallet',
          value: {
            type: 'FeesPaid',
            value: {
              paying: {
                parents: 0,
                interior: {
                  type: 'X1',
                  value: {
                    type: 'AccountId32',
                    value: {
                      network: { type: 'Polkadot', value: undefined },
                      id: FixedSizeBinary {
                        asText: [Function (anonymous)],
                        asHex: [Function (anonymous)],
                        asOpaqueHex: [Function (anonymous)],
                        asBytes: [Function (anonymous)],
                        asOpaqueBytes: [Function (anonymous)]
                      }
                    }
                  }
                }
              },
              fees: [
                {
                  id: {
                    parents: 0,
                    interior: { type: 'Here', value: undefined }
                  },
                  fun: { type: 'Fungible', value: 397000000n }
                }
              ]
            }
          }
        },
        {
          type: 'XcmPallet',
          value: {
            type: 'Sent',
            value: {
              origin: {
                parents: 0,
                interior: {
                  type: 'X1',
                  value: {
                    type: 'AccountId32',
                    value: {
                      network: { type: 'Polkadot', value: undefined },
                      id: FixedSizeBinary {
                        asText: [Function (anonymous)],
                        asHex: [Function (anonymous)],
                        asOpaqueHex: [Function (anonymous)],
                        asBytes: [Function (anonymous)],
                        asOpaqueBytes: [Function (anonymous)]
                      }
                    }
                  }
                }
              },
              destination: {
                parents: 0,
                interior: { type: 'X1', value: { type: 'Parachain', value: 4001 } }
              },
              message: [
                {
                  type: 'ReserveAssetDeposited',
                  value: [
                    {
                      id: {
                        parents: 1,
                        interior: { type: 'Here', value: undefined }
                      },
                      fun: { type: 'Fungible', value: 120000000000n }
                    }
                  ]
                },
                { type: 'ClearOrigin', value: undefined },
                {
                  type: 'BuyExecution',
                  value: {
                    fees: {
                      id: {
                        parents: 1,
                        interior: { type: 'Here', value: undefined }
                      },
                      fun: { type: 'Fungible', value: 120000000000n }
                    },
                    weight_limit: { type: 'Unlimited', value: undefined }
                  }
                },
                {
                  type: 'DepositAsset',
                  value: {
                    assets: {
                      type: 'Wild',
                      value: { type: 'AllCounted', value: 1 }
                    },
                    beneficiary: {
                      parents: 0,
                      interior: {
                        type: 'X1',
                        value: {
                          type: 'AccountId32',
                          value: {
                            network: undefined,
                            id: FixedSizeBinary {
                              asText: [Function (anonymous)],
                              asHex: [Function (anonymous)],
                              asOpaqueHex: [Function (anonymous)],
                              asBytes: [Function (anonymous)],
                              asOpaqueBytes: [Function (anonymous)]
                            }
                          }
                        }
                      }
                    }
                  }
                }
              ],
              message_id: FixedSizeBinary {
                asText: [Function (anonymous)],
                asHex: [Function (anonymous)],
                asOpaqueHex: [Function (anonymous)],
                asBytes: [Function (anonymous)],
                asOpaqueBytes: [Function (anonymous)]
              }
            }
          }
        }
      ],
      local_xcm: undefined,
      destination: {
        type: 'V4',
        value: {
          parents: 0,
          interior: { type: 'X1', value: { type: 'Parachain', value: 4001 } }
        }
      },
      remote_xcm: {
        type: 'V3',
        value: [
          {
            type: 'ReserveAssetDeposited',
            value: [
              {
                id: {
                  type: 'Concrete',
                  value: {
                    parents: 1,
                    interior: { type: 'Here', value: undefined }
                  }
                },
                fun: { type: 'Fungible', value: 120000000000n }
              }
            ]
          },
          { type: 'ClearOrigin', value: undefined },
          {
            type: 'BuyExecution',
            value: {
              fees: {
                id: {
                  type: 'Concrete',
                  value: {
                    parents: 1,
                    interior: { type: 'Here', value: undefined }
                  }
                },
                fun: { type: 'Fungible', value: 120000000000n }
              },
              weight_limit: { type: 'Unlimited', value: undefined }
            }
          },
          {
            type: 'DepositAsset',
            value: {
              assets: { type: 'Wild', value: { type: 'AllCounted', value: 1 } },
              beneficiary: {
                parents: 0,
                interior: {
                  type: 'X1',
                  value: {
                    type: 'AccountId32',
                    value: {
                      network: undefined,
                      id: FixedSizeBinary {
                        asText: [Function (anonymous)],
                        asHex: [Function (anonymous)],
                        asOpaqueHex: [Function (anonymous)],
                        asBytes: [Function (anonymous)],
                        asOpaqueBytes: [Function (anonymous)]
                      }
                    }
                  }
                }
              }
            }
          },
          {
            type: 'SetTopic',
            value: FixedSizeBinary {
              asText: [Function (anonymous)],
              asHex: [Function (anonymous)],
              asOpaqueHex: [Function (anonymous)],
              asBytes: [Function (anonymous)],
              asOpaqueBytes: [Function (anonymous)]
            }
          }
        ]
      }
    }      
  
--- ### Dry Run XCM This API allows the direct dry-run of an XCM message instead of an extrinsic; it checks whether the message executes successfully and determines which other XCM messages will be forwarded to other chains. ```rust fn dry_run_xcm(origin_location: VersionedLocation, xcm: VersionedXcm<Call>) -> Result<XcmDryRunEffects<Event>, Error>; ``` ??? interface "Input parameters" `origin_location` ++"VersionedLocation"++ ++"required"++ The location of the origin that will execute the XCM message. --- `xcm` ++"VersionedXcm<Call>"++ ++"required"++ A versioned XCM message. --- ??? interface "Output parameters" ++"Result<XcmDryRunEffects<Event>, Error>"++ Effects of dry-running an XCM message. If an error occurs, it is returned instead of the effects. ??? child "Type `XcmDryRunEffects`" `execution_result` ++"Outcome"++ The result of executing the XCM message. --- `emitted_events` ++"Vec<Event>"++ The list of events fired by the XCM execution. --- `forwarded_xcms` ++"Vec<(VersionedLocation, Vec<VersionedXcm<()>>)>"++ The list of XCMs that were queued for sending. ??? child "Type `Error`" Enum: - **`Unimplemented`** - an API part is unsupported - **`VersionedConversionFailed`** - converting a versioned data structure from one version to another failed --- ??? interface "Example" This example demonstrates how to simulate a [teleport asset transfer](https://wiki.polkadot.network/docs/learn/xcm/journey/transfers-teleport){target=\_blank} from the Paseo network to the Paseo Asset Hub parachain. The code shows how to test and verify the received XCM message's behavior in the destination chain through a dry run against the live network. Replace `INSERT_USER_ADDRESS` with your SS58 address before running the script. ***Usage with PAPI***

```js
import { createClient } from 'polkadot-api';
import { getWsProvider } from 'polkadot-api/ws-provider/web';
import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat';
import {
  XcmVersionedXcm,
  paseoAssetHub,
  XcmVersionedLocation,
  XcmV3Junction,
  XcmV3Junctions,
  XcmV3WeightLimit,
  XcmV3MultiassetFungibility,
  XcmV3MultiassetAssetId,
  XcmV3Instruction,
  XcmV3MultiassetMultiAssetFilter,
  XcmV3MultiassetWildMultiAsset,
} from '@polkadot-api/descriptors';
import { Binary } from 'polkadot-api';
import { ss58Decode } from '@polkadot-labs/hdkd-helpers';

// Connect to Paseo Asset Hub
const client = createClient(
  withPolkadotSdkCompat(getWsProvider('wss://asset-hub-paseo-rpc.dwellir.com')),
);
const paseoAssetHubApi = client.getTypedApi(paseoAssetHub);

const userAddress = 'INSERT_USER_ADDRESS';
const userPublicKey = ss58Decode(userAddress)[0];
const idBeneficiary = Binary.fromBytes(userPublicKey);

// Define the origin
const origin = XcmVersionedLocation.V3({
  parents: 1,
  interior: XcmV3Junctions.Here(),
});

// Define an XCM message coming from the Paseo relay chain to Asset Hub to teleport some tokens
const xcm = XcmVersionedXcm.V3([
  XcmV3Instruction.ReceiveTeleportedAsset([
    {
      id: XcmV3MultiassetAssetId.Concrete({
        parents: 1,
        interior: XcmV3Junctions.Here(),
      }),
      fun: XcmV3MultiassetFungibility.Fungible(12000000000n),
    },
  ]),
  XcmV3Instruction.ClearOrigin(),
  XcmV3Instruction.BuyExecution({
    fees: {
      id: XcmV3MultiassetAssetId.Concrete({
        parents: 1,
        interior: XcmV3Junctions.Here(),
      }),
      fun: XcmV3MultiassetFungibility.Fungible(12000000000n),
    },
    weight_limit: XcmV3WeightLimit.Unlimited(),
  }),
  XcmV3Instruction.DepositAsset({
    assets: XcmV3MultiassetMultiAssetFilter.Wild(
      XcmV3MultiassetWildMultiAsset.All(),
    ),
    beneficiary: {
      parents: 0,
      interior: XcmV3Junctions.X1(
        XcmV3Junction.AccountId32({
          network: undefined,
          id: idBeneficiary,
        }),
      ),
    },
  }),
]);

// Execute dry run xcm
const dryRunResult = await paseoAssetHubApi.apis.DryRunApi.dry_run_xcm(
  origin,
  xcm,
);

// Print the results
console.dir(dryRunResult.value, { depth: null });

client.destroy();
```

***Output***
    {
      execution_result: {
        type: 'Complete',
        value: { used: { ref_time: 15574200000n, proof_size: 359300n } }
      },
      emitted_events: [
        {
          type: 'System',
          value: {
            type: 'NewAccount',
            value: { account: '12pGtwHPL4tUAUcyeCoJ783NKRspztpWmXv4uxYRwiEnYNET' }
          }
        },
        {
          type: 'Balances',
          value: {
            type: 'Endowed',
            value: {
              account: '12pGtwHPL4tUAUcyeCoJ783NKRspztpWmXv4uxYRwiEnYNET',
              free_balance: 10203500000n
            }
          }
        },
        {
          type: 'Balances',
          value: {
            type: 'Minted',
            value: {
              who: '12pGtwHPL4tUAUcyeCoJ783NKRspztpWmXv4uxYRwiEnYNET',
              amount: 10203500000n
            }
          }
        },
        {
          type: 'Balances',
          value: { type: 'Issued', value: { amount: 1796500000n } }
        },
        {
          type: 'Balances',
          value: {
            type: 'Deposit',
            value: {
              who: '13UVJyLgBASGhE2ok3TvxUfaQBGUt88JCcdYjHvUhvQkFTTx',
              amount: 1796500000n
            }
          }
        }
      ],
      forwarded_xcms: [
        [
          {
            type: 'V4',
            value: { parents: 1, interior: { type: 'Here', value: undefined } }
          },
          []
        ]
      ]
    }
  
--- ## XCM Payment API The [XCM Payment API](https://paritytech.github.io/polkadot-sdk/master/xcm_runtime_apis/fees/trait.XcmPaymentApi.html){target=\_blank} provides a standardized way to determine the costs and payment options for executing XCM messages. Specifically, it enables clients to: - Retrieve the [weight](/polkadot-protocol/glossary/#weight) required to execute an XCM message - Obtain a list of acceptable `AssetIds` for paying execution fees - Calculate the cost of the weight in a specified `AssetId` - Estimate the fees for XCM message delivery This API eliminates the need for clients to guess execution fees or identify acceptable assets manually. Instead, clients can query the list of supported asset IDs formatted according to the XCM version they understand. With this information, they can weigh the XCM program they intend to execute and convert the computed weight into its cost using one of the acceptable assets. To use the API effectively, the client must already know the XCM program to be executed and the chains involved in the program's execution. ### Query Acceptable Payment Assets Retrieves the list of assets that are acceptable for paying fees when using a specific XCM version ```rust fn query_acceptable_payment_assets(xcm_version: Version) -> Result<Vec<VersionedAssetId>, Error>; ``` ??? interface "Input parameters" `xcm_version` ++"Version"++ ++"required"++ Specifies the XCM version that will be used to send the XCM message. --- ??? interface "Output parameters" ++"Result<Vec<VersionedAssetId>, Error>"++ A list of acceptable payment assets. Each asset is provided in a versioned format (`VersionedAssetId`) that matches the specified XCM version. If an error occurs, it is returned instead of the asset list. ??? child "Type `Error`" Enum: - **`Unimplemented`** - an API part is unsupported - **`VersionedConversionFailed`** - converting a versioned data structure from one version to another failed - **`WeightNotComputable`** - XCM message weight calculation failed - **`UnhandledXcmVersion`** - XCM version not able to be handled - **`AssetNotFound`** - the given asset is not handled as a fee asset - **`Unroutable`** - destination is known to be unroutable --- ??? interface "Example" This example demonstrates how to query the acceptable payment assets for executing XCM messages on the Paseo Asset Hub network using XCM version 3. ***Usage with PAPI***

```js
import { paseoAssetHub } from '@polkadot-api/descriptors';
import { createClient } from 'polkadot-api';
import { getWsProvider } from 'polkadot-api/ws-provider/web';
import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat';

// Connect to Paseo Asset Hub
const client = createClient(
  withPolkadotSdkCompat(getWsProvider('wss://asset-hub-paseo-rpc.dwellir.com')),
);
const paseoAssetHubApi = client.getTypedApi(paseoAssetHub);

// Define the XCM version to use
const xcmVersion = 3;

// Execute the runtime call to query the assets
const result =
  await paseoAssetHubApi.apis.XcmPaymentApi.query_acceptable_payment_assets(
    xcmVersion,
  );

// Print the assets
console.dir(result.value, { depth: null });

client.destroy();
```

***Output***
    [
      {
        type: 'V3',
        value: {
          type: 'Concrete',
          value: { parents: 1, interior: { type: 'Here', value: undefined } }
        }
      }
    ]
  
--- ### Query XCM Weight Calculates the weight required to execute a given XCM message. It is useful for estimating the execution cost of a cross-chain message in the destination chain before sending it. ```rust fn query_xcm_weight(message: VersionedXcm<()>) -> Result<Weight, Error>; ``` ??? interface "Input parameters" `message` ++"VersionedXcm<()>"++ ++"required"++ A versioned XCM message whose execution weight is being queried. --- ??? interface "Output parameters" ++"Result<Weight, Error>"++ The calculated weight required to execute the provided XCM message. If the calculation fails, an error is returned instead. ??? child "Type `Weight`" `ref_time` ++"u64"++ The weight of computational time used based on some reference hardware. --- `proof_size` ++"u64"++ The weight of storage space used by proof of validity. --- ??? child "Type `Error`" Enum: - **`Unimplemented`** - an API part is unsupported - **`VersionedConversionFailed`** - converting a versioned data structure from one version to another failed - **`WeightNotComputable`** - XCM message weight calculation failed - **`UnhandledXcmVersion`** - XCM version not able to be handled - **`AssetNotFound`** - the given asset is not handled as a fee asset - **`Unroutable`** - destination is known to be unroutable --- ??? interface "Example" This example demonstrates how to calculate the weight needed to execute a [teleport transfer](https://wiki.polkadot.network/docs/learn/xcm/journey/transfers-teleport){target=\_blank} from the Paseo network to the Paseo Asset Hub parachain using the XCM Payment API. The result shows the required weight in terms of reference time and proof size needed in the destination chain. Replace `INSERT_USER_ADDRESS` with your SS58 address before running the script. ***Usage with PAPI***

```js
import { createClient } from 'polkadot-api';
import { getWsProvider } from 'polkadot-api/ws-provider/web';
import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat';
import {
  XcmVersionedXcm,
  paseoAssetHub,
  XcmV3Junction,
  XcmV3Junctions,
  XcmV3WeightLimit,
  XcmV3MultiassetFungibility,
  XcmV3MultiassetAssetId,
  XcmV3Instruction,
  XcmV3MultiassetMultiAssetFilter,
  XcmV3MultiassetWildMultiAsset,
} from '@polkadot-api/descriptors';
import { Binary } from 'polkadot-api';
import { ss58Decode } from '@polkadot-labs/hdkd-helpers';

// Connect to Paseo Asset Hub
const client = createClient(
  withPolkadotSdkCompat(getWsProvider('wss://asset-hub-paseo-rpc.dwellir.com')),
);
const paseoAssetHubApi = client.getTypedApi(paseoAssetHub);

const userAddress = 'INSERT_USER_ADDRESS';
const userPublicKey = ss58Decode(userAddress)[0];
const idBeneficiary = Binary.fromBytes(userPublicKey);

// Define an XCM message coming from the Paseo relay chain to Asset Hub to teleport some tokens
const xcm = XcmVersionedXcm.V3([
  XcmV3Instruction.ReceiveTeleportedAsset([
    {
      id: XcmV3MultiassetAssetId.Concrete({
        parents: 1,
        interior: XcmV3Junctions.Here(),
      }),
      fun: XcmV3MultiassetFungibility.Fungible(12000000000n),
    },
  ]),
  XcmV3Instruction.ClearOrigin(),
  XcmV3Instruction.BuyExecution({
    fees: {
      id: XcmV3MultiassetAssetId.Concrete({
        parents: 1,
        interior: XcmV3Junctions.Here(),
      }),
      fun: XcmV3MultiassetFungibility.Fungible(12000000000n),
    },
    weight_limit: XcmV3WeightLimit.Unlimited(),
  }),
  XcmV3Instruction.DepositAsset({
    assets: XcmV3MultiassetMultiAssetFilter.Wild(
      XcmV3MultiassetWildMultiAsset.All(),
    ),
    beneficiary: {
      parents: 0,
      interior: XcmV3Junctions.X1(
        XcmV3Junction.AccountId32({
          network: undefined,
          id: idBeneficiary,
        }),
      ),
    },
  }),
]);

// Execute the query weight runtime call
const result = await paseoAssetHubApi.apis.XcmPaymentApi.query_xcm_weight(xcm); // Print the results console.dir(result.value, { depth: null }); client.destroy(); ``` ***Output***
{ ref_time: 15574200000n, proof_size: 359300n }
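The weight returned above can be fed directly into the `query_weight_to_asset_fee` call described in the next section to turn it into a fee estimate. The following is a minimal sketch of that hand-off, assuming the `paseoAssetHubApi` client and the `result` value from the example above (error handling omitted):

```js
// Sketch: convert the weight obtained above into a fee denominated in PAS.
// Assumes `paseoAssetHubApi` and `result` from the previous example are in
// scope; error handling is omitted for brevity.
const versionedAssetId = {
  type: 'V4',
  value: { parents: 1, interior: { type: 'Here', value: undefined } },
};
const fee = await paseoAssetHubApi.apis.XcmPaymentApi.query_weight_to_asset_fee(
  result.value,
  versionedAssetId,
);
console.log('Estimated execution fee (in plancks):', fee.value);
```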
--- ### Query Weight to Asset Fee Converts a given weight into the corresponding fee for a specified `AssetId`. It allows clients to determine the cost of execution in terms of the desired asset. ```rust fn query_weight_to_asset_fee(weight: Weight, asset: VersionedAssetId) -> Result; ``` ??? interface "Input parameters" `weight` ++"Weight"++ ++"required"++ The execution weight to be converted into a fee. ??? child "Type `Weight`" `ref_time` ++"u64"++ The weight of computational time used based on some reference hardware. --- `proof_size` ++"u64"++ The weight of storage space used by proof of validity. --- --- `asset` ++"VersionedAssetId"++ ++"required"++ The asset in which the fee will be calculated. This must be a versioned asset ID compatible with the runtime. --- ??? interface "Output parameters" ++"Result"++ The fee needed to pay for the execution, expressed in the given `AssetId`. ??? child "Type `Error`" Enum: - **`Unimplemented`** - an API part is unsupported - **`VersionedConversionFailed`** - converting a versioned data structure from one version to another failed - **`WeightNotComputable`** - XCM message weight calculation failed - **`UnhandledXcmVersion`** - XCM version not able to be handled - **`AssetNotFound`** - the given asset is not handled as a fee asset - **`Unroutable`** - destination is known to be unroutable --- ??? interface "Example" This example demonstrates how to calculate the fee for a given execution weight using a specific versioned asset ID (PAS token) on Paseo Asset Hub. ***Usage with PAPI*** ```js import { paseoAssetHub } from '@polkadot-api/descriptors'; import { createClient } from 'polkadot-api'; import { getWsProvider } from 'polkadot-api/ws-provider/web'; import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat'; // Connect to Paseo Asset Hub const client = createClient( withPolkadotSdkCompat(getWsProvider('wss://asset-hub-paseo-rpc.dwellir.com')), ); const paseoAssetHubApi = client.getTypedApi(paseoAssetHub); // Define the weight to convert to fee const weight = { ref_time: 15574200000n, proof_size: 359300n }; // Define the versioned asset id const versionedAssetId = { type: 'V4', value: { parents: 1, interior: { type: 'Here', value: undefined } }, }; // Execute the runtime call to convert the weight to fee const result = await paseoAssetHubApi.apis.XcmPaymentApi.query_weight_to_asset_fee( weight, versionedAssetId, ); // Print the fee console.dir(result.value, { depth: null }); client.destroy(); ``` ***Output***
1796500000n
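The returned fee is denominated in the smallest unit of the asset (plancks). Since PAS uses 10 decimals, converting the raw value into a human-readable amount is a single division; a small illustrative sketch:

```js
// Convert the raw fee (in plancks) into PAS; PAS uses 10 decimals.
const rawFee = 1796500000n;
const PAS_DECIMALS = 10;
console.log(`${Number(rawFee) / 10 ** PAS_DECIMALS} PAS`); // 0.17965 PAS
```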
--- ### Query Delivery Fees Retrieves the delivery fees for sending a specific XCM message to a designated destination. The fees are always returned in a specific asset defined by the destination chain. ```rust fn query_delivery_fees(destination: VersionedLocation, message: VersionedXcm<()>) -> Result; ``` ??? interface "Input parameters" `destination` ++"VersionedLocation"++ ++"required"++ The target location where the message will be sent. Fees may vary depending on the destination, as different destinations often have unique fee structures and sender mechanisms. --- `message` ++"VersionedXcm<()>"++ ++"required"++ The XCM message to be sent. The delivery fees are calculated based on the message's content and size, which can influence the cost. --- ??? interface "Output parameters" ++"Result"++ The calculated delivery fees expressed in a specific asset supported by the destination chain. If an error occurs during the query, it returns an error instead. ??? child "Type `Error`" Enum: - **`Unimplemented`** - an API part is unsupported - **`VersionedConversionFailed`** - converting a versioned data structure from one version to another failed - **`WeightNotComputable`** - XCM message weight calculation failed - **`UnhandledXcmVersion`** - XCM version not able to be handled - **`AssetNotFound`** - the given asset is not handled as a fee asset - **`Unroutable`** - destination is known to be unroutable --- ??? interface "Example" This example demonstrates how to query the delivery fees for sending an XCM message from Paseo to Paseo Asset Hub. Replace `INSERT_USER_ADDRESS` with your SS58 address before running the script. ***Usage with PAPI*** ```js import { createClient } from 'polkadot-api'; import { getWsProvider } from 'polkadot-api/ws-provider/web'; import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat'; import { XcmVersionedXcm, paseo, XcmVersionedLocation, XcmV3Junction, XcmV3Junctions, XcmV3WeightLimit, XcmV3MultiassetFungibility, XcmV3MultiassetAssetId, XcmV3Instruction, XcmV3MultiassetMultiAssetFilter, XcmV3MultiassetWildMultiAsset, } from '@polkadot-api/descriptors'; import { Binary } from 'polkadot-api'; import { ss58Decode } from '@polkadot-labs/hdkd-helpers'; const client = createClient( withPolkadotSdkCompat(getWsProvider('wss://paseo-rpc.dwellir.com')), ); const paseoApi = client.getTypedApi(paseo); const paseoAssetHubParaID = 1000; const userAddress = 'INSERT_USER_ADDRESS'; const userPublicKey = ss58Decode(userAddress)[0]; const idBeneficiary = Binary.fromBytes(userPublicKey); // Define the destination const destination = XcmVersionedLocation.V3({ parents: 0, interior: XcmV3Junctions.X1(XcmV3Junction.Parachain(paseoAssetHubParaID)), }); // Define the XCM message that will be sent to the destination const xcm = XcmVersionedXcm.V3([ XcmV3Instruction.ReceiveTeleportedAsset([ { id: XcmV3MultiassetAssetId.Concrete({ parents: 1, interior: XcmV3Junctions.Here(), }), fun: XcmV3MultiassetFungibility.Fungible(12000000000n), }, ]), XcmV3Instruction.ClearOrigin(), XcmV3Instruction.BuyExecution({ fees: { id: XcmV3MultiassetAssetId.Concrete({ parents: 1, interior: XcmV3Junctions.Here(), }), fun: XcmV3MultiassetFungibility.Fungible(12000000000n), }, weight_limit: XcmV3WeightLimit.Unlimited(), }), XcmV3Instruction.DepositAsset({ assets: XcmV3MultiassetMultiAssetFilter.Wild( XcmV3MultiassetWildMultiAsset.All(), ), beneficiary: { parents: 0, interior: XcmV3Junctions.X1( XcmV3Junction.AccountId32({ network: undefined, id: idBeneficiary, }), ), }, }), ]); // Execute the 
query delivery fees runtime call const result = await paseoApi.apis.XcmPaymentApi.query_delivery_fees( destination, xcm, ); // Print the results console.dir(result.value, { depth: null }); client.destroy(); ``` ***Output***
    {
      type: 'V3',
      value: [
        {
          id: {
            type: 'Concrete',
            value: { parents: 0, interior: { type: 'Here', value: undefined } }
          },
          fun: { type: 'Fungible', value: 396000000n }
        }
      ]
    }
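Taken together, the three runtime calls on this page yield a rough end-to-end cost estimate for a cross-chain message: the delivery fee is queried on the origin chain, while the execution weight and its fee in a chosen asset are queried on the destination chain. The sketch below assumes the `paseoApi` and `paseoAssetHubApi` clients, plus the `destination` and `xcm` values from the earlier examples, and omits error handling:

```js
// Sketch: rough total-cost estimate for sending `xcm` from Paseo to Asset Hub.
// Assumes `paseoApi`, `paseoAssetHubApi`, `destination`, and `xcm` are defined
// as in the previous examples; error handling is omitted for brevity.

// 1. Delivery fee, charged on the origin chain (the Paseo relay chain)
const deliveryFee = await paseoApi.apis.XcmPaymentApi.query_delivery_fees(
  destination,
  xcm,
);

// 2. Execution weight of the message on the destination chain (Asset Hub)
const weight = await paseoAssetHubApi.apis.XcmPaymentApi.query_xcm_weight(xcm);

// 3. Execution fee for that weight, denominated in PAS
const executionFee =
  await paseoAssetHubApi.apis.XcmPaymentApi.query_weight_to_asset_fee(
    weight.value,
    { type: 'V4', value: { parents: 1, interior: { type: 'Here', value: undefined } } },
  );

console.dir(
  { deliveryFee: deliveryFee.value, executionFee: executionFee.value },
  { depth: null },
);
```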
  
--- --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/networks/ --- BEGIN CONTENT --- --- title: Networks description: Explore the Polkadot ecosystem networks and learn the unique purposes of each, tailored for blockchain innovation, testing, and enterprise-grade solutions. template: root-subdirectory-page.html categories: Basics, Networks --- # Networks ## Introduction The Polkadot ecosystem consists of multiple networks designed to support different stages of blockchain development, from main networks to test networks. Each network serves a unique purpose, providing developers with flexible environments for building, testing, and deploying blockchain applications. This section includes essential network information such as RPC endpoints, currency symbols and decimals, and how to acquire TestNet tokens for the Polkadot ecosystem of networks. ## Production Networks ### Polkadot Polkadot is the primary production blockchain network for high-stakes, enterprise-grade applications. Polkadot MainNet has been running since May 2020 and has implementations in various programming languages ranging from Rust to JavaScript. === "Network Details" **Currency symbol** - `DOT` --- **Currency decimals** - 10 --- **Block explorer** - [Polkadot Subscan](https://polkadot.subscan.io/){target=\_blank} === "RPC Endpoints" Blockops ``` wss://polkadot-public-rpc.blockops.network/ws ``` --- Dwellir ``` wss://polkadot-rpc.dwellir.com ``` --- Dwellir Tunisia ``` wss://polkadot-rpc-tn.dwellir.com ``` --- IBP1 ``` wss://rpc.ibp.network/polkadot ``` --- IBP2 ``` wss://polkadot.dotters.network ``` --- LuckyFriday ``` wss://rpc-polkadot.luckyfriday.io ``` --- OnFinality ``` wss://polkadot.api.onfinality.io/public-ws ``` --- RadiumBlock ``` wss://polkadot.public.curie.radiumblock.co/ws ``` --- RockX ``` wss://rockx-dot.w3node.com/polka-public-dot/ws ``` --- Stakeworld ``` wss://dot-rpc.stakeworld.io ``` --- SubQuery ``` wss://polkadot.rpc.subquery.network/public/ws ``` --- Light client ``` light://substrate-connect/polkadot ``` ### Kusama Kusama is a network built as a risk-taking, fast-moving "canary in the coal mine" for its cousin Polkadot. As it is built on top of the same infrastructure, Kusama often acts as a final testing ground for new features before they are launched on Polkadot. Unlike true TestNets, however, the Kusama KSM native token does have economic value. This incentive encourages participants to maintain this robust and performant structure for the benefit of the community. === "Network Details" **Currency symbol** - `KSM` --- **Currency decimals** - 12 --- **Block explorer** - [Kusama Subscan](https://kusama.subscan.io/){target=\_blank} === "RPC Endpoints" Dwellir ``` wss://kusama-rpc.dwellir.com ``` --- Dwellir Tunisia ``` wss://kusama-rpc-tn.dwellir.com ``` --- IBP1 ``` wss://rpc.ibp.network/kusama ``` --- IBP2 ``` wss://kusama.dotters.network ``` --- LuckyFriday ``` wss://rpc-kusama.luckyfriday.io ``` --- OnFinality ``` wss://kusama.api.onfinality.io/public-ws ``` --- RadiumBlock ``` wss://kusama.public.curie.radiumblock.co/ws ``` --- RockX ``` wss://rockx-ksm.w3node.com/polka-public-ksm/ws ``` --- Stakeworld ``` wss://ksm-rpc.stakeworld.io ``` --- Light client ``` light://substrate-connect/kusama ``` ## Test Networks ### Westend Westend is the primary test network that mirrors Polkadot's functionality for protocol-level feature development. As a true TestNet, the WND native token intentionally does not have any economic value. 
Use the faucet information in the following section to obtain WND tokens. === "Network Information" **Currency symbol** - `WND` --- **Currency decimals** - 12 --- **Block explorer** - [Westend Subscan](https://westend.subscan.io/){target=\_blank} --- **Faucet** - [Official Westend faucet](https://faucet.polkadot.io/westend){target=\_blank} === "RPC Endpoints" Dwellir ``` wss://westend-rpc.dwellir.com ``` --- Dwellir Tunisia ``` wss://westend-rpc-tn.dwellir.com ``` --- IBP1 ``` wss://rpc.ibp.network/westend ``` --- IBP2 ``` wss://westend.dotters.network ``` --- OnFinality ``` wss://westend.api.onfinality.io/public-ws ``` --- Parity ``` wss://westend-rpc.polkadot.io ``` --- Light client ``` light://substrate-connect/westend ``` ### Paseo Paseo is a decentralized, community-run, stable TestNet for parachain and dApp developers to build and test their applications. Unlike Westend, Paseo is not intended for protocol-level testing. As a true TestNet, the PAS native token intentionally does not have any economic value. Use the faucet information in the following section to obtain PAS tokens. === "Network Information" **RPC URL** ``` wss://paseo.rpc.amforc.com ``` --- **Currency symbol** - `PAS` --- **Currency decimals** - 10 --- **Block explorer** - [Paseo Subscan](https://paseo.subscan.io/){target=\_blank} --- **Faucet** - [Official Paseo faucet](https://faucet.polkadot.io/){target=\_blank} === "RPC Endpoints" Amforc ``` wss://paseo.rpc.amforc.com ``` --- Dwellir ``` wss://paseo-rpc.dwellir.com ``` --- IBP1 ``` wss://rpc.ibp.network/paseo ``` --- IBP2 ``` wss://paseo.dotters.network ``` --- Stakeworld ``` wss://pas-rpc.stakeworld.io ``` ## Additional Resources - [**Polkadot Fellowship runtimes repository**](https://github.com/polkadot-fellows/runtimes){target=\_blank} - find a collection of runtimes for Polkadot, Kusama, and their system-parachains as maintained by the community via the [Polkadot Technical Fellowship](https://wiki.polkadot.network/learn/learn-polkadot-technical-fellowship/){target=\_blank} --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/customize-parachain/add-existing-pallets/ --- BEGIN CONTENT --- --- title: Add a Pallet to the Runtime description: Learn how to include and configure pallets in a Polkadot SDK-based runtime, from adding dependencies to implementing necessary traits. categories: Parachains --- # Add a Pallet to the Runtime ## Introduction The [Polkadot SDK Solochain Template](https://github.com/paritytech/polkadot-sdk-solochain-template){target=\_blank} provides a functional runtime that includes default FRAME development modules (pallets) to help you get started with building a custom blockchain. Each pallet has specific configuration requirements, such as the parameters and types needed to enable the pallet's functionality. In this guide, you'll learn how to add a pallet to a runtime and configure the settings specific to that pallet. The purpose of this article is to help you: - Learn how to update runtime dependencies to integrate a new pallet - Understand how to configure pallet-specific Rust traits to enable the pallet's functionality - Grasp the entire workflow of integrating a new pallet into your runtime ## Configuring Runtime Dependencies For Rust programs, the build configuration is defined in the `Cargo.toml` file, which specifies the settings and dependencies that control what gets compiled into the final binary. 
Since the Polkadot SDK runtime compiles to both a native binary (which includes standard Rust library functions) and a Wasm binary (which does not include the standard Rust library), the `runtime/Cargo.toml` file manages two key aspects: - The locations and versions of the pallets that are to be imported as dependencies for the runtime - The features in each pallet that should be enabled when compiling the native Rust binary. By enabling the standard (`std`) feature set from each pallet, you ensure that the runtime includes the functions, types, and primitives necessary for the native build, which are otherwise excluded when compiling the Wasm binary. For information about adding dependencies in `Cargo.toml` files, see the [Dependencies](https://doc.rust-lang.org/cargo/guide/dependencies.html){target=\_blank} page in the Cargo documentation. To learn more about enabling and managing features from dependent packages, see the [Features](https://doc.rust-lang.org/cargo/reference/features.html){target=\_blank} section in the Cargo documentation. ## Dependencies for a New Pallet To add the dependencies for a new pallet to the runtime, you must modify the `Cargo.toml` file by adding a new line into the `[workspace.dependencies]` section with the pallet you want to add. This pallet definition might look like: ```toml title="Cargo.toml" pallet-example = { version = "4.0.0-dev", default-features = false } ``` This line imports the `pallet-example` crate as a dependency and specifies the following: - **`version`** - the specific version of the crate to import - **`default-features`** - determines the behavior for including pallet features when compiling the runtime with standard Rust libraries !!! tip If you're importing a pallet that isn't available on [`crates.io`](https://crates.io/){target=\_blank}, you can specify the pallet's location (either locally or from a remote repository) by using the `git` or `path` key. For example: ```toml title="Cargo.toml" pallet-example = { version = "4.0.0-dev", default-features = false, git = "INSERT_PALLET_REMOTE_URL" } ``` In this case, replace `INSERT_PALLET_REMOTE_URL` with the correct repository URL. For local paths, use the path key like so: ```toml title="Cargo.toml" pallet-example = { version = "4.0.0-dev", default-features = false, path = "INSERT_PALLET_RELATIVE_PATH" } ``` Ensure that you substitute `INSERT_PALLET_RELATIVE_PATH` with the appropriate local path to the pallet. Next, add this dependency to the `[dependencies]` section of the `runtime/Cargo.toml` file, so it inherits from the main `Cargo.toml` file: ```toml title="runtime/Cargo.toml" pallet-example.workspace = true ``` To enable the `std` feature of the pallet, add the pallet to the following section: ```toml title="runtime/Cargo.toml" [features] default = ["std"] std = [ ... "pallet-example/std", ... ] ``` This section specifies the default feature set for the runtime, which includes the `std` features for each pallet. When the runtime is compiled with the `std` feature set, the standard library features for all listed pallets are enabled. If you forget to update the features section in the `Cargo.toml` file, you might encounter `cannot find function` errors when compiling the runtime. 
For more details about how the runtime is compiled as both a native binary (using `std`) and a Wasm binary (using `no_std`), refer to the [Wasm build](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/substrate/index.html#wasm-build){target=\_blank} section in the Polkadot SDK documentation. To ensure that the new dependencies resolve correctly for the runtime, you can run the following command: ```bash cargo check --release ``` ## Config Trait for Pallets Every Polkadot SDK pallet defines a Rust trait called `Config`. This trait specifies the types and parameters that the pallet needs to integrate with the runtime and perform its functions. The primary purpose of this trait is to act as an interface between this pallet and the runtime in which it is embedded. A type, function, or constant in this trait is essentially left to be configured by the runtime that includes this pallet. Consequently, a runtime that wants to include this pallet must implement this trait. You can inspect any pallet’s `Config` trait by reviewing its Rust documentation or source code. The `Config` trait ensures the pallet has access to the necessary types (like events, calls, or origins) and integrates smoothly with the rest of the runtime. At its core, the `Config` trait typically looks like this: ```rust #[pallet::config] pub trait Config: frame_system::Config { /// Event type used by the pallet. type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>; /// Weight information for controlling extrinsic execution costs. type WeightInfo: WeightInfo; } ``` This basic structure shows that every pallet must define certain types, such as `RuntimeEvent` and `WeightInfo`, to function within the runtime. The actual implementation can vary depending on the pallet’s specific needs. ### Utility Pallet Example For instance, in the [`utility`](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame/utility){target=\_blank} pallet, the `Config` trait is implemented with the following types: ```rust #[pallet::config] pub trait Config: frame_system::Config { /// The overarching event type. type RuntimeEvent: From<Event> + IsType<<Self as frame_system::Config>::RuntimeEvent>; /// The overarching call type. type RuntimeCall: Parameter + Dispatchable<RuntimeOrigin = Self::RuntimeOrigin, PostInfo = PostDispatchInfo> + GetDispatchInfo + From<frame_system::Call<Self>> + UnfilteredDispatchable<RuntimeOrigin = Self::RuntimeOrigin> + IsSubType<Call<Self>> + IsType<<Self as frame_system::Config>::RuntimeCall>; /// The caller origin, overarching type of all pallets origins. type PalletsOrigin: Parameter + Into<<Self as frame_system::Config>::RuntimeOrigin> + IsType<<<Self as frame_system::Config>::RuntimeOrigin as frame_support::traits::OriginTrait>::PalletsOrigin>; /// Weight information for extrinsics in this pallet. type WeightInfo: WeightInfo; } ``` This example shows how the `Config` trait defines types like `RuntimeEvent`, `RuntimeCall`, `PalletsOrigin`, and `WeightInfo`, which the pallet will use when interacting with the runtime. ## Parameter Configuration for Pallets Traits in Rust define shared behavior, and within the Polkadot SDK, they allow runtimes to integrate and utilize a pallet's functionality by implementing its associated configuration trait and parameters. Some of these parameters may require constant values, which can be defined using the [`parameter_types!`](https://paritytech.github.io/polkadot-sdk/master/frame_support/macro.parameter_types.html){target=\_blank} macro. This macro simplifies development by expanding the constants into the appropriate struct types with functions that the runtime can use to access their types and values in a consistent manner. 
For example, the following code snippet shows how the solochain template configures certain parameters through the [`parameter_types!`]({{ dependencies.repositories.polkadot_sdk_solochain_template.repository_url }}/blob/{{dependencies.repositories.polkadot_sdk_solochain_template.version}}/runtime/src/lib.rs#L138){target=\_blank} macro in the `runtime/lib.rs` file: ```rust parameter_types! { pub const BlockHashCount: BlockNumber = 2400; pub const Version: RuntimeVersion = VERSION; /// We allow for 2 seconds of compute with a 6 second average block time. pub BlockWeights: frame_system::limits::BlockWeights = frame_system::limits::BlockWeights::with_sensible_defaults( Weight::from_parts(2u64 * WEIGHT_REF_TIME_PER_SECOND, u64::MAX), NORMAL_DISPATCH_RATIO, ); pub BlockLength: frame_system::limits::BlockLength = frame_system::limits::BlockLength ::max_with_normal_ratio(5 * 1024 * 1024, NORMAL_DISPATCH_RATIO); pub const SS58Prefix: u8 = 42; } ``` ## Pallet Config in the Runtime To integrate a new pallet into the runtime, you must implement its `Config` trait in the `runtime/lib.rs` file. This is done by specifying the necessary types and parameters in Rust, as shown below: ```rust impl pallet_example::Config for Runtime { type RuntimeEvent = RuntimeEvent; type WeightInfo = pallet_example::weights::SubstrateWeight<Runtime>; ... } ``` Finally, to compose the runtime, update the list of pallets in the same file by modifying the [`#[frame_support::runtime]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/attr.runtime.html){target=\_blank} section. This Rust macro constructs the runtime with your specified name and pallets, wraps the runtime's configuration, and automatically generates boilerplate code for pallet inclusion. Use the following format when adding your pallet: ```rust #[frame_support::runtime] mod runtime { #[runtime::runtime] #[runtime::derive( RuntimeCall, RuntimeEvent, RuntimeError, RuntimeOrigin, RuntimeFreezeReason, RuntimeHoldReason, RuntimeSlashReason, RuntimeLockId, RuntimeTask )] pub struct Runtime; #[runtime::pallet_index(0)] pub type System = frame_system; #[runtime::pallet_index(1)] pub type Example = pallet_example; } ``` ## Where to Go Next With the pallet successfully added and configured, the runtime is ready to be compiled and used. Following this guide’s steps, you’ve integrated a new pallet into the runtime, set up its dependencies, and ensured proper configuration. You can now proceed to any of the following points:
- Guide __Add Multiple Pallet Instances__ --- Learn how to implement multiple instances of the same pallet in your Polkadot SDK-based runtime to create and interact with modular blockchain components. [:octicons-arrow-right-24: Reference](/develop/parachains/customize-parachain/add-pallet-instances/) - Guide __Make a Custom Pallet__ --- Learn how to create custom pallets using FRAME, allowing for flexible, modular, and scalable blockchain development. Follow the step-by-step guide. [:octicons-arrow-right-24: Reference](/develop/parachains/customize-parachain/make-custom-pallet/) - Guide __Pallet Testing__ --- Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallet's operations. [:octicons-arrow-right-24: Reference](/develop/parachains/testing)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/customize-parachain/add-pallet-instances/ --- BEGIN CONTENT --- --- title: Add Multiple Pallet Instances description: Learn how to implement multiple instances of the same pallet in your Polkadot SDK-based runtime to create and interact with modular blockchain components. categories: Parachains --- # Add Multiple Pallet Instances ## Introduction Running multiple instances of the same pallet within a runtime is a powerful technique in Polkadot SDK development. This approach lets you reuse pallet functionality without reimplementing it, enabling diverse use cases with the same codebase. The Polkadot SDK provides developer-friendly traits for creating instantiable pallets and, in most cases, handles unique storage allocation for different instances automatically. This guide teaches you how to implement and configure multiple instances of a pallet in your runtime. ## Understanding Instantiable Pallets Unlike standard pallets that exist as a single instance in a runtime, instantiable pallets require special configuration through an additional [generic parameter](https://doc.rust-lang.org/reference/items/generics.html){target=\_blank} `I`. This generic `I` creates a unique [lifetime](https://doc.rust-lang.org/rust-by-example/scope/lifetime.html){target=\_blank} for each pallet instance, affecting the pallet's generic types and its configuration trait `T`. You can identify an instantiable pallet by examining its `Pallet` struct definition, which will include both the standard generic `T` and the instantiation generic `I`: ```rust #[pallet::pallet] pub struct Pallet<T, I = ()>(PhantomData<(T, I)>); ``` The instantiation generic also appears throughout the pallet's components, including the `Config` trait, storage items, events, errors, and genesis configuration. ## Adding Instantiable Pallets to Your Runtime The process resembles adding a standard pallet with some key differences. In this example, you will see how to implement two instances of the [pallet-collective](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame/collective){target=\_blank} pallet. ### Define Pallet Parameters First, define the parameters needed to configure the pallet instances. This step is identical whether implementing single or multiple instances: ```rust parameter_types! 
{ pub const MotionDuration: BlockNumber = 24 * HOURS; pub MaxProposalWeight: Weight = Perbill::from_percent(50) * RuntimeBlockWeights::get().max_block; pub const MaxProposals: u32 = 100; pub const MaxMembers: u32 = 100; } ``` ### Configure the Pallet Instances For a single instance, the configuration would look like this: ```rust hl_lines="1" impl pallet_collective::Config for Runtime { type RuntimeOrigin = RuntimeOrigin; type Proposal = RuntimeCall; type RuntimeEvent = RuntimeEvent; type MotionDuration = MotionDuration; type MaxProposals = MaxProposals; type MaxMembers = MaxMembers; type DefaultVote = pallet_collective::MoreThanMajorityThenPrimeDefaultVote; type SetMembersOrigin = EnsureRoot<AccountId>; type WeightInfo = pallet_collective::weights::SubstrateWeight<Runtime>; type MaxProposalWeight = MaxProposalWeight; type DisapproveOrigin = EnsureRoot<AccountId>; type KillOrigin = EnsureRoot<AccountId>; type Consideration = (); } ``` For multiple instances, you need to create a unique identifier for each instance using the `Instance` type with a number suffix, then implement the configuration for each one: ```rust hl_lines="2-3" // Configure first instance type Collective1 = pallet_collective::Instance1; impl pallet_collective::Config<Collective1> for Runtime { type RuntimeOrigin = RuntimeOrigin; type Proposal = RuntimeCall; type RuntimeEvent = RuntimeEvent; type MotionDuration = MotionDuration; type MaxProposals = MaxProposals; type MaxMembers = MaxMembers; type DefaultVote = pallet_collective::MoreThanMajorityThenPrimeDefaultVote; type SetMembersOrigin = EnsureRoot<AccountId>; type WeightInfo = pallet_collective::weights::SubstrateWeight<Runtime>; type MaxProposalWeight = MaxProposalWeight; type DisapproveOrigin = EnsureRoot<AccountId>; type KillOrigin = EnsureRoot<AccountId>; type Consideration = (); } ``` ```rust hl_lines="2-3" // Configure second instance type Collective2 = pallet_collective::Instance2; impl pallet_collective::Config<Collective2> for Runtime { type RuntimeOrigin = RuntimeOrigin; type Proposal = RuntimeCall; type RuntimeEvent = RuntimeEvent; type MotionDuration = MotionDuration; type MaxProposals = MaxProposals; type MaxMembers = MaxMembers; type DefaultVote = pallet_collective::MoreThanMajorityThenPrimeDefaultVote; type SetMembersOrigin = EnsureRoot<AccountId>; type WeightInfo = pallet_collective::weights::SubstrateWeight<Runtime>; type MaxProposalWeight = MaxProposalWeight; type DisapproveOrigin = EnsureRoot<AccountId>; type KillOrigin = EnsureRoot<AccountId>; type Consideration = (); } ``` While the example above uses identical configurations for both instances, you can customize each instance's parameters to serve different purposes within your runtime. ### Add Pallet Instances to the Runtime Finally, add both pallet instances to your runtime definition, ensuring each has: - A unique pallet index - The correct instance type specified ```rust hl_lines="6-10" #[frame_support::runtime] mod runtime { #[runtime::runtime] // ... other runtime configuration #[runtime::pallet_index(16)] pub type Collective1 = pallet_collective<Instance1>; #[runtime::pallet_index(17)] pub type Collective2 = pallet_collective<Instance2>; // ... other pallets } ``` ## Where to Go Next If you've followed all the steps correctly, you should now be able to compile your runtime and interact with both instances of the pallet. Each instance will operate independently with its own storage, events, and configured parameters. Now that you've mastered implementing multiple pallet instances, the next step is creating your own custom pallets. Explore the following resources:
- Guide __Make a Custom Pallet__ --- Learn how to create custom pallets using FRAME, allowing for flexible, modular, and scalable blockchain development. Follow the step-by-step guide. [:octicons-arrow-right-24: Reference](/develop/parachains/customize-parachain/make-custom-pallet/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/customize-parachain/add-smart-contract-functionality/ --- BEGIN CONTENT --- --- title: Add Smart Contract Functionality description: Add smart contract capabilities to your Polkadot SDK-based blockchain. Explore EVM and Wasm integration for enhanced chain functionality. categories: Parachains --- # Add Smart Contract Functionality ## Introduction When building your custom blockchain with the Polkadot SDK, you have the flexibility to add smart contract capabilities through specialized pallets. These pallets allow blockchain users to deploy and execute smart contracts, enhancing your chain's functionality and programmability. Polkadot SDK-based blockchains support two distinct smart contract execution environments: [EVM (Ethereum Virtual Machine)](#evm-smart-contracts) and [Wasm (WebAssembly)](#wasm-smart-contracts). Each environment allows developers to deploy and execute different types of smart contracts, providing flexibility in choosing the most suitable solution for their needs. ## EVM Smart Contracts To enable Ethereum-compatible smart contracts in your blockchain, you'll need to integrate [Frontier](https://github.com/polkadot-evm/frontier){target=\_blank}, the Ethereum compatibility layer for Polkadot SDK-based chains. This requires adding two essential pallets to your runtime: - [**`pallet-evm`**](https://github.com/polkadot-evm/frontier/tree/master/frame/evm){target=\_blank} - provides the EVM execution environment - [**`pallet-ethereum`**](https://github.com/polkadot-evm/frontier/tree/master/frame/ethereum){target=\_blank} - handles Ethereum-formatted transactions and RPC capabilities For step-by-step guidance on adding these pallets to your runtime, refer to [Add a Pallet to the Runtime](/develop/parachains/customize-parachain/add-existing-pallets/){target=\_blank}. For a real-world example of how these pallets are implemented in production, you can check Moonbeam's implementation of [`pallet-evm`](https://github.com/moonbeam-foundation/moonbeam/blob/9e2ddbc9ae8bf65f11701e7ccde50075e5fe2790/runtime/moonbeam/src/lib.rs#L532){target=\_blank} and [`pallet-ethereum`](https://github.com/moonbeam-foundation/moonbeam/blob/9e2ddbc9ae8bf65f11701e7ccde50075e5fe2790/runtime/moonbeam/src/lib.rs#L698){target=\_blank}. ## Wasm Smart Contracts To support Wasm-based smart contracts, you'll need to integrate: - [**`pallet-contracts`**](https://docs.rs/pallet-contracts/latest/pallet_contracts/index.html#contracts-pallet){target=\_blank} - provides the Wasm smart contract execution environment This pallet enables the deployment and execution of Wasm-based smart contracts on your blockchain. For detailed instructions on adding this pallet to your runtime, see [Add a Pallet to the Runtime](/develop/parachains/customize-parachain/add-existing-pallets/){target=\_blank}. For a real-world example of how this pallet is implemented in production, you can check Astar's implementation of [`pallet-contracts`](https://github.com/AstarNetwork/Astar/blob/b6f7a408d31377130c3713ed52941a06b5436402/runtime/astar/src/lib.rs#L693){target=\_blank}. ## Where to Go Next Now that you understand how to enable smart contract functionality in your blockchain, you might want to:
- Guide __Smart Contracts Overview__ --- Learn how developers can build smart contracts on Polkadot by leveraging the PolkaVM, Wasm/ink! or EVM contracts across many parachains. [:octicons-arrow-right-24: Reference](/develop/smart-contracts/overview/) - Guide __Wasm (ink!) Contracts__ --- Learn to build Wasm smart contracts with ink!, a Rust-based eDSL. Explore installation, contract structure, and key features. [:octicons-arrow-right-24: Reference](/develop/smart-contracts/overview#wasm-ink) - Guide __EVM Contracts__ --- Learn how Polkadot parachains such as Moonbeam, Astar, Acala, and Manta leverage the Ethereum Virtual Machine (EVM) and integrate it into their parachains. [:octicons-arrow-right-24: Reference](/develop/smart-contracts/overview#parachain-contracts)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/customize-parachain/ --- BEGIN CONTENT --- --- title: Customize Your Parachain description: Learn to build a custom parachain with Polkadot SDK's FRAME framework, which includes pallet development, testing, smart contracts, and runtime customization. template: index-page.html --- # Customize Your Parachain Learn how to build a custom parachain with Polkadot SDK's FRAME framework, which includes pallet development, testing, smart contracts, and runtime customization. Pallets are modular components within the FRAME ecosystem that contain specific blockchain functionalities. This modularity grants developers increased flexibility and control around which behaviors to include in the core logic of their parachain. The [FRAME directory](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame){target=\_blank} includes a robust library of pre-built pallets you can use as examples or templates to ease development. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/customize-parachain/make-custom-pallet/ --- BEGIN CONTENT --- --- title: Make a Custom Pallet description: Learn how to create custom pallets using FRAME, allowing for flexible, modular, and scalable blockchain development. Follow the step-by-step guide. categories: Parachains --- # Make a Custom Pallet ## Introduction FRAME provides a powerful set of tools for blockchain development, including a library of pre-built pallets. However, its true strength lies in the ability to create custom pallets tailored to your specific needs. This section will guide you through creating your own custom pallet, allowing you to extend your blockchain's functionality in unique ways. To get the most out of this guide, ensure you're familiar with [FRAME concepts](/develop/parachains/customize-parachain/overview/){target=\_blank}. Creating custom pallets offers several advantages over relying on pre-built pallets: - **Flexibility** - define runtime behavior that precisely matches your project requirements - **Modularity** - combine pre-built and custom pallets to achieve the desired blockchain functionality - **Scalability** - add or modify features as your project evolves As you follow this guide to create your custom pallet, you'll work with the following key sections: 1. **Imports and dependencies** - bring in necessary FRAME libraries and external modules 2. **Runtime configuration trait** - specify the types and constants required for your pallet to interact with the runtime 3. **Runtime events** - define events that your pallet can emit to communicate state changes 4. **Runtime errors** - define the error types that can be returned from the function calls dispatched to the runtime 5. **Runtime storage** - declare on-chain storage items for your pallet's state 6. **Extrinsics (function calls)** - create callable functions that allow users to interact with your pallet and execute transactions For additional macros you can include in a pallet, beyond those covered in this guide, refer to the [pallet_macros](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/index.html){target=\_blank} section of the Polkadot SDK Docs. ## Initial Setup This section will guide you through the initial steps of creating the foundation for your custom FRAME pallet. 
You'll create a new Rust library project and set up the necessary dependencies. 1. Create a new Rust library project using the following `cargo` command: ```bash cargo new --lib custom-pallet \ && cd custom-pallet ``` This command creates a new library project named `custom-pallet` and navigates into its directory. 2. Configure the dependencies required for FRAME pallet development in the `Cargo.toml` file as follows: ```toml [package] name = "custom-pallet" version = "0.1.0" edition = "2021" [dependencies] frame-support = { version = "37.0.0", default-features = false } frame-system = { version = "37.0.0", default-features = false } codec = { version = "3.6.12", default-features = false, package = "parity-scale-codec", features = [ "derive", ] } scale-info = { version = "2.11.1", default-features = false, features = [ "derive", ] } sp-runtime = { version = "39.0.0", default-features = false } [features] default = ["std"] std = [ "frame-support/std", "frame-system/std", "codec/std", "scale-info/std", "sp-runtime/std", ] ``` !!!note Proper version management is crucial for ensuring compatibility and reducing potential conflicts in your project. Carefully select the versions of the packages according to your project's specific requirements: - When developing for a specific Polkadot SDK runtime, ensure that your pallet's dependency versions match those of the target runtime - If you're creating this pallet within a Polkadot SDK workspace: - Define the actual versions in the root `Cargo.toml` file - Use workspace inheritance in your pallet's `Cargo.toml` to maintain consistency across your project - Regularly check for updates to FRAME and Polkadot SDK dependencies to benefit from the latest features, performance improvements, and security patches For detailed information about workspace inheritance and how to properly integrate your pallet with the runtime, see the [Add an Existing Pallet to the Runtime](/develop/parachains/customize-parachain/add-existing-pallets/){target=\_blank} page. 3. Initialize the pallet structure by replacing the contents of `src/lib.rs` with the following scaffold code: ```rust pub use pallet::*; #[frame_support::pallet] pub mod pallet { use frame_support::pallet_prelude::*; use frame_system::pallet_prelude::*; #[pallet::pallet] pub struct Pallet(_); #[pallet::config] // snip #[pallet::event] // snip #[pallet::error] // snip #[pallet::storage] // snip #[pallet::call] // snip } ``` With this scaffold in place, you're ready to start implementing your custom pallet's specific logic and features. The subsequent sections of this guide will walk you through populating each of these components with the necessary code for your pallet's functionality. ## Pallet Configuration Every pallet includes a Rust trait called [`Config`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/trait.Config.html){target=\_blank}, which exposes configurable options and links your pallet to other parts of the runtime. All types and constants the pallet depends on must be declared within this trait. These types are defined generically and made concrete when the pallet is instantiated in the `runtime/src/lib.rs` file of your blockchain. In this step, you'll only configure the common types used by all pallets: - **`RuntimeEvent`** - since this pallet emits events, the runtime event type is required to handle them. 
This ensures that events generated by the pallet can be correctly processed and interpreted by the runtime - **`WeightInfo`** - this type defines the weights associated with the pallet's callable functions (also known as dispatchables). Weights help measure the computational cost of executing these functions. However, the `WeightInfo` type will be left unconfigured since setting up custom weights is outside the scope of this guide. Replace the line containing the [`#[pallet::config]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.config.html){target=\_blank} macro with the following code block: ```rust #[pallet::config] pub trait Config: frame_system::Config { /// The overarching runtime event type. type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>; /// A type representing the weights required by the dispatchables of this pallet. type WeightInfo; } ``` ## Pallet Events After configuring the pallet to emit events, the next step is to define the events that can be triggered by functions within the pallet. Events provide a straightforward way to inform external entities, such as dApps, chain explorers, or users, that a significant change has occurred in the runtime. In a FRAME pallet, the details of each event and its parameters are included in the node’s metadata, making them accessible to external tools and interfaces. The [`generate_deposit`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.generate_deposit.html){target=\_blank} macro generates a `deposit_event` function on the `Pallet`, which converts the pallet’s event type into the [`RuntimeEvent`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/trait.Config.html#associatedtype.RuntimeEvent){target=\_blank} (as specified in the `Config` trait) and deposits it using [`frame_system::Pallet::deposit_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.deposit_event){target=\_blank}. This step adds an event called `SomethingStored`, which is triggered when a user successfully stores a value in the pallet. The event records both the value and the account that performed the action. To define events, replace the [`#[pallet::event]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.event.html){target=\_blank} line with the following code block: ```rust #[pallet::event] #[pallet::generate_deposit(pub(super) fn deposit_event)] pub enum Event<T: Config> { /// A user has successfully set a new value. SomethingStored { /// The new value set. something: u32, /// The account who set the new value. who: T::AccountId, }, } ``` ## Pallet Errors While events signal the successful completion of calls, errors indicate when and why a call has failed. It's essential to use informative names for errors to clearly communicate the cause of failure. Like events, error documentation is included in the node's metadata, so providing helpful descriptions is crucial. Errors are defined as an enum named `Error` with a generic type. Variants can have fields or be fieldless. Any field type specified in the error must implement the [`TypeInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_prelude/trait.TypeInfo.html){target=\_blank} trait, and the encoded size of each field should be as small as possible. Runtime errors can be up to 4 bytes in size, allowing the return of additional information when needed. 
This step defines two basic errors: one for handling cases where no value has been set and another for managing arithmetic overflow. To define errors, replace the [`#[pallet::error]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.error.html){target=\_blank} line with the following code block: ```rust #[pallet::error] pub enum Error<T> { /// The value retrieved was `None` as no value was previously set. NoneValue, /// There was an attempt to increment the value in storage over `u32::MAX`. StorageOverflow, } ``` ## Pallet Storage To persist and store state/data within the pallet (and subsequently, the blockchain you are building), the `#[pallet::storage]` macro is used. This macro allows the definition of abstract storage within the runtime and sets metadata for that storage. It can be applied multiple times to define different storage items. Several types are available for defining storage, which you can explore in the [Polkadot SDK documentation](https://paritytech.github.io/polkadot-sdk/master/frame_support/storage/types/index.html){target=\_blank}. This step adds a simple storage item, `Something`, which stores a single `u32` value in the pallet's runtime storage. To define storage, replace the [`#[pallet::storage]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.storage.html){target=\_blank} line with the following code block: ```rust #[pallet::storage] pub type Something<T> = StorageValue<_, u32>; ``` ## Pallet Dispatchable Extrinsics Dispatchable functions enable users to interact with the pallet and trigger state changes. These functions are represented as "extrinsics," which are similar to transactions. They must return a [`DispatchResult`](https://paritytech.github.io/polkadot-sdk/master/frame_support/dispatch/type.DispatchResult.html){target=\_blank} and be annotated with a weight and a call index. The `#[pallet::call_index]` macro is used to explicitly define an index for calls in the `Call` enum. This is useful for maintaining backward compatibility in the event of new dispatchables being introduced, as changing the order of dispatchables would otherwise alter their index. The `#[pallet::weight]` macro assigns a weight to each call, determining its execution cost. This section adds two dispatchable functions: - **`do_something`** - takes a single `u32` value, stores it in the pallet's storage, and emits an event - **`cause_error`** - checks if a value exists in storage. If the value is found, it is incremented and stored back. If no value is present or an overflow occurs, a custom error is returned To implement these calls, replace the [`#[pallet::call]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.call.html){target=\_blank} line with the following code block: ```rust #[pallet::call] impl<T: Config> Pallet<T> { #[pallet::call_index(0)] #[pallet::weight(Weight::default())] pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult { // Check that the extrinsic was signed and get the signer. let who = ensure_signed(origin)?; // Update storage. Something::<T>::put(something); // Emit an event. Self::deposit_event(Event::SomethingStored { something, who }); // Return a successful `DispatchResult` Ok(()) } #[pallet::call_index(1)] #[pallet::weight(Weight::default())] pub fn cause_error(origin: OriginFor<T>) -> DispatchResult { let _who = ensure_signed(origin)?; // Read a value from storage. match Something::<T>::get() { // Return an error if the value has not been set. 
None => Err(Error::<T>::NoneValue.into()), Some(old) => { // Increment the value read from storage. This will cause an error in the event // of overflow. let new = old.checked_add(1).ok_or(Error::<T>::StorageOverflow)?; // Update the value in storage with the incremented result. Something::<T>::put(new); Ok(()) }, } } } ``` ## Pallet Implementation Overview After following all the previous steps, the pallet is now fully implemented. Below is the complete code, combining the configuration, events, errors, storage, and dispatchable functions: ???code ```rust pub use pallet::*; #[frame_support::pallet] pub mod pallet { use frame_support::pallet_prelude::*; use frame_system::pallet_prelude::*; #[pallet::pallet] pub struct Pallet(_); #[pallet::config] pub trait Config: frame_system::Config { /// The overarching runtime event type. type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>; /// A type representing the weights required by the dispatchables of this pallet. type WeightInfo; } #[pallet::event] #[pallet::generate_deposit(pub(super) fn deposit_event)] pub enum Event<T: Config> { /// A user has successfully set a new value. SomethingStored { /// The new value set. something: u32, /// The account who set the new value. who: T::AccountId, }, } #[pallet::error] pub enum Error<T> { /// The value retrieved was `None` as no value was previously set. NoneValue, /// There was an attempt to increment the value in storage over `u32::MAX`. StorageOverflow, } #[pallet::storage] pub type Something<T> = StorageValue<_, u32>; #[pallet::call] impl<T: Config> Pallet<T> { #[pallet::call_index(0)] #[pallet::weight(Weight::default())] pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult { // Check that the extrinsic was signed and get the signer. let who = ensure_signed(origin)?; // Update storage. Something::<T>::put(something); // Emit an event. Self::deposit_event(Event::SomethingStored { something, who }); // Return a successful `DispatchResult` Ok(()) } #[pallet::call_index(1)] #[pallet::weight(Weight::default())] pub fn cause_error(origin: OriginFor<T>) -> DispatchResult { let _who = ensure_signed(origin)?; // Read a value from storage. match Something::<T>::get() { // Return an error if the value has not been set. None => Err(Error::<T>::NoneValue.into()), Some(old) => { // Increment the value read from storage. This will cause an error in the event // of overflow. let new = old.checked_add(1).ok_or(Error::<T>::StorageOverflow)?; // Update the value in storage with the incremented result. Something::<T>::put(new); Ok(()) }, } } } } ``` ## Where to Go Next With the pallet implemented, the next steps involve ensuring its reliability and performance before integrating it into a runtime. Check the following sections:
- Guide __Testing__ --- Learn how to effectively test the functionality and reliability of your pallet to ensure it behaves as expected. [:octicons-arrow-right-24: Reference](/develop/parachains/testing/) - Guide __Benchmarking__ --- Explore methods to measure the performance and execution cost of your pallet. [:octicons-arrow-right-24: Reference](/develop/parachains/testing/benchmarking) - Guide __Add a Pallet to the Runtime__ --- Follow this guide to include your pallet in a Polkadot SDK-based runtime, making it ready for use in your blockchain. [:octicons-arrow-right-24: Reference](/develop/parachains/customize-parachain/add-existing-pallets/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/customize-parachain/overview/ --- BEGIN CONTENT --- --- title: Overview of FRAME description: Learn how Polkadot SDK’s FRAME framework simplifies blockchain development with modular pallets and support libraries for efficient runtime design. categories: Basics, Parachains --- # Overview ## Introduction The runtime is the heart of any Polkadot SDK-based blockchain, handling the essential logic that governs state changes and transaction processing. With Polkadot SDK’s [FRAME (Framework for Runtime Aggregation of Modularized Entities)](/polkadot-protocol/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\_blank}, developers gain access to a powerful suite of tools for building custom blockchain runtimes. FRAME offers a modular architecture, featuring reusable pallets and support libraries, to streamline development. This guide provides an overview of FRAME, its core components like pallets and system libraries, and demonstrates how to compose a runtime tailored to your specific blockchain use case. Whether you’re integrating pre-built modules or designing custom logic, FRAME equips you with the tools to create scalable, feature-rich blockchains. ## FRAME Runtime Architecture The following diagram illustrates how FRAME components integrate into the runtime: ![](/images/develop/parachains/customize-parachain/overview/frame-overview-1.webp) All transactions sent to the runtime are handled by the `frame_executive` pallet, which dispatches them to the appropriate pallet for execution. These runtime modules contain the logic for specific blockchain features. The `frame_system` module provides core functions, while `frame_support` libraries offer useful tools to simplify pallet development. Together, these components form the backbone of a FRAME-based blockchain's runtime. ### Pallets Pallets are modular components within the FRAME ecosystem that encapsulate specific blockchain functionalities. These modules offer customizable business logic for various use cases and features that can be integrated into a runtime. Developers have the flexibility to implement any desired behavior in the core logic of the blockchain, such as: - Exposing new transactions - Storing information - Enforcing business rules Pallets also include necessary wiring code to ensure proper integration and functionality within the runtime. FRAME provides a range of [pre-built pallets](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame){target=\_blank} for standard and common blockchain functionalities, including consensus algorithms, staking mechanisms, governance systems, and more. These pre-existing pallets serve as building blocks or templates, which developers can use as-is, modify, or reference when creating custom functionalities. #### Pallet Structure Polkadot SDK heavily utilizes Rust macros, allowing developers to focus on specific functional requirements when writing pallets instead of dealing with technicalities and scaffolding code. 
A typical pallet skeleton looks like this: ```rust pub use pallet::*; #[frame_support::pallet] pub mod pallet { use frame_support::pallet_prelude::*; use frame_system::pallet_prelude::*; #[pallet::pallet] #[pallet::generate_store(pub(super) trait Store)] pub struct Pallet(_); #[pallet::config] // snip #[pallet::event] // snip #[pallet::error] // snip #[pallet::storage] // snip #[pallet::call] // snip } ``` All pallets, including custom ones, can implement these attribute macros: - **`#[frame_support::pallet]`** - marks the module as usable in the runtime - **`#[pallet::pallet]`** - applied to a structure used to retrieve module information easily - **`#[pallet::config]`** - defines the configuration for the pallet's data types - **`#[pallet::event]`** - defines events to provide additional information to users - **`#[pallet::error]`** - lists possible errors in an enum to be returned upon unsuccessful execution - **`#[pallet::storage]`** - defines elements to be persisted in storage - **`#[pallet::call]`** - defines functions exposed as transactions, allowing dispatch to the runtime These macros are applied as attributes to Rust modules, functions, structures, enums, and types and serve as the core components of a pallet. They enable the pallet to be built and added to the runtime, exposing the custom logic to the outer world. For a comprehensive guide on these and additional macros, see the [`pallet_macros`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/index.html){target=\_blank} section in the Polkadot SDK documentation. ### Support Libraries In addition to purpose-specific pallets, FRAME offers services and core libraries that facilitate composing and interacting with the runtime: - [**`frame_system` pallet**](https://paritytech.github.io/polkadot-sdk/master/frame_system/index.html){target=\_blank} - provides low-level types, storage, and functions for the runtime - [**`frame_executive` pallet**](https://paritytech.github.io/polkadot-sdk/master/frame_executive/index.html){target=\_blank} - orchestrates the execution of incoming function calls to the respective pallets in the runtime - [**`frame_support` crate**](https://paritytech.github.io/polkadot-sdk/master/frame_support/index.html){target=\_blank} - is a collection of Rust macros, types, traits, and modules that simplify the development of Substrate pallets - [**`frame_benchmarking` crate**](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/trait.Benchmark.html){target=\_blank} - contains common runtime patterns for benchmarking and testing purposes ## Compose a Runtime with Pallets The Polkadot SDK allows developers to construct a runtime by combining various pallets, both built-in and custom-made. This modular approach enables the creation of unique blockchain behaviors tailored to specific requirements. The following diagram illustrates the process of selecting and combining FRAME pallets to compose a runtime: ![](/images/develop/parachains/customize-parachain/overview/frame-overview-2.webp) This modular design allows developers to: - Rapidly prototype blockchain systems - Easily add or remove features by including or excluding pallets - Customize blockchain behavior without rebuilding core components - Leverage tested and optimized code from built-in pallets ## Starting from Templates Using pre-built templates is an efficient way to begin building a custom blockchain. 
## Starting from Templates Using pre-built templates is an efficient way to begin building a custom blockchain. Templates provide a foundational setup with pre-configured modules, letting developers avoid starting from scratch and instead focus on customization. Depending on your project’s goals—whether you want a simple test chain, a standalone chain, or a parachain that integrates with Polkadot’s relay chains—there are templates designed to suit different levels of complexity and scalability. ### Solochain Templates Solochain templates are designed for developers who want to create standalone blockchains that operate independently without connecting to a relay chain: - [**`minimal-template`**](https://github.com/paritytech/polkadot-sdk/tree/master/templates/minimal){target=\_blank} - includes only the essential components necessary for a functioning blockchain. It’s ideal for developers who want to gain familiarity with blockchain basics and test simple customizations before scaling up - [**`solochain-template`**](https://github.com/paritytech/polkadot-sdk/tree/master/templates/solochain){target=\_blank} - provides a foundation for creating standalone blockchains with moderate features, including a simple consensus mechanism and several core FRAME pallets. It’s a solid starting point for developers who want a fully functional chain that doesn’t depend on a relay chain ### Parachain Templates Parachain templates are specifically designed for chains that will connect to and interact with relay chains in the Polkadot ecosystem: - [**`parachain-template`**](https://github.com/paritytech/polkadot-sdk/tree/master/templates/parachain){target=\_blank} - designed for connecting to relay chains like Polkadot, Kusama, or Paseo, this template enables a chain to operate as a parachain. For projects aiming to integrate with Polkadot’s ecosystem, this template offers a great starting point - [**`OpenZeppelin`**](https://github.com/OpenZeppelin/polkadot-runtime-templates/tree/main){target=\_blank} - offers two flexible starting points: - The [`generic-runtime-template`](https://github.com/OpenZeppelin/polkadot-runtime-templates/tree/main/generic-template){target=\_blank} provides a minimal setup with essential pallets and secure defaults, creating a reliable foundation for custom blockchain development - The [`evm-runtime-template`](https://github.com/OpenZeppelin/polkadot-runtime-templates/tree/main/evm-template){target=\_blank} enables EVM compatibility, allowing developers to migrate Solidity contracts and EVM-based dApps. This template is ideal for Ethereum developers looking to leverage Substrate's capabilities Choosing a suitable template depends on your project’s unique requirements, level of customization, and integration needs. Starting from a template speeds up development and lets you focus on implementing your chain’s unique features rather than the foundational blockchain setup. ## Where to Go Next For more detailed information on implementing this process, refer to the following sections: - [Add a Pallet to Your Runtime](/develop/parachains/customize-parachain/add-existing-pallets/) - [Create a Custom Pallet](/develop/parachains/customize-parachain/make-custom-pallet/) --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/deployment/build-deterministic-runtime/ --- BEGIN CONTENT --- --- title: Build a deterministic runtime description: Explains how to use the Polkadot SDK runtime toolbox and Docker to build deterministic Wasm binaries for Polkadot SDK-based chains.
categories: Parachains --- # Build a Deterministic Runtime ## Introduction By default, the Rust compiler produces optimized Wasm binaries. These binaries are suitable for working in an isolated environment, such as local development. However, the Wasm binaries the compiler builds by default aren't guaranteed to be deterministically reproducible. Each time the compiler generates the Wasm runtime, it might produce a slightly different Wasm byte code. This is problematic in a blockchain network where all nodes must use exactly the same raw chain specification file. Working with builds that aren't guaranteed to be deterministically reproducible can cause other problems, too. For example, when automating the build process for a blockchain, it is ideal that the same code always produces the same result (in terms of bytecode). Without a deterministic build, compiling the Wasm runtime with every push would produce inconsistent and unpredictable bytecode, making it difficult to integrate with automation and likely to break a CI/CD pipeline repeatedly. Deterministic builds—code that always compiles to exactly the same bytecode—ensure that the Wasm runtime can be inspected, audited, and independently verified. ## Prerequisites Before you begin, ensure you have [Docker](https://www.docker.com/get-started/){target=\_blank} installed. ## Tooling for Wasm Runtime To compile the Wasm runtime deterministically, the same tooling that produces the runtime for Polkadot, Kusama, and other Polkadot SDK-based chains can be used. This tooling, referred to collectively as the Substrate Runtime Toolbox or [`srtool`](https://github.com/paritytech/srtool){target=\_blank}, ensures that the same source code consistently compiles to an identical Wasm blob. The core component of `srtool` is a Docker image: builds are executed inside a container created from that image. The name of the `srtool` Docker image specifies the version of the Rust compiler used to compile the code included in the image. For example, the image `{{dependencies.repositories.srtool.docker_image_name}}:{{dependencies.repositories.srtool.docker_image_version}}` indicates that the code in the image was compiled with version `{{dependencies.repositories.srtool.docker_image_version}}` of the `rustc` compiler. ## Working with the Docker Container The [`srtool-cli`](https://github.com/chevdor/srtool-cli){target=\_blank} package is a command-line utility written in Rust that installs an executable program called `srtool`. This program simplifies the interactions with the `srtool` Docker container. Over time, the tooling around the `srtool` Docker image has expanded to include the following tools and helper programs: - [**`srtool-cli`**](https://github.com/chevdor/srtool-cli){target=\_blank} - provides a command-line interface to pull the srtool Docker image, get information about the image and tooling used to interact with it, and build the runtime using the `srtool` Docker container - [**`subwasm`**](https://github.com/chevdor/subwasm){target=\_blank} - provides command-line options for working with the metadata and Wasm runtime built using srtool.
The `subwasm` program is also used internally to perform tasks in the `srtool` image - [**`srtool-actions`**](https://github.com/chevdor/srtool-actions){target=\_blank} - provides GitHub actions to integrate builds produced using the `srtool` image with your GitHub CI/CD pipelines - [**`srtool-app`**](https://gitlab.com/chevdor/srtool-app){target=\_blank} - provides a simple graphical user interface for building the runtime using the `srtool` Docker image ## Prepare the Environment It is recommended to install the `srtool-cli` program to work with the Docker image using a simple command-line interface. To prepare the environment: 1. Verify that Docker is installed by running the following command: ```bash docker --version ``` If Docker is installed, the command will display version information:
```txt
docker --version
Docker version 20.10.17, build 100c701
```
2. Install the `srtool` command-line interface by running the following command: ```bash cargo install --git https://github.com/chevdor/srtool-cli ``` 3. View usage information for the `srtool` command-line interface by running the following command: ```bash srtool help ``` 4. Download the latest `srtool` Docker image by running the following command: ```bash srtool pull ``` ## Start a Deterministic Build After preparing the environment, the Wasm runtime can be compiled using the `srtool` Docker image. To build the runtime, open your Polkadot SDK-based project in a terminal shell and run the following command: ```bash srtool build --app --package INSERT_RUNTIME_PACKAGE_NAME --runtime-dir INSERT_RUNTIME_PATH ``` - The name specified for `--package` should be the package name defined in the `Cargo.toml` file for the runtime - The path specified for `--runtime-dir` should be the directory that contains the runtime's `Cargo.toml` file. For example: ```plain node/ pallets/ runtime/ # INSERT_RUNTIME_PATH should be the path to this directory ├──lib.rs └──Cargo.toml ... ``` - If the runtime's `Cargo.toml` file is located in the default `runtime` directory, the `--runtime-dir` parameter can be omitted ## Use srtool in GitHub Actions To add a GitHub workflow for building the runtime: 1. Create a `.github/workflows` directory in the chain's directory 2. In the `.github/workflows` directory, click **Add file**, then select **Create new file** 3. Copy the sample GitHub action from the `basic.yml` example in the [`srtool-actions`](https://github.com/chevdor/srtool-actions){target=\_blank} repository and paste it into the file you created in the previous step ??? interface "`basic.yml`" ```yml name: Srtool build on: push jobs: srtool: runs-on: ubuntu-latest strategy: matrix: chain: ["asset-hub-kusama", "asset-hub-westend"] steps: - uses: actions/checkout@v3 - name: Srtool build id: srtool_build uses: chevdor/srtool-actions@v0.8.0 with: chain: ${{ matrix.chain }} runtime_dir: polkadot-parachains/${{ matrix.chain }}-runtime - name: Summary run: | echo '${{ steps.srtool_build.outputs.json }}' | jq . > ${{ matrix.chain }}-srtool-digest.json cat ${{ matrix.chain }}-srtool-digest.json echo "Runtime location: ${{ steps.srtool_build.outputs.wasm }}" ``` 4. Modify the settings in the sample action. For example, modify the following settings: - The name of the chain - The name of the runtime package - The location of the runtime 5. Type a name for the action file and commit ## Use the srtool Image via Docker Hub If utilizing [`srtool-cli`](https://github.com/chevdor/srtool-cli){target=\_blank} or [`srtool-app`](https://gitlab.com/chevdor/srtool-app){target=\_blank} isn't an option, the `paritytech/srtool` container image can be used directly via Docker Hub. To pull the image from Docker Hub: 1. Sign in to Docker Hub 2. Type `paritytech/srtool` in the search field and press enter 3. Click **paritytech/srtool**, then click **Tags** 4. Copy the command for the image you want to pull 5. Open a terminal shell on your local computer 6. Paste the command you copied from Docker Hub. For example, you might run a command similar to the following, which downloads and unpacks the image: ```bash docker pull paritytech/srtool:{{ dependencies.repositories.srtool.docker_image_version }} ``` ### Naming Convention for Images Keep in mind that there is no `latest` tag for the `srtool` image. Ensure that the image selected is compatible with the locally available version of the Rust compiler.
The naming convention for `paritytech/srtool` Docker images specifies the version of the Rust compiler used to compile the code included in the image. Some images specify both a compiler version and the version of the build script used. For example, an image named `paritytech/srtool:1.62.0-0.9.19` was compiled with version `1.62.0` of the `rustc` compiler and version `0.9.19` of the build script. Images that only specify the compiler version always contain the software's latest version. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/deployment/coretime-renewal/ --- BEGIN CONTENT --- --- title: Coretime Renewal description: Learn how to renew coretime manually or automatically to ensure uninterrupted parachain operation with predictable pricing and minimal risk. categories: Parachains --- # Coretime Renewal ## Introduction Coretime can be purchased in bulk for a period of 28 days, providing access to Polkadot's shared security and interoperability for Polkadot parachains. The bulk purchase of coretime includes a rent-control mechanism that keeps future purchases within a predictable price range of the initial purchase. This allows cores to be renewed at a known price without competing against other participants in the open market. ## Bulk Sale Phases The bulk sale process consists of three distinct phases: 1. **Interlude phase** - the period between bulk sales when renewals are prioritized 2. **Lead-in phase** - following the interlude phase, a new `start_price` is set, and a Dutch auction begins, lasting for `leadin_length` blocks. During this phase, prices experience downward pressure as the system aims to find market equilibrium. The final price at the end of this phase becomes the `regular_price`, which will be used in the subsequent fixed price phase 3. **Fixed price phase** - the final phase where remaining cores are sold at the `regular_price` established during the lead-in phase. This provides a stable and predictable pricing environment for participants who did not purchase during the price discovery period For more comprehensive information about the coretime sales process, refer to the [Coretime Sales](https://wiki.polkadot.network/learn/learn-agile-coretime/#coretime-sales){target=\_blank} section in the Polkadot Wiki. ## Renewal Timing While renewals can technically be made during any phase, it is strongly recommended that they be completed during the interlude phase. Delaying renewal introduces the risk that the core could be sold to another market participant, preventing successful renewal. Renewals must be initiated well in advance to avoid the scenario above. For example, if you purchase a core in bulk sale #1, you obtain coretime for the upcoming bulk period (during which bulk sale #2 takes place). Your renewal must be completed during bulk sale #2, ideally during its interlude phase, to secure coretime for the subsequent period. ## Manual Renewal Cores can be renewed by issuing the [`broker.renew(core)`](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/struct.Pallet.html#method.renew){target=\_blank} extrinsic during the coretime sale period. While this process is straightforward, it requires manual action that must not be overlooked. Failure to complete this renewal step before all available cores are sold could result in your parachain being unable to secure a core for the next operational period. To manually renew a core: 1. 
In [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank}, connect to the Coretime chain, navigate to the **Developer** dropdown, and select the **Extrinsics** option ![](/images/develop/parachains/deployment/coretime-renewal/coretime-renewal-1.webp) 2. Submit the `broker.renew` extrinsic 1. Select the **broker** pallet 2. Choose the **renew** extrinsic 3. Fill in the **core** parameter 4. Click the **Submit Transaction** button ![](/images/develop/parachains/deployment/coretime-renewal/coretime-renewal-2.webp) For optimal results, the renewal should be performed during the interlude phase. Upon successful submission, your core will be renewed for the next coretime period, ensuring the continued operation of your parachain. ## Auto-Renewal The coretime auto-renewal feature simplifies maintaining continuous coretime allocation by automatically renewing cores at the beginning of each sale period. This eliminates the need for parachains to manually renew their cores for each bulk period, reducing operational overhead and the risk of missing renewal deadlines. When auto-renewal is enabled, the system follows this process at the start of each sale: 1. The system scans all registered auto-renewal records 2. For each record, it attempts to process renewal payments from the task's [sovereign account](/polkadot-protocol/glossary/#sovereign-account){target=\_blank} (which is the sibling account on the Coretime chain derived from the parachain ID) 3. Upon successful payment, the system emits a `Renewed` event and secures the core for the next period 4. If payment fails due to insufficient funds or other issues, the system emits an `AutoRenewalFailed` event Even if an auto-renewal attempt fails, the setting remains active for subsequent sales; once configured, it persists across periods until explicitly disabled. To enable auto-renewal for your parachain, you must configure several components, as detailed in the following sections. ### Set Up an HRMP Channel A Horizontal Relay-routed Message Passing (HRMP) channel must be opened between your parachain and the Coretime system chain before configuring auto-renewal. For instructions on establishing this connection, consult the [Opening HRMP Channels with System Parachains](/tutorials/interoperability/xcm-channels/para-to-system/){target=\_blank} guide. ### Fund Sovereign Account The [sovereign account](https://github.com/polkadot-fellows/xcm-format/blob/10726875bd3016c5e528c85ed6e82415e4b847d7/README.md?plain=1#L50){target=\_blank} of your parachain on the Coretime chain needs adequate funding to cover both XCM transaction fees and the recurring coretime renewal payments. To determine your parachain's sovereign account address, you can: - Use the **"Para ID" to Address** section in [Substrate Utilities](https://www.shawntabrizi.com/substrate-js-utilities/){target=\_blank} with the **Sibling** option selected - Calculate it manually: 1. Identify the appropriate prefix: - For sibling chains - `0x7369626c` (decodes to `b"sibl"`) 2. Encode your parachain ID as a u32 [SCALE](/polkadot-protocol/parachain-basics/data-encoding#data-types){target=\_blank} value: - For parachain 2000, this would be `d0070000` 3. Combine the prefix with the encoded ID, zero-padded to 32 bytes, to form the sovereign account address: - **Hex** - `0x7369626cd0070000000000000000000000000000000000000000000000000000` - **SS58 format** - `5Eg2fntJ27qsari4FGrGhrMqKFDRnkNSR6UshkZYBGXmSuC8`
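The manual calculation above is easy to script. The following minimal Rust sketch (standard library only, no external crates) reproduces the hex form of the sibling sovereign account for any parachain ID; verify the result against the tools mentioned above before funding the account:

```rust
/// Derive the 32-byte sibling sovereign account for a parachain:
/// b"sibl" prefix + SCALE-encoded (little-endian) u32 para ID + zero padding.
fn sibling_sovereign_account(para_id: u32) -> [u8; 32] {
    let mut account = [0u8; 32];
    account[..4].copy_from_slice(b"sibl"); // 0x7369626c
    account[4..8].copy_from_slice(&para_id.to_le_bytes()); // 2000 -> d0070000
    account
}

fn main() {
    let account = sibling_sovereign_account(2000);
    let hex: String = account.iter().map(|b| format!("{b:02x}")).collect();
    println!("0x{hex}"); // matches the hex address shown above
}
```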
### Auto-Renewal Configuration Extrinsics The Coretime chain provides two primary extrinsics for managing the auto-renewal functionality: - [**`enable_auto_renew(core, task, workload_end_hint)`**](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/struct.Pallet.html#method.enable_auto_renew){target=\_blank} - use this extrinsic to activate automatic renewals for a specific core. This transaction must originate from the sovereign account of the parachain task **Parameters:** - **`core`** - the core currently assigned to the task - **`task`** - the task for which auto-renewal is being enabled - **`workload_end_hint`** - the timeslice at which the currently assigned core will stop being used. This value helps the system determine when auto-renewal should begin. It is recommended to always provide this value to avoid ambiguity - If the coretime expires in the current sale period, use the last timeslice of the current sale period - If the coretime expires at the end of the next sale period (e.g., because you've already renewed), use the last timeslice of the next sale period - If a lease is active, use the timeslice when the lease ends - [**`disable_auto_renew(core, task)`**](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/struct.Pallet.html#method.disable_auto_renew){target=\_blank} - use this extrinsic to stop automatic renewals. This extrinsic also requires that the origin is the sovereign account of the parachain task **Parameters:** - **`core`** - the core currently assigned to the task - **`task`** - the task for which auto-renewal is enabled ### Construct the Enable Auto-Renewal Extrinsic To configure auto-renewal, you'll need to gather specific information for the `enable_auto_renew` extrinsic parameters: - **`core`** - identify which core your parachain is assigned to when it expires. This requires checking both current assignments and planned future assignments: - **For current period** - query `broker.workload()` - **For next period** - query `broker.workplan()` **Example for parachain `2000`:** - Current assignment (workload) ```txt [ [50] [{ mask: 0xffffffffffffffffffff assignment: {Task: 2,000} }] ] ``` - Future assignment (workplan) ```txt [ [[322,845, 48]] [{ mask: 0xffffffffffffffffffff assignment: {Task: 2,000} }] ] ``` **Note:** use the core from workplan (`48` in this example) if your task appears there. Only use the core from workload if it's not listed in workplan. - **`task`** - use your parachain ID, which can be verified by connecting to your parachain and querying `parachainInfo.parachainId()` - **`workload_end_hint`** - you should always set it explicitly to avoid misbehavior. This value indicates when your assigned core will expire. Here's how to calculate the correct value based on how your core is assigned: - If the parachain uses bulk coretime, query `broker.saleInfo`.
You’ll get a result like: ```json { "saleStart": 1544949, "leadinLength": 100800, "endPrice": 922760076, "regionBegin": 322845, "regionEnd": 327885, "idealCoresSold": 18, "coresOffered": 18, "firstCore": 44, "selloutPrice": 92272712073, "coresSold": 18 } ``` - If the core expires in the current sale, use the `regionBegin` value, which in this case is `322845` - If the core has already been renewed and will expire in the next sale, use the `regionEnd` value. In this example, that would be `327885` - If the parachain has a lease, query `broker.leases`, which returns entries like: ```json [ { "until": 359280, "task": 2035 }, ... ] ``` - Use the `until` value of the lease that corresponds to your task. For example, `359280` would be the value for `workload_end_hint` in the case of task `2035` Once you have these values, construct the extrinsic: 1. In [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank}, connect to the Coretime chain, navigate to the **Developer** dropdown, and select the **Extrinsics** option ![](/images/develop/parachains/deployment/coretime-renewal/coretime-renewal-1.webp) 2. Create the `broker.enable_auto_renew` extrinsic 1. Select the **broker** pallet 2. Choose the **enableAutoRenew** extrinsic 3. Fill in the parameters 4. Copy the encoded call data ![](/images/develop/parachains/deployment/coretime-renewal/coretime-renewal-3.webp) For parachain `2000` on core `48` with `workload_end_hint` `327885`, the **encoded call data** is:`0x32153000d007000001cd000500` 3. Check the transaction weight for executing the call. You can estimate this by executing the `transactionPaymentCallApi.queryCallInfo` runtime call with the encoded call data previously obtained ![](/images/develop/parachains/deployment/coretime-renewal/coretime-renewal-4.webp) ### Submit the XCM from Your Parachain To activate auto-renewal, you must submit an XCM from your parachain to the Coretime chain using Root origin. This can be done through the sudo pallet (if available) or your parachain's governance system. The XCM needs to execute these operations: 1. Withdraw DOT from your parachain's sovereign account on the Coretime chain 2. Buy execution to pay for transaction fees 3. Execute the auto-renewal extrinsic 4. Refund surplus DOT back to the sovereign account Here's how to submit this XCM using Acala (Parachain 2000) as an example: 1. In [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank}, connect to your parachain, navigate to the **Developer** dropdown and select the **Extrinsics** option 2. Create a `sudo.sudo` extrinsic that executes `polkadotXcm.send`: 1. Use the `sudo.sudo` extrinsic to execute the following call as Root 2. Select the **polkadotXcm** pallet 3. Choose the **send** extrinsic 4. Set the **dest** parameter as the Coretime chain (Parachain 1005) ![](/images/develop/parachains/deployment/coretime-renewal/coretime-renewal-5.webp) 3. Construct the XCM and submit it: 1. Add a **WithdrawAsset** instruction 2. Add a **BuyExecution** instruction 3. Add a **Transact** instruction with the following parameters: - **originKind** - use `SovereignAccount` - **requireWeightAtMost** - use the weight calculated previously - **call** - use the encoded call data generated before 4. Add a **RefundSurplus** instruction 5. Add a **DepositAsset** instruction to send the remaining funds to the parachain sovereign account 6. 
Click the **Submit Transaction** button ![](/images/develop/parachains/deployment/coretime-renewal/coretime-renewal-6.webp) After successful execution, your parachain should have auto-renewal enabled. To verify this, check the events emitted in the Coretime chain. You should see a confirmation event named `broker.AutoRenewalEnabled`, which includes two parameters: - **core** - the core currently assigned to your task, in this example, `48` - **task** - the task for which auto-renewal was enabled, in this example, `2000` You can find this event in the list of recent events. It should look similar to the following: ![](/images/develop/parachains/deployment/coretime-renewal/coretime-renewal-7.webp) --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/deployment/generate-chain-specs/ --- BEGIN CONTENT --- --- title: Generate Chain Specs description: Describes the role of the chain specification in a network, how to specify its parameters when starting a node, and how to customize and distribute it. categories: Parachains --- # Generate Chain Specs ## Introduction A chain specification collects information that describes a Polkadot SDK-based network. A chain specification is a crucial parameter when starting a node, providing the genesis configurations, bootnodes, and other parameters relating to that particular network. It identifies the network a blockchain node connects to, the other nodes it initially communicates with, and the initial state that nodes must agree on to produce blocks. The chain specification is defined using the [`ChainSpec`](https://paritytech.github.io/polkadot-sdk/master/sc_chain_spec/struct.GenericChainSpec.html){target=\_blank} struct. This struct separates the information required for a chain into two parts: - **Client specification** - contains information the _node_ uses to communicate with network participants and send data to telemetry endpoints. Many of these chain specification settings can be overridden by command-line options when starting a node or can be changed after the blockchain has started - **Initial genesis state** - agreed upon by all nodes in the network. It must be set when the blockchain is first started and cannot be changed after that without starting a whole new blockchain ## Node Settings Customization For the node, the chain specification controls information such as: - The bootnodes the node will communicate with - The server endpoints for the node to send telemetry data to - The human and machine-readable names for the network the node will connect to The chain specification can be customized to include additional information. For example, you can configure the node to connect to specific blocks at specific heights to prevent long-range attacks when syncing a new node from genesis. Note that you can customize node settings after genesis. However, nodes only add peers that use the same [`protocolId`](https://paritytech.github.io/polkadot-sdk/master/sc_service/struct.GenericChainSpec.html#method.protocol_id){target=_blank}. ## Genesis Configuration Customization All nodes in the network must agree on the genesis state before they can agree on any subsequent blocks. The information configured in the genesis portion of a chain specification is used to create a genesis block. When you start the first node, it takes effect and cannot be overridden with command-line options. However, you can configure some information in the genesis section of a chain specification. 
For example, you can customize it to include information such as: - Initial account balances - Accounts that are initially part of a governance council - The account that controls the `sudo` key - Any other genesis state for a pallet Nodes also require the compiled Wasm to execute the runtime logic on the chain, so the initial runtime must also be supplied in the chain specification. For a more detailed look at customizing the genesis chain specification, be sure to check out the [Polkadot SDK Docs](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/chain_spec_genesis/index.html){target=_blank}. ## Declaring Storage Items for a Runtime A runtime usually requires some storage items to be configured at genesis. This includes the initial state for pallets, for example, how much balance specific accounts have, or which account will have sudo permissions. These storage values are configured in the genesis portion of the chain specification. You can create a [patch](https://paritytech.github.io/polkadot-sdk/master/sc_chain_spec/index.html#chain-spec-formats){target=_blank} file and ingest it using the [`chain-spec-builder`](https://paritytech.github.io/polkadot-sdk/master/staging_chain_spec_builder/index.html){target=_blank} utility, which is explained in the [Creating a Custom Chain Specification](#creating-a-custom-chain-specification) section. ## Chain Specification JSON Format Users generally work with the JSON format of the chain specification. Internally, the chain specification is embedded in the [`GenericChainSpec`](https://paritytech.github.io/polkadot-sdk/master/sc_chain_spec/struct.GenericChainSpec.html){target=\_blank} struct, with specific properties accessible through the [`ChainSpec`](https://paritytech.github.io/polkadot-sdk/master/sc_chain_spec/trait.ChainSpec.html){target=\_blank} struct. The chain specification includes the following keys: - **`name`** - the human-readable name for the network - **`id`** - the machine-readable identifier for the network - **`chainType`** - the type of chain to start (refer to [`ChainType`](https://paritytech.github.io/polkadot-sdk/master/sc_chain_spec/enum.ChainType.html){target=\_blank} for more details) - **`bootNodes`** - a list of multiaddresses belonging to the chain's boot nodes - **`telemetryEndpoints`** - an optional list of multiaddresses for telemetry endpoints with verbosity levels ranging from 0 to 9 (0 being the lowest verbosity) - **`protocolId`** - the optional protocol identifier for the network - **`forkId`** - an optional fork ID that should typically be left empty; it can be used to signal a fork at the network level when two chains share the same genesis hash - **`properties`** - custom properties provided as a key-value JSON object - **`codeSubstitutes`** - an optional mapping of block numbers to Wasm code - **`genesis`** - the genesis configuration for the chain For example, the following JSON shows a basic chain specification file: ```json { "name": "chainName", "id": "chainId", "chainType": "Local", "bootNodes": [], "telemetryEndpoints": null, "protocolId": null, "properties": null, "codeSubstitutes": {}, "genesis": { "code": "0x..." } } ``` ## Creating a Custom Chain Specification To create a custom chain specification, you can use the [`chain-spec-builder`](https://paritytech.github.io/polkadot-sdk/master/staging_chain_spec_builder/index.html){target=\_blank} tool. This CLI tool generates chain specifications from a node's runtime.
To install the tool, run the following command: ```bash cargo install --git https://github.com/paritytech/polkadot-sdk --force staging-chain-spec-builder ``` To verify the installation, run the following: ```bash chain-spec-builder --help ``` ### Plain Chain Specifications To create a plain chain specification, first ensure that the runtime has been compiled and is available at the specified path. Next, you can use the following utility within your project: ```bash chain-spec-builder create -r INSERT_RUNTIME_WASM_PATH INSERT_COMMAND ``` Replace `INSERT_RUNTIME_WASM_PATH` with the path to the runtime Wasm file and `INSERT_COMMAND` with the command to insert the runtime into the chain specification. The available commands are: - **`patch`** - overwrites the runtime's default genesis config with the provided patch. You can check the following [patch file](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/substrate/bin/utils/chain-spec-builder/tests/input/patch.json){target=\_blank} as a reference - **`full`** - builds the genesis config for the runtime using the provided JSON file. No defaults will be used. As a reference, you can check the following [full file](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/substrate/bin/utils/chain-spec-builder/tests/input/full.json){target=\_blank} - **`default`** - gets the default genesis config for the runtime and uses it in `ChainSpec`. Please note that the default genesis config may not be valid. For some runtimes, initial values should be added there (e.g., session keys, BABE epoch) - **`named-preset`** - uses a named preset provided by the runtime to build the chain spec
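As a reference shape for such a patch, a minimal file might override only the balances and sudo genesis entries. The addresses and amount below are placeholders, and the keys available to you depend on the pallets your runtime includes:

```json
{
  "balances": {
    "balances": [
      ["5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY", 1000000000000000]
    ]
  },
  "sudo": {
    "key": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY"
  }
}
```

Saved as `patch.json`, it would be ingested with `chain-spec-builder create -r INSERT_RUNTIME_WASM_PATH patch patch.json`.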
### Raw Chain Specifications Through runtime upgrades, the blockchain's runtime can be replaced with newer business logic. Chain specifications contain information structured in a way that the node's runtime can understand. For example, consider this excerpt of a common entry for a chain specification: ```json "sudo": { "key": "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY" } ``` In the plain chain spec JSON file, the keys and associated values are in a human-readable format, which can be used to initialize the genesis storage. When the chain specification is loaded, the runtime converts these readable values into storage items within the trie. However, for long-lived networks like testnets or production chains, using the raw format for storage initialization is preferred. This avoids the need for conversion by the runtime and ensures that storage items remain consistent, even when runtime upgrades occur. To enable a node with an upgraded runtime to synchronize with a chain from genesis, the plain chain specification is encoded in a raw format. The raw format allows the distribution of chain specifications that all nodes can use to synchronize the chain even after runtime upgrades. To convert a plain chain specification to a raw chain specification, you can use the following utility: ```bash chain-spec-builder convert-to-raw chain_spec.json ``` After the conversion to the raw format, the `sudo` key snippet looks like this: ```json "0x50a63a871aced22e88ee6466fe5aa5d9": "0xd43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d", ``` The raw chain specification can be used to initialize the genesis storage for a node. ## Generate Custom Keys for Your Collator To securely deploy your parachain, you must generate custom cryptographic keys for your collators (block producers). Each collator requires two distinct sets of keys with different security requirements and operational purposes. - **Account keys**: Serve as the primary identity and financial controller for your collator. These keys are used to interact with the network and manage funds. They should be treated as cold storage and must never exist on the filesystem of the collator node. Secure offline backup is essential. - **Session keys**: Handle block production operations to identify your node and sign blocks on the network. These keys are stored in the parachain keystore and function as operational "hot wallet" keys. If compromised, an attacker could impersonate your node, potentially resulting in slashing of your funds. To minimize these risks, implement regular session key rotation and treat them with the same caution as hot wallet keys. To perform this step, you can use [Subkey](https://docs.rs/crate/subkey/latest){target=\_blank}, a command-line tool for generating and managing keys: ```bash docker run -it parity/subkey:latest generate --scheme sr25519 ``` The output should look similar to the following:
```txt
docker run -it parity/subkey:latest generate --scheme sr25519
Secret phrase: lemon play remain picture leopard frog mad bridge hire hazard best buddy
Network ID: substrate
Secret seed: 0xb748b501de061bae1fcab1c0b814255979d74d9637b84e06414a57a1a149c004
Public key (hex): 0xf4ec62ec6e70a3c0f8dcbe0531e2b1b8916cf16d30635bbe9232f6ed3f0bf422
Account ID: 0xf4ec62ec6e70a3c0f8dcbe0531e2b1b8916cf16d30635bbe9232f6ed3f0bf422
Public key (SS58): 5HbqmBBJ5ALUzho7tw1k1jEgKBJM7dNsQwrtfSfUskT1a3oe
SS58 Address: 5HbqmBBJ5ALUzho7tw1k1jEgKBJM7dNsQwrtfSfUskT1a3oe
```

Ensure that this command is executed twice to generate the keys for both the account and session keys. Save them for future reference.
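If you later need to re-derive the public keys or addresses from a stored secret phrase, `subkey` can inspect it. The phrase below is the throwaway example from the output above:

```bash
docker run -it parity/subkey:latest inspect --scheme sr25519 "lemon play remain picture leopard frog mad bridge hire hazard best buddy"
```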
After generating the plain chain specification, you need to edit this file by inserting the account IDs and session keys in SS58 format generated for your collators in the `collatorSelection.invulnerables` and `session.keys` fields. ### Add Invulnerables In the `collatorSelection.invulnerables` array, add the SS58 addresses (account keys) of your collators. These addresses will be automatically included in the active collator set: ```json "collatorSelection": { "candidacyBond": 16000000000, "desiredCandidates": 0, "invulnerables": [ "INSERT_ACCOUNT_ID_COLLATOR_1", "INSERT_ACCOUNT_ID_COLLATOR_2", "INSERT_ACCOUNT_ID_COLLATOR_3" ] } ``` - **`candidacyBond`**: Minimum stake required for collator candidates (in Planck units). - **`desiredCandidates`**: Number of candidates beyond invulnerables (set to 0 for invulnerables-only). - **`invulnerables`**: Use the SS58 addresses from your generated account keys as collators. ### Add Session Keys For each invulnerable collator, add a corresponding entry in the `session.keys` array. This maps each collator's account ID to their session keys: ```json "session": { "keys": [ [ "INSERT_ACCOUNT_ID_COLLATOR_1", "INSERT_ACCOUNT_ID_COLLATOR_1", { "aura": "INSERT_SESSION_KEY_COLLATOR_1" } ], [ "INSERT_ACCOUNT_ID_COLLATOR_2", "INSERT_ACCOUNT_ID_COLLATOR_2", { "aura": "INSERT_SESSION_KEY_COLLATOR_2" } ], [ "INSERT_ACCOUNT_ID_COLLATOR_3", "INSERT_ACCOUNT_ID_COLLATOR_3", { "aura": "INSERT_SESSION_KEY_COLLATOR_3" } ] ], "nonAuthorityKeys": [] } ``` ## Where to Go Next After generating a chain specification, you can use it to initialize the genesis storage for a node. Refer to the following guides to learn how to proceed with the deployment of your blockchain:
- Guide __Obtain Coretime__ --- Learn how to obtain coretime, in bulk or on-demand, so your parachain can produce blocks and benefit from Polkadot's shared security. [:octicons-arrow-right-24: Reference](/develop/parachains/deployment/obtain-coretime/) - Guide __Deployment__ --- Explore the steps required to deploy your blockchain, ensuring a smooth launch of your network and proper node operation. [:octicons-arrow-right-24: Reference](/develop/parachains/deployment/) - Guide __Maintenance__ --- Discover best practices for maintaining your blockchain post-deployment, including how to manage upgrades and monitor network health. [:octicons-arrow-right-24: Reference](/develop/parachains/maintenance/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/deployment/ --- BEGIN CONTENT --- --- title: Deployment description: Learn how to prepare your blockchain for deployment using the Polkadot SDK, including building deterministic Wasm runtimes and generating chain specifications. template: index-page.html --- # Deployment Learn how to prepare your blockchain for deployment using the Polkadot SDK, including building deterministic Wasm runtimes and generating chain specifications. To better understand the deployment process, check out the following section. If you're ready to start, jump to [In This Section](#in-this-section) to begin working through the deployment guides. ## Deployment Process Taking your Polkadot SDK-based blockchain from a local environment to production involves several steps, ensuring your network is stable, secure, and ready for real-world use. The following diagram outlines the process at a high level: ```mermaid flowchart TD %% Group 1: Pre-Deployment subgraph group1 [Pre-Deployment] direction LR A("Local \nDevelopment \nand Testing") --> B("Runtime \nCompilation") B --> C("Generate \nChain \nSpecifications") C --> D("Prepare \nDeployment \nEnvironment") D --> E("Acquire \nCoretime") end %% Group 2: Deployment subgraph group2 [Deployment] F("Launch \nand \nMonitor") end %% Group 3: Post-Deployment subgraph group3 [Post-Deployment] G("Maintenance \nand \nUpgrades") end %% Connections Between Groups group1 --> group2 group2 --> group3 %% Styling style group1 fill:#ffffff,stroke:#6e7391,stroke-width:1px style group2 fill:#ffffff,stroke:#6e7391,stroke-width:1px style group3 fill:#ffffff,stroke:#6e7391,stroke-width:1px ``` - **Local development and testing** - the process begins with local development and testing. Developers focus on building the runtime by selecting and configuring the necessary pallets while refining network features. In this phase, running a local TestNet is essential to verify transactions and ensure the blockchain behaves as expected. Unit and integration tests ensure the network works as expected before launch. Thorough testing is conducted, not only for individual components but also for interactions between pallets - **Runtime compilation** - Polkadot SDK-based blockchains are built with Wasm, a highly portable and efficient format. Compiling your blockchain's runtime into Wasm ensures it can be executed reliably across various environments, guaranteeing network-wide compatibility and security. The [srtool](https://github.com/paritytech/srtool){target=\_blank} is helpful for this purpose since it allows you to compile [deterministic runtimes](/develop/parachains/deployment/build-deterministic-runtime/){target=\_blank} - **Generate chain specifications** - the chain spec file defines the structure and configuration of your blockchain. It includes initial node identities, session keys, and other parameters. Defining a well-thought-out chain specification ensures that your network will operate smoothly and according to your intended design - **Deployment environment** - whether launching a local test network or a production-grade blockchain, selecting the proper infrastructure is vital. For further information about these topics, see the [Infrastructure](/infrastructure/){target=\_blank} section - **Acquire coretime** - to build on top of the Polkadot network, users need to acquire coretime (either on-demand or in bulk) to access the computational resources of the relay chain.
This allows for the secure validation of parachain blocks through a randomized selection of relay chain validators. If you’re building a standalone blockchain (solochain) that won’t connect to Polkadot as a parachain, you can skip the preceding step, as there’s no need to acquire coretime or implement [Cumulus](/develop/parachains/#cumulus){target=\_blank}. - **Launch and monitor** - once everything is configured, you can launch the blockchain, initiating the network with your chain spec and Wasm runtime. Validators or collators will begin producing blocks, and the network will go live. Post-launch, monitoring is vital to ensuring network health—tracking block production, node performance, and overall security - **Maintenance and upgrade** - a blockchain continues to evolve post-deployment. As the network expands and adapts, it may require runtime upgrades, governance updates, coretime renewals, and even modifications to the underlying code. For an in-depth guide on this topic, see the [Maintenance](/develop/parachains/maintenance/){target=\_blank} section ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/deployment/manage-coretime/ --- BEGIN CONTENT --- --- title: Manage Coretime description: Learn to manage bulk coretime regions through transfer, partition, interlace, assign, and pool operations for optimal resource allocation. categories: Parachains --- # Manage Coretime ## Introduction Coretime management involves manipulating [bulk coretime](/develop/parachains/deployment/obtain-coretime/#bulk-coretime){target=\_blank} regions to optimize resource allocation and usage. Regions represent allocated computational resources on cores and can be modified through various operations to meet different project requirements. This guide covers the essential operations for managing your coretime regions effectively. ## Transfer [Transfer](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/struct.Pallet.html#method.transfer){target=\_blank} ownership of a bulk coretime region to a new owner. This operation allows you to change who controls and manages a specific region. Use this operation when you need to delegate control of computational resources to another account or when selling regions to other parties. ```rust pub fn transfer(region_id: RegionId, new_owner: T::AccountId) ``` **Parameters:** - **`origin`**: Must be a signed origin of the account which owns the region `region_id`. - **`region_id`**: The region whose ownership should change. - **`new_owner`**: The new owner for the region. ## Partition Split a bulk coretime region into two non-overlapping regions at a specific time point. This operation divides a region temporally, creating two shorter regions that together span the same duration as the original. The [partition](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/struct.Pallet.html#method.partition){target=\_blank} operation removes the original region and creates two new regions with the same owner and core mask. The first new region spans from the original start time to the pivot point, while the second spans from the pivot point to the original end time. This is useful when you want to use part of your allocated time immediately and reserve the remainder for later use or when you want to sell or transfer only a portion of your time allocation.
```rust pub fn partition(region_id: RegionId, pivot: Timeslice) ``` **Parameters:** - **`origin`**: Must be a signed origin of the account which owns the region `region_id`. - **`region_id`**: The region which should be partitioned into two non-overlapping regions. - **`pivot`**: The offset in time into the region at which to make the split. ## Interlace Split a bulk coretime region into two wholly-overlapping regions with complementary interlace masks. This operation allows core sharing by dividing computational resources between two projects that run simultaneously. The [interlace](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/struct.Pallet.html#method.interlace){target=\_blank} operation removes the original region and creates two new regions with the same time span and owner. One region receives the specified core mask, while the other receives the XOR of the specified mask and the original region's core mask. Use interlacing when you want to share core resources between multiple tasks or when you need to optimize resource utilization by running complementary workloads simultaneously. ```rust pub fn interlace(region_id: RegionId, pivot: CoreMask) ``` **Parameters:** - **`origin`**: Must be a signed origin of the account which owns the region `region_id`. - **`region_id`**: The region which should become two interlaced regions of incomplete regularity. - **`pivot`**: The interlace mask of one of the two new regions (the other is its partial complement).
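To make the mask relationship concrete, here is a small standalone Rust sketch. It is not pallet code; it simply models the 80-bit core mask as 10 raw bytes and shows how the second region's mask is the XOR of the pivot with the original mask:

```rust
/// Sketch: interlacing yields `pivot` and `original ^ pivot`, which
/// together cover exactly the parts of the original 80-bit core mask.
fn interlace_masks(original: [u8; 10], pivot: [u8; 10]) -> ([u8; 10], [u8; 10]) {
    let mut complement = [0u8; 10];
    for (i, byte) in complement.iter_mut().enumerate() {
        *byte = original[i] ^ pivot[i];
    }
    (pivot, complement)
}

fn main() {
    let original = [0xff; 10]; // full mask: 0xffffffffffffffffffff
    let mut pivot = [0x00; 10];
    pivot[..5].copy_from_slice(&[0xff; 5]); // first half of the 80 parts
    let (first, second) = interlace_masks(original, pivot);
    // The two masks are complementary: the second covers the other half
    let mut expected = [0x00; 10];
    expected[5..].copy_from_slice(&[0xff; 5]);
    assert_eq!(second, expected);
    println!("{:02x?} / {:02x?}", first, second);
}
```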
## Assign [Assign](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/struct.Pallet.html#method.assign){target=\_blank} a bulk coretime region to a specific task for execution. This operation places an item in the work plan corresponding to the region's properties and assigns it to the target task. If the region's end time has already passed, the operation becomes a no-op. If the region's beginning has passed, it effectively starts from the next schedulable timeslice. Use this operation to execute your tasks on the allocated cores. Choose a final assignment when you're certain about the task allocation or provisional when you might need flexibility for later changes. ```rust pub fn assign(region_id: RegionId, task: TaskId, finality: Finality) ``` **Parameters:** - **`origin`**: Must be a signed origin of the account which owns the region `region_id`. - **`region_id`**: The region which should be assigned to the task. - **`task`**: The task to assign. - **`finality`**: Indication of whether this assignment is final or provisional. ## Pool Place a bulk coretime region into the instantaneous coretime pool to earn revenue from unused computational resources. The [pool](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/struct.Pallet.html#method.pool){target=\_blank} operation places the region in the workplan and assigns it to the instantaneous coretime pool. The region details are recorded to calculate a pro rata share of the instantaneous coretime sales revenue relative to other pool providers. Use pooling when you have unused coretime that you want to monetize, or when you want to contribute to the network's available computational resources while earning passive income. ```rust pub fn pool(region_id: RegionId, payee: T::AccountId, finality: Finality) ``` **Parameters:** - **`origin`**: Must be a signed origin of the account which owns the region `region_id`. - **`region_id`**: The region which should be assigned to the pool. - **`payee`**: The account which can collect revenue from the usage of this region. - **`finality`**: Indication of whether this pool assignment is final or provisional. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/deployment/obtain-coretime/ --- BEGIN CONTENT --- --- title: Obtain Coretime description: Learn how to obtain and manage coretime for your Polkadot parachain. Explore bulk and on-demand options, prerequisites, and initial setup. categories: Parachains --- # Obtain Coretime ## Introduction Securing coretime is essential for operating a parachain on Polkadot. It provides your parachain with guaranteed computational resources and access to Polkadot's shared security model, ensuring your blockchain can process transactions, maintain its state, and interact securely with other parachains in the network. Without coretime, a parachain cannot participate in the ecosystem or leverage the relay chain's validator set for security. Coretime represents the computational resources allocated to your parachain on the Polkadot network. It determines when and how often your parachain can produce blocks and have them validated by the relay chain. There are two primary methods to obtain coretime: - **Bulk coretime** - purchase computational resources in advance for a full month - **On-demand coretime** - buy computational resources as needed for individual block production This guide explains the different methods of obtaining coretime and walks through the necessary steps to get your parachain running. ## Prerequisites Before obtaining coretime, ensure you have: - Developed your parachain runtime using the Polkadot SDK - Set up and configured a parachain collator for your target relay chain - Successfully compiled your parachain collator node - Generated and exported your parachain's genesis state - Generated and exported your parachain's validation code (Wasm) ## Initial Setup Steps 1. Reserve a unique identifier, `ParaID`, for your parachain: 1. Connect to the relay chain 2. Submit the [`registrar.reserve`](https://paritytech.github.io/polkadot-sdk/master/polkadot_runtime_common/paras_registrar/pallet/dispatchables/fn.reserve.html){target=\_blank} extrinsic Upon success, you'll receive a registered `ParaID` 2. Register your parachain's essential information by submitting the [`registrar.register`](https://paritytech.github.io/polkadot-sdk/master/polkadot_runtime_common/paras_registrar/pallet/dispatchables/fn.register.html){target=\_blank} extrinsic with the following parameters: - **`id`** - your reserved `ParaID` - **`genesisHead`** - your exported genesis state - **`validationCode`** - your exported Wasm validation code 3. Start your parachain collator and begin synchronization with the relay chain ## Obtaining Coretime ### Bulk Coretime Bulk coretime provides several advantages: - Monthly allocation of resources - Guaranteed block production slots (every 12 seconds, or 6 seconds with [Asynchronous Backing](https://wiki.polkadot.network/learn/learn-async-backing/#asynchronous-backing){target=\_blank}) - Priority renewal rights - Protection against price fluctuations - Ability to split and resell unused coretime To purchase bulk coretime: 1. Access the Coretime system parachain 2. Interact with the Broker pallet 3. Purchase your desired amount of coretime 4. Assign the purchased core to your registered `ParaID` After successfully obtaining coretime, your parachain will automatically start producing blocks at regular intervals.
For current marketplaces and pricing, consult the [Coretime Marketplaces](https://wiki.polkadot.network/learn/learn-guides-coretime-marketplaces/){target=\_blank} page on the Polkadot Wiki. ### On-demand Coretime On-demand coretime allows for flexible, as-needed block production. To purchase: 1. Ensure your collator node is fully synchronized with the relay chain 2. Submit the `onDemand.placeOrderAllowDeath` extrinsic on the relay chain with: - **`maxAmount`** - sufficient funds for the transaction - **`paraId`** - your registered `ParaID` After successfully executing the extrinsic, your parachain will produce a block. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/ --- BEGIN CONTENT --- --- title: Parachains description: Learn how to build, deploy, and maintain your parachain with the Polkadot SDK, from initial setup through customization, testing, runtime upgrades, and network operations. template: index-page.html --- # Parachains This section provides a complete guide to working with the Polkadot SDK, from getting started to long-term network maintenance. Discover how to create custom blockchains, test and deploy your parachains, and ensure their continued performance and reliability. ## Building Parachains with the Polkadot SDK With the [Polkadot relay chain](/polkadot-protocol/architecture/polkadot-chain/){target=\_blank} handling security and consensus, parachain developers are free to focus on features such as asset management, governance, and cross-chain communication. The Polkadot SDK equips developers with the tools to build, deploy, and maintain efficient, scalable parachains. Polkadot SDK’s FRAME framework provides developers with the tools to do the following: - **Customize parachain runtimes** - [runtimes](/polkadot-protocol/glossary/#runtime){target=\_blank} are the core building blocks that define the logic and functionality of Polkadot SDK-based parachains and let developers customize the parameters, rules, and behaviors that shape their blockchain network - **Develop new pallets** - create custom modular pallets to define runtime behavior and achieve desired blockchain functionality - **Add smart contract functionality** - use specialized pallets to deploy and execute smart contracts, enhancing your chain's functionality and programmability - **Test your build for a confident deployment** - create a test environment that can simulate runtime and mock transaction execution - **Deploy your blockchain for use** - take your Polkadot SDK-based blockchain from a local environment to production - **Maintain your network including monitoring and upgrades** - runtimes can be upgraded through forkless runtime updates, enabling seamless evolution of the parachain New to parachain development? Start with the [Introduction to the Polkadot SDK](/develop/parachains/intro-polkadot-sdk/) to discover how this framework simplifies building custom parachains. ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/install-polkadot-sdk/ --- BEGIN CONTENT --- --- title: Install Polkadot SDK Dependencies description: Install everything you need to begin working with Substrate-based blockchains and the Polkadot SDK, the framework for building blockchains. categories: Basics, Tooling --- # Install Polkadot SDK Dependencies This guide provides step-by-step instructions for installing the dependencies you need to work with Polkadot SDK-based chains on macOS, Linux, and Windows.
Follow the appropriate section for your operating system to ensure all necessary tools are installed and configured properly. ## macOS You can install Rust and set up a Substrate development environment on Apple macOS computers with Intel or Apple M1 processors. ### Before You Begin Before you install Rust and set up your development environment on macOS, verify that your computer meets the following basic requirements: - Operating system version is 10.7 Lion or later - Processor speed of at least 2 GHz. Note that 3 GHz is recommended - Memory of at least 8 GB RAM. Note that 16 GB is recommended - Storage of at least 10 GB of available space - Broadband Internet connection #### Install Homebrew In most cases, you should use Homebrew to install and manage packages on macOS computers. If you don't already have Homebrew installed on your local computer, you should download and install it before continuing. To install Homebrew: 1. Open the Terminal application 2. Download and install Homebrew by running the following command: ```bash /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)" ``` 3. Verify Homebrew has been successfully installed by running the following command: ```bash brew --version ``` The command displays output similar to the following:
```text
brew --version
Homebrew 4.3.15
```
#### Support for Apple Silicon Protobuf must be installed before the build process can begin. To install it, run the following command: ```bash brew install protobuf ``` ### Install Required Packages and Rust Because the blockchain requires standard cryptography to support the generation of public/private key pairs and the validation of transaction signatures, you must also have a package that provides cryptography, such as `openssl`. To install `openssl` and the Rust toolchain on macOS: 1. Open the Terminal application 2. Ensure you have an updated version of Homebrew by running the following command: ```bash brew update ``` 3. Install the `openssl` package by running the following command: ```bash brew install openssl ``` 4. Download the `rustup` installation program and use it to install Rust by running the following command: ```bash curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` 5. Follow the prompts displayed to proceed with a default installation 6. Update your current shell to include Cargo by running the following command: ```bash source ~/.cargo/env ``` 7. Configure the Rust toolchain to default to the latest stable version by running the following commands: ```bash rustup default stable rustup update rustup target add wasm32-unknown-unknown rustup component add rust-src ``` 8. [Verify your installation](#verifying-installation) 9. Install `cmake` using the following command: ```bash brew install cmake ``` ## Linux Rust supports most Linux distributions. Depending on the specific distribution and version of the operating system you use, you might need to add some software dependencies to your environment. In general, your development environment should include a linker or C-compatible compiler, such as `clang` and an appropriate integrated development environment (IDE). ### Before You Begin {: #before-you-begin-linux } Check the documentation for your operating system for information about the installed packages and how to download and install any additional packages you might need. For example, if you use Ubuntu, you can use the Ubuntu Advanced Packaging Tool (`apt`) to install the `build-essential` package: ```bash sudo apt install build-essential ``` At a minimum, you need the following packages before you install Rust: ```text clang curl git make ``` Because the blockchain requires standard cryptography to support the generation of public/private key pairs and the validation of transaction signatures, you must also have a package that provides cryptography, such as `libssl-dev` or `openssl-devel`. ### Install Required Packages and Rust {: #install-required-packages-and-rust-linux } To install the Rust toolchain on Linux: 1. Open a terminal shell 2. Check the packages you have installed on the local computer by running an appropriate package management command for your Linux distribution 3. 
Add any package dependencies you are missing to your local development environment by running the appropriate package management command for your Linux distribution: === "Ubuntu" ```bash sudo apt install --assume-yes git clang curl libssl-dev protobuf-compiler ``` === "Debian" ```sh sudo apt install --assume-yes git clang curl libssl-dev llvm libudev-dev make protobuf-compiler ``` === "Arch" ```sh pacman -Syu --needed --noconfirm curl git clang make protobuf ``` === "Fedora" ```sh sudo dnf update sudo dnf install clang curl git openssl-devel make protobuf-compiler ``` === "OpenSUSE" ```sh sudo zypper install clang curl git openssl-devel llvm-devel libudev-devel make protobuf ``` Remember that different distributions might use different package managers and bundle packages in different ways. For example, depending on your installation selections, Ubuntu Desktop and Ubuntu Server might have different packages and different requirements. However, the packages listed in the command-line examples are applicable for many common Linux distributions, including Debian, Linux Mint, MX Linux, and Elementary OS. 4. Download the `rustup` installation program and use it to install Rust by running the following command: ```bash curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` 5. Follow the prompts displayed to proceed with a default installation 6. Update your current shell to include Cargo by running the following command: ```bash source $HOME/.cargo/env ``` 7. Verify your installation by running the following command: ```bash rustc --version ``` 8. Configure the Rust toolchain to default to the latest stable version by running the following commands: ```bash rustup default stable rustup update rustup target add wasm32-unknown-unknown rustup component add rust-src ``` 9. [Verify your installation](#verifying-installation) ## Windows (WSL) In general, UNIX-based operating systems—like macOS or Linux—provide a better development environment for building Substrate-based blockchains. However, if your local computer uses Microsoft Windows instead of a UNIX-based operating system, you can configure it with additional software to make it a suitable development environment for building Substrate-based blockchains. To prepare a development environment on a Microsoft Windows computer, you can use Windows Subsystem for Linux (WSL) to emulate a UNIX operating environment. ### Before You Begin {: #before-you-begin-windows } Before installing on Microsoft Windows, verify the following basic requirements: - You have a computer running a supported Microsoft Windows operating system: - **For Windows desktop** - you must be running Microsoft Windows 10, version 2004 or later, or Microsoft Windows 11 to install WSL - **For Windows server** - you must be running Microsoft Windows Server 2019 or later to install WSL on a server operating system - You have a good internet connection and access to a shell terminal on your local computer ### Set Up Windows Subsystem for Linux WSL enables you to emulate a Linux environment on a computer that uses the Windows operating system. The primary advantage of this approach for Substrate development is that you can use all of the code and command-line examples as described in the Substrate documentation. For example, you can run common commands—such as `ls` and `ps`—unmodified. By using WSL, you can avoid configuring a virtual machine image or a dual-boot operating system. To prepare a development environment using WSL: 1.
Check your Windows version and build number to see if WSL is enabled by default. If you have Microsoft Windows 10, version 2004 (Build 19041 and higher), or Microsoft Windows 11, WSL is available by default and you can continue to the next step. If you have an older version of Microsoft Windows installed, see the [WSL manual installation steps for older versions](https://learn.microsoft.com/en-us/windows/wsl/install-manual){target=\_blank}. You can download and install WSL 2 on older versions of Microsoft Windows if your computer has Windows 10, version 1903 or higher 2. Select **Windows PowerShell** or **Command Prompt** from the **Start** menu, right-click, then **Run as administrator** 3. In the PowerShell or Command Prompt terminal, run the following command: ```bash wsl --install ``` This command enables the required WSL 2 components that are part of the Windows operating system, downloads the latest Linux kernel, and installs the Ubuntu Linux distribution by default. If you want to review the other Linux distributions available, run the following command: ```bash wsl --list --online ``` 4. After the distribution is downloaded, close the terminal 5. Click the **Start** menu, select **Shut down or sign out**, then click **Restart** to restart the computer. Restarting the computer is required to start the installation of the Linux distribution. It can take a few minutes for the installation to complete after you restart. For more information about setting up WSL as a development environment, see the [Set up a WSL development environment](https://learn.microsoft.com/en-us/windows/wsl/setup/environment){target=\_blank} docs ### Install Required Packages and Rust {: #install-required-packages-and-rust-windows } To install the Rust toolchain on WSL: 1. Click the **Start** menu, then select **Ubuntu** 2. Type a UNIX user name to create a user account 3. Type a password for your UNIX user, then retype the password to confirm it 4. Download the latest updates for the Ubuntu distribution using the Ubuntu Advanced Packaging Tool (`apt`) by running the following command: ```bash sudo apt update ``` 5. Add the required packages for the Ubuntu distribution by running the following command: ```bash sudo apt install --assume-yes git clang curl libssl-dev llvm libudev-dev make protobuf-compiler ``` 6. Download the `rustup` installation program and use it to install Rust for the Ubuntu distribution by running the following command: ```bash curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` 7. Follow the prompts displayed to proceed with a default installation 8. Update your current shell to include Cargo by running the following command: ```bash source ~/.cargo/env ``` 9. Verify your installation by running the following command: ```bash rustc --version ``` 10. Configure the Rust toolchain to use the latest stable version as the default toolchain by running the following commands: ```bash rustup default stable rustup update rustup target add wasm32-unknown-unknown rustup component add rust-src ``` 11. [Verify your installation](#verifying-installation) ## Verifying Installation Verify the configuration of your development environment by running the following command: ```bash rustup show ``` The command displays output similar to the following:
```text
rustup show
...
active toolchain
----------------
name: stable-aarch64-apple-darwin
active because: it's the default toolchain
installed targets:
  aarch64-apple-darwin
  wasm32-unknown-unknown
```
## Where to Go Next - [Parachain Zero to Hero Tutorials](/tutorials/polkadot-sdk/parachains/zero-to-hero/){target=\_blank} - a series of step-by-step guides to building, testing, and deploying custom pallets and runtimes using the Polkadot SDK --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/intro-polkadot-sdk/ --- BEGIN CONTENT --- --- title: Introduction to Polkadot SDK description: Learn about the Polkadot SDK, a robust developer toolkit for building custom blockchains. Explore its components and how it powers the Polkadot protocol. categories: Basics, Tooling --- # Introduction to Polkadot SDK ## Introduction The [Polkadot SDK](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}){target=\_blank} is a powerful and versatile developer kit designed to facilitate building on the Polkadot network. It provides the necessary components for creating custom blockchains, parachains, generalized rollups, and more. Written in the Rust programming language, it puts security and robustness at the forefront of its design. Whether you're building a standalone chain or deploying a parachain on Polkadot, this SDK equips developers with the libraries and tools needed to manage runtime logic, compile the codebase, and utilize core features like staking, governance, and Cross-Consensus Messaging (XCM). It also provides a means for building generalized peer-to-peer systems beyond blockchains. The Polkadot SDK houses the following overall functionality: - Networking and peer-to-peer communication (powered by [Libp2p](/polkadot-protocol/glossary#libp2p){target=\_blank}) - Consensus protocols, such as [BABE](/polkadot-protocol/glossary#blind-assignment-of-blockchain-extension-babe){target=\_blank}, [GRANDPA](/polkadot-protocol/glossary#grandpa){target=\_blank}, or [Aura](/polkadot-protocol/glossary#authority-round-aura){target=\_blank} - Cryptography - The ability to create portable Wasm runtimes - A selection of pre-built modules, called [pallets](/polkadot-protocol/glossary#pallet){target=\_blank} - Benchmarking and testing suites For an in-depth look at the monorepo, see the [Polkadot SDK Rust documentation](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/index.html){target=\_blank}. 
## Polkadot SDK Overview The Polkadot SDK is composed of five major components: ![](/images/develop/parachains/intro-polkadot-sdk/intro-polkadot-sdk-1.webp) - [**Substrate**](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/substrate/index.html){target=\_blank} - a set of libraries and primitives for building blockchains - [**FRAME**](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/frame_runtime/index.html){target=\_blank} - a blockchain development framework built on top of Substrate - [**Cumulus**](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/cumulus/index.html){target=\_blank} - a set of libraries and pallets to add parachain capabilities to a Substrate/FRAME runtime - [**XCM (Cross Consensus Messaging)**](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/xcm/index.html){target=\_blank} - the primary format for conveying messages between parachains - [**Polkadot**](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/polkadot/index.html){target=\_blank} - the node implementation for the Polkadot protocol ### Substrate Substrate is a Software Development Kit (SDK) that uses Rust-based libraries and tools to enable you to build application-specific blockchains from modular and extensible components. Application-specific blockchains built with Substrate can run as standalone services or in parallel with other chains to take advantage of the shared security provided by the Polkadot ecosystem. Substrate includes default implementations of the core components of the blockchain infrastructure to allow you to focus on the application logic. Every blockchain platform relies on a decentralized network of computers—called nodes—that communicate with each other about transactions and blocks. In general, a node in this context is the software running on the connected devices rather than the physical or virtual machine in the network. As software, Substrate-based nodes consist of two main parts with separate responsibilities: - **Client** - services to handle network and blockchain infrastructure activity - Native binary - Executes the Wasm runtime - Manages components like database, networking, mempool, consensus, and others - Also known as "Host" - **Runtime** - business logic for state transitions - Application logic - Compiled to [Wasm](https://webassembly.org/){target=\_blank} - Stored as a part of the chain state - Also known as State Transition Function (STF) ```mermaid %%{init: {'flowchart': {'padding': 25, 'nodeSpacing': 10, 'rankSpacing': 50}}}%% graph TB %% Define comprehensive styles classDef titleStyle font-size:30px,font-weight:bold,stroke-width:2px,padding:20px subgraph sg1[Substrate Node] %% Add invisible spacer with increased height spacer[ ] style spacer height:2px,opacity:0 B[Wasm Runtime - STF] I[RuntimeCall Executor] subgraph sg2[Client] direction TB C[Network and Blockchain
Infrastructure Services] end I -.-> B end %% Apply comprehensive styles class sg1 titleStyle ``` ### FRAME FRAME provides the core modular and extensible components that make the Substrate SDK flexible and adaptable to different use cases. FRAME includes Rust-based libraries that simplify the development of application-specific logic. Most of the functionality that FRAME provides takes the form of plug-in modules called [pallets](/polkadot-protocol/glossary#pallet){target=\_blank} that you can add and configure to suit your requirements for a custom runtime. ```mermaid graph LR subgraph SP["Runtime"] direction LR Timestamp ~~~ Aura ~~~ GRANDPA Balances ~~~ TransactionPayment ~~~ Sudo subgraph Timestamp["Timestamp"] SS1[Custom Config] end subgraph Aura["Aura"] SS2[Custom Config] end subgraph GRANDPA["GRANDPA"] SS3[Custom Config] end subgraph Balances["Balances"] SS4[Custom Config] end subgraph TransactionPayment["Transaction Payment"] SS5[Custom Config] end subgraph Sudo["Sudo"] SS6[Custom Config] end style Timestamp stroke:#FF69B4 style Aura stroke:#FF69B4 style GRANDPA stroke:#FF69B4 style Balances stroke:#FF69B4 style TransactionPayment stroke:#FF69B4 style Sudo stroke:#FF69B4 style SS1 stroke-dasharray: 5 style SS2 stroke-dasharray: 5 style SS3 stroke-dasharray: 5 style SS4 stroke-dasharray: 5 style SS5 stroke-dasharray: 5 style SS6 stroke-dasharray: 5 end subgraph AP["FRAME Pallets"] direction LR A1[Aura]~~~A2[BABE]~~~A3[GRANDPA]~~~A4[Transaction\nPayment] B1[Identity]~~~B2[Balances]~~~B3[Sudo]~~~B4[EVM] C1[Timestamp]~~~C2[Assets]~~~C3[Contracts]~~~C4[and more...] end AP --> SP ``` ### Cumulus Cumulus provides utilities and libraries to turn FRAME-based runtimes into runtimes that can be a parachain on Polkadot. Cumulus runtimes are still FRAME runtimes but contain the necessary functionality that allows that runtime to become a parachain on a relay chain. ## Why Use Polkadot SDK? Using the Polkadot SDK, you can build application-specific blockchains without the complexity of building a blockchain from scratch or the limitations of building on a general-purpose blockchain. You can focus on crafting the business logic that makes your chain unique and innovative with the additional benefits of flexibility, upgradeability, open-source licensing, and cross-consensus interoperability. ## Create a Custom Blockchain Using the SDK Before starting your blockchain development journey, you'll need to decide whether you want to build a standalone chain or a parachain that connects to the Polkadot network. Each path has its considerations and requirements. Once you've made this decision, follow these development stages: ```mermaid graph LR A[Install the Polkadot SDK] --> B[Build the Chain] B --> C[Deploy the Chain] ``` 1. [**Install the Polkadot SDK**](/develop/parachains/install-polkadot-sdk/) - set up your development environment with all necessary dependencies and tools 2. [**Build the chain**](/develop/parachains/customize-parachain) - learn how to create and customize your blockchain's runtime, configure pallets, and implement your chain's unique features 3. [**Deploy the chain**](/develop/parachains/deployment) - follow the steps to launch your blockchain, whether as a standalone network or as a parachain on Polkadot Each stage is covered in detail in its respective guide, walking you through the process from initial setup to final deployment. 
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/maintenance/ --- BEGIN CONTENT --- --- title: Maintenance description: Learn how to maintain Polkadot SDK-based networks, covering runtime monitoring, upgrades, and storage migrations for optimal blockchain performance. template: index-page.html --- # Maintenance Learn how to maintain Polkadot SDK-based networks, focusing on runtime monitoring, upgrades, and storage migrations for optimal performance. Proper maintenance ensures your blockchain remains secure, efficient, and adaptable to changing needs. These sections will guide you through monitoring your network, using runtime versioning, and performing forkless upgrades to keep your blockchain secure and up-to-date without downtime. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/maintenance/runtime-metrics-monitoring/ --- BEGIN CONTENT --- --- title: Runtime Metrics and Monitoring description: Learn how to monitor and visualize node performance in Polkadot SDK-based networks using telemetry, Prometheus, and Grafana for efficient runtime monitoring. categories: Parachains --- # Runtime Metrics and Monitoring ## Introduction Maintaining a stable, secure, and efficient network requires continuous monitoring. Polkadot SDK-based nodes are equipped with built-in telemetry components that automatically collect and transmit detailed data about node performance in real-time. This telemetry system is a core feature of the Substrate framework, allowing for easy monitoring of network health without complex setup. [Substrate's client telemetry](https://paritytech.github.io/polkadot-sdk/master/sc_telemetry/index.html){target=\_blank} enables real-time data ingestion, which can be visualized on a client dashboard. The telemetry process uses tracing and logging to gather operational data. This data is sent through a tracing layer to a background task called the [`TelemetryWorker`](https://paritytech.github.io/polkadot-sdk/master/sc_telemetry/struct.TelemetryWorker.html){target=\_blank}, which then forwards it to configured remote telemetry servers. If multiple Substrate nodes run within the same process, the telemetry system uses a `tracing::Span` to distinguish data from each node. This ensures that each task, managed by the `sc-service`'s [`TaskManager`](https://paritytech.github.io/polkadot-sdk/master/sc_service/struct.TaskManager.html){target=\_blank}, inherits a span for data consistency, making it easy to track parallel node operations. Each node can be monitored for basic metrics, such as block height, peer connections, CPU usage, and memory. Substrate nodes expose these metrics at the `host:9615/metrics` endpoint, accessible locally by default. To expose metrics on all interfaces, start a node with the `--prometheus-external` flag. As a developer or node operator, the telemetry system handles most of the technical setup. Collected data is automatically sent to a default telemetry server, where it’s aggregated and displayed on a dashboard, making it easy to monitor network performance and identify issues. ## Runtime Metrics Substrate exposes a variety of metrics about the operation of your network, such as the number of peer connections, memory usage, and block production. To capture and visualize these metrics, you can configure and use tools like [Prometheus](https://prometheus.io/){target=\_blank} and [Grafana](https://grafana.com/){target=\_blank}. 
At a high level, Substrate exposes telemetry data that can be consumed by the Prometheus endpoint and then presented as visual information in a Grafana dashboard or graph. The provided diagram offers a simplified overview of how the interaction between Substrate, Prometheus, and Grafana can be configured to display information about node operations. ```mermaid graph TD subNode([Substrate Node]) --> telemetryStream[Exposed Telemetry Stream] telemetryStream --> prometheus[Prometheus] prometheus --> endpoint[Endpoint: Every 1 minute] endpoint --> grafana[Grafana] grafana --> userOpen[User Opens a Graph] prometheus --> localData[Local Prometheus Data] localData --> getmetrics[Get Metrics] ``` The diagram shows the flow of data from the Substrate node to the monitoring and visualization components. The Substrate node exposes a telemetry stream, which is consumed by Prometheus. Prometheus is configured to collect data every minute and store it. Grafana is then used to visualize the data, allowing the user to open graphs and retrieve specific metrics from the telemetry stream. ## Visual Monitoring The [Polkadot telemetry](https://telemetry.polkadot.io/){target=\_blank} dashboard provides a real-time view of how currently online nodes are performing. This dashboard allows you to select the network you want to monitor, as well as the information you want to display, by turning columns on and off in the list of available columns. The monitoring dashboard provides the following indicators and metrics: - **Validator** - identifies whether the node is a validator node or not - **Location** - displays the geographical location of the node - **Implementation** - shows the version of the software running on the node - **Network ID** - displays the public network identifier for the node - **Peer count** - indicates the number of peers connected to the node - **Transactions in queue** - shows the number of transactions waiting in the [`Ready` queue](https://paritytech.github.io/polkadot-sdk/master/sc_transaction_pool_api/enum.TransactionStatus.html#variant.Ready){target=\_blank} for a block author - **Upload bandwidth** - graphs the node's recent upload activity in MB/s - **Download bandwidth** - graphs the node's recent download activity in MB/s - **State cache size** - graphs the size of the node's state cache in MB - **Block** - displays the current best block number to ensure synchronization with peers - **Block hash** - shows the block hash for the current best block number - **Finalized block** - displays the most recently finalized block number to ensure synchronization with peers - **Finalized block hash** - shows the block hash for the most recently finalized block - **Block time** - indicates the time between block executions - **Block propagation time** - displays the time it took to import the most recent block - **Last block time** - shows the time it took to author the most recent block - **Node uptime** - indicates the number of days the node has been online without restarting ## Displaying Network-Wide Statistics In addition to the details available for individual nodes, you can view statistics that provide insights into the broader network.
The network statistics provide detailed information about the hardware and software configurations of the nodes in the network, including: - Software version - Operating system - CPU architecture and model - Number of physical CPU cores - Total memory - Whether the node is a virtual machine - Linux distribution and kernel version - CPU and memory speed - Disk speed ## Customizing Monitoring Tools The default telemetry dashboard offers core metrics without additional setup. However, many projects prefer custom telemetry setups with more advanced monitoring and alerting policies. Typically, setting up a custom telemetry solution involves establishing monitoring and alerting policies for both on-chain events and individual node operations. This allows for more tailored monitoring and reporting compared to the default telemetry setup. ### On-Chain Activity You can monitor specific on-chain events like transactions from certain addresses or changes in the validator set. Connecting to RPC nodes allows you to track delays or specific event timings. Running your own RPC servers is recommended for reliable queries, as public RPC nodes may occasionally be unreliable. ## Monitoring Tools To implement customized monitoring and alerting, consider using the following stack: - [**Prometheus**](https://prometheus.io/){target=\_blank} - collects metrics at intervals, stores data in a time series database, and applies rules for evaluation - [**Grafana**](https://grafana.com/){target=\_blank} - visualizes collected data through customizable dashboards - [**Node exporter**](https://github.com/prometheus/node_exporter){target=\_blank} - reports host metrics, including CPU, memory, and bandwidth usage - [**Alert manager**](https://github.com/prometheus/alertmanager){target=\_blank} - manages alerts, routing them based on defined rules - [**Loki**](https://github.com/grafana/loki){target=\_blank} - scalable log aggregator for searching and viewing logs across infrastructure ### Change the Telemetry Server Once backend monitoring is configured, use the `--telemetry-url` flag when starting a node to specify telemetry endpoints and verbosity levels. Multiple telemetry URLs can be provided, and verbosity ranges from 0 (least verbose) to 9 (most verbose). For instance, setting a custom telemetry server with verbosity level 5 would look like: ```bash ./target/release/node-template --dev \ --telemetry-url "wss://192.168.48.1:9616 5" \ --prometheus-port 9616 \ --prometheus-external ``` For more information on the backend components for telemetry or configuring your own server, you can refer to the [`substrate-telemetry`](https://github.com/paritytech/substrate-telemetry){target=\_blank} project or the [Substrate Telemetry Helm Chart](https://github.com/paritytech/helm-charts/blob/main/charts/substrate-telemetry/README.md){target=\_blank} for Kubernetes deployments. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/maintenance/runtime-upgrades/ --- BEGIN CONTENT --- --- title: Runtime Upgrades description: This page covers how runtime versioning and storage migration support forkless upgrades for Polkadot SDK-based networks and how they factor into chain upgrades. categories: Parachains --- # Runtime Upgrades ## Introduction One of the defining features of Polkadot SDK-based blockchains is the ability to perform forkless runtime upgrades.
Unlike traditional blockchains, which require hard forks and node coordination for upgrades, Polkadot networks enable seamless updates without network disruption. Forkless upgrades are achieved through WebAssembly (Wasm) runtimes stored on-chain, which can be securely swapped and upgraded as part of the blockchain's state. By leveraging decentralized consensus, runtime updates can happen trustlessly, ensuring continuous improvement and evolution without halting operations. This guide explains how Polkadot's runtime versioning, Wasm deployment, and storage migrations enable these upgrades, ensuring the blockchain evolves smoothly and securely. You'll also learn how different upgrade processes apply to solo chains and parachains, depending on the network setup. ## How Runtime Upgrades Work In FRAME, the [`system`](https://paritytech.github.io/polkadot-sdk/master/frame_system/index.html){target=\_blank} pallet uses the [`set_code`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/enum.Call.html#variant.set_code){target=\_blank} extrinsic to update the Wasm code for the runtime. This method allows solo chains to upgrade without disruption. For parachains, upgrades are more complex. Parachains must first call `authorize_upgrade`, followed by `apply_authorized_upgrade`, to ensure the relay chain approves and applies the changes. Additionally, changes to current functionality that impact storage often require a [storage migration](#storage-migrations). ### Runtime Versioning The executor is the component that selects the runtime execution environment to communicate with. Although you can override the default execution strategies for custom scenarios, in most cases, the executor selects the appropriate binary to use by evaluating and comparing key parameters from the native and Wasm runtime binaries. The runtime includes a [runtime version struct](https://paritytech.github.io/polkadot-sdk/master/sp_version/struct.RuntimeVersion.html){target=\_blank} to provide the needed parameter information to the executor process. A sample runtime version struct might look as follows: ```rust pub const VERSION: RuntimeVersion = RuntimeVersion { spec_name: create_runtime_str!("node-template"), impl_name: create_runtime_str!("node-template"), authoring_version: 1, spec_version: 1, impl_version: 1, apis: RUNTIME_API_VERSIONS, transaction_version: 1, }; ``` The struct provides the following parameter information to the executor: - **`spec_name`** - the identifier for the different runtimes - **`impl_name`** - the name of the implementation of the spec. Serves only to differentiate code of different implementation teams - **`authoring_version`** - the version of the authorship interface. An authoring node won't attempt to author blocks unless this is equal to that of its native runtime - **`spec_version`** - the version of the runtime specification. A full node won't attempt to use its native runtime in place of the on-chain Wasm runtime unless the `spec_name`, `spec_version`, and `authoring_version` are all the same between the Wasm and native binaries. Updates to the `spec_version` can be automated as a CI process. This parameter is typically incremented when there's an update to the `transaction_version` - **`impl_version`** - the version of the implementation of the specification. Nodes can ignore this. It is only used to indicate that the code is different.
As long as the `authoring_version` and the `spec_version` are the same, the code might have changed, but the native and Wasm binaries do the same thing. In general, only non-logic-breaking optimizations would result in a change of the `impl_version` - **`transaction_version`** - the version of the interface for handling transactions. This parameter can be useful to synchronize firmware updates for hardware wallets or other signing devices to verify that runtime transactions are valid and safe to sign. This number must be incremented if there is a change in the index of the pallets in the `construct_runtime!` macro or if there are any changes to dispatchable functions, such as the number of parameters or parameter types. If `transaction_version` is updated, then the `spec_version` must also be updated - **`apis`** - a list of supported [runtime APIs](https://paritytech.github.io/polkadot-sdk/master/sp_api/macro.impl_runtime_apis.html){target=\_blank} along with their versions The executor follows the same consensus-driven logic for both the native runtime and the Wasm runtime before deciding which to execute. Because runtime versioning is a manual process, there is a risk that the executor could make incorrect decisions if the runtime version is misrepresented or incorrectly defined. ### Accessing the Runtime Version The runtime version can be accessed through the `state.getRuntimeVersion` RPC endpoint, which accepts an optional block identifier. It can also be accessed through the runtime metadata to understand the APIs the runtime exposes and how to interact with them. The runtime metadata should only change when the chain's [runtime `spec_version`](https://paritytech.github.io/polkadot-sdk/master/sp_version/struct.RuntimeVersion.html#structfield.spec_version){target=\_blank} changes. ## Storage Migrations [Storage migrations](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_runtime_upgrades_and_migrations/index.html#migrations){target=\_blank} are custom, one-time functions that allow you to update storage to adapt to changes in the runtime. For example, if a runtime upgrade changes the data type used to represent user balances from an unsigned integer to a signed integer, the storage migration would read the existing value as an unsigned integer and write back an updated value that has been converted to a signed integer. If you don't make changes to how data is stored when needed, the runtime can't properly interpret the storage values to include in the runtime state, which is likely to lead to undefined behavior. ### Storage Migrations with FRAME FRAME storage migrations are implemented using the [`OnRuntimeUpgrade`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.OnRuntimeUpgrade.html){target=\_blank} trait. The `OnRuntimeUpgrade` trait specifies a single function, `on_runtime_upgrade`, that allows you to specify logic to run immediately after a runtime upgrade but before any `on_initialize` functions or transactions are executed. For further details about this process, see the [Storage Migrations](/develop/parachains/maintenance/storage-migrations/){target=\_blank} page. ### Ordering Migrations By default, FRAME orders the execution of `on_runtime_upgrade` functions based on the order in which the pallets appear in the `construct_runtime!` macro. During an upgrade, these functions run in reverse of that order, so the last pallet in the macro executes first. You can impose a custom order if needed, as the sketch below illustrates.
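A custom order is usually expressed by grouping migrations into a tuple, which FRAME executes element by element, in tuple order. The following is a minimal, hedged sketch; the pallet and migration names are hypothetical placeholders, not items from this guide:

```rust
// Hypothetical custom migrations. FRAME runs tuple elements in order,
// so MigrateV0ToV1 executes before MigrateV1ToV2.
type Migrations = (
    pallet_a::migrations::v1::MigrateV0ToV1,
    pallet_b::migrations::v2::MigrateV1ToV2,
);
```

This tuple is supplied to the `frame_executive::Executive` type, as shown in the Scheduling Migrations section of the Storage Migrations guide.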
FRAME storage migrations run in this order: 1. Custom `on_runtime_upgrade` functions if using a custom order 2. System `frame_system::on_runtime_upgrade` functions 3. All `on_runtime_upgrade` functions defined in the runtime starting with the last pallet in the `construct_runtime!` macro --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/maintenance/storage-migrations/ --- BEGIN CONTENT --- --- title: Storage Migrations description: Ensure smooth runtime upgrades with storage migrations, update data formats, and prevent errors. Learn when and how to implement migrations efficiently. categories: Parachains --- # Storage Migrations ## Introduction Storage migrations are a crucial part of the runtime upgrade process. They allow you to update the [storage items](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.storage.html){target=\_blank} of your blockchain, adapting to changes in the runtime. Whenever you change the encoding or data types used to represent data in storage, you'll need to provide a storage migration to ensure the runtime can correctly interpret the existing stored values in the new runtime state. Storage migrations must be executed precisely during the runtime upgrade process to ensure data consistency and prevent [runtime panics](https://doc.rust-lang.org/std/macro.panic.html){target=\_blank}. The migration code needs to run as follows: - After the new runtime is deployed - Before any other code from the new runtime executes - Before any [`on_initialize`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.Hooks.html#method.on_initialize){target=\_blank} hooks run - Before any transactions are processed This timing is critical because the new runtime expects data to be in the updated format. Any attempt to decode the old data format without proper migration could result in runtime panics or undefined behavior. ## Storage Migration Scenarios A storage migration is necessary whenever a runtime upgrade changes the storage layout or the encoding/interpretation of existing data. Even if the underlying data type appears to still "fit" the new storage representation, a migration may be required if the interpretation of the stored values has changed. Storage migrations ensure data consistency and prevent corruption during runtime upgrades. 
Below are common scenarios categorized by their impact on storage and migration requirements: - Migration required: - Reordering or mutating fields of an existing data type to change the encoded/decoded data representation - Removal of a pallet or storage item warrants cleaning up storage via a migration to avoid state bloat - Migration not required: - Adding a new storage item would not require any migration since no existing data needs transformation - Adding or removing an extrinsic introduces no new interpretation of preexisting data, so no migration is required The following are some common scenarios where a storage migration is needed: - **Changing data types** - changing the underlying data type requires a migration to convert the existing values ```rust #[pallet::storage] pub type FooValue = StorageValue<_, Foo>; // old pub struct Foo(u32) // new pub struct Foo(u64) ``` - **Changing data representation** - modifying the representation of the stored data, even if the size appears unchanged, requires a migration to ensure the runtime can correctly interpret the existing values ```rust #[pallet::storage] pub type FooValue = StorageValue<_, Foo>; // old pub struct Foo(u32) // new pub struct Foo(i32) // or pub struct Foo(u16, u16) ``` - **Extending an enum** - adding new variants to an enum requires a migration if you reorder existing variants, insert new variants between existing ones, or change the data type of existing variants. No migration is required when adding new variants at the end of the enum ```rust #[pallet::storage] pub type FooValue = StorageValue<_, Foo>; // old pub enum Foo { A(u32), B(u32) } // new (New variant added at the end. No migration required) pub enum Foo { A(u32), B(u32), C(u128) } // new (Reordered variants. Requires migration) pub enum Foo { A(u32), C(u128), B(u32) } ``` - **Changing the storage key** - modifying the storage key, even if the underlying data type remains the same, requires a migration to ensure the runtime can locate the correct stored values ```rust // old #[pallet::storage] pub type FooValue = StorageValue<_, u32>; // new #[pallet::storage] pub type BarValue = StorageValue<_, u32>; ``` !!!warning In general, any change to the storage layout or data encoding used in your runtime requires careful consideration of the need for a storage migration. Overlooking a necessary migration can lead to undefined behavior or data loss during a runtime upgrade. ## Implement Storage Migrations The [`OnRuntimeUpgrade`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.OnRuntimeUpgrade.html){target=\_blank} trait provides the foundation for implementing storage migrations in your runtime. Here's a detailed look at its essential functions: ```rust pub trait OnRuntimeUpgrade { fn on_runtime_upgrade() -> Weight { ... } fn try_on_runtime_upgrade(checks: bool) -> Result<Weight, TryRuntimeError> { ... } fn pre_upgrade() -> Result<Vec<u8>, TryRuntimeError> { ... } fn post_upgrade(_state: Vec<u8>) -> Result<(), TryRuntimeError> { ... } } ``` ### Core Migration Function The [`on_runtime_upgrade`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.Hooks.html#method.on_runtime_upgrade){target=\_blank} function executes when the FRAME Executive pallet detects a runtime upgrade.
Important considerations when using this function include: - It runs before any pallet's `on_initialize` hooks - Critical storage items (like [`block_number`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.block_number){target=\_blank}) may not be set - Execution is mandatory and must be completed - Careful weight calculation is required to prevent bricking the chain When implementing the migration logic, your code must handle several vital responsibilities. A migration implementation must do the following to operate correctly: - Read existing storage values in their original format - Transform data to match the new format - Write updated values back to storage - Calculate and return consumed weight ### Migration Testing Hooks The `OnRuntimeUpgrade` trait provides some functions designed specifically for testing migrations. These functions never execute on-chain but are essential for validating migration behavior in test environments. The migration test hooks are as follows: - **[`try_on_runtime_upgrade`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.OnRuntimeUpgrade.html#method.try_on_runtime_upgrade){target=\_blank}** - this function serves as the primary orchestrator for testing the complete migration process. It coordinates the execution flow from `pre-upgrade` checks through the actual migration to `post-upgrade` verification. Handling the entire migration sequence ensures that storage modifications occur correctly and in the proper order. Preserving this sequence is particularly valuable when testing multiple dependent migrations, where the execution order matters - **[`pre_upgrade`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.Hooks.html#method.pre_upgrade){target=\_blank}** - before a runtime upgrade begins, the `pre_upgrade` function performs preliminary checks and captures the current state. It returns encoded state data that can be used for `post-upgrade` verification. This function must never modify storage - it should only read and verify the existing state. The data it returns includes critical state values that should remain consistent or transform predictably during migration - **[`post_upgrade`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.Hooks.html#method.post_upgrade){target=\_blank}** - after the migration completes, `post_upgrade` validates its success. It receives the state data captured by `pre_upgrade` to verify that the migration was executed correctly. This function checks for storage consistency and ensures all data transformations are completed as expected. Like `pre_upgrade`, it operates exclusively in testing environments and should not modify storage ### Migration Structure There are two approaches to implementing storage migrations. The first method involves directly implementing `OnRuntimeUpgrade` on structs. This approach requires manually checking the on-chain storage version against the new [`StorageVersion`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/struct.StorageVersion.html){target=\_blank} and executing the transformation logic only when the check passes. This version verification prevents multiple executions of the migration during subsequent runtime upgrades. 
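As a rough illustration of this first approach, the following sketch shows a struct that implements `OnRuntimeUpgrade` and guards its logic with a manual storage version check. The struct name and weights are hypothetical placeholders, assuming a pallet whose `Config` extends `frame_system::Config`:

```rust
use frame_support::{
    traits::{Get, GetStorageVersion, OnRuntimeUpgrade, StorageVersion},
    weights::Weight,
};

/// Hypothetical migration struct raising a pallet's storage version from 0 to 1.
pub struct MigrateV0ToV1<T>(core::marker::PhantomData<T>);

impl<T: crate::Config> OnRuntimeUpgrade for MigrateV0ToV1<T> {
    fn on_runtime_upgrade() -> Weight {
        // Read the storage version currently recorded on-chain.
        let onchain_version = crate::Pallet::<T>::on_chain_storage_version();
        if onchain_version == 0 {
            // ... transform the affected storage items here ...

            // Bump the on-chain storage version so this migration never re-runs.
            StorageVersion::new(1).put::<crate::Pallet<T>>();
            T::DbWeight::get().reads_writes(1, 1)
        } else {
            // Version check failed: nothing to do beyond the version read.
            T::DbWeight::get().reads(1)
        }
    }
}
```

The boilerplate of checking and bumping the version by hand is exactly what the `VersionedMigration` wrapper described next automates.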
The recommended approach is to implement [`UncheckedOnRuntimeUpgrade`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.UncheckedOnRuntimeUpgrade.html){target=\_blank} and wrap it with [`VersionedMigration`](https://paritytech.github.io/polkadot-sdk/master/frame_support/migrations/struct.VersionedMigration.html){target=\_blank}. `VersionedMigration` implements `OnRuntimeUpgrade` and handles storage version management automatically, following best practices and reducing potential errors. `VersionedMigration` requires five type parameters: - `From` - the source version for the upgrade - `To` - the target version for the upgrade - `Inner` - the `UncheckedOnRuntimeUpgrade` implementation - `Pallet` - the pallet being upgraded - `Weight` - the runtime's [`RuntimeDbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\_blank} implementation Examine the following migration example that transforms a simple `StorageValue` storing a `u32` into a more complex structure that tracks both current and previous values using the `CurrentAndPreviousValue` struct: - Old `StorageValue` format: ```rust #[pallet::storage] pub type Value = StorageValue<_, u32>; ``` - New `StorageValue` format: ```rust /// Example struct holding the most recently set [`u32`] and the /// second most recently set [`u32`] (if one existed). #[docify::export] #[derive( Clone, Eq, PartialEq, Encode, Decode, RuntimeDebug, scale_info::TypeInfo, MaxEncodedLen, )] pub struct CurrentAndPreviousValue { /// The most recently set value. pub current: u32, /// The previous value, if one existed. pub previous: Option<u32>, } #[pallet::storage] pub type Value = StorageValue<_, CurrentAndPreviousValue>; ``` - Migration: ```rust use frame_support::{ storage_alias, traits::{Get, UncheckedOnRuntimeUpgrade}, }; #[cfg(feature = "try-runtime")] use alloc::vec::Vec; /// Collection of storage item formats from the previous storage version. /// /// Required so we can read values in the v0 storage format during the migration. mod v0 { use super::*; /// V0 type for [`crate::Value`]. #[storage_alias] pub type Value<T: crate::Config> = StorageValue<crate::Pallet<T>, u32>; } /// Implements [`UncheckedOnRuntimeUpgrade`], migrating the state of this pallet from V0 to V1. /// /// In V0 of the template [`crate::Value`] is just a `u32`. In V1, it has been upgraded to /// contain the struct [`crate::CurrentAndPreviousValue`]. /// /// In this migration, update the on-chain storage for the pallet to reflect the new storage /// layout. pub struct InnerMigrateV0ToV1<T>(core::marker::PhantomData<T>); impl<T: crate::Config> UncheckedOnRuntimeUpgrade for InnerMigrateV0ToV1<T> { /// Return the existing [`crate::Value`] so we can check that it was correctly set in /// `InnerMigrateV0ToV1::post_upgrade`. #[cfg(feature = "try-runtime")] fn pre_upgrade() -> Result<Vec<u8>, sp_runtime::TryRuntimeError> { use codec::Encode; // Access the old value using the `storage_alias` type let old_value = v0::Value::<T>::get(); // Return it as an encoded `Vec<u8>` Ok(old_value.encode()) } /// Migrate the storage from V0 to V1. /// /// - If the value doesn't exist, there is nothing to do. /// - If the value exists, it is read and then written back to storage inside a /// [`crate::CurrentAndPreviousValue`].
fn on_runtime_upgrade() -> frame_support::weights::Weight { // Read the old value from storage if let Some(old_value) = v0::Value::<T>::take() { // Write the new value to storage let new = crate::CurrentAndPreviousValue { current: old_value, previous: None }; crate::Value::<T>::put(new); // One read + write for taking the old value, and one write for setting the new value T::DbWeight::get().reads_writes(1, 2) } else { // No writes since there was no old value, just one read for checking T::DbWeight::get().reads(1) } } /// Verifies the storage was migrated correctly. /// /// - If there was no old value, the new value should not be set. /// - If there was an old value, the new value should be a [`crate::CurrentAndPreviousValue`]. #[cfg(feature = "try-runtime")] fn post_upgrade(state: Vec<u8>) -> Result<(), sp_runtime::TryRuntimeError> { use codec::Decode; use frame_support::ensure; let maybe_old_value = Option::<u32>::decode(&mut &state[..]).map_err(|_| { sp_runtime::TryRuntimeError::Other("Failed to decode old value from storage") })?; match maybe_old_value { Some(old_value) => { let expected_new_value = crate::CurrentAndPreviousValue { current: old_value, previous: None }; let actual_new_value = crate::Value::<T>::get(); ensure!(actual_new_value.is_some(), "New value not set"); ensure!( actual_new_value == Some(expected_new_value), "New value not set correctly" ); }, None => { ensure!(crate::Value::<T>::get().is_none(), "New value unexpectedly set"); }, }; Ok(()) } } /// [`UncheckedOnRuntimeUpgrade`] implementation [`InnerMigrateV0ToV1`] wrapped in a /// [`VersionedMigration`](frame_support::migrations::VersionedMigration), which ensures that: /// - The migration only runs once when the on-chain storage version is 0 /// - The on-chain storage version is updated to `1` after the migration executes /// - Reads/Writes from checking/setting the on-chain storage version are accounted for pub type MigrateV0ToV1<T> = frame_support::migrations::VersionedMigration< 0, // The migration will only execute when the on-chain storage version is 0 1, // The on-chain storage version will be set to 1 after the migration is complete InnerMigrateV0ToV1<T>, crate::pallet::Pallet<T>, <T as frame_system::Config>::DbWeight, >; ``` ### Migration Organization Best practices recommend organizing migrations in a separate module within your pallet. Here's the recommended file structure: ```plain my-pallet/ ├── src/ │ ├── lib.rs # Main pallet implementation │ └── migrations/ # All migration-related code │ ├── mod.rs # Migrations module definition │ ├── v1.rs # V0 -> V1 migration │ └── v2.rs # V1 -> V2 migration └── Cargo.toml ``` This structure provides several benefits: - Separates migration logic from core pallet functionality - Makes migrations easier to test and maintain - Provides explicit versioning of storage changes - Simplifies the addition of future migrations ### Scheduling Migrations To execute migrations during a runtime upgrade, you must configure them in your runtime's Executive pallet. Add your migrations in `runtime/src/lib.rs`: ```rust /// Tuple of migrations (structs that implement `OnRuntimeUpgrade`) type Migrations = ( pallet_my_pallet::migrations::v1::Migration, // More migrations can be added here ); pub type Executive = frame_executive::Executive< Runtime, Block, frame_system::ChainContext<Runtime>, Runtime, AllPalletsWithSystem, Migrations, // Include migrations here >; ``` ## Single-Block Migrations Single-block migrations execute their logic within one block immediately following a runtime upgrade.
They run as part of the runtime upgrade process through the `OnRuntimeUpgrade` trait implementation and must be completed before any other runtime logic executes. While single-block migrations are straightforward to implement and provide immediate data transformation, they carry significant risks. The most critical consideration is that they must complete within one block's weight limits. This is especially crucial for parachains, where exceeding block weight limits will brick the chain. Use single-block migrations only when you can guarantee: - The migration has a bounded execution time - Weight calculations are thoroughly tested - Total weight will never exceed block limits For a complete implementation example of a single-block migration, refer to the [single-block migration example](https://paritytech.github.io/polkadot-sdk/master/pallet_example_single_block_migrations/index.html){target=\_blank} in the Polkadot SDK documentation. ## Multi-Block Migrations Multi-block migrations distribute the migration workload across multiple blocks, providing a safer approach for production environments. The migration state is tracked in storage, allowing the process to pause and resume across blocks. This approach is essential for production networks and parachains because it eliminates the risk of exceeding block weight limits. Multi-block migrations can safely handle large storage collections, unbounded data structures, and complex nested data types where weight consumption might be unpredictable. Multi-block migrations are ideal when dealing with: - Large-scale storage migrations - Unbounded storage items or collections - Complex data structures with uncertain weight costs The primary trade-off is increased implementation complexity, as you must manage the migration state and handle partial completion scenarios. However, the safety and operational reliability of multi-block migrations are typically worth the added complexity. For a complete implementation example of multi-block migrations, refer to the [official example](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame/examples/multi-block-migrations){target=\_blank} in the Polkadot SDK. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/maintenance/unlock-parachain/ --- BEGIN CONTENT --- --- title: Unlock a Parachain description: Learn how to unlock your parachain. This step-by-step guide covers verifying lock status, preparing calls, and executing the unlock process. categories: Parachains --- # Unlock a Parachain ## Introduction Parachain locks are a critical security mechanism in the Polkadot ecosystem designed to maintain decentralization during the parachain lifecycle. These locks prevent potential centralization risks that could emerge during the early stages of parachain operation.
The locking system follows strict, well-defined conditions that distribute control across multiple authorities: - Relay chain governance has the authority to lock any parachain - A parachain can lock its own lock - Parachain managers have permission to lock the parachain - Parachains are locked automatically when they successfully produce their first block Similarly, unlocking a parachain follows controlled procedures: - Relay chain governance retains the authority to unlock any parachain - A parachain can unlock its own lock This document guides you through checking a parachain's lock status and safely executing the unlock procedure from a parachain using [XCM (Cross-Consensus Messaging)](/develop/interoperability/intro-to-xcm/){target=\_blank}. ## Check If the Parachain Is Locked Before unlocking a parachain, you should verify its current lock status. This can be done through the Polkadot.js interface: 1. In [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank}, connect to the relay chain, navigate to the **Developer** dropdown and select the **Chain State** option 2. Query the parachain locked status: 1. Select **`registrar`** 2. Choose the **`paras`** option 3. Input the parachain ID you want to check as a parameter (e.g., `2006`) 4. Click the **+** button to execute the query 5. Check the status of the parachain lock - **`manager`** - the account that has placed a deposit for registering this parachain - **`deposit`** - the amount reserved by the `manager` account for the registration - **`locked`** - whether the parachain registration should be locked from being controlled by the manager ![](/images/develop/parachains/maintenance/unlock-parachain/unlock-parachain-1.webp) ## How to Unlock a Parachain Unlocking a parachain requires either sending an XCM (Cross-Consensus Message) with Root origin from the parachain itself to the relay chain, or executing a root call through the relay chain's governance mechanism. If sending an XCM, the parachain origin must have proper authorization, typically from either the parachain's sudo pallet (if enabled) or its governance system. This guide demonstrates the unlocking process using a parachain with the sudo pallet. For parachains using governance-based authorization instead, the process will require adjustments to how the XCM is sent. ### Prepare the Unlock Call Before sending the XCM, you need to construct the relay chain call that will be executed. Follow these steps to prepare the `registrar.removeLock` extrinsic: 1. In [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank}, connect to the relay chain, navigate to the **Developer** dropdown and select the **Extrinsics** option 2. Build the `registrar.removeLock` extrinsic 1. Select the **registrar** pallet 2. Choose the **removeLock** extrinsic 3. Fill in the parachain ID parameter (e.g., `2006`) 4. Copy the **encoded call data** ![](/images/develop/parachains/maintenance/unlock-parachain/unlock-parachain-2.webp) To ensure your encoded call data is correct, check this [example](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fdot-rpc.stakeworld.io#/extrinsics/decode/0x4604d6070000){target=\_blank} of a decoded `removeLock` call for parachain 2006. Your encoded data should follow the same pattern. 3. Determine the transaction weight required for executing the call.
You can estimate this by executing the `transactionPaymentCallApi.queryCallInfo` runtime call with the encoded call data previously obtained: ![](/images/develop/parachains/maintenance/unlock-parachain/unlock-parachain-3.webp) This weight information is crucial for properly configuring your XCM message's execution parameters in the next steps. ### Fund the Sovereign Account For a successful XCM execution, the [sovereign account](https://github.com/polkadot-fellows/xcm-format/blob/10726875bd3016c5e528c85ed6e82415e4b847d7/README.md?plain=1#L50){target=\_blank} of your parachain on the relay chain must have sufficient funds to cover transaction fees. The sovereign account is a deterministic address derived from your parachain ID. You can identify your parachain's sovereign account using either of these methods: === "Runtime API" Execute the `locationToAccountApi.convertLocation` runtime API call to convert your parachain's location into its sovereign account address on the relay chain. ![](/images/develop/parachains/maintenance/unlock-parachain/unlock-parachain-7.webp) === "Substrate Utilities" Use the **"Para ID" to Address** section in [Substrate Utilities](https://www.shawntabrizi.com/substrate-js-utilities/){target=\_blank} with the **Child** option selected. === "Manual Calculation" 1. Identify the appropriate prefix: - For parent/child chains use the prefix `0x70617261` (which decodes to `b"para"`) 2. Encode your parachain ID as a u32 [SCALE](/polkadot-protocol/parachain-basics/data-encoding#data-types){target=\_blank} value: - For parachain 2006, this would be `d6070000` 3. Combine the prefix with the encoded ID to form the sovereign account address: - **Hex** - `0x70617261d6070000000000000000000000000000000000000000000000000000` - **SS58 format** - `5Ec4AhPW97z4ZyYkd3mYkJrSeZWcwVv4wiANES2QrJi1x17F` You can transfer funds to this account from any account on the relay chain using a standard transfer. To calculate the amount needed, refer to the [XCM Payment API](/develop/interoperability/xcm-runtime-apis/#xcm-payment-api){target=\_blank}. The calculation will depend on the XCM built in the next step. ### Craft and Submit the XCM With the call data prepared and the sovereign account funded, you can now construct and send the XCM from your parachain to the relay chain. The XCM will need to perform several operations in sequence: 1. Withdraw DOT from your parachain's sovereign account 2. Buy execution to pay for transaction fees 3. Execute the `registrar.removeLock` extrinsic 4. Return any unused funds to your sovereign account Here's how to submit this XCM using Astar (Parachain 2006) as an example: 1. In [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank}, connect to the parachain, navigate to the **Developer** dropdown and select the **Extrinsics** option 2. Create a `sudo.sudo` extrinsic that executes `polkadotXcm.send`: 1. Use the `sudo.sudo` extrinsic to execute the following call as Root 2. Select the **polkadotXcm** pallet 3. Choose the **send** extrinsic 4. Set the **dest** parameter as the relay chain ![](/images/develop/parachains/maintenance/unlock-parachain/unlock-parachain-4.webp) 3. Construct the XCM and submit it: 1. Add a **WithdrawAsset** instruction 2. Add a **BuyExecution** instruction - **fees** - **id** - the asset location to use for the fee payment. 
In this example, the relay chain native asset is used - **fun** - select `Fungible` and use the same amount you withdrew from the sovereign account in the previous step - **weightLimit** - use `Unlimited` 3. Add a **Transact** instruction with the following parameters: - **originKind** - use `Native` - **requireWeightAtMost** - use the weight calculated previously - **call** - use the encoded call data generated before 4. Add a **RefundSurplus** instruction 5. Add a **DepositAsset** instruction to send the remaining funds to the parachain sovereign account 6. Click the **Submit Transaction** button ![](/images/develop/parachains/maintenance/unlock-parachain/unlock-parachain-5.webp) If the amount withdrawn in the first instruction is exactly the amount needed to pay the transaction fees, instructions 4 and 5 can be omitted. To validate your XCM, examine the following reference [extrinsic](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fastar.public.curie.radiumblock.co%2Fws#/extrinsics/decode/0x63003300040100041400040000000700e40b5402130000000700e40b540200060042d3c91800184604d6070000140d0100000100591f){target=_blank} showing the proper instruction sequence and parameter formatting. Following this structure will help ensure successful execution of your message. After submitting the transaction, wait for it to be finalized and then verify that your parachain has been successfully unlocked by following the steps described in the [Check if the Parachain is Locked](#check-if-the-parachain-is-locked) section. If the parachain shows as unlocked, your operation has been successful. If it still appears locked, verify that your XCM transaction was processed correctly and consider troubleshooting the XCM built. ![](/images/develop/parachains/maintenance/unlock-parachain/unlock-parachain-6.webp) --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/parachains/testing/benchmarking/ --- BEGIN CONTENT --- --- title: Benchmarking FRAME Pallets description: Learn how to use FRAME's benchmarking framework to measure extrinsic execution costs and provide accurate weights for on-chain computations. categories: Parachains --- # Benchmarking ## Introduction Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/polkadot-protocol/glossary/#weight){target=\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks. The Polkadot SDK leverages the [FRAME](/polkadot-protocol/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\_blank} benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's [benchmarking framework](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank}, from setting up your environment to writing and running benchmarks for your custom pallets. You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure. 
## The Case for Benchmarking

Benchmarking helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights.

Benchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability.

### Benchmarking and Weight

In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as:

- Computational complexity
- Storage complexity (proof size)
- Database reads and writes
- Hardware specifications

Benchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model (a short sketch of this refund pattern follows at the end of this section).

Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of the hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period.

Within FRAME, each function call that is dispatched must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:

```rust hl_lines="2"
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::do_something())]
pub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo {
    Ok(().into())
}
```

The `WeightInfo` file is automatically generated during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic.

## Benchmarking Process

Benchmarking a pallet involves the following steps:

1. Creating a `benchmarking.rs` file within your pallet's structure
2. Writing a benchmarking test for each extrinsic
3. Executing the benchmarking tool to calculate weights based on performance metrics

The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag.
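As noted above, the declared weight is a worst-case estimate, and a dispatchable can report the weight it actually consumed when it returns so the caller is refunded the difference. The following is a minimal, hypothetical sketch of that pattern; the call name, the `fast_path` parameter, and the hardcoded weight value are illustrative only and would come from benchmarked `WeightInfo` functions in real code (assumes `frame_support::pallet_prelude::*` is in scope, as in a typical pallet):

```rust
// Hypothetical dispatchable that refunds weight when a cheap branch is taken.
#[pallet::call_index(1)]
#[pallet::weight(T::WeightInfo::do_something())]
pub fn do_something_conditional(
    origin: OriginFor<T>,
    fast_path: bool,
) -> DispatchResultWithPostInfo {
    let _who = ensure_signed(origin)?;

    if fast_path {
        // Cheap branch: report a smaller actual weight. The difference
        // between the declared worst case and this value is refunded.
        // Illustrative constant; a real pallet would return a benchmarked
        // value such as a dedicated `WeightInfo` function for this branch.
        Ok(Some(Weight::from_parts(10_000, 0)).into())
    } else {
        // Worst case: `().into()` charges the full declared weight.
        Ok(().into())
    }
}
```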
### Prepare Your Environment

Install the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\_blank} command-line tool:

```bash
cargo install frame-omni-bencher
```

Before writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml` similar to the following:

```toml title="Cargo.toml"
frame-benchmarking = { version = "37.0.0", default-features = false }
```

You must also ensure that you add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`:

```toml title="Cargo.toml"
runtime-benchmarks = [
    "frame-benchmarking/runtime-benchmarks",
    "frame-support/runtime-benchmarks",
    "frame-system/runtime-benchmarks",
    "sp-runtime/runtime-benchmarks",
]
```

Lastly, ensure that `frame-benchmarking` is included in `std = []`:

```toml title="Cargo.toml"
std = [
    # ...
    "frame-benchmarking?/std",
    # ...
]
```

Once complete, you have the required dependencies for writing benchmark tests for your pallet.

### Write Benchmark Tests

Create a `benchmarking.rs` file in your pallet's `src/`. Your directory structure should look similar to the following:

```
my-pallet/
├── src/
│   ├── lib.rs          # Main pallet implementation
│   └── benchmarking.rs # Benchmarking
└── Cargo.toml
```

With the directory structure set, you can use the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\_blank} to get started as follows:

```rust title="benchmarking.rs (starter template)"
//! Benchmarking setup for pallet-template
#![cfg(feature = "runtime-benchmarks")]
use super::*;

use frame_benchmarking::v2::*;

#[benchmarks]
mod benchmarks {
    use super::*;
    #[cfg(test)]
    use crate::pallet::Pallet as Template;
    use frame_system::RawOrigin;

    #[benchmark]
    fn do_something() {
        let caller: T::AccountId = whitelisted_caller();
        #[extrinsic_call]
        do_something(RawOrigin::Signed(caller), 100);

        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(100u32.into()));
    }

    #[benchmark]
    fn cause_error() {
        Something::<T>::put(CompositeStruct { block_number: 100u32.into() });
        let caller: T::AccountId = whitelisted_caller();
        #[extrinsic_call]
        cause_error(RawOrigin::Signed(caller));

        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(101u32.into()));
    }

    impl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), crate::mock::Test);
}
```

In your benchmarking tests, employ these best practices:

- **Write custom testing functions** - the function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing
- **Use the `#[extrinsic_call]` macro** - this macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the [`extrinsic_call`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=\_blank} docs for more details
- **Validate extrinsic behavior** - the `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context

Add the `benchmarking` module to your pallet.
In the pallet `lib.rs` file add the following:

```rust
#[cfg(feature = "runtime-benchmarks")]
mod benchmarking;
```

### Add Benchmarks to Runtime

Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows:

1. Navigate to your `runtime/src` directory and check if a `benchmarks.rs` file exists. If not, create one. This file will contain the macro that registers all pallets for benchmarking along with their respective configurations:

    ```rust title="benchmarks.rs"
    frame_benchmarking::define_benchmarks!(
        [frame_system, SystemBench::<Runtime>]
        [pallet_balances, Balances]
        [pallet_session, SessionBench::<Runtime>]
        [pallet_timestamp, Timestamp]
        [pallet_message_queue, MessageQueue]
        [pallet_sudo, Sudo]
        [pallet_collator_selection, CollatorSelection]
        [cumulus_pallet_parachain_system, ParachainSystem]
        [cumulus_pallet_xcmp_queue, XcmpQueue]
    );
    ```

    For example, to add a new pallet named `pallet_parachain_template` for benchmarking, include it in the macro as shown:

    ```rust title="benchmarks.rs" hl_lines="3"
    frame_benchmarking::define_benchmarks!(
        [frame_system, SystemBench::<Runtime>]
        [pallet_parachain_template, TemplatePallet]
        [pallet_balances, Balances]
        [pallet_session, SessionBench::<Runtime>]
        [pallet_timestamp, Timestamp]
        [pallet_message_queue, MessageQueue]
        [pallet_sudo, Sudo]
        [pallet_collator_selection, CollatorSelection]
        [cumulus_pallet_parachain_system, ParachainSystem]
        [cumulus_pallet_xcmp_queue, XcmpQueue]
    );
    ```

    !!!warning "Updating `define_benchmarks!` macro is required"
        Any pallet that needs to be benchmarked must be included in the [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro. The CLI will only be able to access and benchmark pallets that are registered here.

2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported. The import should look like this:

    ```rust title="lib.rs"
    #[cfg(feature = "runtime-benchmarks")]
    mod benchmarks;
    ```

    The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code.

3. Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`:

    ```toml
    runtime-benchmarks = [
        # ...
        "pallet_parachain_template/runtime-benchmarks",
    ]
    ```

### Run Benchmarks

You can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests.

Follow these steps to compile the runtime with benchmarking enabled:

1. Run `build` with the feature flag included:

    ```bash
    cargo build --features runtime-benchmarks --release
    ```

2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations:

    ```bash
    touch weights.rs
    ```

3. Before running the benchmarking tool, you'll need a template file that defines how weight information should be formatted. Download the official template from the Polkadot SDK repository and save it in your project folders for future use:

    ```bash
    curl https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \
      --output ./pallets/benchmarking/frame-weight-template.hbs
    ```

4. Run the benchmarking tool to measure extrinsic weights:

    ```bash
    frame-omni-bencher v1 benchmark pallet \
        --runtime INSERT_PATH_TO_WASM_RUNTIME \
        --pallet INSERT_NAME_OF_PALLET \
        --extrinsic "" \
        --template ./frame-weight-template.hbs \
        --output weights.rs
    ```

    !!! tip "Flag definitions"
        - `--runtime` - the path to your runtime's Wasm
        - `--pallet` - the name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks`
        - `--extrinsic` - which extrinsic to test. Using `""` implies all extrinsics will be benchmarked
        - `--template` - defines how weight information should be formatted
        - `--output` - where the output of the auto-generated weights will reside

The generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity:
```
frame-omni-bencher v1 benchmark pallet \
    --runtime INSERT_PATH_TO_WASM_RUNTIME \
    --pallet "INSERT_NAME_OF_PALLET" \
    --extrinsic "" \
    --template ./frame-weight-template.hbs \
    --output ./weights.rs

...
2025-01-15T16:41:33.557045Z  INFO polkadot_sdk_frame::benchmark::pallet: [ 0 % ] Starting benchmark: pallet_parachain_template::do_something
2025-01-15T16:41:33.564644Z  INFO polkadot_sdk_frame::benchmark::pallet: [ 50 % ] Starting benchmark: pallet_parachain_template::cause_error
...
Created file: "weights.rs"
```
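For orientation, the generated file typically defines a `WeightInfo` trait with one function per benchmarked extrinsic, plus implementations whose constants come from your benchmark run. The sketch below shows only the general shape; the exact functions mirror your benchmarks, a real file's doc comments and numbers are produced by the template, and every numeric value here is made up:

```rust
// Illustrative shape of an auto-generated weights.rs (all values are placeholders).
use core::marker::PhantomData;
use frame_support::{
    traits::Get,
    weights::{constants::RocksDbWeight, Weight},
};

/// Weight functions needed for the pallet.
pub trait WeightInfo {
    fn do_something() -> Weight;
    fn cause_error() -> Weight;
}

/// Weights using the runtime-configured database weights.
pub struct SubstrateWeight<T>(PhantomData<T>);
impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
    fn do_something() -> Weight {
        // ref_time and proof_size come from the benchmark measurements.
        Weight::from_parts(9_000_000, 1489)
            .saturating_add(T::DbWeight::get().writes(1_u64))
    }
    fn cause_error() -> Weight {
        Weight::from_parts(6_000_000, 1489)
            .saturating_add(T::DbWeight::get().reads(1_u64))
            .saturating_add(T::DbWeight::get().writes(1_u64))
    }
}

// The official template also emits a fallback implementation for `()`,
// which is handy for tests (`type WeightInfo = ();`).
impl WeightInfo for () {
    fn do_something() -> Weight {
        Weight::from_parts(9_000_000, 1489)
            .saturating_add(RocksDbWeight::get().writes(1_u64))
    }
    fn cause_error() -> Weight {
        Weight::from_parts(6_000_000, 1489)
            .saturating_add(RocksDbWeight::get().reads(1_u64))
            .saturating_add(RocksDbWeight::get().writes(1_u64))
    }
}
```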
#### Add Benchmark Weights to Pallet

Once the `weights.rs` is generated, you must integrate it with your pallet.

1. To begin the integration, import the `weights` module and the `WeightInfo` trait, then add both to your pallet's `Config` trait. Complete the following steps to set up the configuration:

    ```rust title="lib.rs"
    pub mod weights;
    use crate::weights::WeightInfo;

    /// Configure the pallet by specifying the parameters and types on which it depends.
    #[pallet::config]
    pub trait Config: frame_system::Config {
        // ...
        /// A type representing the weights required by the dispatchables of this pallet.
        type WeightInfo: WeightInfo;
    }
    ```

2. Next, you must add this to the `#[pallet::weight]` annotation in all the extrinsics via the `Config` as follows:

    ```rust hl_lines="2" title="lib.rs"
    #[pallet::call_index(0)]
    #[pallet::weight(T::WeightInfo::do_something())]
    pub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo {
        Ok(().into())
    }
    ```

3. Finally, configure the actual weight values in your runtime. In `runtime/src/config/mod.rs`, add the following code:

    ```rust title="mod.rs"
    // Configure pallet.
    impl pallet_parachain_template::Config for Runtime {
        // ...
        type WeightInfo = pallet_parachain_template::weights::SubstrateWeight<Runtime>;
    }
    ```

## Where to Go Next

- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}
- Read the [FRAME Benchmarking and Weights](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_benchmarking_weight/index.html){target=_blank} reference document, a concise guide which details how weights and benchmarking work

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/develop/parachains/testing/
--- BEGIN CONTENT ---
---
title: Testing Your Polkadot SDK-Based Blockchain
description: Explore comprehensive testing strategies for Polkadot SDK-based blockchains, from setting up test environments to verifying runtime and pallet interactions.
template: index-page.html
---

# Testing Your Polkadot SDK-Based Blockchain

Explore comprehensive testing strategies for Polkadot SDK-based blockchains, from setting up test environments to verifying runtime and pallet interactions.

Testing is essential for building confidence that your network will behave as intended upon deployment. Through these guides, you'll learn to:

- Create effective test environments
- Validate pallet interactions
- Simulate blockchain conditions
- Verify runtime behavior

## In This Section

:::INSERT_IN_THIS_SECTION:::

## Additional Resources

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/develop/parachains/testing/mock-runtime/
--- BEGIN CONTENT ---
---
title: Mock Runtime for Pallet Testing
description: Learn to create a mock environment in the Polkadot SDK for testing intra-pallet functionality and inter-pallet interactions seamlessly.
categories: Parachains
---

# Mock Runtime

## Introduction

Testing is essential in Polkadot SDK development to ensure your blockchain operates as intended and effectively handles various potential scenarios. This guide walks you through setting up an environment to test pallets within the [runtime](/polkadot-protocol/glossary#runtime){target=_blank}, allowing you to evaluate how different pallets, their configurations, and system components interact to ensure reliable blockchain functionality.
## Configuring a Mock Runtime

### Testing Module

The mock runtime includes all the necessary pallets and configurations needed for testing. To ensure proper testing, you must create a module that integrates all components, enabling assessment of interactions between pallets and system elements.

Here's a simple example of how to create a testing module that simulates these interactions:

```rust
pub mod tests {
    use crate::*;
    // ...
}
```

The `crate::*;` snippet imports all the components from your crate (including runtime configurations, pallet modules, and utility functions) into the `tests` module. This allows you to write tests without manually importing each piece, making the code more concise and readable. You can opt to instead create a separate `mock.rs` file to define the configuration for your mock runtime and a companion `tests.rs` file to house the specific logic for each test.

Once the testing module is configured, you can craft your mock runtime using the [`frame_support::runtime`](https://paritytech.github.io/polkadot-sdk/master/frame_support/attr.runtime.html){target=\_blank} macro. This macro allows you to define a runtime environment that will be created for testing purposes:

```rust
pub mod tests {
    use crate::*;

    #[frame_support::runtime]
    mod runtime {
        #[runtime::runtime]
        #[runtime::derive(
            RuntimeCall,
            RuntimeEvent,
            RuntimeError,
            RuntimeOrigin,
            RuntimeFreezeReason,
            RuntimeHoldReason,
            RuntimeSlashReason,
            RuntimeLockId,
            RuntimeTask
        )]
        pub struct Test;

        #[runtime::pallet_index(0)]
        pub type System = frame_system::Pallet<Test>;

        // Other pallets...
    }
}
```

### Genesis Storage

The next step is configuring the genesis storage, which is the initial state of your runtime. Genesis storage sets the starting conditions for the runtime, defining how pallets are configured before any blocks are produced. You can only customize the initial state of those items that implement the [`#[pallet::genesis_config]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.genesis_config.html){target=\_blank} and [`#[pallet::genesis_build]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.genesis_build.html){target=\_blank} macros within their respective pallets.

In Polkadot SDK, you can create this storage using the [`BuildStorage`](https://paritytech.github.io/polkadot-sdk/master/sp_runtime/trait.BuildStorage.html){target=\_blank} trait from the [`sp_runtime`](https://paritytech.github.io/polkadot-sdk/master/sp_runtime){target=\_blank} crate. This trait is essential for building the configuration that initializes the blockchain's state.

The function `new_test_ext()` demonstrates setting up this environment. It uses `frame_system::GenesisConfig::<Test>::default()` to generate a default genesis configuration for the runtime, followed by `.build_storage()` to create the initial storage state. This storage is then converted into a format usable by the testing framework, [`sp_io::TestExternalities`](https://paritytech.github.io/polkadot-sdk/master/sp_io/type.TestExternalities.html){target=\_blank}, allowing tests to be executed in a simulated blockchain environment.
Here's the code that sets the genesis storage configuration:

```rust
pub mod tests {
    use crate::*;
    use sp_runtime::BuildStorage;

    #[frame_support::runtime]
    mod runtime {
        #[runtime::runtime]
        #[runtime::derive(
            RuntimeCall,
            RuntimeEvent,
            RuntimeError,
            RuntimeOrigin,
            RuntimeFreezeReason,
            RuntimeHoldReason,
            RuntimeSlashReason,
            RuntimeLockId,
            RuntimeTask
        )]
        pub struct Test;

        #[runtime::pallet_index(0)]
        pub type System = frame_system::Pallet<Test>;

        // Other pallets...
    }

    pub fn new_test_ext() -> sp_io::TestExternalities {
        frame_system::GenesisConfig::<Test>::default()
            .build_storage()
            .unwrap()
            .into()
    }
}
```

You can also customize the genesis storage to set initial values for your runtime pallets. For example, you can set the initial balance for accounts like this:

```rust
// Build genesis storage according to the runtime's configuration
pub fn new_test_ext() -> sp_io::TestExternalities {
    // Define the initial balances for accounts
    let initial_balances: Vec<(AccountId32, u128)> = vec![
        (AccountId32::from([0u8; 32]), 1_000_000_000_000),
        (AccountId32::from([1u8; 32]), 2_000_000_000_000),
    ];

    let mut t = frame_system::GenesisConfig::<Test>::default()
        .build_storage()
        .unwrap();

    // Adding balances configuration to the genesis config
    pallet_balances::GenesisConfig::<Test> {
        balances: initial_balances,
    }
    .assimilate_storage(&mut t)
    .unwrap();

    t.into()
}
```

For a more idiomatic approach, see the [Your First Pallet](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/guides/your_first_pallet/index.html#better-test-setup){target=\_blank} guide from the Polkadot SDK Rust documentation.

### Pallet Configuration

Each pallet in the mocked runtime requires an associated configuration, specifying the types and values it depends on to function. These configurations often use basic or primitive types (e.g., `u32`, `bool`) instead of more complex types like structs or traits, ensuring the setup remains straightforward and manageable.

```rust
#[derive_impl(frame_system::config_preludes::TestDefaultConfig)]
impl frame_system::Config for Test {
    // ...
    type Index = u64;
    type BlockNumber = u64;
    type Hash = H256;
    type Hashing = BlakeTwo256;
    type AccountId = u64;
    // ...
}

impl pallet_template::Config for Test {
    type RuntimeEvent = RuntimeEvent;
    type WeightInfo = ();
    // ...
}
```

The configuration should be set for each pallet existing in the mocked runtime. Types are deliberately simplified to streamline the testing process. For example, `AccountId` is `u64`, meaning a valid account address can be an unsigned integer:

```rust
let alice_account: u64 = 1;
```

## Where to Go Next

With the mock environment in place, developers can now test and explore how pallets interact and ensure they work seamlessly together. For further details about mocking runtimes, see the following [Polkadot SDK docs guide](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/guides/your_first_pallet/index.html#your-first-test-runtime){target=\_blank}.
- Guide __Pallet Testing__ --- Learn how to efficiently test pallets in the Polkadot SDK, ensuring your pallet operations are reliable and secure. [:octicons-arrow-right-24: Reference](/develop/parachains/testing/pallet-testing/)
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/develop/parachains/testing/pallet-testing/
--- BEGIN CONTENT ---
---
title: Pallet Testing
description: Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallet's operations.
categories: Parachains
---

# Pallet Testing

## Introduction

Unit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. The Polkadot SDK offers a set of APIs to create a test environment to simulate runtime and mock transaction execution for extrinsics and queries.

To begin unit testing, you must first set up a mock runtime that simulates blockchain behavior, incorporating the necessary pallets. For a deeper understanding, consult the [Mock Runtime](/develop/parachains/testing/mock-runtime/){target=\_blank} guide.

## Writing Unit Tests

Once the mock runtime is in place, the next step is to write unit tests that evaluate the functionality of your pallet. Unit tests allow you to test specific pallet features in isolation, ensuring that each function behaves correctly under various conditions. These tests typically reside in your pallet module's `tests.rs` file.

Unit tests in the Polkadot SDK use the Rust testing framework, and the mock runtime you've defined earlier will serve as the test environment. Below are the typical steps involved in writing unit tests for a pallet. The tests confirm that:

- **Pallets initialize correctly** - at the start of each test, the system should initialize with block number 0, and the pallets should be in their default states
- **Pallets modify each other's state** - the second test shows how one pallet can trigger changes in another pallet's internal state, confirming proper cross-pallet interactions
- **State transitions between blocks are seamless** - by simulating block transitions, the tests validate that the runtime responds correctly to changes in the block number

Testing pallet interactions within the runtime is critical for ensuring the blockchain behaves as expected under real-world conditions. Writing integration tests allows validation of how pallets function together, preventing issues that might arise when the system is fully assembled. This approach provides a comprehensive view of the runtime's functionality, ensuring the blockchain is stable and reliable.

### Test Initialization

Each test starts by initializing the runtime environment, typically using the `new_test_ext()` function, which sets up the mock storage and environment.

```rust
#[test]
fn test_pallet_functionality() {
    new_test_ext().execute_with(|| {
        // Test logic goes here
    });
}
```

### Function Call Testing

Call the pallet's extrinsics or functions to simulate user interaction or internal logic. Use the `assert_ok!` macro to check for successful execution and `assert_err!` to verify that errors are correctly handled.
```rust
#[test]
fn it_works_for_valid_input() {
    new_test_ext().execute_with(|| {
        // Call an extrinsic or function
        assert_ok!(TemplateModule::some_function(RuntimeOrigin::signed(1), valid_param));
    });
}

#[test]
fn it_fails_for_invalid_input() {
    new_test_ext().execute_with(|| {
        // Call an extrinsic with invalid input and expect an error
        assert_err!(
            TemplateModule::some_function(RuntimeOrigin::signed(1), invalid_param),
            Error::<Test>::InvalidInput
        );
    });
}
```

### Storage Testing

After calling a function or extrinsic in your pallet, it's essential to verify that the state changes in the pallet's storage match the expected behavior to ensure data is updated correctly based on the actions taken. The following example shows how to test the storage behavior before and after the function call:

```rust
#[test]
fn test_storage_update_on_extrinsic_call() {
    new_test_ext().execute_with(|| {
        // Check the initial storage state (before the call)
        assert_eq!(Something::<Test>::get(), None);
        // Dispatch a signed extrinsic, which modifies storage
        assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));
        // Validate that the storage has been updated as expected (after the call)
        assert_eq!(Something::<Test>::get(), Some(42));
    });
}
```

### Event Testing

It's also crucial to test the events that your pallet emits during execution. By default, events generated in a pallet using the [`#[pallet::generate_deposit]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.generate_deposit.html){target=\_blank} macro are stored under the system's event storage key (`system/events`) as [`EventRecord`](https://paritytech.github.io/polkadot-sdk/master/frame_system/struct.EventRecord.html){target=\_blank} entries. These can be accessed using [`System::events()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.events){target=\_blank} or verified with specific helper methods provided by the system pallet, such as [`assert_has_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_has_event){target=\_blank} and [`assert_last_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_last_event){target=\_blank}.
Here's an example of testing events in a mock runtime:

```rust
#[test]
fn it_emits_events_on_success() {
    new_test_ext().execute_with(|| {
        // Call an extrinsic or function
        assert_ok!(TemplateModule::some_function(RuntimeOrigin::signed(1), valid_param));
        // Verify that the expected event was emitted
        assert!(System::events().iter().any(|record| {
            record.event == RuntimeEvent::TemplateModule(TemplateEvent::SomeEvent)
        }));
    });
}
```

Some key considerations are:

- **Block number** - events are not emitted on the genesis block, so you need to set the block number using [`System::set_block_number()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_block_number){target=\_blank} to ensure events are triggered
- **Converting events** - use `.into()` when instantiating your pallet's event to convert it into a generic event type, as required by the system's event storage

## Where to Go Next

- Dive into the full implementation of the [`mock.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/mock.rs){target=\_blank} and [`tests.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/tests.rs){target=\_blank} files in the [Solochain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/solochain){target=_blank}
- Guide __Benchmarking__ --- Explore methods to measure the performance and execution cost of your pallet. [:octicons-arrow-right-24: Reference](/develop/parachains/testing/benchmarking)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/block-explorers/ --- BEGIN CONTENT --- --- title: Block Explorers description: Access PolkaVM explorers like Subscan, BlockScout, and Routescan to track transactions, analyze contracts, and view on-chain data from smart contracts. categories: Smart Contracts, Tooling --- # Block Explorers !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Block explorers serve as comprehensive blockchain analytics platforms that provide access to on-chain data. These web applications function as search engines for blockchain networks, allowing users to query, visualize, and analyze blockchain data in real time through intuitive interfaces. ## Core Functionality These block explorers provide essential capabilities for interacting with smart contracts in Polkadot Hub: - **Transaction tracking** - monitor transaction status, confirmations, fees, and metadata - **Address analysis** - view account balances, transaction history, and associated contracts - **Block information** - examine block contents - **Smart contract interaction** - review contract code, verification status, and interaction history - **Token tracking** - monitor ERC-20, ERC-721, and other token standards with transfer history and holder analytics - **Network statistics** - access metrics on transaction volume, gas usage, and other network parameters ## Available Block Explorers The following block explorers are available for PolkaVM smart contracts, providing specialized tools for monitoring and analyzing contract activity within the Polkadot ecosystem: ### BlockScout BlockScout is an open-source explorer platform with a user-friendly interface adapted for PolkaVM contracts. It excels at detailed contract analytics and provides developers with comprehensive API access. - [Polkadot Hub TestNet BlockScout](https://blockscout-passet-hub.parity-testnet.parity.io/){target=\_blank} ![](/images/develop/smart-contracts/block-explorers/block-explorers-2.webp) --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/connect-to-kusama/ --- BEGIN CONTENT --- --- title: Connect to Kusama description: Explore how to connect to Kusama Hub for developing and testing smart contracts in a live environment with real monetary value. --- # Connect to Kusama !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. For more information about how to connect to a Polkadot network, please check the [Wallets](/develop/smart-contracts/wallets/){target=\_blank} guide. !!! info "Production Environment" Kusama Hub offers a live environment for deploying smart contracts. Please note that the most recent version of Polkadot's Ethereum-compatible stack is available on the TestNet; however, you can also deploy it to the Kusama Hub for production use. ## Networks Details Developers can leverage smart contracts on Kusama Hub for live production deployments. This section outlines the network specifications and connection details. 
=== "Kusama Hub" Network name ```text Kusama Hub ``` --- Currency symbol ```text KSM ``` --- Chain ID ```text 420420418 ``` --- RPC URL ```text https://kusama-asset-hub-eth-rpc.polkadot.io ``` --- Block explorer URL ```text https://blockscout-kusama-asset-hub.parity-chains-scw.parity.io/ ``` --- ## Important Deployment Considerations While the compatibility with regular EVM codebases is still being maximized, some recommendations include: - **Leverage [Hardhat](/develop/smart-contracts/dev-environments/hardhat){target=\_blank}** to compile, deploy, and interact with your contract. - **Use MetaMask** to interact with your dApp (note that using MetaMask can sometimes lead to `Invalid transaction` errors - this is actively being worked on and will be fixed soon). - **Avoid Remix** for deployment as MetaMask enforces a 48kb size limit when using the [Remix IDE](/develop/smart-contracts/dev-environments/remix){target=\_blank}, which is why Hardhat Polkadot is recommended for deployment. Kusama Hub is a live environment. Ensure your contracts are thoroughly tested before deployment, as transactions on Kusama Hub involve real KSM tokens and **cannot be reversed**. ## Where to Go Next For your next steps, explore the various smart contract guides demonstrating how to use and integrate different tools and development environments into your workflow.
- Guide **Deploy your first contract with Hardhat** --- Explore the recommended smart contract development and deployment process on Kusama Hub using Hardhat. [:octicons-arrow-right-24: Build with HardHat](/develop/smart-contracts/dev-environments/hardhat/) - Guide **Interact with the blockchain using viem** --- Use viem for interacting with Ethereum-compatible chains to deploy and interact with smart contracts on Kusama Hub. [:octicons-arrow-right-24: Build with viem](/develop/smart-contracts/libraries/viem/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/connect-to-polkadot/ --- BEGIN CONTENT --- --- title: Connect to Polkadot description: Explore how to connect to Polkadot Hub, configure your wallet, and obtain test tokens for developing and testing smart contracts. categories: Smart Contracts --- # Connect to Polkadot !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. For more information about how to connect to Polkadot Hub, please check the [Wallets for Polkadot Hub](/develop/smart-contracts/wallets/){target=\_blank} guide. ## Networks Details Developers can leverage smart contracts across diverse networks, from TestNets to MainNet. This section outlines the network specifications and connection details for each environment. === "Polkadot Hub TestNet" Network name ```text Polkadot Hub TestNet ``` --- Currency symbol ```text PAS ``` --- Chain ID ```text 420420422 ``` --- RPC URL ```text https://testnet-passet-hub-eth-rpc.polkadot.io ``` --- Block explorer URL ```text https://blockscout-passet-hub.parity-testnet.parity.io/ ``` ## Test Tokens You will need testnet tokens to perform transactions and engage with smart contracts on any chain. Here's how to obtain Paseo (PAS) tokens for testing purposes: 1. Navigate to the [Polkadot Faucet](https://faucet.polkadot.io/?parachain=1111){target=\_blank}. If the desired network is not already selected, choose it from the Network drop-down 2. Copy your address linked to the TestNet and paste it into the designated field ![](/images/develop/smart-contracts/connect-to-polkadot/connect-to-polkadot-1.webp) 3. Click the **Get Some PASs** button to request free test PAS tokens. These tokens will be sent to your wallet shortly ![](/images/develop/smart-contracts/connect-to-polkadot/connect-to-polkadot-2.webp) Now that you have obtained PAS tokens in your wallet, you’re ready to deploy and interact with smart contracts on Polkadot Hub TestNet! These tokens will allow you to pay for gas fees when executing transactions, deploying contracts, and testing your dApp functionality in a secure testnet environment. ## Where to Go Next For your next steps, explore the various smart contract guides demonstrating how to use and integrate different tools and development environments into your workflow.
- Guide __Deploy your first contract with Remix__ --- Explore the smart contract development and deployment process on Polkadot Hub using the Remix IDE. [:octicons-arrow-right-24: Build with Remix IDE](/develop/smart-contracts/dev-environments/remix/) - Guide __Interact with the blockchain with viem__ --- Use viem for interacting with Ethereum-compatible chains, to deploy and interact with smart contracts on Polkadot Hub. [:octicons-arrow-right-24: Build with viem](/develop/smart-contracts/libraries/viem/)
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/develop/smart-contracts/dev-environments/foundry/
--- BEGIN CONTENT ---
---
title: Use Foundry with Polkadot Hub
description: Learn to install, configure, and use foundry-polkadot for smart contract development on Polkadot with PolkaVM bytecode compilation.
---

# Foundry

!!! warning
    Consider that features like Anvil (Foundry's local blockchain) and `forge test` (for running Solidity tests) are not yet supported in `foundry-polkadot`.

## Overview

Foundry is a fast, modular, and extensible toolkit for Ethereum application development written in Rust. It provides a suite of command-line tools, including `forge` for compiling, testing, and deploying smart contracts and `cast` for interacting with blockchains. [`foundry-polkadot`](https://github.com/paritytech/foundry-polkadot/){target=\_blank} is an adaptation explicitly engineered for the Polkadot Hub, tailored for developers already familiar with Foundry who seek to leverage its capabilities within the Polkadot ecosystem.

Additionally, this guide offers detailed information on the `forge` and `cast` commands supported within `foundry-polkadot`, complete with simple, runnable examples for quick reference.

## Installation

The installation process is tailored for the Polkadot variant:

- `foundry-polkadot` is installed via `foundryup-polkadot`, its dedicated installer. To get started, open your terminal and execute:

    ```bash
    curl -L https://raw.githubusercontent.com/paritytech/foundry-polkadot/refs/heads/master/foundryup/install | bash
    ```

This command starts the installation of `foundryup-polkadot`. After installation, run the following command to download the precompiled `foundry-polkadot` binaries:

```bash
foundryup-polkadot
```

This command will install the `forge` and `cast` binaries, which are explained below. Windows users must use a Unix-like terminal environment such as Git BASH or Windows Subsystem for Linux (WSL), as PowerShell and Command Prompt are not currently supported by `foundryup`.

## Compiler Integration

A core divergence lies in the underlying Solidity compiler.

- **`foundry`** is built to interface with the `solc` compiler, which targets the Ethereum Virtual Machine (EVM).
- **`foundry-polkadot`**, in contrast, introduces and primarily utilizes the `resolc` compiler to compile Solidity contracts down to PolkaVM bytecode.
- **Command-Line Flag**: For commands that involve compilation (e.g., `forge build`), you can use the `--resolc` flag to enable `resolc` compilation. For example:

    ```bash
    forge build --resolc
    ```

    This command instructs Forge to use `resolc` instead of `solc`, generating bytecode compatible with PolkaVM.

- **Configuration File**: Alternatively, you can configure `resolc` usage in the `foundry.toml` file by adding:

    ```toml
    [profile.default.resolc]
    resolc_compile = true
    ```

    Setting `resolc_compile = false` reverts to using `solc`, ensuring compatibility with Ethereum projects. By default, `foundry-polkadot` uses `solc` unless `resolc` is explicitly enabled.

`resolc` also exposes specific options for fine-tuning the compilation process, such as `--use-resolc <VERSION>` for specifying a compiler version or path, `-O, --resolc-optimizer-mode <MODE>` for setting optimization levels, and `--heap-size <SIZE>` and `--stack-size <SIZE>` for configuring contract memory.

## Command-Line Interface (CLI)

`foundry-polkadot` preserves the familiar `forge` and `cast` subcommand structure.
However, it's crucial to note that commands which involve compilation (such as `create`, `bind`, `build`, and `inspect`) will yield different output when `resolc` is utilized, as the generated bytecode is specifically designed for PolkaVM rather than the EVM.

## Unsupported or Modified Features

Not all functionalities from the original Foundry are present or behave identically in `foundry-polkadot`:

- **Currently Unsupported**:
    - Compilation of Yul code is not yet supported.
    - Support for factory contract deployment is a known issue that is currently unresolved.
- **Broader Feature Limitations**: Integration with `Anvil` and `Chisel` (Foundry's local blockchain and EVM toolkit, respectively) is not available. This limitation directly impacts the support for several key commands, including `forge test` for running tests, `forge snapshot` for creating blockchain state snapshots, and `forge script` for complex deployment and interaction scripts.
- **Modified Feature**: The most notable modification is in the **compilation output**. When `resolc` is employed, the resulting bytecode will fundamentally differ from that generated by `solc`, reflecting PolkaVM's distinct architectural requirements.

## Set up a Project

Initialize a new project using `forge init`:

```bash
forge init my-polkadot-project
cd my-polkadot-project
```

This command creates a complete project structure with the following components:

- **`src/`** - Contains the Solidity smart contracts (includes a sample `Counter.sol` contract by default)
- **`lib/`** - Houses external dependencies and libraries (`forge-std` testing library is included)
- **`script/`** - Stores deployment and interaction scripts (includes `Counter.s.sol` deployment script by default)
- **`test/`** - Contains your contract tests (includes `Counter.t.sol` test file by default)
- **`foundry.toml`** - Main configuration file for compiler settings, network configurations, and project preferences

The default project includes a simple `Counter` contract that demonstrates basic state management through increment and decrement functions, along with corresponding tests and deployment scripts to help you get started quickly.

## Compile a Project

Compile contracts using `forge build`:

```bash
forge build --resolc
```

!!!note
    You can still use `forge build` for compiling to regular EVM bytecode. PolkaVM bytecode starts with the `0x505` prefix.

Inspect compiled artifacts with:

```bash
forge inspect Counter bytecode --resolc
```

If successful, you will see the following output:
```
forge inspect Counter bytecode --resolc

0x50564d00008213000000000000010700c13000c0008004808f08000000000e0000001c0000002a0000003500000040000000520000005d00000063616c6c5f646174615f636f707963616c6c5f646174615f6c6f616463616c6c5f646174615f73697a65676574...
```
## Deploy a Contract

Deploy contracts using `forge create`:

```bash
forge create Counter \
  --rpc-url INSERT_RPC_URL \
  --private-key INSERT_PRIVATE_KEY \
  --resolc
```

If the operation completes successfully, you'll see the following output (for example, to deploy to the Passet Hub chain):
```
forge create Counter \
  --rpc-url https://testnet-passet-hub-eth-rpc.polkadot.io \
  --private-key <INSERT_PRIVATE_KEY> \
  --resolc

Compiling...
Compiler run successful!
```
For contracts with constructor arguments:

```bash
forge create MyToken \
  --rpc-url INSERT_RPC_URL \
  --private-key INSERT_PRIVATE_KEY \
  --constructor-args "MyToken" "MTK" 1000000 \
  --resolc
```

!!! note "Network Compatibility"
    Use the `--resolc` flag when deploying to PolkaVM-compatible networks. Omit it for Ethereum-compatible networks.

## Supported `foundry-polkadot` Commands

This section provides a detailed breakdown of the `forge` and `cast` commands supported in `foundry-polkadot`.

### Forge Commands

* **`init`**
    * **Command**: `forge init <PROJECT_NAME>`
    * **Description**: Initializes a new Foundry project in the current directory, setting up the basic project structure and installing standard libraries.
* **`bind`**
    * **Command**: `forge bind [--resolc]`
    * **Description**: Generates type-safe Rust bindings for your Solidity contracts. Use `--resolc` to ensure compilation with the `resolc` compiler for PolkaVM compatibility.
* **`bind-json`**
    * **Command**: `forge bind-json [--resolc]`
    * **Description**: Generates JSON bindings for your Solidity contracts. Use `--resolc` for `resolc`-based compilation.
* **`build`**
    * **Command**: `forge build [--resolc]`
    * **Description**: Compiles all Solidity contracts in your project. Specify `--resolc` to compile for PolkaVM.
* **`cache clean`**
    * **Command**: `forge cache clean`
    * **Description**: Clears the Foundry cache directory.
* **`cache ls`**
    * **Command**: `forge cache ls`
    * **Description**: Lists the contents of the Foundry cache.
* **`clean`**
    * **Command**: `forge clean`
    * **Description**: Removes all build artifacts from the project's `out` directory.
* **`compiler resolve`**
    * **Command**: `forge compiler resolve [--resolc]`
    * **Description**: Resolves and displays the versions of Solidity compilers Foundry is using. Use `--resolc` to also check for `resolc`.
* **`config`**
    * **Command**: `forge config`
    * **Description**: Displays the current Foundry project configuration, including settings from `foundry.toml`.
* **`create`**
    * **Command**: `forge create [OPTIONS] <CONTRACT>`
    * **Required Parameters**: `<CONTRACT>` (the name of the contract to deploy)
    * **Description**: Deploys a new contract to a specified blockchain network. The `--resolc` flag ensures it's compiled for PolkaVM. You'll typically need to provide an RPC URL, a private key for the deployer account, and potentially constructor arguments.
* **`doc`**
    * **Command**: `forge doc`
    * **Description**: Generates documentation for your Solidity contracts.
* **`flatten`**
    * **Command**: `forge flatten [OPTIONS] <FILE>`
    * **Required Parameters**: `<FILE>` (the path to the Solidity file)
    * **Description**: Combines all imports of a Solidity file into a single file, useful for deployment or verification.
* **`fmt`**
    * **Command**: `forge fmt`
    * **Description**: Formats Solidity code according to a predefined style.
* **`geiger`**
    * **Command**: `forge geiger <PATH>`
    * **Required Parameters**: `<PATH>` (the path to the Solidity file)
    * **Description**: Analyzes Solidity code for potential security vulnerabilities and gas inefficiencies.
* **`generate test`**
    * **Command**: `forge generate test --contract-name <CONTRACT_NAME>`
    * **Required Parameters**: `--contract-name <CONTRACT_NAME>` (the name of the contract for which to generate a test)
    * **Description**: Creates a new test file with boilerplate code for a specified contract.
* **`generate-fig-spec`**
    * **Command**: `forge generate-fig-spec`
    * **Description**: Generates a Fig specification for CLI autocompletion tools.
* **`inspect`**
    * **Command**: `forge inspect <CONTRACT> <FIELD> [--resolc]`
    * **Required Parameters**: `<CONTRACT>` (the contract to inspect), `<FIELD>` (e.g., `bytecode`, `abi`, `methods`, `events`)
    * **Description**: Displays various artifacts of a compiled contract. Use `--resolc` to inspect `resolc`-compiled artifacts; the bytecode will start with `0x505`.
* **`install`**
    * **Command**: `forge install <DEPENDENCY>`
    * **Description**: Installs a Solidity library or dependency from a Git repository.
* **`update`**
    * **Command**: `forge update [DEPENDENCY]`
    * **Description**: Updates installed dependencies. If a repository is specified, only that one is updated.
* **`remappings`**
    * **Command**: `forge remappings`
    * **Description**: Lists the currently configured Solidity compiler remappings.
* **`remove`**
    * **Command**: `forge remove <DEPENDENCY>`
    * **Description**: Removes an installed Solidity dependency. Use `--force` to remove without confirmation.
* **`selectors upload`**
    * **Command**: `forge selectors upload [--all]`
    * **Description**: Uploads function selectors from compiled contracts to OpenChain. Use `--all` to upload for all contracts.
* **`selectors list`**
    * **Command**: `forge selectors list`
    * **Description**: Lists all known function selectors for contracts in the project.
* **`selectors find`**
    * **Command**: `forge selectors find <SELECTOR>`
    * **Description**: Searches for a function signature given its 4-byte selector.
* **`selectors cache`**
    * **Command**: `forge selectors cache`
    * **Description**: Caches function selectors for faster lookup.
* **`tree`**
    * **Command**: `forge tree`
    * **Description**: Displays the dependency tree of your Solidity contracts.

!!!warning "Non-working Commands"
    Consider that some foundry commands are not yet supported in `foundry-polkadot`:

    * **`clone`**: This command is not supported in `foundry-polkadot`.
    * **`coverage`**: Code coverage analysis is not supported.
    * **`snapshot`**: Creating blockchain state snapshots is not supported.
    * **`test`**: Running Solidity tests is not supported.

### Cast Commands

* **`4byte`**
    * **Command**: `cast 4byte [OPTIONS] [TOPIC_0]`
    * **Description**: Decodes a 4-byte function selector into its human-readable function signature.
* **`4byte-event`**
    * **Command**: `cast 4byte-event [OPTIONS] [TOPIC_0]`
    * **Description**: Decodes a 4-byte event topic into its human-readable event signature.
* **`abi-encode`**
    * **Command**: `cast abi-encode <SIG> [ARGS]...`
    * **Required Parameters**: `<SIG>` (the function signature), `[ARGS]` (arguments to encode)
    * **Description**: ABI-encodes function arguments according to a given signature.
* **`address-zero`**
    * **Command**: `cast address-zero`
    * **Description**: Returns the zero address (0x00...00).
* **`age`**
    * **Command**: `cast age [OPTIONS] [BLOCK]`
    * **Description**: Converts a block number or tag (e.g., `latest`) into its timestamp.
* **`balance`**
    * **Command**: `cast balance [OPTIONS] <ADDRESS>`
    * **Required Parameters**: `<ADDRESS>` (the address to check)
    * **Description**: Retrieves the native token balance of a given address on the specified RPC network.
* **`base-fee`**
    * **Command**: `cast base-fee [OPTIONS] [BLOCK]`
    * **Description**: Retrieves the base fee per gas for a specific block (defaults to `latest`).
* **`block`**
    * **Command**: `cast block [OPTIONS] [BLOCK]`
    * **Description**: Retrieves comprehensive details about a specific block (defaults to `latest`).
* **`block-number`**
    * **Command**: `cast block-number [OPTIONS] [BLOCK]`
    * **Description**: Retrieves the number of the latest or a specified block.
* **`call`**
    * **Command**: `cast call [OPTIONS] <TO> <SIG> [ARGS]...`
    * **Description**: Executes a read-only (constant) function call on a contract. No transaction is sent to the network.
* **`chain`**
    * **Command**: `cast chain [OPTIONS]`
    * **Description**: Displays the human-readable name of the connected blockchain.
* **`chain-id`**
    * **Command**: `cast chain-id [OPTIONS]`
    * **Description**: Displays the chain ID of the connected blockchain.
* **`client`**
    * **Command**: `cast client [OPTIONS]`
    * **Description**: Retrieves information about the connected RPC client (node software).
* **`code`**
    * **Command**: `cast code [OPTIONS] <ADDRESS>`
    * **Required Parameters**: `<ADDRESS>` (the contract address)
    * **Description**: Retrieves the bytecode deployed at a given contract address.
* **`codesize`**
    * **Command**: `cast codesize [OPTIONS] <ADDRESS>`
    * **Required Parameters**: `<ADDRESS>` (the contract address)
    * **Description**: Retrieves the size of the bytecode deployed at a given contract address.
* **`compute-address`**
    * **Command**: `cast compute-address [OPTIONS] <ADDRESS>`
    * **Required Parameters**: `<ADDRESS>` (the deployer's address)
    * **Description**: Computes the predicted contract address based on the deployer's address and nonce.
* **`decode-abi`**
    * **Command**: `cast decode-abi <SIG> <DATA>`
    * **Required Parameters**: `<SIG>` (the function signature), `<DATA>` (the ABI-encoded data)
    * **Description**: Decodes ABI-encoded output data from a contract call given its signature.
* **`decode-calldata`**
    * **Command**: `cast decode-calldata <SIG> <CALLDATA>`
    * **Required Parameters**: `<SIG>` (the function signature), `<CALLDATA>` (the raw calldata)
    * **Description**: Decodes raw calldata into human-readable arguments using a function signature.
* **`decode-error`**
    * **Command**: `cast decode-error <DATA> [--sig <SIG>]`
    * **Required Parameters**: `<DATA>` (the error data)
    * **Description**: Decodes a custom error message from a transaction revert. You may need to provide the error signature.
* **`decode-event`**
    * **Command**: `cast decode-event <DATA> [--sig <SIG>]`
    * **Required Parameters**: `<DATA>` (the event data)
    * **Description**: Decodes event data from a transaction log.
* **`estimate`**
    * **Command**: `cast estimate [OPTIONS] [TO] [SIG] [ARGS]...`
    * **Required Parameters**: `[TO]` (the recipient address or contract), `[SIG]` (function signature), `[ARGS]` (arguments)
    * **Description**: Estimates the gas cost for a transaction or function call.
* **`find-block`**
    * **Command**: `cast find-block [OPTIONS] <TIMESTAMP>`
    * **Required Parameters**: `<TIMESTAMP>` (a Unix timestamp)
    * **Description**: Finds the closest block number to a given Unix timestamp.
* **`gas-price`**
    * **Command**: `cast gas-price [OPTIONS]`
    * **Description**: Retrieves the current average gas price on the network.
* **`generate-fig-spec`**
    * **Command**: `cast generate-fig-spec`
    * **Description**: Generates a Fig specification for CLI autocompletion.
* **`index-string`**
    * **Command**: `cast index-string <STRING>`
    * **Description**: Computes the Keccak-256 hash of a string, useful for event topics.
* **`index-erc7201`**
    * **Command**: `cast index-erc7201 <ID>`
    * **Description**: Computes the hash for an ERC-7201 identifier.
* **`logs`**
    * **Command**: `cast logs [OPTIONS] [SIG_OR_TOPIC] [TOPICS_OR_ARGS]...`
    * **Required Parameters**: `[SIG_OR_TOPIC]` (a signature or topic hash)
    * **Description**: Filters and displays event logs from transactions.
* **`max-int`**
    * **Command**: `cast max-int`
    * **Description**: Displays the maximum value for a signed 256-bit integer.
* **`max-uint`** * **Command**: `cast max-uint` * **Description**: Displays the maximum value for an unsigned 256-bit integer. * **`min-int`** * **Command**: `cast min-int` * **Description**: Displays the minimum value for a signed 256-bit integer. * **`mktx`** * **Command**: `cast mktx [OPTIONS] [TO] [SIG] [ARGS]...` * **Required Parameters**: `[TO]` (the recipient address or contract) * **Description**: Creates a raw, signed transaction that can be broadcast later. * **`decode-transaction`** * **Command**: `cast decode-transaction [OPTIONS] [TX]` * **Required Parameters**: `[TX]` (the raw transaction hex string) * **Description**: Decodes a raw transaction hex string into its human-readable components. * **`namehash`** * **Command**: `cast namehash <NAME>` * **Description**: Computes the ENS (Ethereum Name Service) namehash for a given name. * **`nonce`** * **Command**: `cast nonce [OPTIONS] <ADDRESS>` * **Required Parameters**: `<ADDRESS>` (the address to check) * **Description**: Retrieves the transaction count (nonce) for a given address. * **`parse-bytes32-address`** * **Command**: `cast parse-bytes32-address <BYTES>` * **Description**: Parses a 32-byte hex string (e.g., from `bytes32`) into an Ethereum address. * **`parse-bytes32-string`** * **Command**: `cast parse-bytes32-string <BYTES>` * **Description**: Parses a 32-byte hex string into a human-readable string. * **`parse-units`** * **Command**: `cast parse-units <VALUE> [UNIT]` * **Description**: Converts a human-readable amount into its smallest unit (e.g., Ether to Wei). Defaults to `ether`. * **`pretty-calldata`** * **Command**: `cast pretty-calldata [OPTIONS] <CALLDATA>` * **Required Parameters**: `<CALLDATA>` (the calldata hex string) * **Description**: Attempts to pretty-print and decode a raw calldata string into possible function calls. * **`publish`** * **Command**: `cast publish [OPTIONS] <RAW_TX>` * **Description**: Broadcasts a raw, signed transaction to the network. * **`receipt`** * **Command**: `cast receipt [OPTIONS] <TX_HASH>` * **Description**: Retrieves the transaction receipt for a given transaction hash, including status, gas usage, and logs. * **`rpc`** * **Command**: `cast rpc [OPTIONS] <METHOD> [PARAMS]...` * **Required Parameters**: `<METHOD>` (the RPC method to call), `[PARAMS]` (parameters for the method) * **Description**: Makes a direct RPC call to the connected blockchain node. * **`send`** * **Command**: `cast send [OPTIONS] <TO> [SIG] [ARGS]...` * **Required Parameters**: `<TO>` (the recipient address or contract) * **Description**: Sends a transaction to a contract or address, executing a function or transferring value. * **`sig`** * **Command**: `cast sig <SIG>` * **Required Parameters**: `<SIG>` (the full function signature string) * **Description**: Computes the 4-byte function selector for a given function signature. * **`sig-event`** * **Command**: `cast sig-event <EVENT_SIG>` * **Required Parameters**: `<EVENT_SIG>` (the full event signature string) * **Description**: Computes the Keccak-256 hash (topic) for a given event signature. * **`storage`** * **Command**: `cast storage [OPTIONS] <ADDRESS> [SLOT]` * **Required Parameters**: `<ADDRESS>` (the contract address) * **Description**: Retrieves the raw value stored at a specific storage slot of a contract. * **`tx`** * **Command**: `cast tx [OPTIONS] <TX_HASH>` * **Description**: Retrieves comprehensive details about a specific transaction. * **`upload-signature`** * **Command**: `cast upload-signature [OPTIONS] <SIG>` * **Required Parameters**: `<SIG>` (the function or event signature) * **Description**: Uploads a function or event signature to the OpenChain registry. * **`wallet new`** * **Command**: `cast wallet new` * **Description**: Generates a new random Ethereum keypair (private key and address). * **`wallet new-mnemonic`** * **Command**: `cast wallet new-mnemonic` * **Description**: Generates a new BIP-39 mnemonic phrase and derives the first account from it. * **`wallet address`** * **Command**: `cast wallet address [OPTIONS]` * **Description**: Derives and displays the Ethereum address from a private key or mnemonic (if provided). !!!warning "Non-working Commands" Note that some Foundry commands are not yet supported in `foundry-polkadot`: * **`proof`**: This command, used for generating Merkle proofs, is not supported. * **`storage-root`**: This command, used for retrieving the storage root of a contract, is not supported. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/dev-environments/hardhat/ --- BEGIN CONTENT --- --- title: Use Hardhat with Polkadot Hub description: Learn how to create, compile, test, and deploy smart contracts on Polkadot Hub using Hardhat, a powerful development environment for blockchain developers. categories: Smart Contracts, Tooling --- # Hardhat !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**.
- :octicons-code-16:{ .lg .middle } __Test and Deploy with Hardhat__ --- Master Solidity smart contract development with Hardhat. Learn testing, deployment, and network interaction in one comprehensive tutorial.
[:octicons-arrow-right-24: Get Started](/tutorials/smart-contracts/launch-your-first-project/test-and-deploy-with-hardhat){target=\_blank}
!!! note "Contracts Code Blob Size Disclaimer" The maximum contract code blob size on Polkadot Hub networks is _100 kilobytes_, significantly larger than Ethereum’s EVM limit of 24 kilobytes. For detailed comparisons and migration guidelines, see the [EVM vs. PolkaVM](/polkadot-protocol/smart-contract-basics/evm-vs-polkavm/#current-memory-limits){target=\_blank} documentation page. ## Overview Hardhat is a robust development environment for Ethereum-compatible chains that makes smart contract development more efficient. This guide walks you through the essentials of using Hardhat to create, compile, test, and deploy smart contracts on Polkadot Hub. ## Prerequisites Before getting started, ensure you have: - [Node.js](https://nodejs.org/){target=\_blank} (v16.0.0 or later) and npm installed - Basic understanding of Solidity programming - Some PAS test tokens to cover transaction fees (easily obtainable from the [Polkadot faucet](https://faucet.polkadot.io/?parachain=1111){target=\_blank}). To learn how to get test tokens, check out the [Test Tokens](/develop/smart-contracts/connect-to-polkadot#test-tokens){target=\_blank} section ## Set Up Hardhat 1. Create a new directory for your project and navigate into it: ```bash mkdir hardhat-example cd hardhat-example ``` 2. Initialize a new npm project: ```bash npm init -y ``` 3. To interact with Polkadot, Hardhat requires the following plugin to compile contracts to PolkaVM bytecode and to spawn a local node compatible with PolkaVM: ```bash npm install --save-dev @parity/hardhat-polkadot@0.1.8 ``` 4. Create a Hardhat project: ```bash npx hardhat-polkadot init ``` Select **Create a JavaScript project** when prompted and follow the instructions. After that, your project will be created with three main folders: - **`contracts`** - where your Solidity smart contracts live - **`test`** - contains your test files that validate contract functionality - **`ignition`** - deployment modules for safely deploying your contracts to various networks 5. Add the following folders to the `.gitignore` file if they are not already there: ```bash echo '/artifacts-pvm' >> .gitignore echo '/cache-pvm' >> .gitignore echo '/ignition/deployments/' >> .gitignore ``` 6. Finish the setup by installing all the dependencies: ```bash npm install ``` !!! note This last step is needed to set up the `hardhat-polkadot` plugin. It will install the `@parity/hardhat-polkadot` package and all its dependencies. In the future, the plugin will handle this automatically. ## Compile Your Contract The plugin will compile your Solidity contracts for Solidity versions `0.8.0` and higher to be PolkaVM compatible. When compiling your contract, there are two ways to configure your compilation process: - **npm compiler** - uses library [@parity/resolc](https://www.npmjs.com/package/@parity/resolc){target=\_blank} for simplicity and ease of use - **Binary compiler** - uses your local `resolc` binary directly for more control and configuration options To compile your project, follow these instructions: 1. 
Modify your Hardhat configuration file to specify which compilation process you will be using and activate the `polkavm` flag in the Hardhat network: === "npm Configuration" ```javascript title="hardhat.config.js" hl_lines="9-11 14" // hardhat.config.js require('@nomicfoundation/hardhat-toolbox'); require('@parity/hardhat-polkadot'); /** @type import('hardhat/config').HardhatUserConfig */ module.exports = { solidity: '0.8.28', resolc: { compilerSource: 'npm', }, networks: { hardhat: { polkavm: true, }, }, }; ``` === "Binary Configuration" ```javascript title="hardhat.config.js" hl_lines="9-14 17" // hardhat.config.js require('@nomicfoundation/hardhat-toolbox'); require('@parity/hardhat-polkadot'); /** @type import('hardhat/config').HardhatUserConfig */ module.exports = { solidity: '0.8.28', resolc: { compilerSource: 'binary', settings: { compilerPath: 'INSERT_PATH_TO_RESOLC_COMPILER', }, }, networks: { hardhat: { polkavm: true, }, }, }; ``` For the binary configuration, replace `INSERT_PATH_TO_RESOLC_COMPILER` with the proper path to the binary. To obtain the binary, check the [releases](https://github.com/paritytech/revive/releases){target=\_blank} section of the `resolc` compiler, and download the latest version. The default settings used can be found in the [`constants.ts`](https://github.com/paritytech/hardhat-polkadot/blob/v0.1.5/packages/hardhat-polkadot-resolc/src/constants.ts#L8-L23){target=\_blank} file of the `hardhat-polkadot` source code. You can change them according to your project needs. Generally, the recommended settings for optimized outputs are the following: ```javascript title="hardhat.config.js" hl_lines="4-10" resolc: { ... settings: { optimizer: { enabled: true, parameters: 'z', fallbackOz: true, runs: 200, }, standardJson: true, }, ... } ``` You can check the [`ResolcConfig`](https://github.com/paritytech/hardhat-polkadot/blob/v0.1.5/packages/hardhat-polkadot-resolc/src/types.ts#L26){target=\_blank} for more information about compilation settings. 2. Compile the contract with Hardhat: ```bash npx hardhat compile ``` 3. After successful compilation, you'll see the artifacts generated in the `artifacts-pvm` directory: ```bash ls artifacts-pvm/contracts/*.sol/ ``` You should see JSON files containing the contract ABI and bytecode of the contracts you compiled. ## Set Up a Testing Environment Hardhat allows you to spin up a local testing environment to test and validate your smart contract functionality before deploying to live networks. The `hardhat-polkadot` plugin can spin up a local node with an ETH-RPC adapter for running local tests. For complete isolation and control over the testing environment, you can configure Hardhat to work with a fresh local Substrate node. This approach is ideal when you want to test in a clean environment without any existing state or when you need specific node configurations. Configure a local node setup by adding the node binary path along with the ETH-RPC adapter path: ```javascript title="hardhat.config.js" hl_lines="12-20" // hardhat.config.js require('@nomicfoundation/hardhat-toolbox'); require('@parity/hardhat-polkadot'); /** @type import('hardhat/config').HardhatUserConfig */ module.exports = { ...
networks: { hardhat: { polkavm: true, nodeConfig: { nodeBinaryPath: 'INSERT_PATH_TO_SUBSTRATE_NODE', rpcPort: 8000, dev: true, }, adapterConfig: { adapterBinaryPath: 'INSERT_PATH_TO_ETH_RPC_ADAPTER', dev: true, }, }, }, }; ``` Replace `INSERT_PATH_TO_SUBSTRATE_NODE` and `INSERT_PATH_TO_ETH_RPC_ADAPTER` with the actual paths to your compiled binaries. The `dev: true` flag configures both the node and adapter for development mode. To obtain these binaries, check the [Installation](/develop/smart-contracts/local-development-node#install-the-substrate-node-and-eth-rpc-adapter){target=\_blank} section on the Local Development Node page. Once configured, start your chosen testing environment with: ```bash npx hardhat node ``` This command will launch either the forked network or local node (depending on your configuration) along with the ETH-RPC adapter, providing you with a complete testing environment ready for contract deployment and interaction. By default, the Substrate node will be running on `localhost:8000` and the ETH-RPC adapter on `localhost:8545`. The output will be something like this:
npx hardhat node
Starting server at 127.0.0.1:8000 ../bin/substrate-node --rpc-port=8000 --dev Starting the Eth RPC Adapter at 127.0.0.1:8545 ../bin/eth-rpc --node-rpc-url=ws://localhost:8000 --dev 2025-05-29 13:00:32 Running in --dev mode, RPC CORS has been disabled. 2025-05-29 13:00:32 Running in --dev mode, RPC CORS has been disabled. 2025-05-29 13:00:32 🌐 Connecting to node at: ws://localhost:8000 ... 2025-05-29 13:00:32 Substrate Node 2025-05-29 13:00:32 ✌️ version 3.0.0-dev-f73c228b7a1 2025-05-29 13:00:32 ❤️ by Parity Technologies <admin@parity.io>, 2017-2025 2025-05-29 13:00:32 📋 Chain specification: Development 2025-05-29 13:00:32 🏷 Node name: electric-activity-4221 2025-05-29 13:00:32 👤 Role: AUTHORITY 2025-05-29 13:00:32 💾 Database: RocksDb at /var/folders/f4/7rdt2m9d7j361dm453cpggbm0000gn/T/substrateOaoecu/chains/dev/db/full 2025-05-29 13:00:36 [0] 💸 generated 1 npos voters, 1 from validators and 0 nominators ...
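Before moving on, you can verify that the testing environment is reachable with a quick JSON-RPC call. This is a minimal sanity check assuming the default port `8545` shown above; `eth_chainId` is one of the methods covered in the JSON-RPC APIs reference.
```bash
# Query the local ETH-RPC adapter for the chain id (assumes the default
# port 8545 used above). A JSON result confirms the node and adapter are up.
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```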
## Test Your Contract When testing your contract, be aware that [`@nomicfoundation/hardhat-toolbox/network-helpers`](https://hardhat.org/hardhat-network-helpers/docs/overview){target=\_blank} is not fully compatible with Polkadot Hub's available RPCs. Specifically, Hardhat-only helpers like `time` and `loadFixture` may not work due to missing RPC calls in the node, so avoid them when writing tests. For more details, refer to the [Compatibility](https://github.com/paritytech/hardhat-polkadot/tree/main/packages/hardhat-polkadot-node#compatibility){target=\_blank} section in the `hardhat-polkadot` docs. To run your tests: 1. Update the `hardhat.config.js` file according to the [Set Up a Testing Environment](#set-up-a-testing-environment) section 2. Execute the following command to run your tests: ```bash npx hardhat test ``` ## Deploy to a Local Node Before deploying to a live network, you can deploy your contract to a local node using [Ignition](https://hardhat.org/ignition/docs/getting-started#overview){target=\_blank} modules: 1. Update the Hardhat configuration file to add the local network as a target for local deployment: ```javascript title="hardhat.config.js" hl_lines="13-16" // hardhat.config.js require('@nomicfoundation/hardhat-toolbox'); require('@parity/hardhat-polkadot'); /** @type import('hardhat/config').HardhatUserConfig */ module.exports = { ... networks: { hardhat: { ... }, localNode: { polkavm: true, url: `http://127.0.0.1:8545`, }, }, }; ``` 2. Start a local node: ```bash npx hardhat node ``` This command will spawn a local Substrate node along with the ETH-RPC adapter. 3. In a new terminal window, deploy the contract using Ignition: ```bash npx hardhat ignition deploy ./ignition/modules/MyToken.js --network localNode ``` ## Deploy to a Live Network After testing your contract locally, you can deploy it to a live network. This guide will use the Polkadot Hub TestNet as the target network. Here's how to configure and deploy: 1. Fund your deployment account with enough tokens to cover gas fees. In this case, the needed tokens are PAS (on Polkadot Hub TestNet). You can use the [Polkadot faucet](https://faucet.polkadot.io/?parachain=1111){target=\_blank} to obtain testing tokens. 2. Export your private key and save it in your Hardhat environment: ```bash npx hardhat vars set PRIVATE_KEY "INSERT_PRIVATE_KEY" ``` Replace `INSERT_PRIVATE_KEY` with your actual private key. For further details on exporting your private key, refer to the article [How to export an account's private key](https://support.metamask.io/configure/accounts/how-to-export-an-accounts-private-key/){target=\_blank}. !!! warning Never reveal your private key; anyone with access to it can control your wallet and steal your funds. Store it securely and never share it publicly or commit it to version control systems. 3. Check that your private key has been set up successfully by running: ```bash npx hardhat vars get PRIVATE_KEY ``` 4. Update your Hardhat configuration file with network settings for the Polkadot network you want to target: ```javascript title="hardhat.config.js" hl_lines="18-22" // hardhat.config.js require('@nomicfoundation/hardhat-toolbox'); require('@parity/hardhat-polkadot'); const { vars } = require('hardhat/config'); /** @type import('hardhat/config').HardhatUserConfig */ module.exports = { ... networks: { hardhat: { ... }, localNode: { ...
}, polkadotHubTestnet: { polkavm: true, url: 'https://testnet-passet-hub-eth-rpc.polkadot.io', accounts: [vars.get('PRIVATE_KEY')], }, }, }; ``` 5. Deploy your contract using Ignition: ```bash npx hardhat ignition deploy ./ignition/modules/MyToken.js --network polkadotHubTestnet ``` ## Interact with Your Contract Once deployed, you can create a script to interact with your contract. To do so, create a file called `scripts/interact.js` and add some logic to interact with the contract. For example, for the default `MyToken.sol` contract, you can use the following script, which attaches to the contract at its deployed address, reads the token's name, symbol, and total supply, and prints the deployer's token balance. ```javascript title="interact.js" const hre = require('hardhat'); async function main() { // Get the contract factory const MyToken = await hre.ethers.getContractFactory('MyToken'); // Replace with your deployed contract address const contractAddress = 'INSERT_CONTRACT_ADDRESS'; // Attach to existing contract const token = await MyToken.attach(contractAddress); // Get signers const [deployer] = await hre.ethers.getSigners(); // Read contract state const name = await token.name(); const symbol = await token.symbol(); const totalSupply = await token.totalSupply(); const balance = await token.balanceOf(deployer.address); console.log(`Token: ${name} (${symbol})`); console.log( `Total Supply: ${hre.ethers.formatUnits(totalSupply, 18)} tokens`, ); console.log( `Deployer Balance: ${hre.ethers.formatUnits(balance, 18)} tokens`, ); } main().catch((error) => { console.error(error); process.exitCode = 1; }); ``` Run your interaction script: ```bash npx hardhat run scripts/interact.js --network polkadotHubTestnet ``` ## Where to Go Next Hardhat provides a powerful environment for developing, testing, and deploying smart contracts on Polkadot Hub. Its plugin architecture allows seamless integration with PolkaVM through the `hardhat-polkadot` plugin. Explore more about smart contracts through these resources:
- Guide __Smart Contracts on Polkadot__ --- Dive into advanced smart contract concepts. [:octicons-arrow-right-24: Get Started](/develop/smart-contracts/) - External __Hardhat Documentation__ --- Learn more about Hardhat's advanced features and best practices. [:octicons-arrow-right-24: Get Started](https://hardhat.org/docs){target=\_blank} - External __OpenZeppelin Contracts__ --- Test your skills by deploying contracts with prebuilt templates. [:octicons-arrow-right-24: Get Started](https://www.openzeppelin.com/solidity-contracts){target=\_blank}
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/dev-environments/ --- BEGIN CONTENT --- --- title: Dev Environments description: Explore development environments for building smart contracts on Polkadot, including frameworks and tools to enhance your development workflow. template: index-page.html --- # Dev Environments !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. Explore the tools and frameworks available for building and testing smart contracts on the Polkadot network. These environments streamline the development process, from writing and compiling to testing and deploying smart contracts. The guides in this section will help you evaluate each tool's strengths, making it easier to choose the best fit for your project based on complexity, team expertise, and specific requirements. ## What to Consider Consider the following when evaluating development environments for your workflow: | Development Environment | Web-Based | Installation Required | Best For | Compilation & Deployment | Testing & Debugging | Extensibility | | ----------------------- | ------------------- | ------------------------- | ------------------------------------------- | ------------------------ | ---------------------------- | ---------------------- | | **Remix** | :octicons-check-24: | No | Beginners, quick prototyping | Built-in UI & compiler | Basic tools | Limited plugin support | | **Hardhat** | :octicons-x-24: | Yes (via package manager) | Advanced development, scripting, automation | Script-based | Mocha, Chai, mainnet forking | Highly customizable | ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/dev-environments/remix/ --- BEGIN CONTENT --- --- title: Use the Polkadot Remix IDE description: Explore the smart contract development and deployment process on Asset Hub using Remix IDE, a visual IDE for blockchain developers. categories: Smart Contracts, Tooling --- # Remix IDE !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**.
- :octicons-code-16:{ .lg .middle } __Deploy NFTs Using Remix IDE__ --- Mint your NFT on Polkadot's Asset Hub. Use PolkaVM and OpenZeppelin to bring your digital asset to life with Polkadot Remix IDE.
[:octicons-arrow-right-24: Get Started](/tutorials/smart-contracts/deploy-nft){target=\_blank} - :octicons-code-16:{ .lg .middle } __Deploy ERC20s Using Remix IDE__ --- Mint your custom ERC-20 token on Polkadot's Asset Hub. Leverage PolkaVM and Polkadot Remix IDE to bring your blockchain project to life.
[:octicons-arrow-right-24: Get Started](/tutorials/smart-contracts/deploy-erc20){target=\_blank}
!!! warning The Polkadot Remix IDE's contract compilation functionality is currently limited to Google Chrome. Alternative browsers are not recommended for this task. ## Overview Remix IDE is a robust browser-based development environment for smart contracts. This guide walks you through the essentials of the [Polkadot Remix IDE](https://remix.polkadot.io/){target=\_blank}: compiling, developing, and deploying smart contracts on Asset Hub. ## Prerequisites Before getting started, ensure you have: - A web browser with the [Talisman](https://talisman.xyz/){target=\_blank} extension installed - Basic understanding of Solidity programming - Some WND test tokens to cover transaction fees (easily obtainable from the [Polkadot faucet](https://faucet.polkadot.io/westend?parachain=1000){target=\_blank}) ## Accessing Remix IDE Navigate to [https://remix.polkadot.io/](https://remix.polkadot.io/){target=\_blank}. The interface will load with a default workspace containing sample contracts. ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-1.webp) In this interface, you can access a file explorer, edit your code, interact with various plugins for development, and use a terminal. ## Creating a New Contract To create a new contract using the Polkadot Remix IDE, you can follow these steps: 1. Select the **Create a new file** button in the `contracts` folder ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-2.webp) 2. Name your file with a `.sol` extension, in this case, `Counter.sol` ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-3.webp) 3. Write your Solidity code in the editor You can use the following code as an example: ???- "Counter.sol" ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract Counter { int256 private count; function increment() public { count += 1; } function decrement() public { count -= 1; } function getCount() public view returns (int256) { return count; } } ``` ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-4.webp) ## Compiling Your Contract 1. To compile your contract, you need to: 1. Navigate to the **Solidity Compiler** tab (third icon in the left sidebar) 2. Select **Compile** or use `Ctrl+S` ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-5.webp) !!! note Compilation errors and warnings appear in the terminal panel at the bottom of the screen. 2. After compiling your contract, you can navigate to the **File Explorer** tab (first icon in the left sidebar) and check that: 1. The `artifact` folder is present 2. The `Counter_metadata.json` and the `Counter.json` files have been generated ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-6.webp) ## Deploying Contracts 1. To deploy your contract, you need to: 1. Navigate to the **Deploy & Run Transactions** tab (fourth icon in the left sidebar) 2. Click the **Environment** dropdown 3. Select **Customize this list** ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-7.webp) 2. Enable the **Injected Provider - Talisman** option ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-8.webp) 3. Click the **Environment** dropdown again and select **Injected Provider - Talisman** ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-9.webp) 4.
Click the **Deploy** button and then click **Approve** in the Talisman wallet popup ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-10.webp) 5. Once your contract is deployed successfully, you will see the following output in the Remix terminal: ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-11.webp) ## Interacting with Contracts Once deployed, your contract appears in the **Deployed/Unpinned Contracts** section: 1. Expand the contract to view available methods ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-12.webp) !!! tip Pin your frequently used contracts to the **Pinned Contracts** section for easy access. 2. To interact with the contract, you can select any of the exposed methods ![](/images/develop/smart-contracts/evm-toolkit/dev-environments/remix/remix-13.webp) In this way, you can interact with your deployed contract by reading its state or writing to it. The button color indicates the type of interaction available: - **Red** - modifies state and is payable - **Orange** - modifies state only - **Blue** - reads state ## Where to Go Next The Polkadot Remix IDE offers an environment for developing, compiling, and deploying smart contracts on Asset Hub. Its intuitive interface allows developers to easily write Solidity code, compile contracts, and interact with them directly in the browser. Explore more about smart contracts through these resources:
- Guide __Smart Contracts on Polkadot__ --- Dive into advanced smart contract concepts. [:octicons-arrow-right-24: Get Started](/develop/smart-contracts/) - External __OpenZeppelin Contracts__ --- Test your skills by deploying contracts with prebuilt templates. [:octicons-arrow-right-24: Get Started](https://www.openzeppelin.com/solidity-contracts){target=\_blank}
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/faqs/ --- BEGIN CONTENT --- --- title: Polkadot Hub Smart Contract FAQs description: Find answers to common questions about smart contract development, deployment, and compatibility in the Polkadot Hub ecosystem. categories: Smart Contracts --- # Smart Contracts FAQs !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. !!! note For a list of known incompatibilities, please refer to the [Solidity and Yul IR translation incompatibilities](/polkadot-protocol/smart-contract-basics/evm-vs-polkavm/#solidity-and-yul-ir-translation-incompatibilities){target=\_blank} section. ## General Questions ### What are the different types of smart contracts I can build on Polkadot? Polkadot supports three main smart contract environments: 1. **PolkaVM contracts**: Available on Polkadot Hub, using a RISC-V-based virtual machine with Solidity compatibility. 2. **EVM contracts**: Available on parachains like Moonbeam, Astar, and Acala via the Frontier framework. 3. **Wasm contracts**: Using ink! (Rust-based) or Solidity via the Solang compiler. ### Should I build a smart contract or a parachain? Choose smart contracts if: - You want to deploy quickly without managing consensus. - Your application fits within existing chain functionality. - You prefer familiar development tools (Ethereum ecosystem). - You need to interact with other contracts easily. Choose a parachain if: - You need custom logic that doesn't fit smart contract limitations. - You want full control over governance and upgrades. - You require specialized consensus mechanisms. - You need optimized fee structures. ### What's the difference between Polkadot Hub smart contracts and other EVM chains? Polkadot Hub contracts run on [PolkaVM](/polkadot-protocol/smart-contract-basics/polkavm-design){target=\_blank} instead of EVM: - **Performance**: RISC-V register-based architecture vs. stack-based EVM. - **Resource metering**: Three dimensions (`ref_time`, `proof_size`, `storage_deposit`) vs. single gas metric. - **Memory management**: Hard memory limits per contract vs. gas-based soft limits. - **Account system**: Polkadot's 32-byte accounts with automatic 20-byte address conversion. ## Development Environment ### Can I use my existing Ethereum development tools? Yes. Check out the [Wallets](/develop/smart-contracts/wallets){target=\_blank} page, the [Development Environments](/develop/smart-contracts/dev-environments/){target=\_blank}, and the [Libraries](/develop/smart-contracts/libraries/){target=\_blank} sections for more information. ### How do I set up local development? Check the [Local Development Node](/develop/smart-contracts/local-development-node){target=\_blank} page for further instructions. ### What networks are available for testing and deployment? - **Local Development**: Kitchensink node with Ethereum RPC proxy. - **TestNets**: Polkadot Hub TestNet. ## Technical Implementation ### How do Ethereum addresses work on Polkadot? Polkadot uses a [dual-address system](/polkadot-protocol/smart-contract-basics/evm-vs-polkavm#account-management-comparison){target=\_blank}: - _20-byte Ethereum addresses_ are padded with `0xEE` bytes to create 32-byte Polkadot accounts. - _32-byte Polkadot accounts_ can register mappings to 20-byte addresses. - _Automatic conversion_ happens behind the scenes. - _MetaMask compatibility_ is maintained through the mapping system.
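To make the padding rule above concrete, the sketch below derives the 32-byte account id for a hypothetical 20-byte address by appending twelve `0xEE` bytes. It is illustrative only; the runtime performs this conversion for you behind the scenes.
```bash
# Illustrative only: the dual-address padding described above.
# A 20-byte Ethereum address (hypothetical) plus twelve 0xEE padding
# bytes yields the 32-byte Polkadot account id it maps to.
ETH_ADDR="0x1234567890abcdef1234567890abcdef12345678"
PADDING=$(printf 'ee%.0s' {1..12})   # 24 hex chars = 12 bytes of 0xEE
echo "${ETH_ADDR}${PADDING}"
```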
### What are the key differences in the gas model? PolkaVM uses three resource dimensions: - **`ref_time`**: Computational time (similar to traditional gas). - **`proof_size`**: State proof size for validator verification. - **`storage_deposit`**: Refundable deposit for state storage. Key implications: - Gas values are dynamically scaled based on performance benchmarks. - Cross-contract calls don't respect gas limits (use reentrancy protection). - Storage costs are separate from execution costs. ### How does contract deployment work? PolkaVM deployment differs from EVM: - _Code must be pre-uploaded_ to the chain before instantiation. - _Factory contracts_ need modification to work with pre-uploaded code hashes. - _Two-step process_: Upload code, then instantiate contracts. - _Runtime code generation_ is not supported. ### Which Solidity features are not supported? Limited support for: - **`EXTCODECOPY`**: Only works in constructor code. - **Runtime code modification**: Use on-chain constructors instead. - **Gas stipends**: `address.send()` and `address.transfer()` don't provide reentrancy protection. Unsupported operations: - `pc`, `extcodecopy`, `selfdestruct` - `blobhash`, `blobbasefee` (blob-related operations) ### How do I handle the existential deposit requirement? What it means: - Accounts need a minimum balance, also known as an existential deposit (ED), to remain active. - Accounts below this threshold are automatically deleted. How it's handled: - _Balance queries_ via Ethereum RPC automatically deduct the ED. - _New account transfers_ automatically include ED with transaction fees. - _Contract-to-contract transfers_ draw ED from transaction signer, not sending contract. ## Migration and Compatibility ### Can I migrate my existing Ethereum contracts? Most contracts work without changes: - Standard ERC-20, ERC-721, ERC-1155 tokens. - DeFi protocols and DEXs. - DAOs and governance contracts. May need modifications: - Factory contracts that create other contracts at runtime. - Contracts using `EXTCODECOPY` for runtime code manipulation. - Contracts relying on gas stipends for reentrancy protection. ## Troubleshooting ### Why are my gas calculations different? PolkaVM uses dynamic gas scaling: - Gas values reflect actual performance benchmarks. - Don't hardcode gas values—use flexible calculations. - Cross-contract calls ignore gas limits—implement proper access controls. ### I deployed a contract with MetaMask and got a `code size` error - why? The latest MetaMask update affects the extension’s ability to deploy large contracts. Check the [Wallets](/develop/smart-contracts/wallets){target=\_blank} page for more details. ### I found a bug - where can I log it? Please log any bugs in the [`contract-issues`](https://github.com/paritytech/contract-issues/issues){target=\_blank} repository so developers are aware of them and can address them. ## Known Issues ### Runtime Behavior - **`creationCode` returns hash instead of bytecode**: The Solidity keyword returns a `keccak256` hash rather than the actual creation bytecode. - [Issue #45](https://github.com/paritytech/contract-issues/issues/45){target=\_blank} - **Non-deterministic gas usage**: Gas consumption varies slightly for identical transactions. - [Issue #49](https://github.com/paritytech/contract-issues/issues/49){target=\_blank} - **Precompiles not recognized**: Precompile addresses return a `Contract not found` error.
- [Issue #111](https://github.com/paritytech/contract-issues/issues/111){target=\_blank} ### Development Tools - **`hardhat-polkadot` plugin compilation issues**: Plugin interferes with standard `npx hardhat compile` command. - [Issue #44](https://github.com/paritytech/contract-issues/issues/44){target=\_blank} ### Contract Patterns - **Minimal proxy (EIP-1167) deployment fails**: Standard proxy contracts cannot be deployed on PolkaVM. - [Issue #86](https://github.com/paritytech/contract-issues/issues/86){target=\_blank} ### Compilation - **`SDIV` opcode crash**: Compiler crashes with `Unsupported SDIV` assertion failure. - [Issue #342](https://github.com/paritytech/revive/issues/342){target=\_blank} --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/ --- BEGIN CONTENT --- --- title: Smart Contracts description: Learn about smart contract development in Polkadot, including ink! for Wasm contracts and Solidity support via EVM and PolkaVM on Polkadot Hub and parachains. template: index-page.html --- # Smart Contracts !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. Polkadot allows scalable execution of smart contracts, offering cross-chain compatibility and lower fees than legacy L1 platforms. Polkadot provides developers with flexibility in building smart contracts, supporting both Solidity contracts executed by the [PolkaVM](/polkadot-protocol/smart-contract-basics/polkavm-design#polkavm){target=\_blank} (a Polkadot-native virtual machine for programming languages that can be compiled down to RISC-V) and EVM (Ethereum Virtual Machine), as well as Rust-based contracts using ink!. This section provides tools, resources, and guides for building and deploying smart contracts on parachains. [Parachains](/polkadot-protocol/architecture/parachains/overview/){target=\_blank} are specialized blockchains connected to the relay chain, benefiting from shared security and interoperability. Depending on your language and environment preference, you can develop contracts using Rust/ink! or EVM-based solutions. ## Smart Contract Development Process Follow this step-by-step process to develop and deploy smart contracts in the Polkadot ecosystem: [timeline(polkadot-docs/.snippets/text/develop/smart-contracts/index/index-timeline.json)] ## Additional Resources
- Guide __Smart Contracts Overview__ --- Check out the Smart Contracts overview in the Polkadot ecosystem. [:octicons-arrow-right-24: Reference](/develop/smart-contracts/overview) - External __View the Official ink! Documentation__ --- Learn everything you need to know about developing smart contracts with ink!. [:octicons-arrow-right-24: Reference](https://use.ink/){target=\_blank}
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/json-rpc-apis/ --- BEGIN CONTENT --- --- title: JSON-RPC APIs description: JSON-RPC APIs guide for Polkadot Hub, covering supported methods, parameters, and examples for interacting with the chain. categories: Reference --- # JSON-RPC APIs !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Polkadot Hub provides Ethereum compatibility through its JSON-RPC interface, allowing developers to interact with the chain using familiar Ethereum tooling and methods. This document outlines the supported [Ethereum JSON-RPC methods](https://ethereum.org/en/developers/docs/apis/json-rpc/#json-rpc-methods){target=\_blank} and provides examples of how to use them. This guide uses the Polkadot Hub TestNet endpoint: ```text https://testnet-passet-hub-eth-rpc.polkadot.io ``` ## Available Methods ### eth_accounts Returns a list of addresses owned by the client. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_accounts){target=\_blank}. **Parameters**: None **Example**: ```bash title="eth_accounts" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_accounts", "params":[], "id":1 }' ``` --- ### eth_blockNumber Returns the number of the most recent block. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_blocknumber){target=\_blank}. **Parameters**: None **Example**: ```bash title="eth_blockNumber" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_blockNumber", "params":[], "id":1 }' ``` --- ### eth_call Executes a new message call immediately without creating a transaction. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_call){target=\_blank}. **Parameters**: - `transaction` ++"object"++ - the transaction call object: - `to` ++"string"++ - recipient address of the call. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `data` ++"string"++ - hash of the method signature and encoded parameters. Must be a [data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `from` ++"string"++ - (optional) sender's address for the call. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `gas` ++"string"++ - (optional) gas limit to execute the call. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `gasPrice` ++"string"++ - (optional) gas price per unit of gas. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `value` ++"string"++ - (optional) value in wei to send with the call. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `blockValue` ++"string"++ - (optional) block tag or block number to execute the call at. 
Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} **Example**: ```bash title="eth_call" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_call", "params":[{ "to": "INSERT_RECIPIENT_ADDRESS", "data": "INSERT_ENCODED_CALL" }, "INSERT_BLOCK_VALUE"], "id":1 }' ``` Ensure to replace the `INSERT_RECIPIENT_ADDRESS`, `INSERT_ENCODED_CALL`, and `INSERT_BLOCK_VALUE` with the proper values. --- ### eth_chainId Returns the chain ID used for signing transactions. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_chainid){target=\_blank}. **Parameters**: None **Example**: ```bash title="eth_chainId" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_chainId", "params":[], "id":1 }' ``` --- ### eth_estimateGas Estimates gas required for a transaction. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_estimategas){target=\_blank}. **Parameters**: - `transaction` ++"object"++ - the transaction call object: - `to` ++"string"++ - recipient address of the call. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `data` ++"string"++ - hash of the method signature and encoded parameters. Must be a [data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `from` ++"string"++ - (optional) sender's address for the call. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `gas` ++"string"++ - (optional) gas limit to execute the call. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `gasPrice` ++"string"++ - (optional) gas price per unit of gas. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `value` ++"string"++ - (optional) value in wei to send with the call. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `blockValue` ++"string"++ - (optional) block tag or block number to execute the call at. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} **Example**: ```bash title="eth_estimateGas" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_estimateGas", "params":[{ "to": "INSERT_RECIPIENT_ADDRESS", "data": "INSERT_ENCODED_FUNCTION_CALL" }], "id":1 }' ``` Ensure to replace the `INSERT_RECIPIENT_ADDRESS` and `INSERT_ENCODED_FUNCTION_CALL` with the proper values. --- ### eth_gasPrice Returns the current gas price in Wei. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_gasprice){target=\_blank}.
**Parameters**: None **Example**: ```bash title="eth_gasPrice" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_gasPrice", "params":[], "id":1 }' ``` --- ### eth_getBalance Returns the balance of a given address. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_getbalance){target=\_blank}. **Parameters**: - `address` ++"string"++ - address to query balance. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `blockValue` ++"string"++ - (optional) the block value to be fetched. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} **Example**: ```bash title="eth_getBalance" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getBalance", "params":["INSERT_ADDRESS", "INSERT_BLOCK_VALUE"], "id":1 }' ``` Ensure to replace the `INSERT_ADDRESS` and `INSERT_BLOCK_VALUE` with the proper values. --- ### eth_getBlockByHash Returns information about a block by its hash. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_getblockbyhash){target=\_blank}. **Parameters**: - `blockHash` ++"string"++ – the hash of the block to retrieve. Must be a [32 byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `fullTransactions` ++"boolean"++ – if `true`, returns full transaction details; if `false`, returns only transaction hashes **Example**: ```bash title="eth_getBlockByHash" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getBlockByHash", "params":["INSERT_BLOCK_HASH", INSERT_BOOLEAN], "id":1 }' ``` Ensure to replace the `INSERT_BLOCK_HASH` and `INSERT_BOOLEAN` with the proper values. --- ### eth_getBlockByNumber Returns information about a block by its number. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_getblockbynumber){target=\_blank}. **Parameters**: - `blockValue` ++"string"++ - (optional) the block value to be fetched. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} - `fullTransactions` ++"boolean"++ – if `true`, returns full transaction details; if `false`, returns only transaction hashes **Example**: ```bash title="eth_getBlockByNumber" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getBlockByNumber", "params":["INSERT_BLOCK_VALUE", INSERT_BOOLEAN], "id":1 }' ``` Ensure to replace the `INSERT_BLOCK_VALUE` and `INSERT_BOOLEAN` with the proper values. --- ### eth_getBlockTransactionCountByNumber Returns the number of transactions in a block from a block number. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_getblocktransactioncountbynumber){target=\_blank}. **Parameters**: - `blockValue` ++"string"++ - the block value to be fetched. 
Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} **Example**: ```bash title="eth_getBlockTransactionCountByNumber" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getBlockTransactionCountByNumber", "params":["INSERT_BLOCK_VALUE"], "id":1 }' ``` Ensure to replace the `INSERT_BLOCK_VALUE` with the proper values. --- ### eth_getBlockTransactionCountByHash Returns the number of transactions in a block from a block hash. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_getblocktransactioncountbyhash){target=\_blank}. **Parameters**: - `blockHash` ++"string"++ – the hash of the block to retrieve. Must be a [32 byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string **Example**: ```bash title="eth_getBlockTransactionCountByHash" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getBlockTransactionCountByHash", "params":["INSERT_BLOCK_HASH"], "id":1 }' ``` Ensure to replace the `INSERT_BLOCK_HASH` with the proper values. --- ### eth_getCode Returns the code at a given address. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_getcode){target=\_blank}. **Parameters**: - `address` ++"string"++ - contract or account address to query code. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `blockValue` ++"string"++ - (optional) the block value to be fetched. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block) **Example**: ```bash title="eth_getCode" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getCode", "params":["INSERT_ADDRESS", "INSERT_BLOCK_VALUE"], "id":1 }' ``` Ensure to replace the `INSERT_ADDRESS` and `INSERT_BLOCK_VALUE` with the proper values. --- ### eth_getLogs Returns an array of all logs matching a given filter object. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_getlogs){target=\_blank}. **Parameters**: - `filter` ++"object"++ - the filter object: - `fromBlock` ++"string"++ - (optional) block number or tag to start from. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} - `toBlock` ++"string"++ - (optional) block number or tag to end at. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} - `address` ++"string" or "array of strings"++ - (optional) contract address or a list of addresses from which to get logs. 
Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `topics` ++"array of strings"++ - (optional) array of topics for filtering logs. Each topic can be a single [32 byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string or an array of such strings (meaning OR). - `blockhash` ++"string"++ - (optional) hash of a specific block. Cannot be used with `fromBlock` or `toBlock`. Must be a [32 byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string **Example**: ```bash title="eth_getLogs" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getLogs", "params":[{ "fromBlock": "latest", "toBlock": "latest" }], "id":1 }' ``` --- ### eth_getStorageAt Returns the value from a storage position at a given address. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_getstorageat){target=\_blank}. **Parameters**: - `address` ++"string"++ - contract or account address whose storage to query. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `storageKey` ++"string"++ - position in storage to retrieve data from. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `blockValue` ++"string"++ - (optional) the block value to be fetched. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block) **Example**: ```bash title="eth_getStorageAt" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getStorageAt", "params":["INSERT_ADDRESS", "INSERT_STORAGE_KEY", "INSERT_BLOCK_VALUE"], "id":1 }' ``` Ensure to replace the `INSERT_ADDRESS`, `INSERT_STORAGE_KEY`, and `INSERT_BLOCK_VALUE` with the proper values. --- ### eth_getTransactionCount Returns the number of transactions sent from an address (nonce). [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_gettransactioncount){target=\_blank}. **Parameters**: - `address` ++"string"++ - address to query the transaction count for. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `blockValue` ++"string"++ - (optional) the block value to be fetched. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block) **Example**: ```bash title="eth_getTransactionCount" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getTransactionCount", "params":["INSERT_ADDRESS", "INSERT_BLOCK_VALUE"], "id":1 }' ``` Ensure to replace the `INSERT_ADDRESS` and `INSERT_BLOCK_VALUE` with the proper values. --- ### eth_getTransactionByHash Returns information about a transaction by its hash. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_gettransactionbyhash){target=\_blank}. **Parameters**: - `transactionHash` ++"string"++ - the hash of the transaction.
Must be a [32 byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string **Example**: ```bash title="eth_getTransactionByHash" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getTransactionByHash", "params":["INSERT_TRANSACTION_HASH"], "id":1 }' ``` Ensure to replace the `INSERT_TRANSACTION_HASH` with the proper values. --- ### eth_getTransactionByBlockNumberAndIndex Returns information about a transaction by block number and transaction index. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_gettransactionbyblocknumberandindex){target=\_blank}. **Parameters**: - `blockValue` ++"string"++ - the block value to be fetched. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} - `transactionIndex` ++"string"++ - the index of the transaction in the block. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string **Example**: ```bash title="eth_getTransactionByBlockNumberAndIndex" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getTransactionByBlockNumberAndIndex", "params":["INSERT_BLOCK_VALUE", "INSERT_TRANSACTION_INDEX"], "id":1 }' ``` Ensure to replace the `INSERT_BLOCK_VALUE` and `INSERT_TRANSACTION_INDEX` with the proper values. --- ### eth_getTransactionByBlockHashAndIndex Returns information about a transaction by block hash and transaction index. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_gettransactionbyblockhashandindex){target=\_blank}. **Parameters**: - `blockHash` ++"string"++ – the hash of the block. Must be a [32 byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `transactionIndex` ++"string"++ - the index of the transaction in the block. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string **Example**: ```bash title="eth_getTransactionByBlockHashAndIndex" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getTransactionByBlockHashAndIndex", "params":["INSERT_BLOCK_HASH", "INSERT_TRANSACTION_INDEX"], "id":1 }' ``` Ensure to replace the `INSERT_BLOCK_HASH` and `INSERT_TRANSACTION_INDEX` with the proper values. --- ### eth_getTransactionReceipt Returns the receipt of a transaction by transaction hash. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_gettransactionreceipt){target=\_blank}. **Parameters**: - `transactionHash` ++"string"++ - the hash of the transaction. Must be a [32 byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string **Example**: ```bash title="eth_getTransactionReceipt" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_getTransactionReceipt", "params":["INSERT_TRANSACTION_HASH"], "id":1 }' ``` Ensure to replace the `INSERT_TRANSACTION_HASH` with the proper values. 
--- ### eth_maxPriorityFeePerGas Returns an estimate of the current priority fee per gas, in wei, needed for inclusion in a block. **Parameters**: None **Example**: ```bash title="eth_maxPriorityFeePerGas" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_maxPriorityFeePerGas", "params":[], "id":1 }' ``` --- ### eth_sendRawTransaction Submits a raw transaction. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_sendrawtransaction){target=\_blank}. **Parameters**: - `callData` ++"string"++ - signed transaction data. Must be a [data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string **Example**: ```bash title="eth_sendRawTransaction" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_sendRawTransaction", "params":["INSERT_CALL_DATA"], "id":1 }' ``` Ensure you replace `INSERT_CALL_DATA` with the proper value. --- ### eth_sendTransaction Creates and sends a new transaction. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_sendtransaction){target=\_blank}. **Parameters**: - `transaction` ++"object"++ - the transaction object: - `from` ++"string"++ - address sending the transaction. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `to` ++"string"++ - (optional) recipient address. No need to provide this value when deploying a contract. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `gas` ++"string"++ - (optional, default: `90000`) gas limit for execution. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `gasPrice` ++"string"++ - (optional) gas price per unit. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `value` ++"string"++ - (optional) amount of Ether to send. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `data` ++"string"++ - (optional) contract bytecode or encoded method call. Must be a [data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `nonce` ++"string"++ - (optional) transaction nonce. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string **Example**: ```bash title="eth_sendTransaction" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_sendTransaction", "params":[{ "from": "INSERT_SENDER_ADDRESS", "to": "INSERT_RECIPIENT_ADDRESS", "gas": "INSERT_GAS_LIMIT", "gasPrice": "INSERT_GAS_PRICE", "value": "INSERT_VALUE", "data": "INSERT_INPUT_DATA", "nonce": "INSERT_NONCE" }], "id":1 }' ``` Ensure you replace `INSERT_SENDER_ADDRESS`, `INSERT_RECIPIENT_ADDRESS`, `INSERT_GAS_LIMIT`, `INSERT_GAS_PRICE`, `INSERT_VALUE`, `INSERT_INPUT_DATA`, and `INSERT_NONCE` with the proper values. --- ### eth_syncing Returns an object with syncing data or `false` if not syncing. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_syncing){target=\_blank}.
**Parameters**: None **Example**: ```bash title="eth_syncing" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"eth_syncing", "params":[], "id":1 }' ``` --- ### net_listening Returns `true` if the client is currently listening for network connections, otherwise `false`. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#net_listening){target=\_blank}. **Parameters**: None **Example**: ```bash title="net_listening" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"net_listening", "params":[], "id":1 }' ``` --- ### net_peerCount Returns the number of peers currently connected to the client. **Parameters**: None **Example**: ```bash title="net_peerCount" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"net_peerCount", "params":[], "id":1 }' ``` --- ### net_version Returns the current network ID as a string. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#net_version){target=\_blank}. **Parameters**: None **Example**: ```bash title="net_version" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"net_version", "params":[], "id":1 }' ``` --- ### system_health Returns information about the health of the system. **Parameters**: None **Example**: ```bash title="system_health" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"system_health", "params":[], "id":1 }' ``` --- ### web3_clientVersion Returns the current client version. [Reference](https://ethereum.org/en/developers/docs/apis/json-rpc/#web3_clientversion){target=\_blank}. **Parameters**: None **Example**: ```bash title="web3_clientVersion" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"web3_clientVersion", "params":[], "id":1 }' ``` --- ### debug_traceBlockByNumber Traces a block's execution by its number and returns a detailed execution trace for each transaction. **Parameters**: - `blockValue` ++"string"++ - the block number or tag to trace. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} - `options` ++"object"++ - (optional) an object containing tracer options: - `tracer` ++"string"++ - the name of the tracer to use (e.g., "callTracer", "opTracer"). - Other tracer-specific options may be supported. **Example**: ```bash title="debug_traceBlockByNumber" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"debug_traceBlockByNumber", "params":["INSERT_BLOCK_VALUE", {"tracer": "callTracer"}], "id":1 }' ``` Ensure you replace `INSERT_BLOCK_VALUE` with the proper block number or tag. --- ### debug_traceTransaction Traces the execution of a single transaction by its hash and returns a detailed execution trace. **Parameters**: - `transactionHash` ++"string"++ - the hash of the transaction to trace.
Must be a [32-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `options` ++"object"++ - (optional) an object containing tracer options (e.g., `tracer: "callTracer"`). **Example**: ```bash title="debug_traceTransaction" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"debug_traceTransaction", "params":["INSERT_TRANSACTION_HASH", {"tracer": "callTracer"}], "id":1 }' ``` Ensure you replace `INSERT_TRANSACTION_HASH` with the proper value. --- ### debug_traceCall Executes a new message call and returns a detailed execution trace without creating a transaction on the blockchain. **Parameters**: - `transaction` ++"object"++ - the transaction call object, similar to `eth_call` parameters: - `to` ++"string"++ - recipient address of the call. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `data` ++"string"++ - hash of the method signature and encoded parameters. Must be a [data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `from` ++"string"++ - (optional) sender's address for the call. Must be a [20-byte data](https://ethereum.org/en/developers/docs/apis/json-rpc/#unformatted-data-encoding){target=\_blank} string - `gas` ++"string"++ - (optional) gas limit to execute the call. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `gasPrice` ++"string"++ - (optional) gas price per unit of gas. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `value` ++"string"++ - (optional) value in wei to send with the call. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string - `blockValue` ++"string"++ - (optional) block tag or block number to execute the call at. Must be a [quantity](https://ethereum.org/en/developers/docs/apis/json-rpc/#quantities-encoding){target=\_blank} string or a [default block parameter](https://ethereum.org/en/developers/docs/apis/json-rpc/#default-block){target=\_blank} - `options` ++"object"++ - (optional) an object containing tracer options (e.g., `tracer: "callTracer"`). **Example**: ```bash title="debug_traceCall" curl -X POST https://testnet-passet-hub-eth-rpc.polkadot.io \ -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"debug_traceCall", "params":[{ "from": "INSERT_SENDER_ADDRESS", "to": "INSERT_RECIPIENT_ADDRESS", "data": "INSERT_ENCODED_CALL" }, "INSERT_BLOCK_VALUE", {"tracer": "callTracer"}], "id":1 }' ``` Ensure you replace `INSERT_SENDER_ADDRESS`, `INSERT_RECIPIENT_ADDRESS`, `INSERT_ENCODED_CALL`, and `INSERT_BLOCK_VALUE` with the proper values. --- ## Response Format All responses follow the standard JSON-RPC 2.0 format: ```json { "jsonrpc": "2.0", "id": 1, "result": ...
// The return value varies by method } ``` ## Error Handling If an error occurs, the response will include an error object: ```json { "jsonrpc": "2.0", "id": 1, "error": { "code": -32000, "message": "Error message here" } } ``` --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/libraries/ethers-js/ --- BEGIN CONTENT --- --- title: Deploy Contracts to Polkadot Hub with Ethers.js description: Learn how to interact with Polkadot Hub using Ethers.js, from compiling and deploying Solidity contracts to interacting with deployed smart contracts. categories: Smart Contracts, Tooling --- # Ethers.js !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction [Ethers.js](https://docs.ethers.org/v6/){target=\_blank} is a lightweight library that enables interaction with Ethereum Virtual Machine (EVM)-compatible blockchains through JavaScript. Ethers is widely used as a toolkit to establish connections and read and write blockchain data. This article demonstrates using Ethers.js to interact with and deploy smart contracts to Polkadot Hub. This guide is intended for developers who are familiar with JavaScript and want to interact with Polkadot Hub using Ethers.js. ## Prerequisites Before getting started, ensure you have the following installed: - **Node.js** - v22.13.1 or later, check the [Node.js installation guide](https://nodejs.org/en/download/current/){target=\_blank} - **npm** - v6.13.4 or later (comes bundled with Node.js) - **Solidity** - this guide uses Solidity `^0.8.9` for smart contract development ## Project Structure This project organizes contracts, scripts, and compiled artifacts for easy development and deployment. ```text title="Ethers.js Polkadot Hub" ethers-project ├── contracts │ ├── Storage.sol │ ├── Storage.json │ ├── Storage.polkavm ├── scripts │ ├── connectToProvider.js │ ├── fetchLastBlock.js │ ├── compile.js │ ├── deploy.js │ ├── checkStorage.js │ ├── contract-address.json ├── node_modules/ ├── package.json ├── package-lock.json └── README.md ``` ## Set Up the Project To start working with Ethers.js, create a new folder and initialize your project by running the following commands in your terminal: ```bash mkdir ethers-project cd ethers-project npm init -y ``` ## Install Dependencies Next, run the following command to install the Ethers.js library: ```bash npm install ethers ``` ## Set Up the Ethers.js Provider A [`Provider`](https://docs.ethers.org/v6/api/providers/#Provider){target=\_blank} is an abstraction of a connection to the Ethereum network, allowing you to query blockchain data and send transactions. It serves as a bridge between your application and the blockchain. To interact with Polkadot Hub, you must set up an Ethers.js provider. This provider connects to a blockchain node, allowing you to query blockchain data and interact with smart contracts. Create a `scripts` directory in the root of your project, then add a file named `connectToProvider.js` with the following code: ```js title="scripts/connectToProvider.js" const { JsonRpcProvider } = require('ethers'); const createProvider = (rpcUrl, chainId, chainName) => { const provider = new JsonRpcProvider(rpcUrl, { chainId: chainId, name: chainName, }); return provider; }; const PROVIDER_RPC = { rpc: 'INSERT_RPC_URL', chainId: 'INSERT_CHAIN_ID', name: 'INSERT_CHAIN_NAME', }; createProvider(PROVIDER_RPC.rpc, PROVIDER_RPC.chainId, PROVIDER_RPC.name); ``` !!!
note Replace `INSERT_RPC_URL`, `INSERT_CHAIN_ID`, and `INSERT_CHAIN_NAME` with the appropriate values. For example, to connect to Polkadot Hub TestNet's Ethereum RPC instance, you can use the following parameters: ```js const PROVIDER_RPC = { rpc: 'https://testnet-passet-hub-eth-rpc.polkadot.io', chainId: 420420422, name: 'polkadot-hub-testnet' }; ``` To connect to the provider, execute: ```bash node scripts/connectToProvider.js ``` With the provider set up, you can start querying the blockchain. For instance, to fetch the latest block number: ??? code "Fetch Last Block code" ```js title="scripts/fetchLastBlock.js" const { JsonRpcProvider } = require('ethers'); const createProvider = (rpcUrl, chainId, chainName) => { const provider = new JsonRpcProvider(rpcUrl, { chainId: chainId, name: chainName, }); return provider; }; const PROVIDER_RPC = { rpc: 'https://testnet-passet-hub-eth-rpc.polkadot.io', chainId: 420420422, name: 'polkadot-hub-testnet', }; const main = async () => { try { const provider = createProvider( PROVIDER_RPC.rpc, PROVIDER_RPC.chainId, PROVIDER_RPC.name, ); const latestBlock = await provider.getBlockNumber(); console.log(`Latest block: ${latestBlock}`); } catch (error) { console.error('Error connecting to Polkadot Hub TestNet: ' + error.message); } }; main(); ``` ## Compile Contracts !!! note "Contracts Code Blob Size Disclaimer" The maximum contract code blob size on Polkadot Hub networks is _100 kilobytes_, significantly larger than Ethereum’s EVM limit of 24 kilobytes. For detailed comparisons and migration guidelines, see the [EVM vs. PolkaVM](/polkadot-protocol/smart-contract-basics/evm-vs-polkavm/#current-memory-limits){target=\_blank} documentation page. The `revive` compiler transforms Solidity smart contracts into [PolkaVM](/develop/smart-contracts/overview#native-smart-contracts){target=\_blank} bytecode for deployment on Polkadot Hub. Revive's Ethereum RPC interface allows you to use familiar tools like Ethers.js and MetaMask to interact with contracts. ### Install the Revive Library The [`@parity/resolc`](https://www.npmjs.com/package/@parity/resolc){target=\_blank} library will compile your Solidity code for deployment on Polkadot Hub. Run the following command in your terminal to install the library: ```bash npm install --save-dev @parity/resolc ``` This guide uses `@parity/resolc` version `{{ dependencies.javascript_packages.resolc.version }}`. ### Sample Storage Smart Contract This example demonstrates compiling a `Storage.sol` Solidity contract for deployment to Polkadot Hub. The contract stores a number and lets users update it with a new value. ```solidity title="contracts/Storage.sol" //SPDX-License-Identifier: MIT // Solidity files have to start with this pragma. // It will be used by the Solidity compiler to validate its version. pragma solidity ^0.8.9; contract Storage { // Public state variable to store a number uint256 public storedNumber; /** * Updates the stored number. * * The `public` modifier allows anyone to call this function. * * @param _newNumber - The new value to store.
*/ function setNumber(uint256 _newNumber) public { storedNumber = _newNumber; } } ``` ### Compile the Smart Contract To compile this contract, use the following script: ```js title="scripts/compile.js" const { compile } = require('@parity/resolc'); const { readFileSync, writeFileSync } = require('fs'); const { basename, join } = require('path'); const compileContract = async (solidityFilePath, outputDir) => { try { // Read the Solidity file const source = readFileSync(solidityFilePath, 'utf8'); // Construct the input object for the compiler const input = { [basename(solidityFilePath)]: { content: source }, }; console.log(`Compiling contract: ${basename(solidityFilePath)}...`); // Compile the contract const out = await compile(input); for (const contracts of Object.values(out.contracts)) { for (const [name, contract] of Object.entries(contracts)) { console.log(`Compiled contract: ${name}`); // Write the ABI const abiPath = join(outputDir, `${name}.json`); writeFileSync(abiPath, JSON.stringify(contract.abi, null, 2)); console.log(`ABI saved to ${abiPath}`); // Write the bytecode const bytecodePath = join(outputDir, `${name}.polkavm`); writeFileSync( bytecodePath, Buffer.from(contract.evm.bytecode.object, 'hex'), ); console.log(`Bytecode saved to ${bytecodePath}`); } } } catch (error) { console.error('Error compiling contracts:', error); } }; const solidityFilePath = join(__dirname, '../contracts/Storage.sol'); const outputDir = join(__dirname, '../contracts'); compileContract(solidityFilePath, outputDir); ``` !!! note The script above is tailored to the `Storage.sol` contract. It can be adjusted for other contracts by changing the file name or modifying the ABI and bytecode paths. The ABI (Application Binary Interface) is a JSON representation of your contract's functions, events, and their parameters. It serves as the interface between your JavaScript code and the deployed smart contract, allowing your application to know how to format function calls and interpret returned data. Execute the script above by running: ```bash node scripts/compile.js ``` After executing the script, the Solidity contract will be compiled into the required PolkaVM bytecode format. The ABI and bytecode will be saved into files with `.json` and `.polkavm` extensions, respectively. You can now proceed with deploying the contract to Polkadot Hub, as outlined in the next section. ## Deploy the Compiled Contract To deploy your compiled contract to Polkadot Hub, you'll need a wallet with a private key to sign the deployment transaction. You can create a `deploy.js` script in the `scripts` directory to achieve this. The deployment script can be divided into key components: 1. Set up the required imports and utilities: ```js title="scripts/deploy.js" // Deploy an EVM-compatible smart contract using ethers.js const { writeFileSync, existsSync, readFileSync } = require('fs'); const { join } = require('path'); const { ethers, JsonRpcProvider } = require('ethers'); const codegenDir = join(__dirname); ``` 2. Create a provider to connect to Polkadot Hub: ```js title="scripts/deploy.js" // Creates an Ethereum provider with specified RPC URL and chain details const createProvider = (rpcUrl, chainId, chainName) => { const provider = new JsonRpcProvider(rpcUrl, { chainId: chainId, name: chainName, }); return provider; }; ``` 3.
Set up functions to read contract artifacts: ```js title="scripts/deploy.js" // Reads and parses the ABI file written by the compile step const getAbi = (contractName) => { try { return JSON.parse( readFileSync(join(codegenDir, '../contracts', `${contractName}.json`), 'utf8'), ); } catch (error) { console.error( `Could not find ABI for contract ${contractName}:`, error.message, ); throw error; } }; // Reads the compiled bytecode for a given contract const getByteCode = (contractName) => { try { const bytecodePath = join( codegenDir, '../contracts', `${contractName}.polkavm`, ); return `0x${readFileSync(bytecodePath).toString('hex')}`; } catch (error) { console.error( `Could not find bytecode for contract ${contractName}:`, error.message, ); throw error; } }; ``` 4. Create the main deployment function: ```js title="scripts/deploy.js" const deployContract = async (contractName, mnemonic, providerConfig) => { console.log(`Deploying ${contractName}...`); try { // Step 1: Set up provider and wallet const provider = createProvider( providerConfig.rpc, providerConfig.chainId, providerConfig.name, ); const walletMnemonic = ethers.Wallet.fromPhrase(mnemonic); const wallet = walletMnemonic.connect(provider); // Step 2: Create and deploy the contract const factory = new ethers.ContractFactory( getAbi(contractName), getByteCode(contractName), wallet, ); const contract = await factory.deploy(); await contract.waitForDeployment(); // Step 3: Save deployment information const address = await contract.getAddress(); console.log(`Contract ${contractName} deployed at: ${address}`); const addressesFile = join(codegenDir, 'contract-address.json'); const addresses = existsSync(addressesFile) ? JSON.parse(readFileSync(addressesFile, 'utf8')) : {}; addresses[contractName] = address; writeFileSync(addressesFile, JSON.stringify(addresses, null, 2), 'utf8'); } catch (error) { console.error(`Failed to deploy contract ${contractName}:`, error); } }; ``` 5. Configure and execute the deployment: ```js title="scripts/deploy.js" const providerConfig = { rpc: 'https://testnet-passet-hub-eth-rpc.polkadot.io', chainId: 420420422, name: 'polkadot-hub-testnet', }; const mnemonic = 'INSERT_MNEMONIC'; deployContract('Storage', mnemonic, providerConfig); ``` !!! note A mnemonic (seed phrase) is a series of words that can generate multiple private keys and their corresponding addresses. It's used here to derive the wallet that will sign and pay for the deployment transaction. **Always keep your mnemonic secure and never share it publicly**. Ensure you replace the `INSERT_MNEMONIC` placeholder with your actual mnemonic. ???
code "View complete script" ```js title="scripts/deploy.js" // Deploy an EVM-compatible smart contract using ethers.js const { writeFileSync, existsSync, readFileSync } = require('fs'); const { join } = require('path'); const { ethers, JsonRpcProvider } = require('ethers'); const codegenDir = join(__dirname); // Creates an Ethereum provider with specified RPC URL and chain details const createProvider = (rpcUrl, chainId, chainName) => { const provider = new JsonRpcProvider(rpcUrl, { chainId: chainId, name: chainName, }); return provider; }; // Reads and parses the ABI file for a given contract const getAbi = (contractName) => { try { return JSON.parse( readFileSync(join(codegenDir, `${contractName}.json`), 'utf8'), ); } catch (error) { console.error( `Could not find ABI for contract ${contractName}:`, error.message, ); throw error; } }; // Reads the compiled bytecode for a given contract const getByteCode = (contractName) => { try { const bytecodePath = join( codegenDir, '../contracts', `${contractName}.polkavm`, ); return `0x${readFileSync(bytecodePath).toString('hex')}`; } catch (error) { console.error( `Could not find bytecode for contract ${contractName}:`, error.message, ); throw error; } }; const deployContract = async (contractName, mnemonic, providerConfig) => { console.log(`Deploying ${contractName}...`); try { // Step 1: Set up provider and wallet const provider = createProvider( providerConfig.rpc, providerConfig.chainId, providerConfig.name, ); const walletMnemonic = ethers.Wallet.fromPhrase(mnemonic); const wallet = walletMnemonic.connect(provider); // Step 2: Create and deploy the contract const factory = new ethers.ContractFactory( getAbi(contractName), getByteCode(contractName), wallet, ); const contract = await factory.deploy(); await contract.waitForDeployment(); // Step 3: Save deployment information const address = await contract.getAddress(); console.log(`Contract ${contractName} deployed at: ${address}`); const addressesFile = join(codegenDir, 'contract-address.json'); const addresses = existsSync(addressesFile) ? JSON.parse(readFileSync(addressesFile, 'utf8')) : {}; addresses[contractName] = address; writeFileSync(addressesFile, JSON.stringify(addresses, null, 2), 'utf8'); } catch (error) { console.error(`Failed to deploy contract ${contractName}:`, error); } }; const providerConfig = { rpc: 'https://testnet-passet-hub-eth-rpc.polkadot.io', chainId: 420420422, name: 'polkadot-hub-testnet', }; const mnemonic = 'INSERT_MNEMONIC'; deployContract('Storage', mnemonic, providerConfig); ``` To run the script, execute the following command: ```bash node deploy ``` After running this script, your contract will be deployed to Polkadot Hub, and its address will be saved in `contract-address.json` within your project directory. You can use this address for future contract interactions. ## Interact with the Contract Once the contract is deployed, you can interact with it by calling its functions. 
For example, to set a number, read it back, and then double it, create a file named `checkStorage.js` in the `scripts` directory and add the following code: ```js title="scripts/checkStorage.js" const { ethers } = require('ethers'); const { readFileSync } = require('fs'); const { join } = require('path'); const createProvider = (providerConfig) => { return new ethers.JsonRpcProvider(providerConfig.rpc, { chainId: providerConfig.chainId, name: providerConfig.name, }); }; const createWallet = (mnemonic, provider) => { return ethers.Wallet.fromPhrase(mnemonic).connect(provider); }; const loadContractAbi = (contractName, directory = join(__dirname, '../contracts')) => { const contractPath = join(directory, `${contractName}.json`); const contractJson = JSON.parse(readFileSync(contractPath, 'utf8')); return contractJson.abi || contractJson; // Depending on JSON structure }; const createContract = (contractAddress, abi, wallet) => { return new ethers.Contract(contractAddress, abi, wallet); }; const interactWithStorageContract = async ( contractName, contractAddress, mnemonic, providerConfig, numberToSet, ) => { try { console.log(`Setting new number in Storage contract: ${numberToSet}`); // Create provider and wallet const provider = createProvider(providerConfig); const wallet = createWallet(mnemonic, provider); // Load the contract ABI and create the contract instance const abi = loadContractAbi(contractName); const contract = createContract(contractAddress, abi, wallet); // Send a transaction to set the stored number const tx1 = await contract.setNumber(numberToSet); await tx1.wait(); // Wait for the transaction to be mined console.log(`Number successfully set to ${numberToSet}`); // Retrieve the updated number const storedNumber = await contract.storedNumber(); console.log(`Retrieved stored number:`, storedNumber.toString()); // Send a transaction to double the stored number const tx2 = await contract.setNumber(numberToSet * 2); await tx2.wait(); // Wait for the transaction to be mined console.log(`Number successfully set to ${numberToSet * 2}`); // Retrieve the updated number const updatedNumber = await contract.storedNumber(); console.log(`Retrieved stored number:`, updatedNumber.toString()); } catch (error) { console.error('Error interacting with Storage contract:', error.message); } }; const providerConfig = { name: 'polkadot-hub-testnet', rpc: 'https://testnet-passet-hub-eth-rpc.polkadot.io', chainId: 420420422, }; const mnemonic = 'INSERT_MNEMONIC'; const contractName = 'Storage'; const contractAddress = 'INSERT_CONTRACT_ADDRESS'; const newNumber = 42; interactWithStorageContract( contractName, contractAddress, mnemonic, providerConfig, newNumber, ); ``` Ensure you replace the `INSERT_MNEMONIC` and `INSERT_CONTRACT_ADDRESS` placeholders with actual values. Also, ensure the contract ABI file (`Storage.json`) is correctly referenced.
To interact with the contract, run: ```bash node scripts/checkStorage.js ``` ## Where to Go Next Now that you have the foundational knowledge to use Ethers.js with Polkadot Hub, you can: - **Dive into Ethers.js utilities** - discover additional Ethers.js features, such as wallet management and message signing - **Implement batch transactions** - use Ethers.js to execute batch transactions for efficient multi-step contract interactions - **Build scalable applications** - combine Ethers.js with frameworks like [`Next.js`](https://nextjs.org/docs){target=\_blank} or [`Node.js`](https://nodejs.org/en){target=\_blank} to create full-stack decentralized applications (dApps) --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/libraries/ --- BEGIN CONTENT --- --- title: Libraries description: Compare libraries for interacting with smart contracts on Polkadot, including Ethers.js, Web3.js, viem, Wagmi, Web3.py, and their key differences. template: index-page.html --- # Libraries !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. Explore the key libraries for interacting with smart contracts on Polkadot-based networks. These libraries simplify contract calls, event listening, and transaction handling. This section provides setup instructions, usage examples, and a comparison to help you select the right tool for your project. ## Library Comparison Consider the following features when choosing a library for your project: | Library | Language Support | Type Safety | Performance | Best For | |------------|--------------------------|------------------------------|---------------------------------------|------------------------------------------------| | Ethers.js | JavaScript, TypeScript | Limited | Efficient, widely optimized | General dApp development | | Web3.js | JavaScript, TypeScript | Limited | Older codebase, can be less performant| Legacy projects, Web3.js users | | viem | TypeScript only | Strong TypeScript support | Lightweight, optimized for bundling | TypeScript-heavy projects, modular workflows | | Wagmi | TypeScript, React | Strong TypeScript support | React hooks-based, efficient caching | React applications, hook-based development | | Web3.py | Python | Python typing support | Standard Python performance | Python-based blockchain applications | !!! warning Web3.js has been [sunset](https://blog.chainsafe.io/web3-js-sunset/){target=\_blank}. You can find guides on using [Ethers.js](/develop/smart-contracts/libraries/ethers-js){target=\_blank} and [viem](/develop/smart-contracts/libraries/viem){target=\_blank} in the [Libraries](/develop/smart-contracts/libraries/){target=\_blank} section. ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/libraries/viem/ --- BEGIN CONTENT --- --- title: viem for Polkadot Hub Smart Contracts description: This guide covers deploying and interacting with contracts on Polkadot Hub using viem, a TypeScript library for Ethereum-compatible chains. categories: Smart Contracts, Tooling --- # viem !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction [viem](https://viem.sh/){target=\_blank} is a lightweight TypeScript library designed for interacting with Ethereum-compatible blockchains.
This comprehensive guide will walk you through using viem to interact with and deploy smart contracts to Polkadot Hub. ## Prerequisites Before getting started, ensure you have the following installed: - **Node.js** - v22.13.1 or later, check the [Node.js installation guide](https://nodejs.org/en/download/current/){target=\_blank} - **npm** - v6.13.4 or later (comes bundled with Node.js) - **Solidity** - this guide uses Solidity `^0.8.9` for smart contract development ## Project Structure This project organizes contracts, scripts, and compiled artifacts for easy development and deployment. ```text viem-project/ ├── package.json ├── tsconfig.json ├── src/ │ ├── chainConfig.ts │ ├── createClient.ts │ ├── createWallet.ts │ ├── compile.ts │ ├── deploy.ts │ └── interact.ts ├── contracts/ │ └── Storage.sol └── artifacts/ ├── Storage.json └── Storage.polkavm ``` ## Set Up the Project First, create a new folder and initialize your project: ```bash mkdir viem-project cd viem-project npm init -y ``` ## Install Dependencies Install viem along with other necessary dependencies, including [@parity/resolc](https://www.npmjs.com/package/@parity/resolc){target=\_blank}, which enables you to compile smart contracts to [PolkaVM](/polkadot-protocol/smart-contract-basics/polkavm-design/#polkavm){target=\_blank} bytecode: ```bash # Install viem and resolc npm install viem @parity/resolc # Install TypeScript and development dependencies npm install --save-dev typescript ts-node @types/node ``` ## Initialize Project Initialize a TypeScript project by running the following command: ```bash npx tsc --init ``` Add the following scripts to your `package.json` file to enable running TypeScript files: ```json { "scripts": { "client": "ts-node src/createClient.ts", "compile": "ts-node src/compile.ts", "deploy": "ts-node src/deploy.ts", "interact": "ts-node src/interact.ts" } } ``` Create a directory for your TypeScript source files: ```bash mkdir src ``` ## Set Up the Chain Configuration The first step is to set up the chain configuration. Create a new file at `src/chainConfig.ts`: ```typescript title="src/chainConfig.ts" import { http } from 'viem'; export const TRANSPORT = http('INSERT_RPC_URL'); // Configure the Polkadot Hub chain export const POLKADOT_HUB = { id: INSERT_CHAIN_ID, name: 'INSERT_CHAIN_NAME', network: 'INSERT_NETWORK_NAME', nativeCurrency: { decimals: INSERT_CHAIN_DECIMALS, name: 'INSERT_CURRENCY_NAME', symbol: 'INSERT_CURRENCY_SYMBOL', }, rpcUrls: { default: { http: ['INSERT_RPC_URL'], }, }, } as const; ``` Ensure you replace `INSERT_RPC_URL`, `INSERT_CHAIN_ID`, `INSERT_CHAIN_NAME`, `INSERT_NETWORK_NAME`, `INSERT_CHAIN_DECIMALS`, `INSERT_CURRENCY_NAME`, and `INSERT_CURRENCY_SYMBOL` with the proper values. Check the [Connect to Polkadot](/develop/smart-contracts/connect-to-polkadot){target=\_blank} page for more information on the possible values. ## Set Up the viem Client To interact with the chain, you need to create a client that is used solely for reading data.
To accomplish this, create a new file at `src/createClient.ts`: ```typescript title="src/createClient.ts" import { createPublicClient, createWalletClient, http } from 'viem'; const transport = http('INSERT_RPC_URL'); // Configure the Polkadot Hub chain const assetHub = { id: INSERT_CHAIN_ID, name: 'INSERT_CHAIN_NAME', network: 'INSERT_NETWORK_NAME', nativeCurrency: { decimals: INSERT_CHAIN_DECIMALS, name: 'INSERT_CURRENCY_NAME', symbol: 'INSERT_CURRENCY_SYMBOL', }, rpcUrls: { default: { http: ['INSERT_RPC_URL'], }, }, } as const; // Create a public client for reading data export const publicClient = createPublicClient({ chain: assetHub, transport, }); ``` After setting up the [Public Client](https://viem.sh/docs/clients/public#public-client){target=\_blank}, you can begin querying the blockchain. Here's an example of fetching the latest block number: ??? code "Fetch Last Block code" ```typescript title="src/fetchLastBlock.ts" import { createPublicClient, http } from 'viem'; const transport = http('https://testnet-passet-hub-eth-rpc.polkadot.io'); // Configure the Polkadot Hub chain const polkadotHubTestnet = { id: 420420422, name: 'Polkadot Hub TestNet', network: 'polkadot-hub-testnet', nativeCurrency: { decimals: 18, name: 'PAS', symbol: 'PAS', }, rpcUrls: { default: { http: ['https://testnet-passet-hub-eth-rpc.polkadot.io'], }, }, } as const; // Create a public client for reading data export const publicClient = createPublicClient({ chain: polkadotHubTestnet, transport, }); const main = async () => { try { const block = await publicClient.getBlock(); console.log('Last block: ' + block.number.toString()); } catch (error: unknown) { console.error('Error connecting to Polkadot Hub TestNet: ' + error); } }; main(); ``` ## Set Up a Wallet In case you need to sign transactions, you will need to instantiate a [Wallet Client](https://viem.sh/docs/clients/wallet#wallet-client){target=\_blank} object within your project. To do so, create `src/createWallet.ts`: ```typescript title="src/createWallet.ts" import { privateKeyToAccount } from 'viem/accounts'; import { createWalletClient, http } from 'viem'; const transport = http('INSERT_RPC_URL'); // Configure the Polkadot Hub chain const assetHub = { id: INSERT_CHAIN_ID, name: 'INSERT_CHAIN_NAME', network: 'INSERT_NETWORK_NAME', nativeCurrency: { decimals: INSERT_CHAIN_DECIMALS, name: 'INSERT_CURRENCY_NAME', symbol: 'INSERT_CURRENCY_SYMBOL', }, rpcUrls: { default: { http: ['INSERT_RPC_URL'], }, public: { http: ['INSERT_RPC_URL'], }, }, } as const; // Create a wallet client for writing data export const createWallet = (privateKey: `0x${string}`) => { const account = privateKeyToAccount(privateKey); return createWalletClient({ account, chain: assetHub, transport, }); }; ``` !!!note The wallet you import with your private key must have sufficient funds to pay for transaction fees when deploying contracts or interacting with them. Make sure to fund your wallet with the appropriate native tokens for the network you're connecting to. ## Sample Smart Contract This example demonstrates compiling a `Storage.sol` Solidity contract for deployment to Polkadot Hub. The contract stores a number and lets users update it with a new value. ```bash mkdir contracts artifacts ``` You can use the following contract to interact with the blockchain. Paste it into `contracts/Storage.sol`: ```solidity title="contracts/Storage.sol" //SPDX-License-Identifier: MIT // Solidity files have to start with this pragma.
// It will be used by the Solidity compiler to validate its version. pragma solidity ^0.8.9; contract Storage { // Public state variable to store a number uint256 public storedNumber; /** * Updates the stored number. * * The `public` modifier allows anyone to call this function. * * @param _newNumber - The new value to store. */ function setNumber(uint256 _newNumber) public { storedNumber = _newNumber; } } ``` ## Compile the Contract !!! note "Contracts Code Blob Size Disclaimer" The maximum contract code blob size on Polkadot Hub networks is _100 kilobytes_, significantly larger than Ethereum’s EVM limit of 24 kilobytes. For detailed comparisons and migration guidelines, see the [EVM vs. PolkaVM](/polkadot-protocol/smart-contract-basics/evm-vs-polkavm/#current-memory-limits){target=\_blank} documentation page. Create a new file at `src/compile.ts` for handling contract compilation: ```typescript title="src/compile.ts" import { compile } from '@parity/resolc'; import { readFileSync, writeFileSync } from 'fs'; import { basename, join } from 'path'; const compileContract = async ( solidityFilePath: string, outputDir: string ): Promise<void> => { try { // Read the Solidity file const source: string = readFileSync(solidityFilePath, 'utf8'); // Construct the input object for the compiler const input: Record<string, { content: string }> = { [basename(solidityFilePath)]: { content: source }, }; console.log(`Compiling contract: ${basename(solidityFilePath)}...`); // Compile the contract const out = await compile(input); for (const contracts of Object.values(out.contracts)) { for (const [name, contract] of Object.entries(contracts)) { console.log(`Compiled contract: ${name}`); // Write the ABI const abiPath = join(outputDir, `${name}.json`); writeFileSync(abiPath, JSON.stringify(contract.abi, null, 2)); console.log(`ABI saved to ${abiPath}`); // Write the bytecode if ( contract.evm && contract.evm.bytecode && contract.evm.bytecode.object ) { const bytecodePath = join(outputDir, `${name}.polkavm`); writeFileSync( bytecodePath, Buffer.from(contract.evm.bytecode.object, 'hex') ); console.log(`Bytecode saved to ${bytecodePath}`); } else { console.warn(`No bytecode found for contract: ${name}`); } } } } catch (error) { console.error('Error compiling contracts:', error); } }; const solidityFilePath: string = './contracts/Storage.sol'; const outputDir: string = './artifacts/'; compileContract(solidityFilePath, outputDir); ``` To compile your contract: ```bash npm run compile ``` After executing this script, you will see the compilation results including the generated `Storage.json` (containing the contract's ABI) and `Storage.polkavm` (containing the compiled bytecode) files in the `artifacts` folder. These files contain all the necessary information for deploying and interacting with your smart contract on Polkadot Hub.
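As an optional sanity check before deploying, you can confirm the generated ABI exposes the functions you expect. This is a hypothetical snippet rather than one of the guide's scripts; it only assumes the `artifacts/Storage.json` path used above:

```typescript
import { readFileSync } from 'fs';

// Load the ABI generated by the compile step
const abi = JSON.parse(readFileSync('./artifacts/Storage.json', 'utf8'));

// List the function names the contract exposes
const functions = abi
  .filter((entry: { type: string }) => entry.type === 'function')
  .map((entry: { name: string }) => entry.name);

console.log('Storage exposes:', functions); // expected: [ 'storedNumber', 'setNumber' ]
```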
## Deploy the Contract Create a new file at `src/deploy.ts` for handling contract deployment: ```typescript title="src/deploy.ts" import { readFileSync } from 'fs'; import { join } from 'path'; import { createWallet } from './createWallet'; import { publicClient } from './createClient'; const deployContract = async ( contractName: string, privateKey: `0x${string}` ) => { try { console.log(`Deploying ${contractName}...`); // Read contract artifacts const abi = JSON.parse( readFileSync( join(__dirname, '../artifacts', `${contractName}.json`), 'utf8' ) ); const bytecode = `0x${readFileSync( join(__dirname, '../artifacts', `${contractName}.polkavm`) ).toString('hex')}` as `0x${string}`; // Create wallet const wallet = createWallet(privateKey); // Deploy contract const hash = await wallet.deployContract({ abi, bytecode, args: [], // Add constructor arguments if needed }); // Wait for deployment const receipt = await publicClient.waitForTransactionReceipt({ hash }); const contractAddress = receipt.contractAddress; console.log(`Contract deployed at: ${contractAddress}`); return contractAddress; } catch (error) { console.error('Deployment failed:', error); throw error; } }; const privateKey = 'INSERT_PRIVATE_KEY'; deployContract('Storage', privateKey); ``` Ensure you replace `INSERT_PRIVATE_KEY` with the proper value. For details on exporting a private key, refer to [How to export an account's private key](https://support.metamask.io/configure/accounts/how-to-export-an-accounts-private-key/){target=\_blank}. !!! warning Never commit or share your private key. Exposed keys can lead to immediate theft of all associated funds. Use environment variables instead. To deploy, run the following command: ```bash npm run deploy ``` If everything is successful, you will see the address of your deployed contract displayed in the terminal. This address is unique to your contract on the network you defined in the chain configuration, and you'll need it for any future interactions with your contract.
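Following the warning above, one way to keep the key out of your source is to read it from the environment instead of hardcoding it. A minimal sketch, assuming you export a `PRIVATE_KEY` variable in your shell:

```typescript
// Replace the hardcoded constant at the bottom of src/deploy.ts with:
// (run with: PRIVATE_KEY=0x... npm run deploy)
const privateKey = process.env.PRIVATE_KEY as `0x${string}`;
if (!privateKey) {
  throw new Error('Set the PRIVATE_KEY environment variable first');
}
deployContract('Storage', privateKey);
```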
## Interact with the Contract Create a new file at `src/interact.ts` for interacting with your deployed contract: ```typescript title="src/interact.ts" import { publicClient } from './createClient'; import { createWallet } from './createWallet'; import { readFileSync } from 'fs'; const STORAGE_ABI = JSON.parse( readFileSync('./artifacts/Storage.json', 'utf8') ); const interactWithStorage = async ( contractAddress: `0x${string}`, privateKey: `0x${string}` ) => { try { const wallet = createWallet(privateKey); const currentNumber = await publicClient.readContract({ address: contractAddress, abi: STORAGE_ABI, functionName: 'storedNumber', args: [], }); console.log(`Stored number: ${currentNumber}`); const newNumber = BigInt(42); const { request } = await publicClient.simulateContract({ address: contractAddress, abi: STORAGE_ABI, functionName: 'setNumber', args: [newNumber], account: wallet.account, }); const hash = await wallet.writeContract(request); await publicClient.waitForTransactionReceipt({ hash }); console.log(`Number updated to ${newNumber}`); const updatedNumber = await publicClient.readContract({ address: contractAddress, abi: STORAGE_ABI, functionName: 'storedNumber', args: [], }); console.log('Updated stored number:', updatedNumber); } catch (error) { console.error('Interaction failed:', error); } }; const PRIVATE_KEY = 'INSERT_PRIVATE_KEY'; const CONTRACT_ADDRESS = 'INSERT_CONTRACT_ADDRESS'; interactWithStorage(CONTRACT_ADDRESS, PRIVATE_KEY); ``` Ensure you replace `INSERT_PRIVATE_KEY` and `INSERT_CONTRACT_ADDRESS` with the proper values. To interact with the contract: ```bash npm run interact ``` Following a successful interaction, you will see the stored value before and after the transaction. The output will show the initial stored number (0 if you haven't modified it yet), confirm when the transaction to set the number to 42 is complete, and then display the updated stored number value. This demonstrates both reading from and writing to your smart contract. ## Where to Go Next Now that you have the foundation for using viem with Polkadot Hub, consider exploring:
- External __Advanced viem Features__ --- Explore viem's documentation:
  • [:octicons-arrow-right-24: Multi call](https://viem.sh/docs/contract/multicall#multicall){target=\_blank}
  • [:octicons-arrow-right-24: Batch transactions](https://viem.sh/docs/clients/transports/http#batch-json-rpc){target=\_blank}
  • [:octicons-arrow-right-24: Custom actions](https://viem.sh/docs/clients/custom#extending-with-actions-or-configuration){target=\_blank}
- External __Test Frameworks__ --- Integrate viem with the following frameworks for comprehensive testing:
  • [:octicons-arrow-right-24: Hardhat](https://hardhat.org/){target=\_blank}
  • [:octicons-arrow-right-24: Foundry](https://book.getfoundry.sh/){target=\_blank}
- External __Event Handling__ --- Learn how to subscribe to and process contract events:
  • [:octicons-arrow-right-24: Event subscription](https://viem.sh/docs/actions/public/watchEvent#watchevent){target=\_blank}
- External __Building dApps__ --- Combine viem with the following technologies to create full-stack applications:
  • [:octicons-arrow-right-24: Next.js](https://nextjs.org/docs){target=\_blank}
  • [:octicons-arrow-right-24: Node.js](https://nodejs.org/en){target=\_blank}
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/libraries/wagmi/ --- BEGIN CONTENT --- --- title: Wagmi for Polkadot Hub Smart Contracts description: Learn how to use Wagmi React Hooks to fetch and interact with smart contracts on Polkadot Hub for seamless dApp integration. categories: Smart Contracts, Tooling --- # Wagmi !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction [Wagmi](https://wagmi.sh/){target=\_blank} is a collection of [React Hooks](https://wagmi.sh/react/api/hooks){target=\_blank} for interacting with Ethereum-compatible blockchains, focusing on developer experience, feature richness, and reliability. This guide demonstrates how to use Wagmi to interact with and deploy smart contracts to Polkadot Hub, providing a seamless frontend integration for your dApps. ## Set Up the Project To start working with Wagmi, create a new React project and initialize it by running the following commands in your terminal: ```bash # Create a new React project using Next.js npx create-next-app@latest wagmi-asset-hub cd wagmi-asset-hub ``` ## Install Dependencies Install Wagmi and its peer dependencies: ```bash # Install Wagmi and its dependencies npm install wagmi viem @tanstack/react-query ``` ## Configure Wagmi for Polkadot Hub Create a configuration file to initialize Wagmi with Polkadot Hub. In your project, create a file named `src/lib/wagmi.ts` and add the code below. Be sure to replace `INSERT_RPC_URL`, `INSERT_CHAIN_ID`, `INSERT_CHAIN_NAME`, `INSERT_NETWORK_NAME`, `INSERT_CHAIN_DECIMALS`, `INSERT_CURRENCY_NAME`, and `INSERT_CURRENCY_SYMBOL` with your specific values. ```typescript title="src/lib/wagmi.ts" import { http, createConfig } from 'wagmi' // Configure the Polkadot Hub chain const assetHub = { id: INSERT_CHAIN_ID, name: 'INSERT_CHAIN_NAME', network: 'INSERT_NETWORK_NAME', nativeCurrency: { decimals: INSERT_CHAIN_DECIMALS, name: 'INSERT_CURRENCY_NAME', symbol: 'INSERT_CURRENCY_SYMBOL', }, rpcUrls: { default: { http: ['INSERT_RPC_URL'], }, }, } as const; // Create Wagmi config export const config = createConfig({ chains: [assetHub], transports: { [assetHub.id]: http(), }, }) ``` ??? code "Example Polkadot Hub TestNet Configuration" ```typescript title="src/lib/wagmi.ts" import { http, createConfig } from 'wagmi'; // Configure the Polkadot Hub chain const assetHub = { id: 420420422, name: 'polkadot-hub-testnet', network: 'polkadot-hub-testnet', nativeCurrency: { decimals: 18, name: 'PAS', symbol: 'PAS', }, rpcUrls: { default: { http: ['https://testnet-passet-hub-eth-rpc.polkadot.io'], }, }, } as const; // Create wagmi config export const config = createConfig({ chains: [assetHub], transports: { [assetHub.id]: http(), }, }); ``` ## Set Up the Wagmi Provider To enable Wagmi in your React application, you need to wrap your app with the [`WagmiProvider`](https://wagmi.sh/react/api/WagmiProvider#wagmiprovider){target=\_blank}. 
Update your `app/layout.tsx` file (for Next.js app router) with the following code: ```typescript title="app/layout.tsx" // For app router (src/app/layout.tsx) "use client"; import { WagmiProvider } from "wagmi"; import { QueryClient, QueryClientProvider } from "@tanstack/react-query"; import { config } from "./lib/wagmi"; // Create a query client const queryClient = new QueryClient(); export default function RootLayout({ children, }: { children: React.ReactNode; }) { return ( <html lang="en"> <body> <WagmiProvider config={config}> <QueryClientProvider client={queryClient}> {children} </QueryClientProvider> </WagmiProvider> </body> </html> ); } ``` !!!note If you are using a Next.js pages router, you should modify the `src/pages/_app.tsx` instead. ## Connect a Wallet Create a component to connect wallets to your dApp. Create a file named `app/components/ConnectWallet.tsx`: ```typescript title="app/components/ConnectWallet.tsx" "use client"; import React from "react"; import { useConnect, useAccount, useDisconnect } from "wagmi"; import { injected } from "wagmi/connectors"; export function ConnectWallet() { const { connect } = useConnect(); const { address, isConnected } = useAccount(); const { disconnect } = useDisconnect(); if (isConnected) { return ( <div> <p>Connected to {address}</p> <button onClick={() => disconnect()}>Disconnect</button> </div> ); } return ( <button onClick={() => connect({ connector: injected() })}> Connect Wallet </button> ); } ``` This component uses the following React hooks: - [**`useConnect`**](https://wagmi.sh/react/api/hooks/useConnect#useconnect){target=\_blank} - provides functions and state for connecting the user's wallet to your dApp. The `connect` function initiates the connection flow with the specified connector - [**`useDisconnect`**](https://wagmi.sh/react/api/hooks/useDisconnect#usedisconnect){target=\_blank} - provides a function to disconnect the currently connected wallet - [**`useAccount`**](https://wagmi.sh/react/api/hooks/useAccount#useaccount){target=\_blank} - returns data about the connected account, including the address and connection status ## Fetch Blockchain Data Wagmi provides various hooks to fetch blockchain data. Here's an example component that demonstrates some of these hooks: ```typescript title="app/components/BlockchainInfo.tsx" "use client"; import { useBlockNumber, useBalance, useAccount } from "wagmi"; export function BlockchainInfo() { const { address } = useAccount(); // Get the latest block number const { data: blockNumber } = useBlockNumber({ watch: true }); // Get balance for the connected wallet const { data: balance } = useBalance({ address, }); return ( <div> <h2>Blockchain Information</h2> <p>Current Block: {blockNumber?.toString() || "Loading..."}</p> {address && balance && ( <p> Balance:{" "} {( BigInt(balance.value) / BigInt(10 ** balance.decimals) ).toLocaleString()}{" "} {balance.symbol} </p> )} </div>
); } ``` This component uses the following React hooks: - [**`useBlockNumber`**](https://wagmi.sh/react/api/hooks/useBlockNumber#useBlockNumber){target=\_blank} - fetches the current block number of the connected chain. The `watch` parameter enables real-time updates when new blocks are mined - [**`useBalance`**](https://wagmi.sh/react/api/hooks/useBalance#useBalance){target=\_blank} - retrieves the native token balance for a specified address, including value, symbol, and decimals information ## Interact with Deployed Contract This guide uses a simple Storage contract already deployed to the Polkadot Hub TestNet (`0xabBd46Ef74b88E8B1CDa49BeFb5057710443Fd29`). The code of that contract is: ??? code "Storage.sol" ```solidity title="Storage.sol" //SPDX-License-Identifier: MIT // Solidity files have to start with this pragma. // It will be used by the Solidity compiler to validate its version. pragma solidity ^0.8.9; contract Storage { // Public state variable to store a number uint256 public storedNumber; /** * Updates the stored number. * * The `public` modifier allows anyone to call this function. * * @param _newNumber - The new value to store. */ function setNumber(uint256 _newNumber) public { storedNumber = _newNumber; } } ``` Create a component to interact with your deployed contract. Create a file named `app/components/StorageContract.tsx`: ```typescript title="app/components/StorageContract.tsx" "use client"; import { useState } from "react"; import { useReadContract, useWriteContract, useWaitForTransactionReceipt, } from "wagmi"; const CONTRACT_ADDRESS = "0xabBd46Ef74b88E8B1CDa49BeFb5057710443Fd29" as `0x${string}`; export function StorageContract() { const [number, setNumber] = useState("42"); // Contract ABI (should match your compiled contract) const abi = [ { inputs: [], name: "storedNumber", outputs: [{ internalType: "uint256", name: "", type: "uint256" }], stateMutability: "view", type: "function", }, { inputs: [ { internalType: "uint256", name: "_newNumber", type: "uint256" }, ], name: "setNumber", outputs: [], stateMutability: "nonpayable", type: "function", }, ]; // Read the current stored number const { data: storedNumber, refetch } = useReadContract({ address: CONTRACT_ADDRESS, abi, functionName: "storedNumber", }); // Write to the contract const { writeContract, data: hash, error, isPending } = useWriteContract(); // Wait for transaction to be mined const { isLoading: isConfirming, isSuccess: isConfirmed } = useWaitForTransactionReceipt({ hash, }); const handleSetNumber = () => { writeContract({ address: CONTRACT_ADDRESS, abi, functionName: "setNumber", args: [BigInt(number)], }); }; return ( <div> <h2>Storage Contract Interaction</h2> <p>Contract Address: {CONTRACT_ADDRESS}</p> <p>Current Stored Number: {storedNumber?.toString() || "Loading..."}</p> <input type="number" value={number} onChange={(e) => setNumber(e.target.value)} disabled={isPending || isConfirming} /> <button onClick={handleSetNumber} disabled={isPending || isConfirming}> {isPending || isConfirming ? "Updating..." : "Set Number"} </button> {error && <p>Error: {error.message}</p>} {isConfirmed && ( <p> Successfully updated!{" "} <button onClick={() => refetch()}>Refresh</button> </p> )} </div>
); } ``` This component demonstrates how to interact with a smart contract using Wagmi's hooks: - [**`useReadContract`**](https://wagmi.sh/react/api/hooks/useReadContract#useReadContract){target=\_blank} - calls a read-only function on your smart contract to retrieve data without modifying the blockchain state - [**`useWriteContract`**](https://wagmi.sh/react/api/hooks/useWriteContract#useWriteContract){target=\_blank} - calls a state-modifying function on your smart contract, which requires a transaction to be signed and sent - [**`useWaitForTransactionReceipt`**](https://wagmi.sh/react/api/hooks/useWaitForTransactionReceipt#useWaitForTransactionReceipt){target=\_blank} - tracks the status of a transaction after it's been submitted, allowing you to know when it's been confirmed The component also includes proper state handling to: - Show the current value stored in the contract - Allow users to input a new value - Display transaction status (pending, confirming, or completed) - Handle errors - Provide feedback when a transaction is successful ## Integrate Components Update your main page to combine all the components. Create or update the file `src/app/page.tsx`: ```typescript title="src/app/page.tsx" "use client"; import { BlockchainInfo } from "./components/BlockchainInfo"; import { ConnectWallet } from "./components/ConnectWallet"; import { StorageContract } from "./components/StorageContract"; import { useAccount } from "wagmi"; export default function Home() { const { isConnected } = useAccount(); return (

<main> <h1>Wagmi - Polkadot Hub Smart Contracts</h1> <ConnectWallet /> {isConnected ? <BlockchainInfo /> : <p>Connect your wallet</p>} {isConnected ? <StorageContract /> : <p>Connect your wallet</p>} </main>
); } ``` ## Where to Go Next Now that you have the foundational knowledge to use Wagmi with Polkadot Hub, consider exploring:
- External __Advanced Wagmi__ --- Explore Wagmi's advanced features:
  • [:octicons-arrow-right-24: Watch Contract Events](https://wagmi.sh/core/api/actions/watchContractEvent#eventname){target=\_blank}
  • [:octicons-arrow-right-24: Different Transports](https://wagmi.sh/react/api/transports){target=\_blank}
  • [:octicons-arrow-right-24: Actions](https://wagmi.sh/react/api/actions){target=\_blank}
- External __Wallet Integration__ --- Connect your dApp with popular wallet providers:
  • [:octicons-arrow-right-24: MetaMask](https://wagmi.sh/core/api/connectors/metaMask){target=\_blank}
  • [:octicons-arrow-right-24: WalletConnect](https://wagmi.sh/core/api/connectors/walletConnect){target=\_blank}
  • [:octicons-arrow-right-24: Coinbase Wallet](https://wagmi.sh/core/api/connectors/coinbaseWallet){target=\_blank}
- External __Testing & Development__ --- Enhance your development workflow:
  • [:octicons-arrow-right-24: Test Suite](https://wagmi.sh/dev/contributing#_6-running-the-test-suite){target=\_blank}
  • [:octicons-arrow-right-24: Dev Playground](https://wagmi.sh/dev/contributing#_5-running-the-dev-playgrounds){target=\_blank}
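Many of these features are also available outside React components through Wagmi's framework-agnostic actions. As a taste, here is a minimal sketch that subscribes to new block numbers with `watchBlockNumber` from `@wagmi/core`, assuming `config` is the Wagmi configuration object created with `createConfig` earlier in this guide (the import path is illustrative):

```typescript
import { watchBlockNumber } from "@wagmi/core";
// `config` is assumed to be the object returned by `createConfig`
// earlier in this guide; adjust the path to wherever you export it.
import { config } from "./app/config";

// Subscribe to new block numbers without using a React hook.
const unwatch = watchBlockNumber(config, {
  onBlockNumber(blockNumber) {
    console.log("New block:", blockNumber);
  },
});

// Call the returned function when you want to stop listening.
// unwatch();
```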
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/libraries/web3-js/ --- BEGIN CONTENT --- --- title: Web3.js description: Learn how to interact with Polkadot Hub using Web3.js, deploying Solidity contracts, and interacting with deployed smart contracts. categories: Smart Contracts, Tooling --- # Web3.js !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. !!! warning Web3.js has been [sunset](https://blog.chainsafe.io/web3-js-sunset/){target=\_blank}. You can find guides on using [Ethers.js](/develop/smart-contracts/libraries/ethers-js){target=\_blank} and [viem](/develop/smart-contracts/libraries/viem){target=\_blank} in the [Libraries](/develop/smart-contracts/libraries/){target=\_blank} section. ## Introduction Interacting with blockchains typically requires an interface between your application and the network. [Web3.js](https://web3js.readthedocs.io/){target=\_blank} offers this interface through a comprehensive collection of libraries, facilitating seamless interaction with the nodes using HTTP or WebSocket protocols. This guide illustrates how to utilize Web3.js specifically for interactions with Polkadot Hub. This guide is intended for developers who are familiar with JavaScript and want to interact with the Polkadot Hub using Web3.js. ## Prerequisites Before getting started, ensure you have the following installed: - **Node.js** - v22.13.1 or later, check the [Node.js installation guide](https://nodejs.org/en/download/current/){target=\_blank} - **npm** - v6.13.4 or later (comes bundled with Node.js) - **Solidity** - this guide uses Solidity `^0.8.9` for smart contract development ## Project Structure This project organizes contracts, scripts, and compiled artifacts for easy development and deployment. ```text title="Web3.js Polkadot Hub" web3js-project ├── contracts │ ├── Storage.sol ├── scripts │ ├── connectToProvider.js │ ├── fetchLastBlock.js │ ├── compile.js │ ├── deploy.js │ ├── updateStorage.js ├── abis │ ├── Storage.json ├── artifacts │ ├── Storage.polkavm ├── node_modules/ ├── package.json ├── package-lock.json └── README.md ``` ## Set Up the Project To start working with Web3.js, begin by initializing your project: ```bash npm init -y ``` ## Install Dependencies Next, install the Web3.js library: ```bash npm install web3 ``` This guide uses `web3` version `{{ dependencies.javascript_packages.web3_js.version }}`. ## Set Up the Web3 Provider The provider configuration is the foundation of any Web3.js application. The following example establishes a connection to Polkadot Hub. To use the example script, replace `INSERT_RPC_URL`, `INSERT_CHAIN_ID`, and `INSERT_CHAIN_NAME` with the appropriate values. The provider connection script should look something like this: ```javascript title="scripts/connectToProvider.js" const { Web3 } = require('web3'); const createProvider = (rpcUrl) => { const web3 = new Web3(rpcUrl); return web3; }; const PROVIDER_RPC = { rpc: 'INSERT_RPC_URL', chainId: 'INSERT_CHAIN_ID', name: 'INSERT_CHAIN_NAME', }; createProvider(PROVIDER_RPC.rpc); ``` For example, for the Polkadot Hub TestNet, use these specific connection parameters: ```js const PROVIDER_RPC = { rpc: 'https://testnet-passet-hub-eth-rpc.polkadot.io', chainId: 420420422, name: 'polkadot-hub-testnet' }; ``` With the Web3 provider set up, you can start querying the blockchain. 
For instance, to fetch the latest block number of the chain, you can use the following code snippet: ???+ code "View complete script" ```javascript title="scripts/fetchLastBlock.js" const { Web3 } = require('web3'); const createProvider = (rpcUrl) => { const web3 = new Web3(rpcUrl); return web3; }; const PROVIDER_RPC = { rpc: 'https://testnet-passet-hub-eth-rpc.polkadot.io', chainId: 420420422, name: 'polkadot-hub-testnet', }; const main = async () => { try { const web3 = createProvider(PROVIDER_RPC.rpc); const latestBlock = await web3.eth.getBlockNumber(); console.log('Last block: ' + latestBlock); } catch (error) { console.error('Error connecting to Polkadot Hub TestNet: ' + error.message); } }; main(); ``` ## Compile Contracts !!! note "Contracts Code Blob Size Disclaimer" The maximum contract code blob size on Polkadot Hub networks is _100 kilobytes_, significantly larger than Ethereum’s EVM limit of 24 kilobytes. For detailed comparisons and migration guidelines, see the [EVM vs. PolkaVM](/polkadot-protocol/smart-contract-basics/evm-vs-polkavm/#current-memory-limits){target=\_blank} documentation page. Polkadot Hub requires contracts to be compiled to [PolkaVM](/polkadot-protocol/smart-contract-basics/polkavm-design/){target=\_blank} bytecode. This is achieved using the [`revive`](https://github.com/paritytech/revive/tree/v0.2.0/js/resolc){target=\_blank} compiler. Install the [`@parity/resolc`](https://github.com/paritytech/revive){target=\_blank} library as a development dependency: ```bash npm install --save-dev @parity/resolc ``` This guide uses `@parity/resolc` version `{{ dependencies.javascript_packages.resolc.version }}`. Here's a simple storage contract that you can use to follow the process: ```solidity title="contracts/Storage.sol" //SPDX-License-Identifier: MIT pragma solidity ^0.8.9; contract Storage { // Public state variable to store a number uint256 public storedNumber; /** * Updates the stored number. * * The `public` modifier allows anyone to call this function. * * @param _newNumber - The new value to store. 
*/ function setNumber(uint256 _newNumber) public { storedNumber = _newNumber; } } ``` With that, you can now create a `compile.js` snippet that transforms your solidity code into PolkaVM bytecode: ```javascript title="scripts/compile.js" const { compile } = require('@parity/resolc'); const { readFileSync, writeFileSync } = require('fs'); const { basename, join } = require('path'); const compileContract = async (solidityFilePath, outputDir) => { try { // Read the Solidity file const source = readFileSync(solidityFilePath, 'utf8'); // Construct the input object for the compiler const input = { [basename(solidityFilePath)]: { content: source }, }; console.log(`Compiling contract: ${basename(solidityFilePath)}...`); // Compile the contract const out = await compile(input); for (const contracts of Object.values(out.contracts)) { for (const [name, contract] of Object.entries(contracts)) { console.log(`Compiled contract: ${name}`); // Write the ABI const abiPath = join(outputDir, `${name}.json`); writeFileSync(abiPath, JSON.stringify(contract.abi, null, 2)); console.log(`ABI saved to ${abiPath}`); // Write the bytecode const bytecodePath = join(outputDir, `${name}.polkavm`); writeFileSync( bytecodePath, Buffer.from(contract.evm.bytecode.object, 'hex'), ); console.log(`Bytecode saved to ${bytecodePath}`); } } } catch (error) { console.error('Error compiling contracts:', error); } }; const solidityFilePath = './Storage.sol'; const outputDir = '.'; compileContract(solidityFilePath, outputDir); ``` To compile your contract, simply run the following command: ```bash node compile ``` After compilation, you'll have two key files: an ABI (`.json`) file, which provides a JSON interface describing the contract's functions and how to interact with it, and a bytecode (`.polkavm`) file, which contains the low-level machine code executable on PolkaVM that represents the compiled smart contract ready for blockchain deployment. ## Contract Deployment To deploy your compiled contract to Polkadot Hub using Web3.js, you'll need an account with a private key to sign the deployment transaction. The deployment process is exactly the same as for any Ethereum-compatible chain, involving creating a contract instance, estimating gas, and sending a deployment transaction. 
Here's how to deploy the contract. Be sure to replace `INSERT_RPC_URL`, `INSERT_PRIVATE_KEY`, and `INSERT_CONTRACT_NAME` with the appropriate values: ```javascript title="scripts/deploy.js" import { readFileSync } from 'fs'; import { Web3 } from 'web3'; const getAbi = (contractName) => { try { return JSON.parse(readFileSync(`${contractName}.json`, 'utf8')); } catch (error) { console.error( `❌ Could not find ABI for contract ${contractName}:`, error.message ); throw error; } }; const getByteCode = (contractName) => { try { return `0x${readFileSync(`${contractName}.polkavm`).toString('hex')}`; } catch (error) { console.error( `❌ Could not find bytecode for contract ${contractName}:`, error.message ); throw error; } }; export const deploy = async (config) => { try { // Initialize Web3 with RPC URL const web3 = new Web3(config.rpcUrl); // Prepare account const account = web3.eth.accounts.privateKeyToAccount(config.privateKey); web3.eth.accounts.wallet.add(account); // Load abi const abi = getAbi(config.contractName); // Create contract instance const contract = new web3.eth.Contract(abi); // Prepare deployment const deployTransaction = contract.deploy({ data: getByteCode(config.contractName), arguments: [], // Add constructor arguments if needed }); // Estimate gas const gasEstimate = await deployTransaction.estimateGas({ from: account.address, }); // Get current gas price const gasPrice = await web3.eth.getGasPrice(); // Send deployment transaction const deployedContract = await deployTransaction.send({ from: account.address, gas: gasEstimate, gasPrice: gasPrice, }); // Log and return contract details console.log(`Contract deployed at: ${deployedContract.options.address}`); return deployedContract; } catch (error) { console.error('Deployment failed:', error); throw error; } }; // Example usage const deploymentConfig = { rpcUrl: 'INSERT_RPC_URL', privateKey: 'INSERT_PRIVATE_KEY', contractName: 'INSERT_CONTRACT_NAME', }; deploy(deploymentConfig) .then((contract) => console.log('Deployment successful')) .catch((error) => console.error('Deployment error')); ``` For further details on exporting a private key, refer to the article [How to export an account's private key](https://support.metamask.io/configure/accounts/how-to-export-an-accounts-private-key/){target=\_blank}. Note that these scripts use ES module `import` syntax, so add `"type": "module"` to your `package.json` (or rename them with the `.mjs` extension) before running them. To deploy your contract, run the following command: ```bash node deploy ```
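As a safer alternative to pasting the private key into the script, you can read sensitive values from the environment. A minimal sketch; the `PRIVATE_KEY` variable name is illustrative:

```javascript
// Read the deployer key from the environment instead of hard-coding it.
// PRIVATE_KEY is an illustrative name; set it in your shell first, e.g.:
//   export PRIVATE_KEY=0x...
const deploymentConfig = {
  rpcUrl: 'https://testnet-passet-hub-eth-rpc.polkadot.io',
  privateKey: process.env.PRIVATE_KEY,
  contractName: 'Storage',
};
```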
## Interact with the Contract Once deployed, you can interact with your contract using Web3.js methods. Here's how to set a number and read it back. Be sure to replace `INSERT_RPC_URL`, `INSERT_PRIVATE_KEY`, and `INSERT_CONTRACT_ADDRESS` with the appropriate values: ```javascript title="scripts/updateStorage.js" import { readFileSync } from 'fs'; import { Web3 } from 'web3'; const getAbi = (contractName) => { try { return JSON.parse(readFileSync(`${contractName}.json`, 'utf8')); } catch (error) { console.error( `❌ Could not find ABI for contract ${contractName}:`, error.message ); throw error; } }; const updateStorage = async (config) => { try { // Initialize Web3 with RPC URL const web3 = new Web3(config.rpcUrl); // Prepare account const account = web3.eth.accounts.privateKeyToAccount(config.privateKey); web3.eth.accounts.wallet.add(account); // Load abi const abi = getAbi('Storage'); // Create contract instance const contract = new web3.eth.Contract(abi, config.contractAddress); // Get initial value const initialValue = await contract.methods.storedNumber().call(); console.log('Current stored value:', initialValue); // Prepare transaction const updateTransaction = contract.methods.setNumber(1); // Estimate gas const gasEstimate = await updateTransaction.estimateGas({ from: account.address, }); // Get current gas price const gasPrice = await web3.eth.getGasPrice(); // Send update transaction const receipt = await updateTransaction.send({ from: account.address, gas: gasEstimate, gasPrice: gasPrice, }); // Log transaction details console.log(`Transaction hash: ${receipt.transactionHash}`); // Get updated value const newValue = await contract.methods.storedNumber().call(); console.log('New stored value:', newValue); return receipt; } catch (error) { console.error('Update failed:', error); throw error; } }; // Example usage const config = { rpcUrl: 'INSERT_RPC_URL', privateKey: 'INSERT_PRIVATE_KEY', contractAddress: 'INSERT_CONTRACT_ADDRESS', }; updateStorage(config) .then((receipt) => console.log('Update successful')) .catch((error) => console.error('Update error')); ``` To execute the logic above, run: ```bash node updateStorage ``` ## Where to Go Next Now that you’ve learned how to use Web3.js with Polkadot Hub, explore more advanced topics: - Utilize Web3.js utilities – learn about additional [Web3.js](https://docs.web3js.org/){target=\_blank} features such as signing transactions, managing wallets, and subscribing to events - Build full-stack dApps – [integrate Web3.js](https://docs.web3js.org/guides/dapps/intermediate-dapp){target=\_blank} with different libraries and frameworks to build decentralized web applications --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/libraries/web3-py/ --- BEGIN CONTENT --- --- title: Web3.py description: Learn how to interact with Polkadot Hub using the Web3.py library, deploying Solidity contracts, and interacting with deployed smart contracts. categories: Smart Contracts, Tooling --- # Web3.py !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Interacting with blockchains typically requires an interface between your application and the network. [Web3.py](https://web3py.readthedocs.io/en/stable/index.html){target=\_blank} offers this interface through a collection of libraries, facilitating seamless interaction with nodes using HTTP or WebSocket protocols. This guide illustrates how to utilize Web3.py for interactions with Polkadot Hub. ## Set Up the Project 1.
To start working with Web3.py, begin by initializing your project: ``` mkdir web3py-project cd web3py-project ``` 2. Create and activate a virtual environment for your project: ``` python -m venv venv source venv/bin/activate ``` 3. Next, install the Web3.py library: ``` pip install web3 ``` ## Set Up the Web3 Provider The [provider](https://web3py.readthedocs.io/en/stable/providers.html){target=\_blank} configuration is the foundation of any Web3.py application. The following example establishes a connection to Polkadot Hub. Follow these steps to use the provider configuration: 1. Replace `INSERT_RPC_URL` with the appropriate value. For instance, to connect to Polkadot Hub TestNet, use the following parameter: ```python PROVIDER_RPC = 'https://testnet-passet-hub-eth-rpc.polkadot.io' ``` The provider connection script should look something like this: ```python title="connect_to_provider.py" from web3 import Web3 def create_provider(rpc_url): web3 = Web3(Web3.HTTPProvider(rpc_url)) return web3 PROVIDER_RPC = 'INSERT_RPC_URL' create_provider(PROVIDER_RPC) ``` 1. With the Web3 provider set up, start querying the blockchain. For instance, you can use the following code snippet to fetch the latest block number of the chain: ```python title="fetch_last_block.py" def main(): try: web3 = create_provider(PROVIDER_RPC) latest_block = web3.eth.block_number print('Last block: ' + str(latest_block)) except Exception as error: print('Error connecting to Polkadot Hub TestNet: ' + str(error)) if __name__ == "__main__": main() ``` ??? code "View complete script" ```python title="fetch_last_block.py" from web3 import Web3 def create_provider(rpc_url): web3 = Web3(Web3.HTTPProvider(rpc_url)) return web3 PROVIDER_RPC = 'https://testnet-passet-hub-eth-rpc.polkadot.io' def main(): try: web3 = create_provider(PROVIDER_RPC) latest_block = web3.eth.block_number print('Last block: ' + str(latest_block)) except Exception as error: print('Error connecting to Polkadot Hub TestNet: ' + str(error)) if __name__ == "__main__": main() ``` ## Contract Deployment Before deploying your contracts, make sure you've compiled them and obtained two key files: - An ABI (.json) file, which provides a JSON interface describing the contract's functions and how to interact with it - A bytecode (.polkavm) file, which contains the low-level machine code executable on [PolkaVM](/polkadot-protocol/smart-contract-basics/polkavm-design#polkavm){target=\_blank} that represents the compiled smart contract ready for blockchain deployment To follow this guide, you can use the following solidity contract as an example: ```solidity title="Storage.sol" //SPDX-License-Identifier: MIT // Solidity files have to start with this pragma. // It will be used by the Solidity compiler to validate its version. pragma solidity ^0.8.9; contract Storage { // Public state variable to store a number uint256 public storedNumber; /** * Updates the stored number. * * The `public` modifier allows anyone to call this function. * * @param _newNumber - The new value to store. */ function setNumber(uint256 _newNumber) public { storedNumber = _newNumber; } } ``` To deploy your compiled contract to Polkadot Hub using Web3.py, you'll need an account with a private key to sign the deployment transaction. The deployment process is exactly the same as for any Ethereum-compatible chain, involving creating a contract instance, estimating gas, and sending a deployment transaction. Here's how to deploy the contract. 
Replace `INSERT_RPC_URL` and `INSERT_PRIVATE_KEY` with the appropriate values: ```python title="deploy.py" from web3 import Web3 import json def get_abi(contract_name): try: with open(f"{contract_name}.json", 'r') as file: return json.load(file) except Exception as error: print(f"❌ Could not find ABI for contract {contract_name}: {error}") raise error def get_bytecode(contract_name): try: with open(f"{contract_name}.polkavm", 'rb') as file: return '0x' + file.read().hex() except Exception as error: print(f"❌ Could not find bytecode for contract {contract_name}: {error}") raise error async def deploy(config): try: # Initialize Web3 with RPC URL web3 = Web3(Web3.HTTPProvider(config["rpc_url"])) # Prepare account account = web3.eth.account.from_key(config["private_key"]) print(f"address: {account.address}") # Load ABI abi = get_abi('Storage') # Create contract instance contract = web3.eth.contract(abi=abi, bytecode=get_bytecode('Storage')) # Get current nonce nonce = web3.eth.get_transaction_count(account.address) # Prepare deployment transaction transaction = { 'from': account.address, 'nonce': nonce, } # Build and sign transaction construct_txn = contract.constructor().build_transaction(transaction) signed_txn = web3.eth.account.sign_transaction(construct_txn, private_key=config["private_key"]) # Send transaction tx_hash = web3.eth.send_raw_transaction(signed_txn.raw_transaction) print(f"Transaction hash: {tx_hash.hex()}") # Wait for transaction receipt tx_receipt = web3.eth.wait_for_transaction_receipt(tx_hash) contract_address = tx_receipt.contractAddress # Log and return contract details print(f"Contract deployed at: {contract_address}") return web3.eth.contract(address=contract_address, abi=abi) except Exception as error: print('Deployment failed:', error) raise error if __name__ == "__main__": # Example usage import asyncio deployment_config = { "rpc_url": "INSERT_RPC_URL", "private_key": "INSERT_PRIVATE_KEY", } asyncio.run(deploy(deployment_config)) ``` !!!warning Never commit or share your private key. Exposed keys can lead to immediate theft of all associated funds. Use environment variables instead. ## Interact with the Contract After deployment, interact with your contract using Web3.py methods. The example below demonstrates how to set and retrieve a number. 
Be sure to replace the `INSERT_RPC_URL`, `INSERT_PRIVATE_KEY`, and `INSERT_CONTRACT_ADDRESS` placeholders with your specific values: ```python title="update_storage.py" from web3 import Web3 import json def get_abi(contract_name): try: with open(f"{contract_name}.json", 'r') as file: return json.load(file) except Exception as error: print(f"❌ Could not find ABI for contract {contract_name}: {error}") raise error async def update_storage(config): try: # Initialize Web3 with RPC URL web3 = Web3(Web3.HTTPProvider(config["rpc_url"])) # Prepare account account = web3.eth.account.from_key(config["private_key"]) # Load ABI abi = get_abi('Storage') # Create contract instance contract = web3.eth.contract(address=config["contract_address"], abi=abi) # Get initial value initial_value = contract.functions.storedNumber().call() print('Current stored value:', initial_value) # Get current nonce nonce = web3.eth.get_transaction_count(account.address) # Prepare transaction transaction = contract.functions.setNumber(1).build_transaction({ 'from': account.address, 'nonce': nonce }) # Sign transaction signed_txn = web3.eth.account.sign_transaction(transaction, private_key=config["private_key"]) # Send transaction tx_hash = web3.eth.send_raw_transaction(signed_txn.raw_transaction) print(f"Transaction hash: {tx_hash.hex()}") # Wait for receipt receipt = web3.eth.wait_for_transaction_receipt(tx_hash) # Get updated value new_value = contract.functions.storedNumber().call() print('New stored value:', new_value) return receipt except Exception as error: print('Update failed:', error) raise error if __name__ == "__main__": # Example usage import asyncio config = { "rpc_url": "INSERT_RPC_URL", "private_key": "INSERT_PRIVATE_KEY", "contract_address": "INSERT_CONTRACT_ADDRESS", } asyncio.run(update_storage(config)) ``` ## Where to Go Next Now that you have the foundation for using Web3.py with Polkadot Hub, consider exploring:
- External __Advanced Web3.py Features__ --- Explore Web3.py's documentation:
  • [:octicons-arrow-right-24: Middleware](https://web3py.readthedocs.io/en/stable/middleware.html){target=\_blank}
  • [:octicons-arrow-right-24: Filters & Events](https://web3py.readthedocs.io/en/stable/filters.html){target=\_blank}
  • [:octicons-arrow-right-24: ENS](https://web3py.readthedocs.io/en/stable/ens_overview.html){target=\_blank}
- External __Testing Frameworks__ --- Integrate Web3.py with Python testing frameworks:
  • [:octicons-arrow-right-24: Pytest](https://docs.pytest.org/){target=\_blank}
  • [:octicons-arrow-right-24: Brownie](https://eth-brownie.readthedocs.io/){target=\_blank}
- External __Transaction Management__ --- Learn advanced transaction handling:
  • [:octicons-arrow-right-24: Gas Strategies](https://web3py.readthedocs.io/en/stable/gas_price.html){target=\_blank}
  • [:octicons-arrow-right-24: Account Management](https://web3py.readthedocs.io/en/stable/web3.eth.account.html){target=\_blank}
- External __Building dApps__ --- Combine Web3.py with these frameworks to create full-stack applications:
  • [:octicons-arrow-right-24: Flask](https://flask.palletsprojects.com/){target=\_blank}
  • [:octicons-arrow-right-24: Django](https://www.djangoproject.com/){target=\_blank}
  • [:octicons-arrow-right-24: FastAPI](https://fastapi.tiangolo.com/){target=\_blank}
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/local-development-node/ --- BEGIN CONTENT --- --- title: Local Development Node description: Follow this step-by-step guide to install a Substrate node and ETH-RPC adapter for smart contract development in a local environment. categories: Smart Contracts --- # Local Development Node !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction A local development node provides an isolated blockchain environment where you can deploy, test, and debug smart contracts without incurring network fees or waiting for block confirmations. This guide demonstrates how to set up a local Polkadot SDK-based node with smart contract capabilities. By the end of this guide, you'll have: - A running Substrate node with smart contract support - An ETH-RPC adapter for Ethereum-compatible tooling integration accessible at `http://localhost:8545` ## Prerequisites Before getting started, ensure you have done the following: - Completed the [Install Polkadot SDK Dependencies](/develop/parachains/install-polkadot-sdk/){target=\_blank} guide and successfully installed [Rust](https://www.rust-lang.org/){target=\_blank} and the required packages to set up your development environment ## Install the Substrate Node and ETH-RPC Adapter The Polkadot SDK repository contains both the [Substrate node](https://github.com/paritytech/polkadot-sdk/tree/master/substrate/bin/node){target=\_blank} implementation and the [ETH-RPC adapter](https://github.com/paritytech/polkadot-sdk/tree/master/substrate/frame/revive/rpc){target=\_blank} required for Ethereum compatibility. Start by cloning the repository and navigating to the project directory: ```bash git clone -b {{dependencies.repositories.polkadot_sdk_contracts_node.version}} https://github.com/paritytech/polkadot-sdk.git cd polkadot-sdk ``` Next, you need to compile the two essential components for your development environment. The Substrate node provides the core blockchain runtime with smart contract support, while the ETH-RPC adapter enables Ethereum JSON-RPC compatibility for existing tooling: ```bash cargo build --bin substrate-node --release cargo build -p pallet-revive-eth-rpc --bin eth-rpc --release ``` The compilation process may take some time depending on your system specifications, potentially up to 30 minutes. Release builds are optimized for performance but take longer to compile than debug builds. After successful compilation, you can verify the binaries are available in the `target/release` directory: - **Substrate node path** - `polkadot-sdk/target/release/substrate-node` - **ETH-RPC adapter path** - `polkadot-sdk/target/release/eth-rpc` ## Run the Local Node With the binaries compiled, you can now start your local development environment. The setup requires running two processes. Start the Substrate node first, which will initialize a local blockchain with the `dev` chain specification. This configuration includes `pallet-revive` for smart contract functionality and uses pre-funded development accounts for testing: ```bash ./target/release/substrate-node --dev ``` The node will begin producing blocks immediately and display initialization logs:
./target/release/substrate-node --dev
2025-05-29 10:42:35 Substrate Node 2025-05-29 10:42:35 ✌️ version 3.0.0-dev-38b7581fc04 2025-05-29 10:42:35 ❤️ by Parity Technologies <admin@parity.io>, 2017-2025 2025-05-29 10:42:35 📋 Chain specification: Development 2025-05-29 10:42:35 🏷 Node name: annoyed-aunt-3163 2025-05-29 10:42:35 👤 Role: AUTHORITY 2025-05-29 10:42:35 💾 Database: RocksDb at /var/folders/x0/xl_kjddj3ql3bx7752yr09hc0000gn/T/substrate2P85EF/chains/dev/db/full 2025-05-29 10:42:40 🔨 Initializing Genesis block/state (state: 0xfc05…482e, header-hash: 0x1ae1…b8b4) 2025-05-29 10:42:40 Creating transaction pool txpool_type=SingleState ready=Limit { count: 8192, total_bytes: 20971520 } future=Limit { count: 819, total_bytes: 2097152 } 2025-05-29 10:42:40 👴 Loading GRANDPA authority set from genesis on what appears to be first startup. 2025-05-29 10:42:40 👶 Creating empty BABE epoch changes on what appears to be first startup. 2025-05-29 10:42:40 Using default protocol ID "sup" because none is configured in the chain specs 2025-05-29 10:42:40 🏷 Local node identity is: 12D3KooWAH8fgJv3hce7Yv4yKG4YXQiRqESFu6755DBnfZQU8Znm 2025-05-29 10:42:40 Running libp2p network backend 2025-05-29 10:42:40 local_peer_id=12D3KooWAH8fgJv3hce7Yv4yKG4YXQiRqESFu6755DBnfZQU8Znm 2025-05-29 10:42:40 💻 Operating system: macos 2025-05-29 10:42:40 💻 CPU architecture: aarch64 2025-05-29 10:42:40 📦 Highest known block at #0 2025-05-29 10:42:40 Error binding to '127.0.0.1:9615': Os { code: 48, kind: AddrInUse, message: "Address already in use" } 2025-05-29 10:42:40 Running JSON-RPC server: addr=127.0.0.1:63333,[::1]:63334 2025-05-29 10:42:40 🏁 CPU single core score: 1.24 GiBs, parallelism score: 1.08 GiBs with expected cores: 8 2025-05-29 10:42:40 🏁 Memory score: 49.42 GiBs 2025-05-29 10:42:40 🏁 Disk score (seq. writes): 1.91 GiBs 2025-05-29 10:42:40 🏁 Disk score (rand. writes): 529.02 MiBs 2025-05-29 10:42:40 👶 Starting BABE Authorship worker 2025-05-29 10:42:40 🥩 BEEFY gadget waiting for BEEFY pallet to become available... 2025-05-29 10:42:40 Failed to trigger bootstrap: No known peers. 2025-05-29 10:42:42 🙌 Starting consensus session on top of parent 0x1ae19030b13592b5e6fd326f26efc7b31a4f588303d348ef89ae9ebca613b8b4 (#0) 2025-05-29 10:42:42 🎁 Prepared block for proposing at 1 (5 ms) hash: 0xe046f22307fba58a3bd0cc21b1a057843d4342da8876fd44aba206f124528df0; parent_hash: 0x1ae1…b8b4; end: NoMoreTransactions; extrinsics_count: 2 2025-05-29 10:42:42 🔖 Pre-sealed block for proposal at 1. Hash now 0xa88d36087e7bf8ee59c1b17e0003092accf131ff8353a620410d7283657ce36a, previously 0xe046f22307fba58a3bd0cc21b1a057843d4342da8876fd44aba206f124528df0. 2025-05-29 10:42:42 👶 New epoch 0 launching at block 0xa88d…e36a (block slot 582842054 >= start slot 582842054). 2025-05-29 10:42:42 👶 Next epoch starts at slot 582842254 2025-05-29 10:42:42 🏆 Imported #1 (0x1ae1…b8b4 → 0xa88d…e36a)
For debugging purposes or to monitor low-level operations, you can enable detailed logging by setting environment variables before running the command: ```bash RUST_LOG="error,evm=debug,sc_rpc_server=info,runtime::revive=debug" ./target/release/substrate-node --dev ``` Once the Substrate node is running, open a new terminal window and start the ETH-RPC adapter. This component translates Ethereum JSON-RPC calls into Substrate-compatible requests, allowing you to use familiar Ethereum tools like MetaMask, Hardhat, or Ethers.js: ```bash ./target/release/eth-rpc --dev ``` You should see logs indicating that the adapter is ready to accept connections:
./target/release/eth-rpc --dev
2025-05-29 10:48:48 Running in --dev mode, RPC CORS has been disabled. 2025-05-29 10:48:48 Running in --dev mode, RPC CORS has been disabled. 2025-05-29 10:48:48 🌐 Connecting to node at: ws://127.0.0.1:9944 ... 2025-05-29 10:48:48 🌟 Connected to node at: ws://127.0.0.1:9944 2025-05-29 10:48:48 💾 Using in-memory database, keeping only 256 blocks in memory 2025-05-29 10:48:48 〽️ Prometheus exporter started at 127.0.0.1:9616 2025-05-29 10:48:48 Running JSON-RPC server: addr=127.0.0.1:8545,[::1]:8545 2025-05-29 10:48:48 🔌 Subscribing to new blocks (BestBlocks) 2025-05-29 10:48:48 🔌 Subscribing to new blocks (FinalizedBlocks)
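To confirm the adapter is serving Ethereum JSON-RPC requests, you can query it directly. A minimal check using the built-in `fetch` of Node.js 18+; save it as, say, `check.mjs` and run `node check.mjs`:

```javascript
// Ask the local ETH-RPC adapter for the latest block number.
const response = await fetch('http://localhost:8545', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'eth_blockNumber', params: [] }),
});
console.log(await response.json()); // e.g. { jsonrpc: '2.0', id: 1, result: '0x2a' }
```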
Similar to the Substrate node, you can enable detailed logging for the ETH-RPC adapter to troubleshoot issues: ```bash RUST_LOG="info,eth-rpc=debug" ./target/release/eth-rpc --dev ``` Your local development environment is now active and accessible at `http://localhost:8545`. This endpoint accepts standard Ethereum JSON-RPC requests, enabling seamless integration with existing Ethereum development tools and workflows. You can connect wallets, deploy contracts using Remix or Hardhat, and interact with your smart contracts as you would on any Ethereum-compatible network. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/overview/ --- BEGIN CONTENT --- --- title: Smart Contracts Overview description: Learn about smart contract development capabilities in the Polkadot ecosystem, either by leveraging Polkadot Hub or other alternatives. categories: Basics, Smart Contracts --- # Smart Contracts on Polkadot !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Polkadot offers developers multiple approaches to building and deploying smart contracts within its ecosystem. As a multi-chain network designed for interoperability, Polkadot provides various environments optimized for different developer preferences and application requirements. From native smart contract support on Polkadot Hub to specialized parachain environments, developers can choose the platform that best suits their technical needs while benefiting from Polkadot's shared security model and cross-chain messaging capabilities. Whether you're looking for Ethereum compatibility through EVM-based parachains like [Moonbeam](https://docs.moonbeam.network/){target=\_blank}, [Astar](https://docs.astar.network/){target=\_blank}, and [Acala](https://evmdocs.acala.network/){target=\_blank} or prefer PolkaVM-based development with [ink!](https://use.ink/docs/v6/){target=\_blank}, the Polkadot ecosystem accommodates a range of diverse developers. These guides explore the diverse smart contract options available in the Polkadot ecosystem, helping developers understand the unique advantages of each approach and make informed decisions about where to deploy their decentralized applications. ## Native Smart Contracts ### Introduction Polkadot Hub enables smart contract deployment and execution through PolkaVM, a cutting-edge virtual machine designed specifically for the Polkadot ecosystem. This native integration allows developers to deploy smart contracts directly on Polkadot's system chain while maintaining compatibility with Ethereum development tools and workflows. ### Smart Contract Development The smart contract platform on Polkadot Hub combines _Polkadot's robust security and scalability_ with the extensive Ethereum development ecosystem. Developers can utilize familiar Ethereum libraries for contract interactions and leverage industry-standard development environments for writing and testing smart contracts. Polkadot Hub provides _full Ethereum JSON-RPC API compatibility_, ensuring seamless integration with existing development tools and services. This compatibility enables developers to maintain their preferred workflows while building on Polkadot's native infrastructure. ### Technical Architecture PolkaVM, the underlying virtual machine, utilizes a RISC-V-based register architecture _optimized for the Polkadot ecosystem_. 
This design choice offers several advantages: - Enhanced performance for smart contract execution. - Improved gas efficiency for complex operations. - Native compatibility with Polkadot's runtime environment. - Optimized storage and state management. ### Development Tools and Resources Polkadot Hub supports a comprehensive suite of development tools familiar to Ethereum developers. The platform integrates with popular development frameworks, testing environments, and deployment tools. Key features include: - Contract development in Solidity or Rust. - Support for standard Ethereum development libraries. - Integration with widely used development environments. - Access to blockchain explorers and indexing solutions. - Compatibility with contract monitoring and management tools. ### Cross-Chain Capabilities Smart contracts deployed on Polkadot Hub can leverage Polkadot's [cross-consensus messaging (XCM) protocol](/develop/interoperability/intro-to-xcm/){target=\_blank} to seamlessly _transfer tokens and call functions on other blockchain networks_ within the Polkadot ecosystem, all without complex bridging infrastructure or third-party solutions. For further references, check the [Interoperability](/develop/interoperability/){target=\_blank} section. ### Use Cases Polkadot Hub's smart contract platform is suitable for a wide range of applications: - DeFi protocols leveraging _cross-chain capabilities_. - NFT platforms utilizing Polkadot's native token standards. - Governance systems integrated with Polkadot's democracy mechanisms. - Cross-chain bridges and asset management solutions. ## Other Smart Contract Environments Beyond Polkadot Hub's native PolkaVM support, the ecosystem offers two main alternatives for smart contract development: - **EVM-compatible parachains**: Provide access to Ethereum's extensive developer ecosystem, smart contract portability, and established tooling like Hardhat, Remix, Foundry, and OpenZeppelin. The main options include Moonbeam (the first full Ethereum-compatible parachain serving as an interoperability hub), Astar (featuring dual VM support for both EVM and WebAssembly contracts), and Acala (DeFi-focused with enhanced Acala EVM+ offering advanced DeFi primitives). - **Rust (ink!)**: ink! is a Rust-based framework that can compile to PolkaVM. It uses [`#[ink(...)]`](https://use.ink/docs/v6/macros-attributes/){target=\_blank} attribute macros to create Polkadot SDK-compatible PolkaVM bytecode, offering strong memory safety from Rust, an advanced type system, high-performance PolkaVM execution, and platform independence with sandboxed security. Each environment provides unique advantages based on developer preferences and application requirements. ## Where to Go Next Developers can use their existing Ethereum development tools and connect to Polkadot Hub's RPC endpoints. The platform's Ethereum compatibility layer ensures a smooth transition for teams already building on Ethereum-compatible chains. Subsequent sections of this guide provide detailed information about specific development tools, advanced features, and best practices for building on Polkadot Hub.
- Guide __Libraries__ --- Explore essential libraries to optimize smart contract development and interaction. [:octicons-arrow-right-24: Reference](/develop/smart-contracts/libraries/) - Guide __Dev Environments__ --- Set up your development environment for seamless contract deployment and testing. [:octicons-arrow-right-24: Reference](/develop/smart-contracts/dev-environments/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/precompiles/ --- BEGIN CONTENT --- --- title: Advanced Functionalities via Precompiles description: Explores how Polkadot integrates precompiles to run essential functions natively, improving the speed and efficiency of smart contracts on the Hub. --- # Advanced Functionalities via Precompiles !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Precompiles serve a dual purpose in the Polkadot ecosystem: they not only enable high-performance smart contracts by providing native, optimized implementations of frequently used functions but will also eventually act as critical bridges, allowing contracts to interact with core platform capabilities. This article explores how Polkadot leverages precompiles within the Revive pallet to enhance efficiency and how they will extend functionality for developers in the future, including planned access to native features like Cross-Consensus Messaging (XCM). ## What are Precompiles? Precompiles are special contract implementations that run directly at the runtime level rather than as on-chain PolkaVM contracts. In typical EVM environments, precompiles provide essential cryptographic and utility functionality at addresses that start with specific patterns. Revive follows this design pattern but with its own implementation optimized for PolkaVM. ```mermaid flowchart LR User(["User"]) DApp["DApp/Contract"] PolkaEVM["ETH RPC Adapter"] Precompiles["Precompiles"] Runtime["PolkaVM"] User --> DApp DApp -->|"Call\nfunction"| PolkaEVM PolkaEVM -->|"Detect\nprecompile\naddress"| Precompiles Precompiles -->|"Execute\noptimized\nnative code"| Runtime subgraph "Polkadot Hub" PolkaEVM Precompiles Runtime end classDef edgeLabel background:#eceff3; ``` ## Standard Precompiles in Polkadot Hub Revive implements the standard set of Ethereum precompiles: | Contract Name | Address (Last Byte) | Description | | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-----------------: | :---------------------------------------------------------------------------------------------: | | [ECRecover](https://github.com/paritytech/polkadot-sdk/tree/polkadot-stable2503/substrate/frame/revive/src/pure_precompiles/ecrecover.rs){target=\_blank} | 0x01 | Recovers the public key associated with a signature | | [Sha256](https://github.com/paritytech/polkadot-sdk/tree/polkadot-stable2503/substrate/frame/revive/src/pure_precompiles/sha256.rs){target=\_blank} | 0x02 | Implements the SHA-256 hash function | | [Ripemd160](https://github.com/paritytech/polkadot-sdk/tree/polkadot-stable2503/substrate/frame/revive/src/pure_precompiles/ripemd160.rs){target=\_blank} | 0x03 | Implements the RIPEMD-160 hash function | | [Identity](https://github.com/paritytech/polkadot-sdk/tree/polkadot-stable2503/substrate/frame/revive/src/pure_precompiles/identity.rs){target=\_blank} | 0x04 | Data copy function (returns input as output) | | [Modexp](https://github.com/paritytech/polkadot-sdk/tree/polkadot-stable2503/substrate/frame/revive/src/pure_precompiles/modexp.rs){target=\_blank} | 0x05 | Modular exponentiation | | [Bn128Add](https://github.com/paritytech/polkadot-sdk/tree/polkadot-stable2503/substrate/frame/revive/src/pure_precompiles/bn128.rs#L27){target=\_blank} | 0x06 | 
Addition on the [alt_bn128 curve](https://eips.ethereum.org/EIPS/eip-196){target=\_blank} | | [Bn128Mul](https://github.com/paritytech/polkadot-sdk/tree/polkadot-stable2503/substrate/frame/revive/src/pure_precompiles/bn128.rs#L48){target=\_blank} | 0x07 | Multiplication on the [alt_bn128 curve](https://eips.ethereum.org/EIPS/eip-196){target=\_blank} | | [Bn128Pairing](https://github.com/paritytech/polkadot-sdk/tree/polkadot-stable2503/substrate/frame/revive/src/pure_precompiles/bn128.rs#L69){target=\_blank} | 0x08 | Pairing check on the alt_bn128 curve | | [Blake2F](https://github.com/paritytech/polkadot-sdk/tree/polkadot-stable2503/substrate/frame/revive/src/pure_precompiles/blake2f.rs){target=\_blank} | 0x09 | Blake2 compression function F | ## Conclusion For smart contract developers, precompiles offer a powerful way to access both low-level, high-performance operations and core platform capabilities within the smart contract execution context. Through Revive, Polkadot exposes these native functionalities, allowing developers to build faster, more efficient contracts that can take full advantage of the Polkadot ecosystem. Understanding and utilizing precompiles can unlock advanced functionality and performance gains, making them an essential tool for anyone building on the Polkadot Hub. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/precompiles/interact-with-precompiles/ --- BEGIN CONTENT --- --- title: Interact with Precompiles description: Learn how to interact with Polkadot Hub’s precompiles from Solidity to access native, low-level functions like hashing, pairing, EC ops, etc. categories: Smart Contracts --- # Interact with Precompiles !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Precompiles offer Polkadot Hub developers access to high-performance native functions directly from their smart contracts. Each precompile has a specific address and accepts a particular input data format. When called correctly, they execute optimized, native implementations of commonly used functions much more efficiently than equivalent contract-based implementations. This guide demonstrates how to interact with each standard precompile available in Polkadot Hub through Solidity smart contracts. ## Basic Precompile Interaction Pattern All precompiles follow a similar interaction pattern: ```solidity // Generic pattern for calling precompiles function callPrecompile(address precompileAddress, bytes memory input) internal returns (bool success, bytes memory result) { // Direct low-level call to the precompile address (success, result) = precompileAddress.call(input); // Ensure the call was successful require(success, "Precompile call failed"); return (success, result); } ``` See the [`precompiles-hardhat`](https://github.com/polkadot-developers/polkavm-hardhat-examples/tree/v0.0.3/precompiles-hardhat){target=\_blank} repository for examples of all the precompiles. It contains example contracts and test files demonstrating how to interact with each precompile in Polkadot Hub. Now, let's explore how to use each precompile available in Polkadot Hub. ## ECRecover (0x01) ECRecover recovers an Ethereum address associated with the public key used to sign a message.
```solidity title="ECRecover.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ECRecoverExample { event ECRecovered(bytes result); // Address of the ECRecover precompile address constant EC_RECOVER_ADDRESS = address(0x01); bytes public result; function callECRecover(bytes calldata input) public { bool success; bytes memory resultInMemory; (success, resultInMemory) = EC_RECOVER_ADDRESS.call{value: 0}(input); if (success) { emit ECRecovered(resultInMemory); } result = resultInMemory; } function getRecoveredAddress() public view returns (address) { require(result.length == 32, "Invalid result length"); return address(uint160(uint256(bytes32(result)))); } } ``` To interact with the ECRecover precompile, you can deploy the `ECRecoverExample` contract in [Remix](/develop/smart-contracts/dev-environments/remix){target=\_blank} or any Solidity-compatible environment. The `callECRecover` function takes a 128-byte input combining the message `hash`, `v`, `r`, and `s` signature values. Check this [test file](https://github.com/polkadot-developers/polkavm-hardhat-examples/blob/v0.0.3/precompiles-hardhat/test/ECRecover.js){target=\_blank} that shows how to format this input and verify that the recovered address matches the expected result. ## SHA-256 (0x02) The SHA-256 precompile computes the SHA-256 hash of the input data. ```solidity title="SHA256.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract SHA256Example { event SHA256Called(bytes result); // Address of the SHA256 precompile address constant SHA256_PRECOMPILE = address(0x02); bytes public result; function callH256(bytes calldata input) public { bool success; bytes memory resultInMemory; (success, resultInMemory) = SHA256_PRECOMPILE.call{value: 0}(input); if (success) { emit SHA256Called(resultInMemory); } result = resultInMemory; } } ``` To use it, you can deploy the `SHA256Example` contract in [Remix](/develop/smart-contracts/dev-environments/remix){target=\_blank} or any Solidity-compatible environment and call `callH256` with arbitrary bytes. This [test file](https://github.com/polkadot-developers/polkavm-hardhat-examples/blob/v0.0.3/precompiles-hardhat/test/SHA256.js){target=\_blank} shows how to pass a UTF-8 string, hash it using the precompile, and compare it with the expected hash from the [crypto-js](https://www.npmjs.com/package/crypto-js){target=\_blank} package. ## RIPEMD-160 (0x03) The RIPEMD-160 precompile computes the RIPEMD-160 hash of the input data. ```solidity title="RIPEMD160.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract RIPEMD160Example { // RIPEMD-160 precompile address address constant RIPEMD160_PRECOMPILE = address(0x03); bytes32 public result; event RIPEMD160Called(bytes32 result); function calculateRIPEMD160(bytes calldata input) public returns (bytes32) { (bool success, bytes memory returnData) = RIPEMD160_PRECOMPILE.call( input ); require(success, "RIPEMD-160 precompile call failed"); // Load the full 32-byte return word (the 20-byte hash, right-padded) bytes32 fullHash; assembly { fullHash := mload(add(returnData, 32)) } result = fullHash; emit RIPEMD160Called(fullHash); return fullHash; } } ``` To use it, you can deploy the `RIPEMD160Example` contract in [Remix](/develop/smart-contracts/dev-environments/remix){target=\_blank} or any Solidity-compatible environment and call `calculateRIPEMD160` with arbitrary bytes.
This [test file](https://github.com/polkadot-developers/polkavm-hardhat-examples/blob/v0.0.3/precompiles-hardhat/test/RIPEMD160.js){target=\_blank} shows how to hash a UTF-8 string, pad the 20-byte result to 32 bytes, and verify it against the expected output. ## Identity (Data Copy) (0x04) The Identity precompile simply returns the input data as output. While seemingly trivial, it can be useful for testing and certain specialized scenarios. ```solidity title="Identity.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract IdentityExample { event IdentityCalled(bytes result); // Address of the Identity precompile address constant IDENTITY_PRECOMPILE = address(0x04); bytes public result; function callIdentity(bytes calldata input) public { bool success; bytes memory resultInMemory; (success, resultInMemory) = IDENTITY_PRECOMPILE.call(input); if (success) { emit IdentityCalled(resultInMemory); } result = resultInMemory; } } ``` To use it, you can deploy the `IdentityExample` contract in [Remix](/develop/smart-contracts/dev-environments/remix){target=\_blank} or any Solidity-compatible environment and call `callIdentity` with arbitrary bytes. This [test file](https://github.com/polkadot-developers/polkavm-hardhat-examples/blob/v0.0.3/precompiles-hardhat/test/Identity.js){target=\_blank} shows how to pass input data and verify that the precompile returns it unchanged. ## Modular Exponentiation (0x05) The ModExp precompile performs modular exponentiation, which is an operation commonly needed in cryptographic algorithms. ```solidity title="ModExp.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract ModExpExample { address constant MODEXP_ADDRESS = address(0x05); function modularExponentiation( bytes memory base, bytes memory exponent, bytes memory modulus ) public view returns (bytes memory) { bytes memory input = abi.encodePacked( toBytes32(base.length), toBytes32(exponent.length), toBytes32(modulus.length), base, exponent, modulus ); (bool success, bytes memory result) = MODEXP_ADDRESS.staticcall(input); require(success, "ModExp precompile call failed"); return result; } function toBytes32(uint256 value) internal pure returns (bytes32) { return bytes32(value); } } ``` To use it, you can deploy the `ModExpExample` contract in [Remix](/develop/smart-contracts/dev-environments/remix){target=\_blank} or any Solidity-compatible environment and call `modularExponentiation` with encoded `base`, `exponent`, and `modulus` bytes. This [test file](https://github.com/polkadot-developers/polkavm-hardhat-examples/blob/v0.0.3/precompiles-hardhat/test/ModExp.js){target=\_blank} shows how to test modular exponentiation like (4 ** 13) % 497 = 445. ## BN128 Addition (0x06) The BN128Add precompile performs addition on the alt_bn128 elliptic curve, which is essential for zk-SNARK operations. 
```solidity title="BN128Add.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.20; contract BN128AddExample { address constant BN128_ADD_PRECOMPILE = address(0x06); event BN128Added(uint256 x3, uint256 y3); uint256 public resultX; uint256 public resultY; function callBN128Add(uint256 x1, uint256 y1, uint256 x2, uint256 y2) public { bytes memory input = abi.encodePacked( bytes32(x1), bytes32(y1), bytes32(x2), bytes32(y2) ); bool success; bytes memory output; (success, output) = BN128_ADD_PRECOMPILE.call{value: 0}(input); require(success, "BN128Add precompile call failed"); require(output.length == 64, "Invalid output length"); (uint256 x3, uint256 y3) = abi.decode(output, (uint256, uint256)); resultX = x3; resultY = y3; emit BN128Added(x3, y3); } } ``` To use it, you can deploy the `BN128AddExample` contract in [Remix](/develop/smart-contracts/dev-environments/remix){target=\_blank} or any Solidity-compatible environment and call `callBN128Add` with valid `alt_bn128` points. This [test file](https://github.com/polkadot-developers/polkavm-hardhat-examples/blob/v0.0.3/precompiles-hardhat/test/BN128Add.js){target=\_blank} demonstrates a valid curve addition and checks the result against known expected values. ## BN128 Scalar Multiplication (0x07) The BN128Mul precompile performs scalar multiplication on the alt_bn128 curve. ```solidity title="BN128Mul.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract BN128MulExample { // Precompile address for BN128Mul address constant BN128_MUL_ADDRESS = address(0x07); bytes public result; // Performs scalar multiplication of a point on the alt_bn128 curve function bn128ScalarMul(uint256 x1, uint256 y1, uint256 scalar) public { // Format: [x, y, scalar] - each 32 bytes bytes memory input = abi.encodePacked( bytes32(x1), bytes32(y1), bytes32(scalar) ); (bool success, bytes memory resultInMemory) = BN128_MUL_ADDRESS.call{ value: 0 }(input); require(success, "BN128Mul precompile call failed"); result = resultInMemory; } // Helper to decode result from `result` storage function getResult() public view returns (uint256 x2, uint256 y2) { bytes memory tempResult = result; require(tempResult.length >= 64, "Invalid result length"); assembly { x2 := mload(add(tempResult, 32)) y2 := mload(add(tempResult, 64)) } } } ``` To use it, deploy `BN128MulExample` in [Remix](/develop/smart-contracts/dev-environments/remix){target=\_blank} or any Solidity-compatible environment and call `bn128ScalarMul` with a valid point and scalar. This [test file](https://github.com/polkadot-developers/polkavm-hardhat-examples/blob/v0.0.3/precompiles-hardhat/test/BN128Mul.js){target=\_blank} shows how to test the operation and verify the expected scalar multiplication result on `alt_bn128`. ## BN128 Pairing Check (0x08) The BN128Pairing precompile verifies a pairing equation on the alt_bn128 curve, which is critical for zk-SNARK verification. 
```solidity title="BN128Pairing.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract BN128PairingExample { // Precompile address for BN128Pairing address constant BN128_PAIRING_ADDRESS = address(0x08); bytes public result; // Performs a pairing check on the alt_bn128 curve function bn128Pairing(bytes memory input) public { // Call the precompile (bool success, bytes memory resultInMemory) = BN128_PAIRING_ADDRESS .call{value: 0}(input); require(success, "BN128Pairing precompile call failed"); result = resultInMemory; } // Helper function to decode the result from `result` storage function getResult() public view returns (bool isValid) { bytes memory tempResult = result; require(tempResult.length == 32, "Invalid result length"); uint256 output; assembly { output := mload(add(tempResult, 32)) } isValid = (output == 1); } } ``` You can deploy `BN128PairingExample` in [Remix](/develop/smart-contracts/dev-environments/remix){target=\_blank} or your preferred environment. This [test file](https://github.com/polkadot-developers/polkavm-hardhat-examples/blob/v0.0.3/precompiles-hardhat/test/BN128Pairing.js){target=\_blank} contains working examples of valid pairing checks. ## Blake2F (0x09) The Blake2F precompile performs the Blake2 compression function F, which is the core of the Blake2 hash function. ```solidity title="Blake2F.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract Blake2FExample { // Precompile address for Blake2F address constant BLAKE2F_ADDRESS = address(0x09); bytes public result; function blake2F(bytes memory input) public { // Input must be exactly 213 bytes require(input.length == 213, "Invalid input length - must be 213 bytes"); // Call the precompile (bool success, bytes memory resultInMemory) = BLAKE2F_ADDRESS.call{ value: 0 }(input); require(success, "Blake2F precompile call failed"); result = resultInMemory; } // Helper function to decode the result from `result` storage function getResult() public view returns (bytes32[8] memory output) { bytes memory tempResult = result; require(tempResult.length == 64, "Invalid result length"); for (uint i = 0; i < 8; i++) { assembly { mstore(add(output, mul(32, i)), mload(add(add(tempResult, 32), mul(32, i)))) } } } // Helper function to create Blake2F input from parameters function createBlake2FInput( uint32 rounds, bytes32[8] memory h, bytes32[16] memory m, bytes8[2] memory t, bool f ) public pure returns (bytes memory) { // Start with rounds (4 bytes, big-endian) bytes memory input = abi.encodePacked(rounds); // Add state vector h (8 * 32 = 256 bytes) for (uint i = 0; i < 8; i++) { input = abi.encodePacked(input, h[i]); } // Add message block m (16 * 32 = 512 bytes, but we need to convert to 16 * 8 = 128 bytes) // Blake2F expects 64-bit words in little-endian format for (uint i = 0; i < 16; i++) { // Take only the first 8 bytes of each bytes32 and reverse for little-endian bytes8 word = bytes8(m[i]); input = abi.encodePacked(input, word); } // Add offset counters t (2 * 8 = 16 bytes) input = abi.encodePacked(input, t[0], t[1]); // Add final block flag (1 byte) input = abi.encodePacked(input, f ?
bytes1(0x01) : bytes1(0x00)); return input; } // Simplified function that works with raw hex input function blake2FFromHex(string memory hexInput) public { bytes memory input = hexStringToBytes(hexInput); blake2F(input); } // Helper function to convert hex string to bytes function hexStringToBytes(string memory hexString) public pure returns (bytes memory) { bytes memory hexBytes = bytes(hexString); require(hexBytes.length % 2 == 0, "Invalid hex string length"); bytes memory result = new bytes(hexBytes.length / 2); for (uint i = 0; i < hexBytes.length / 2; i++) { result[i] = bytes1( (hexCharToByte(hexBytes[2 * i]) << 4) | hexCharToByte(hexBytes[2 * i + 1]) ); } return result; } function hexCharToByte(bytes1 char) internal pure returns (uint8) { uint8 c = uint8(char); if (c >= 48 && c <= 57) return c - 48; // 0-9 if (c >= 65 && c <= 70) return c - 55; // A-F if (c >= 97 && c <= 102) return c - 87; // a-f revert("Invalid hex character"); } } ``` To use it, deploy `Blake2FExample` in [Remix](/develop/smart-contracts/dev-environments/remix){target=\_blank} or any Solidity-compatible environment and call `blake2F` (or `blake2FFromHex` for raw hex input) with the properly formatted input parameters for rounds, state vector, message block, offset counters, and final block flag. This [test file](https://github.com/polkadot-developers/polkavm-hardhat-examples/blob/v0.0.3/precompiles-hardhat/test/Blake2.js){target=\_blank} demonstrates how to perform Blake2 compression with different rounds and verify the correctness of the output against known test vectors. ## Conclusion Precompiles in Polkadot Hub provide efficient, native implementations of cryptographic functions and other commonly used operations. By understanding how to interact with these precompiles from your Solidity contracts, you can build more efficient and feature-rich applications on the Polkadot ecosystem. The examples provided in this guide demonstrate the basic patterns for interacting with each precompile. Developers can adapt these patterns to their specific use cases, leveraging the performance benefits of native implementations while maintaining the flexibility of smart contract development. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/precompiles/xcm-precompile/ --- BEGIN CONTENT --- --- title: Interact with the XCM Precompile description: Learn how to use the XCM precompile to send cross-chain messages, execute XCM instructions, and estimate costs from your smart contracts. categories: Smart Contracts --- # XCM Precompile ## Introduction The [XCM (Cross-Consensus Message)](/develop/interoperability/intro-to-xcm){target=\_blank} precompile enables Polkadot Hub developers to access XCM functionality directly from their smart contracts using a Solidity interface. Located at the fixed address `0x00000000000000000000000000000000000a0000`, the XCM precompile offers three primary functions: - **`execute`**: for local XCM execution - **`send`**: for cross-chain message transmission - **`weighMessage`**: for cost estimation This guide demonstrates how to interact with the XCM precompile through Solidity smart contracts using [Remix IDE](/develop/smart-contracts/dev-environments/remix){target=\_blank}. !!!note The XCM precompile provides the barebones XCM functionality. While it offers a lot of flexibility, it doesn't provide abstractions that hide XCM details; those must be built on top.
## Precompile Interface The XCM precompile implements the `IXcm` interface, which defines the structure for interacting with XCM functionality. The source code for the interface is as follows: ```solidity title="IXcm.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.20; /// @dev The on-chain address of the XCM (Cross-Consensus Messaging) precompile. address constant XCM_PRECOMPILE_ADDRESS = address(0xA0000); /// @title XCM Precompile Interface /// @notice A low-level interface for interacting with `pallet_xcm`. /// It forwards calls directly to the corresponding dispatchable functions, /// providing access to XCM execution and message passing. /// @dev Documentation: /// @dev - XCM: https://docs.polkadot.com/develop/interoperability /// @dev - SCALE codec: https://docs.polkadot.com/polkadot-protocol/parachain-basics/data-encoding /// @dev - Weights: https://docs.polkadot.com/polkadot-protocol/parachain-basics/blocks-transactions-fees/fees/#transactions-weights-and-fees interface IXcm { /// @notice Weight v2 used for measurement for an XCM execution struct Weight { /// @custom:property The computational time used to execute some logic based on reference hardware. uint64 refTime; /// @custom:property The size of the proof needed to execute some logic. uint64 proofSize; } /// @notice Executes an XCM message locally on the current chain with the caller's origin. /// @dev Internally calls `pallet_xcm::execute`. /// @param message A SCALE-encoded Versioned XCM message. /// @param weight The maximum allowed `Weight` for execution. /// @dev Call @custom:function weighMessage(message) to ensure sufficient weight allocation. function execute(bytes calldata message, Weight calldata weight) external; /// @notice Sends an XCM message to another parachain or consensus system. /// @dev Internally calls `pallet_xcm::send`. /// @param destination SCALE-encoded destination MultiLocation. /// @param message SCALE-encoded Versioned XCM message. function send(bytes calldata destination, bytes calldata message) external; /// @notice Estimates the `Weight` required to execute a given XCM message. /// @param message SCALE-encoded Versioned XCM message to analyze. /// @return weight Struct containing estimated `refTime` and `proofSize`. function weighMessage(bytes calldata message) external view returns (Weight memory weight); } ``` The interface defines a `Weight` struct that represents the computational cost of XCM operations. Weight has two components: - **`refTime`**: computational time on reference hardware - **`proofSize`**: the size of the proof required for execution All XCM messages must be encoded using the [SCALE codec](/polkadot-protocol/parachain-basics/data-encoding/#data-encoding){target=\_blank}, Polkadot's standard serialization format. For further information, check the [`precompiles/IXcm.sol`](https://github.com/paritytech/polkadot-sdk/blob/cb629d46ebf00aa65624013a61f9c69ebf02b0b4/polkadot/xcm/pallet-xcm/src/precompiles/IXcm.sol){target=\_blank} file present in `pallet-xcm`. ## Interact with the XCM Precompile To interact with the XCM precompile, you can use the precompile interface directly in [Remix IDE](/develop/smart-contracts/dev-environments/remix/){target=\_blank}: 1. Create a new file called `IXcm.sol` in Remix. 2. Copy and paste the `IXcm` interface code into the file. 3. Compile the interface by selecting the **Compile** button or pressing **Ctrl+S**: ![](/images/develop/smart-contracts/precompiles/xcm-precompile/xcm-precompile-01.webp) 4.
In the **Deploy & Run Transactions** tab, select the `IXcm` interface from the contract dropdown. 5. Enter the precompile address `0x00000000000000000000000000000000000a0000` in the **At Address** input field. 6. Select the **At Address** button to connect to the precompile. ![](/images/develop/smart-contracts/precompiles/xcm-precompile/xcm-precompile-02.webp) 7. Once connected, you can use the Remix interface to interact with the XCM precompile's `execute`, `send`, and `weighMessage` functions. ![](/images/develop/smart-contracts/precompiles/xcm-precompile/xcm-precompile-03.webp) The main entrypoint of the precompile is the `execute` function. However, it's necessary to first call `weighMessage` to fill in the required parameters. ### Weigh a Message The `weighMessage` function estimates the computational cost required to execute an XCM message. This estimate is crucial for understanding the resources needed before actually executing or sending a message. To test this functionality in Remix, you can call `weighMessage` with a SCALE-encoded XCM message. For testing, you can use the following encoded XCM message: ```text title="encoded-xcm-message-example" 0x050c000401000003008c86471301000003008c8647000d010101000000010100368e8759910dab756d344995f1d3c79374ca8f70066d3a709e48029f6bf0ee7e ``` ![](/images/develop/smart-contracts/precompiles/xcm-precompile/xcm-precompile-04.webp) This encoded message represents a sequence of XCM instructions: - **[Withdraw Asset](https://github.com/polkadot-fellows/xcm-format?tab=readme-ov-file#withdrawasset){target=\_blank}**: This instruction removes assets from the local chain's sovereign account or the caller's account, making them available for use in subsequent XCM instructions. - **[Buy Execution](https://github.com/polkadot-fellows/xcm-format?tab=readme-ov-file#buyexecution){target=\_blank}**: This instruction purchases execution time on the destination chain using the withdrawn assets, ensuring the message can be processed. - **[Deposit Asset](https://github.com/polkadot-fellows/xcm-format?tab=readme-ov-file#depositasset){target=\_blank}**: This instruction deposits the remaining assets into a specified account on the destination chain after execution costs have been deducted. This encoded message is provided as an example. You can craft your own XCM message tailored to your specific use case as needed. The function returns a `Weight` struct containing `refTime` and `proofSize` values, which indicate the estimated computational cost of executing this message. If successful, after calling the `weighMessage` function, you should see the `refTime` and `proofSize` of the message: ![](/images/develop/smart-contracts/precompiles/xcm-precompile/xcm-precompile-05.webp) !!!note You can find many more examples of XCMs in this [gist](https://gist.github.com/franciscoaguirre/a6dea0c55e81faba65bedf700033a1a2){target=\_blank}, which connects to the Polkadot Hub TestNet. ### Execute a Message The `execute` function runs an XCM message locally using the caller's origin. This function is the main entrypoint to cross-chain interactions. Follow these steps to execute a message: 1. Call `weighMessage` with your message to get the required weight. 2. Pass the same message bytes and the weight obtained from the previous step to `execute`. For example, using the same message from the weighing example, you would call `execute` with: - `message`: The encoded XCM message bytes. - `weight`: The `Weight` struct returned from `weighMessage`.
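If you prefer to drive both calls from a contract rather than the Remix UI, the following minimal sketch shows the same flow in Solidity. It assumes the `IXcm.sol` file created earlier sits alongside the contract (the import also brings in the `XCM_PRECOMPILE_ADDRESS` constant declared in that file); the contract and function names are illustrative only:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// The interface file from the Precompile Interface section above
import "./IXcm.sol";

contract XcmQuickExecute {
    /// @notice Weighs a SCALE-encoded versioned XCM message, then executes it
    /// locally using the estimated weight as the execution limit.
    function weighAndExecute(bytes calldata message) external {
        IXcm xcm = IXcm(XCM_PRECOMPILE_ADDRESS);
        // Estimate the weight the message requires...
        IXcm.Weight memory weight = xcm.weighMessage(message);
        // ...then execute the message with that weight allowance.
        xcm.execute(message, weight);
    }
}
```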
You can use the [papi console](https://dev.papi.how/extrinsics#networkId=localhost&endpoint=wss%3A%2F%2Ftestnet-passet-hub.polkadot.io&data=0x1f03050c000401000003008c86471301000003008c8647000d010101000000010100368e8759910dab756d344995f1d3c79374ca8f70066d3a709e48029f6bf0ee7e0750c61e2901daad0600){target=\_blank} to examine the complete extrinsic structure for this operation. 3. On Remix, click on the **Transact** button to execute the XCM message: ![](/images/develop/smart-contracts/precompiles/xcm-precompile/xcm-precompile-06.webp) If successful, you will see the following output in the Remix terminal: ![](/images/develop/smart-contracts/precompiles/xcm-precompile/xcm-precompile-07.webp) Additionally, you can verify that the execution of this specific message was successful by checking that the beneficiary account associated with the XCM message has received the funds accordingly. ### Send a Message While most cross-chain operations can be performed via `execute`, `send` is sometimes necessary, for example, when opening HRMP channels. To send a message: 1. Prepare your destination location encoded in XCM format. 2. Prepare your XCM message (similar to the execute example). 3. Call `send` with both parameters. The destination parameter must be encoded according to XCM's location format, specifying the target parachain or consensus system. The message parameter contains the XCM instructions to be executed on the destination chain. Unlike `execute`, the `send` function doesn't require a weight parameter since the destination chain will handle execution costs according to its fee structure. ## Cross Contract Calls Beyond direct interaction and wrapper contracts, you can integrate XCM functionality directly into your existing smart contracts by inheriting from or importing the `IXcm` interface, as the sketch in the Execute a Message section illustrates. This approach enables you to embed cross-chain capabilities into your application logic seamlessly. Whether you're building DeFi protocols, governance systems, or any application requiring cross-chain coordination, you can incorporate XCM calls directly within your contract's functions. ## Conclusion The XCM precompile provides a simple yet powerful interface for cross-chain interactions within the Polkadot ecosystem and beyond. By composing and executing XCM programs, developers can build cross-chain applications that leverage the full potential of Polkadot's interoperability features. ## Next Steps Head to the Polkadot Hub TestNet and start experimenting with the precompile using Hardhat or Foundry. You can use PAPI to build XCM programs and test them with Chopsticks. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/smart-contracts/wallets/ --- BEGIN CONTENT --- --- title: Wallets for Polkadot Hub description: Comprehensive guide to connecting and managing wallets for Polkadot Hub, covering step-by-step instructions for interacting with the ecosystem. categories: Smart Contracts, Tooling --- # Wallets for Polkadot Hub !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Connecting a compatible wallet is the first essential step for interacting with the Polkadot Hub ecosystem. This guide explores wallet options that support both Substrate and Ethereum compatible layers, enabling transactions and smart contract interactions.
Whether you're a developer testing on Polkadot Hub or a user accessing the MainNet, understanding wallet configuration is crucial for accessing the full range of Polkadot Hub's capabilities. ## Connect Your Wallet ### MetaMask [MetaMask](https://metamask.io/){target=\_blank} is a popular wallet for interacting with Ethereum-compatible chains. It allows users to connect to test networks that support Ethereum-based smart contracts. However, it's important to emphasize that MetaMask primarily facilitates interactions with smart contracts, giving users access to various chain functionalities. To get started with MetaMask, you need to install the [MetaMask extension](https://metamask.io/download/){target=\_blank} and add it to the browser. Once you install MetaMask, you can set up a new wallet and securely store your seed phrase. This phrase is crucial for recovery in case you lose access. For example, to connect to the Polkadot Hub TestNet via MetaMask, you need to follow these steps: 1. Open the MetaMask extension and click on the network icon to switch to the Polkadot Hub TestNet. ![](/images/develop/smart-contracts/wallets/wallets-1.webp){: .browser-extension} 2. Click on the **Add a custom network** button. ![](/images/develop/smart-contracts/wallets/wallets-2.webp){: .browser-extension} 3. Complete the necessary fields, then click the **Save** button (refer to the [Networks](/develop/smart-contracts/connect-to-polkadot#networks-details){target=\_blank} section for copy and paste parameters). ![](/images/develop/smart-contracts/wallets/wallets-3.webp){: .browser-extension} 4. Click on **Polkadot Hub TestNet** to switch the network. ![](/images/develop/smart-contracts/wallets/wallets-4.webp){: .browser-extension} The steps in the preceding section can be used to connect to any chain by modifying the network specification and endpoint parameters. ### SubWallet [SubWallet](https://www.subwallet.app/){target=\_blank} is a popular non-custodial wallet solution for Polkadot and Ethereum ecosystems. It offers seamless integration with Polkadot SDK-based networks while maintaining Ethereum compatibility, making the wallet an ideal choice for users and developers to interact with Polkadot Hub. SubWallet now fully supports the [Polkadot Hub TestNet](/polkadot-protocol/smart-contract-basics/networks/#test-networks){target=\_blank} where developers can deploy and interact with Ethereum-compatible, Solidity smart contracts. You can easily view and manage your Paseo native token (PAS) using the Ethereum RPC endpoint (Passet Hub EVM) or the Substrate node RPC endpoint (passet-hub). ??? code "Polkadot Hub TestNet" You can see support here for Polkadot Hub's TestNet. The **Passet Hub EVM** network uses an ETH RPC endpoint, and the **passet-hub** uses a Substrate endpoint. The ETH RPC endpoint will let you send transactions that follow an ETH format, while the Substrate endpoint will follow a Substrate transaction format. Note the PAS token, which is the native token of the Polkadot Hub TestNet. ![](/images/develop/smart-contracts/wallets/subwallet-PAS.webp){: .browser-extension} To connect to Polkadot Hub TestNet using SubWallet, follow these steps: 1. 
Install the [SubWallet browser extension](https://chromewebstore.google.com/detail/subwallet-polkadot-wallet/onhogfjeacnfoofkfgppdlbmlmnplgbn?hl=en){target=\_blank} and set up your wallet by following the on-screen instructions, or refer to our [step-by-step guide](https://docs.subwallet.app/main/extension-user-guide/getting-started/install-subwallet){target=\_blank} for assistance. 2. After setting up your wallet, click the List icon at the top left corner of the extension window to open **Settings**. ![](/images/develop/smart-contracts/wallets/subwallet-01.webp){: .browser-extension} 3. Scroll down and select **Manage networks**. ![](/images/develop/smart-contracts/wallets/subwallet-02.webp){: .browser-extension} 4. In the **Manage networks** screen, either scroll down or use the search bar to find the networks. Once done, enable the toggle next to the network name. ![](/images/develop/smart-contracts/wallets/subwallet-03.webp){: .browser-extension} You are now ready to use SubWallet to interact with [Polkadot Hub TestNet](/develop/smart-contracts/connect-to-polkadot/#networks-details){target=\_blank} seamlessly! ![](/images/develop/smart-contracts/wallets/subwallet-04.webp){: .browser-extension} ### Talisman [Talisman](https://talisman.xyz/){target=\_blank} is a specialized wallet for the Polkadot ecosystem that supports both Substrate and EVM accounts, making it an excellent choice for Polkadot Hub interactions. Talisman offers a more integrated experience for Polkadot-based chains while still providing Ethereum compatibility. To use Talisman with Polkadot Hub TestNet: 1. Install the [Talisman extension](https://talisman.xyz/download){target=\_blank} and set up your wallet by following the on-screen instructions. 2. Once installed, click on the Talisman icon in your browser extensions and click on the **Settings** button: ![](/images/develop/smart-contracts/wallets/wallets-5.webp){: .browser-extension} 3. Click the **All settings** button. ![](/images/develop/smart-contracts/wallets/wallets-6.webp){: .browser-extension} 4. Go to the **Networks & Tokens** section. ![](/images/develop/smart-contracts/wallets/wallets-7.webp) 5. Click the **Manage networks** button. ![](/images/develop/smart-contracts/wallets/wallets-8.webp) 6. Click the **+ Add network** button. ![](/images/develop/smart-contracts/wallets/wallets-9.webp) 7. Fill in the form with the required parameters and click the **Add network** button. ![](/images/develop/smart-contracts/wallets/wallets-10.webp) 8. After that, you can switch to the Polkadot Hub TestNet by clicking on the network icon and selecting **Polkadot Hub TestNet**. ![](/images/develop/smart-contracts/wallets/wallets-11.webp) After selecting the network, Talisman will automatically configure the necessary RPC URL and chain ID for you. You can now use Talisman to interact with the Polkadot Hub TestNet. ## Conclusion Choosing the right wallet for Polkadot Hub interactions depends on your specific requirements and familiarity with different interfaces. MetaMask provides a familiar entry point for developers with Ethereum experience, while SubWallet and Talisman offer deeper integration with Polkadot's unique features and native support for both EVM and Substrate accounts. By properly configuring your wallet connection, you gain access to the full spectrum of Polkadot Hub's capabilities. !!!info Remember to always verify network parameters when connecting to ensure a secure and reliable connection to the Polkadot ecosystem.
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/api-libraries/dedot/ --- BEGIN CONTENT --- --- title: Dedot description: Dedot is a next-gen JavaScript client for Polkadot and Polkadot SDK-based blockchains, offering lightweight, tree-shakable APIs with strong TypeScript support. categories: Tooling, Dapps --- # Dedot ## Introduction [Dedot](https://github.com/dedotdev/dedot){target=\_blank} is a next-generation JavaScript client for Polkadot and Polkadot SDK-based blockchains. Designed to elevate the dApp development experience, Dedot is built and optimized to be lightweight and tree-shakable, offering precise type and API suggestions for individual Polkadot SDK-based blockchains and [ink! smart contracts](https://use.ink/){target=\_blank}. ### Key Features - **Lightweight and tree-shakable** – no more bn.js or WebAssembly blobs, optimized for dApp bundle size - **Fully typed API** – comprehensive TypeScript support for seamless on-chain interaction and ink! smart contract integration - **Multi-version JSON-RPC support** – compatible with both [legacy](https://github.com/w3f/PSPs/blob/master/PSPs/drafts/psp-6.md){target=\_blank} and [new](https://paritytech.github.io/json-rpc-interface-spec/introduction.html){target=\_blank} JSON-RPC APIs for broad ecosystem interoperability - **Light client support** – designed to work with light clients such as [Smoldot](https://github.com/smol-dot/smoldot){target=\_blank} - **Native TypeScript for SCALE codec** – implements SCALE codec parsing directly in TypeScript without relying on custom wrappers - **Wallet integration** – works out-of-the-box with [@polkadot/extension-based](https://github.com/polkadot-js/extension?tab=readme-ov-file#api-interface){target=\_blank} wallets - **Familiar API design** – similar API style to Polkadot.js for easy and fast migration ## Installation To add Dedot to your project, use the following command: === "npm" ```bash npm i dedot ``` === "pnpm" ```bash pnpm add dedot ``` === "yarn" ```bash yarn add dedot ``` To enable auto-completion/IntelliSense for individual chains, install the [`@dedot/chaintypes`](https://www.npmjs.com/package/@dedot/chaintypes){target=\_blank} package as a development dependency: === "npm" ```bash npm i -D @dedot/chaintypes ``` === "pnpm" ```bash pnpm add -D @dedot/chaintypes ``` === "yarn" ```bash yarn add -D @dedot/chaintypes ``` ## Get Started ### Initialize a Client Instance To connect to and interact with different networks, Dedot provides two client options depending on your needs: - **[`DedotClient`](https://docs.dedot.dev/clients-and-providers/clients#dedotclient){target=\_blank}** - interacts with chains via the [new JSON-RPC APIs](https://paritytech.github.io/json-rpc-interface-spec/introduction.html){target=\_blank} - **[`LegacyClient`](https://docs.dedot.dev/clients-and-providers/clients#legacyclient){target=\_blank}** - interacts with chains via the [legacy JSON-RPC APIs](https://github.com/w3f/PSPs/blob/master/PSPs/drafts/psp-6.md){target=\_blank} Use the following snippets to connect to Polkadot using `DedotClient`: === "WebSocket" ```typescript import { DedotClient, WsProvider } from 'dedot'; import type { PolkadotApi } from '@dedot/chaintypes'; // Initialize providers & clients const provider = new WsProvider('wss://rpc.polkadot.io'); const client = await DedotClient.new<PolkadotApi>(provider); ``` === "Light Client (Smoldot)" ```typescript import { DedotClient, SmoldotProvider } from 'dedot'; import type { PolkadotApi } from '@dedot/chaintypes'; import * as
smoldot from 'smoldot'; // import `polkadot` chain spec to connect to Polkadot import { polkadot } from '@substrate/connect-known-chains'; // Start smoldot instance & initialize a chain const smoldotClient = smoldot.start(); const chain = await smoldotClient.addChain({ chainSpec: polkadot }); // Initialize providers & clients const provider = new SmoldotProvider(chain); const client = await DedotClient.new<PolkadotApi>(provider); ``` If the node doesn't support new JSON-RPC APIs yet, you can connect to the network using the `LegacyClient`, which is built on top of the legacy JSON-RPC APIs. ```typescript import { LegacyClient, WsProvider } from 'dedot'; import type { PolkadotApi } from '@dedot/chaintypes'; const provider = new WsProvider('wss://rpc.polkadot.io'); const client = await LegacyClient.new<PolkadotApi>(provider); ``` ### Enable Type and API Suggestions It is recommended to specify the `ChainApi` interface (e.g., `PolkadotApi` in the example in the previous section) of the chain you want to interact with. This enables type and API suggestions/autocompletion for that particular chain (via IntelliSense). If you don't specify a `ChainApi` interface, a default `SubstrateApi` interface will be used. ```typescript import { DedotClient, WsProvider } from 'dedot'; import type { PolkadotApi, KusamaApi } from '@dedot/chaintypes'; const polkadotClient = await DedotClient.new<PolkadotApi>( new WsProvider('wss://rpc.polkadot.io') ); const kusamaClient = await DedotClient.new<KusamaApi>( new WsProvider('wss://kusama-rpc.polkadot.io') ); const genericClient = await DedotClient.new( new WsProvider('ws://localhost:9944') ); ``` If you don't find the `ChainApi` for the network you're working with in [the list](https://github.com/dedotdev/chaintypes?tab=readme-ov-file#supported-networks){target=\_blank}, you can generate the `ChainApi` (types and APIs) using the built-in [`dedot` cli](https://docs.dedot.dev/cli){target=\_blank}. ```bash # Generate ChainApi interface for Polkadot network via rpc endpoint: wss://rpc.polkadot.io npx dedot chaintypes -w wss://rpc.polkadot.io ``` Or open a pull request to add your favorite network to the [`@dedot/chaintypes`](https://github.com/dedotdev/chaintypes){target=\_blank} repo. ### Read On-Chain Data Dedot provides several ways to read data from the chain: - **Access runtime constants** - use the syntax `client.consts.<pallet>.<constantName>` to inspect runtime constants (parameter types): ```typescript const ss58Prefix = client.consts.system.ss58Prefix; console.log('Polkadot ss58Prefix:', ss58Prefix); ``` - **Storage queries** - use the syntax `client.query.<pallet>.<storageEntry>` to query on-chain storage: ```typescript const balance = await client.query.system.account('INSERT_ADDRESS'); console.log('Balance:', balance.data.free); ``` - **Subscribe to storage changes**: ```typescript const unsub = await client.query.system.number((blockNumber) => { console.log(`Current block number: ${blockNumber}`); }); ``` - **Call Runtime APIs** - use the syntax `client.call.<runtimeApi>.<methodName>` to execute Runtime APIs: ```typescript const metadata = await client.call.metadata.metadataAtVersion(15); console.log('Metadata V15', metadata); ``` - **Watch on-chain events** - use the syntax `client.events.<pallet>.<eventName>` to access pallet events: ```typescript const unsub = await client.events.system.NewAccount.watch((events) => { console.log('New Account Created', events); }); ``` ### Sign and Send Transactions Sign the transaction using `IKeyringPair` from Keyring ([`@polkadot/keyring`](https://polkadot.js.org/docs/keyring/start/sign-verify/){target=\_blank}) and send the transaction.
```typescript import { cryptoWaitReady } from '@polkadot/util-crypto'; import { Keyring } from '@polkadot/keyring'; // Setup keyring await cryptoWaitReady(); const keyring = new Keyring({ type: 'sr25519' }); const alice = keyring.addFromUri('//Alice'); // Send transaction const unsub = await client.tx.balances .transferKeepAlive('INSERT_DEST_ADDRESS', 2_000_000_000_000n) .signAndSend(alice, async ({ status }) => { console.log('Transaction status', status.type); if (status.type === 'BestChainBlockIncluded') { console.log(`Transaction is included in best block`); } if (status.type === 'Finalized') { console.log( `Transaction completed at block hash ${status.value.blockHash}` ); await unsub(); } }); ``` You can also use `Signer` from wallet extensions: ```typescript const injected = await window.injectedWeb3['polkadot-js'].enable('My dApp'); const account = (await injected.accounts.get())[0]; const signer = injected.signer; const unsub = await client.tx.balances .transferKeepAlive('INSERT_DEST_ADDRESS', 2_000_000_000_000n) .signAndSend(account.address, { signer }, async ({ status }) => { console.log('Transaction status', status.type); if (status.type === 'BestChainBlockIncluded') { console.log(`Transaction is included in best block`); } if (status.type === 'Finalized') { console.log( `Transaction completed at block hash ${status.value.blockHash}` ); await unsub(); } }); ``` ## Where to Go Next For more detailed information about Dedot, check the [official documentation](https://dedot.dev/){target=\_blank}. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/api-libraries/ --- BEGIN CONTENT --- --- title: API Libraries description: Dive into APIs for interacting with the Polkadot network, including Polkadot-API, Polkadot.js, Python Substrate Interface, and Sidecar REST services. template: index-page.html --- # API Libraries Explore the powerful API libraries designed for interacting with the Polkadot network. These libraries offer developers versatile tools to build, query, and manage blockchain interactions. Whether you’re working with JavaScript, TypeScript, Python, or RESTful services, they provide the flexibility to efficiently interact with and retrieve data from Polkadot-based chains. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/api-libraries/papi/ --- BEGIN CONTENT --- --- title: Polkadot-API description: Polkadot-API (PAPI) is a modular, composable library set designed for efficient interaction with Polkadot chains, prioritizing a "light-client first" approach. categories: Tooling, Dapps --- # Polkadot-API ## Introduction [Polkadot-API](https://github.com/polkadot-api/polkadot-api){target=\_blank} (PAPI) is a set of libraries built to be modular, composable, and grounded in a “light-client first” approach. Its primary aim is to equip dApp developers with an extensive toolkit for building fully decentralized applications. PAPI is optimized for light-client functionality, using the new JSON-RPC spec to support decentralized interactions fully. It provides strong TypeScript support with types and documentation generated directly from on-chain metadata, and it offers seamless access to storage reads, constants, transactions, events, and runtime calls. Developers can connect to multiple chains simultaneously and prepare for runtime updates through multi-descriptor generation and compatibility checks. 
PAPI is lightweight and performant, leveraging native BigInt, dynamic imports, and modular subpaths to avoid bundling unnecessary assets. It supports promise-based and observable-based APIs, integrates easily with Polkadot.js extensions, and offers signing options through browser extensions or private keys. ## Get Started ### API Instantiation To instantiate the API, you can install the package by using the following command: === "npm" ```bash npm i polkadot-api@{{dependencies.javascript_packages.polkadot_api.version}} ``` === "pnpm" ```bash pnpm add polkadot-api@{{dependencies.javascript_packages.polkadot_api.version}} ``` === "yarn" ```bash yarn add polkadot-api@{{dependencies.javascript_packages.polkadot_api.version}} ``` Then, obtain the latest metadata from the target chain and generate the necessary types: ```bash # Add the target chain npx papi add dot -n polkadot ``` The `papi add` command initializes the library by generating the corresponding types needed for the chain used. It assigns the chain a custom name and specifies downloading metadata from the Polkadot chain. You can replace `dot` with the name you prefer or with another chain if you want to add a different one. Once the latest metadata is downloaded, generate the required types: ```bash # Generate the necessary types npx papi ``` You can now set up a [`PolkadotClient`](https://github.com/polkadot-api/polkadot-api/blob/main/packages/client/src/types.ts#L153){target=\_blank} with your chosen provider to begin interacting with the API. Choose from Smoldot via WebWorker, Node.js, or direct usage, or connect through the WSS provider. The examples below show how to configure each option for your setup. === "Smoldot (WebWorker)" ```typescript // `dot` is the identifier assigned during `npx papi add` import { dot } from '@polkadot-api/descriptors'; import { createClient } from 'polkadot-api'; import { getSmProvider } from 'polkadot-api/sm-provider'; import { chainSpec } from 'polkadot-api/chains/polkadot'; import { startFromWorker } from 'polkadot-api/smoldot/from-worker'; import SmWorker from 'polkadot-api/smoldot/worker?worker'; const worker = new SmWorker(); const smoldot = startFromWorker(worker); const chain = await smoldot.addChain({ chainSpec }); // Establish connection to the Polkadot relay chain const client = createClient(getSmProvider(chain)); // To interact with the chain, obtain the `TypedApi`, which provides // the necessary types for every API call on this chain const dotApi = client.getTypedApi(dot); ``` === "Smoldot (Node.js)" ```typescript // `dot` is the alias assigned during `npx papi add` import { dot } from '@polkadot-api/descriptors'; import { createClient } from 'polkadot-api'; import { getSmProvider } from 'polkadot-api/sm-provider'; import { chainSpec } from 'polkadot-api/chains/polkadot'; import { startFromWorker } from 'polkadot-api/smoldot/from-node-worker'; import { fileURLToPath } from 'url'; import { Worker } from 'worker_threads'; // Get the path for the worker file in ESM const workerPath = fileURLToPath( import.meta.resolve('polkadot-api/smoldot/node-worker'), ); const worker = new Worker(workerPath); const smoldot = startFromWorker(worker); const chain = await smoldot.addChain({ chainSpec }); // Set up a client to connect to the Polkadot relay chain const client = createClient(getSmProvider(chain)); // To interact with the chain's API, use `TypedApi` for access to // all the necessary types and calls associated with this chain const dotApi = client.getTypedApi(dot); ``` === "Smoldot" 
```typescript // `dot` is the alias assigned when running `npx papi add` import { dot } from '@polkadot-api/descriptors'; import { createClient } from 'polkadot-api'; import { getSmProvider } from 'polkadot-api/sm-provider'; import { chainSpec } from 'polkadot-api/chains/polkadot'; import { start } from 'polkadot-api/smoldot'; // Initialize Smoldot client const smoldot = start(); const chain = await smoldot.addChain({ chainSpec }); // Set up a client to connect to the Polkadot relay chain const client = createClient(getSmProvider(chain)); // Access the `TypedApi` to interact with all available chain calls and types const dotApi = client.getTypedApi(dot); ``` === "WSS" ```typescript // `dot` is the identifier assigned when executing `npx papi add` import { dot } from '@polkadot-api/descriptors'; import { createClient } from 'polkadot-api'; // Use this import in browser environments; import from 'polkadot-api/ws-provider/node' when running in Node.js import { getWsProvider } from 'polkadot-api/ws-provider/web'; import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat'; // Establish a connection to the Polkadot relay chain const client = createClient( // The Polkadot SDK nodes may have compatibility issues; using this enhancer is recommended. // Refer to the Requirements page for additional details withPolkadotSdkCompat(getWsProvider('wss://dot-rpc.stakeworld.io')), ); // To interact with the chain, obtain the `TypedApi`, which provides // the types for all available calls in that chain const dotApi = client.getTypedApi(dot); ``` Now that you have set up the client, you can interact with the chain by reading and sending transactions. ### Reading Chain Data The `TypedApi` provides a streamlined way to read blockchain data through three main interfaces, each designed for specific data access patterns: - **Constants** - access fixed values or configurations on the blockchain using the `constants` interface: ```typescript const version = await typedApi.constants.System.Version(); ``` - **Storage queries** - retrieve stored values by querying the blockchain’s storage via the `query` interface: ```typescript const asset = await typedApi.query.ForeignAssets.Asset.getValue( token.location, { at: 'best' }, ); ``` - **Runtime APIs** - interact directly with runtime APIs using the `apis` interface: ```typescript const metadata = await typedApi.apis.Metadata.metadata(); ``` To learn more about the different actions you can perform with the `TypedApi`, refer to the [TypedApi reference](https://papi.how/typed){target=\_blank}. ### Sending Transactions In PAPI, the `TypedApi` provides the `tx` and `txFromCallData` methods to send transactions. - The `tx` method allows you to directly send a transaction with the specified parameters by using the `typedApi.tx.Pallet.Call` pattern: ```typescript const tx: Transaction = typedApi.tx.Pallet.Call({arg1, arg2, arg3}); ``` For instance, to execute the `balances.transferKeepAlive` call, you can use the following snippet: ```typescript import { MultiAddress } from '@polkadot-api/descriptors'; const tx: Transaction = typedApi.tx.Balances.transfer_keep_alive({ dest: MultiAddress.Id('INSERT_DESTINATION_ADDRESS'), value: BigInt(INSERT_VALUE), }); ``` Ensure you replace `INSERT_DESTINATION_ADDRESS` and `INSERT_VALUE` with the actual destination address and value, respectively. - The `txFromCallData` method allows you to send a transaction using the call data. This option accepts binary call data and constructs the transaction from it. It validates the input upon creation and will throw an error if invalid data is provided.
The pattern is as follows: ```typescript const callData = Binary.fromHex('0x...'); const tx: Transaction = typedApi.txFromCallData(callData); ``` For instance, to execute a transaction using the call data, you can use the following snippet: ```typescript const callData = Binary.fromHex('0x00002470617065726d6f6f6e'); const tx: Transaction = typedApi.txFromCallData(callData); ``` For more information about sending transactions, refer to the [Transactions](https://papi.how/typed/tx#transactions){target=\_blank} page. ## Where to Go Next For an in-depth guide on how to use PAPI, refer to the official [PAPI](https://papi.how/){target=\_blank} documentation. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/api-libraries/polkadot-js-api/ --- BEGIN CONTENT --- --- title: Polkadot.js API description: Interact with Polkadot SDK-based chains easily using the Polkadot.js API. Query chain data, submit transactions, and more via JavaScript or Typescript. categories: Tooling, Dapps --- # Polkadot.js API !!! warning "Maintenance Mode Only" The Polkadot.js API is now in maintenance mode and is no longer actively developed. New projects should use [Dedot](/develop/toolkit/api-libraries/dedot){target=\_blank} (TypeScript-first API) or [Polkadot API](/develop/toolkit/api-libraries/papi){target=\_blank} (modern, type-safe API) as actively maintained alternatives. ## Introduction The [Polkadot.js API](https://github.com/polkadot-js/api){target=\_blank} uses JavaScript/TypeScript to interact with Polkadot SDK-based chains. It allows you to query nodes, read chain state, and submit transactions through a dynamic, auto-generated API interface. ### Dynamic API Generation Unlike traditional static APIs, the Polkadot.js API generates its interfaces automatically when connecting to a node. Here's what happens when you connect: 1. The API connects to your node 2. It retrieves the chain's metadata 3. Based on this metadata, it creates specific endpoints in this format: `api.<type>.<module>.<section>`
### Available API Categories You can access three main categories of chain interactions: - **[Runtime constants](https://polkadot.js.org/docs/api/start/api.consts){target=\_blank}** (`api.consts`) - Access runtime constants directly - Returns values immediately without function calls - Example - `api.consts.balances.existentialDeposit` - **[State queries](https://polkadot.js.org/docs/api/start/api.query/){target=\_blank}** (`api.query`) - Read chain state - Example - `api.query.system.account(accountId)` - **[Transactions](https://polkadot.js.org/docs/api/start/api.tx/){target=\_blank}** (`api.tx`) - Submit extrinsics (transactions) - Example - `api.tx.balances.transferKeepAlive(accountId, value)` The available methods and interfaces will automatically reflect what's possible on your connected chain. ## Installation To add the Polkadot.js API to your project, use the following command to install version `{{ dependencies.javascript_packages.polkadot_js_api.version }}`, which supports any Polkadot SDK-based chain: === "npm" ```bash npm i @polkadot/api@{{ dependencies.javascript_packages.polkadot_js_api.version }} ``` === "pnpm" ```bash pnpm add @polkadot/api@{{ dependencies.javascript_packages.polkadot_js_api.version }} ``` === "yarn" ```bash yarn add @polkadot/api@{{ dependencies.javascript_packages.polkadot_js_api.version }} ``` For more detailed information about installation, see the [Installation](https://polkadot.js.org/docs/api/start/install/){target=\_blank} section in the official Polkadot.js API documentation. ## Get Started ### Creating an API Instance To interact with a Polkadot SDK-based chain, you must establish a connection through an API instance. The API provides methods for querying chain state, sending transactions, and subscribing to updates. To create an API connection: ```js import { ApiPromise, WsProvider } from '@polkadot/api'; // Create a WebSocket provider const wsProvider = new WsProvider('wss://rpc.polkadot.io'); // Initialize the API const api = await ApiPromise.create({ provider: wsProvider }); // Verify the connection by getting the chain's genesis hash console.log('Genesis Hash:', api.genesisHash.toHex()); ``` !!!warning All `await` operations must be wrapped in an async function or block since the API uses promises for asynchronous operations. ### Reading Chain Data The API provides several ways to read data from the chain. You can access: - **Constants** - values that are fixed in the runtime and don't change without a runtime upgrade ```js // Get the minimum balance required for a new account const minBalance = api.consts.balances.existentialDeposit.toNumber(); ``` - **State** - current chain state that updates with each block ```js // Example address const address = '5DTestUPts3kjeXSTMyerHihn1uwMfLj8vU8sqF7qYrFabHE'; // Get current timestamp const timestamp = await api.query.timestamp.now(); // Get account information const { nonce, data: balance } = await api.query.system.account(address); console.log(` Timestamp: ${timestamp} Free Balance: ${balance.free} Nonce: ${nonce} `); ``` ### Sending Transactions Transactions (also called extrinsics) modify the chain state.
Before sending a transaction, you need: - A funded account with sufficient balance to pay transaction fees - The account's keypair for signing To make a transfer: ```js // Assuming you have an `alice` keypair from the Keyring const recipient = 'INSERT_RECIPIENT_ADDRESS'; const amount = 'INSERT_VALUE'; // Amount in the smallest unit (e.g., Planck for DOT) // Sign and send a transfer const txHash = await api.tx.balances .transferKeepAlive(recipient, amount) .signAndSend(alice); console.log('Transaction Hash:', txHash); ``` The `alice` keypair in the example comes from a `Keyring` object. For more details about managing keypairs, see the [Keyring documentation](https://polkadot.js.org/docs/keyring){target=\_blank}. ## Where to Go Next For more detailed information about the Polkadot.js API, check the [official documentation](https://polkadot.js.org/docs/){target=\_blank}. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/api-libraries/py-substrate-interface/ --- BEGIN CONTENT --- --- title: Python Substrate Interface description: Learn how to connect to Polkadot SDK-based nodes, query data, submit transactions, and manage blockchain interactions using the Python Substrate Interface. categories: Tooling, Dapps --- # Python Substrate Interface ## Introduction The [Python Substrate Interface](https://github.com/polkascan/py-substrate-interface){target=\_blank} is a powerful library that enables interaction with Polkadot SDK-based chains. It provides essential functionality for: - Querying on-chain storage - Composing and submitting extrinsics - SCALE encoding/decoding - Interacting with Substrate runtime metadata - Managing blockchain interactions through convenient utility methods ## Installation Install the library using `pip`: ```bash pip install substrate-interface ``` For more installation details, see the [Installation](https://jamdottech.github.io/py-polkadot-sdk/getting-started/installation/){target=\_blank} section in the official Python Substrate Interface documentation. ## Get Started This guide will walk you through the basic operations with the Python Substrate Interface: connecting to a node, reading chain state, and submitting transactions. ### Establishing Connection The first step is to establish a connection to a Polkadot SDK-based node. You can connect to either a local or remote node: ```py from substrateinterface import SubstrateInterface # Connect to a node using websocket substrate = SubstrateInterface( # For local node: "ws://127.0.0.1:9944" # For Polkadot: "wss://rpc.polkadot.io" # For Kusama: "wss://kusama-rpc.polkadot.io" url="INSERT_WS_URL" ) # Verify connection print(f"Connected to chain: {substrate.chain}") ``` ### Reading Chain State You can query various on-chain storage items. To retrieve data, you need to specify three key pieces of information: - **Pallet name** - module or pallet that contains the storage item you want to access - **Storage item** - specific storage entry you want to query within the pallet - **Required parameters** - any parameters needed to retrieve the desired data Here's an example of how to check an account's balance and other details: ```py # ...
# Query account balance and info account_info = substrate.query( module="System", # The pallet name storage_function="Account", # The storage item params=["INSERT_ADDRESS"], # Account address in SS58 format ) # Access account details from the result free_balance = account_info.value["data"]["free"] reserved = account_info.value["data"]["reserved"] nonce = account_info.value["nonce"] print( f""" Account Details: - Free Balance: {free_balance} - Reserved: {reserved} - Nonce: {nonce} """ ) ``` ### Submitting Transactions To modify the chain state, you need to submit transactions (extrinsics). Before proceeding, ensure you have: - A funded account with sufficient balance to pay transaction fees - Access to the account's keypair Here's how to create and submit a balance transfer: ```py #... # Compose the transfer call call = substrate.compose_call( call_module="Balances", # The pallet name call_function="transfer_keep_alive", # The extrinsic function call_params={ 'dest': 'INSERT_ADDRESS', # Recipient's address 'value': 'INSERT_VALUE' # Amount in smallest unit (e.g., Planck for DOT) } ) # Create a signed extrinsic extrinsic = substrate.create_signed_extrinsic( call=call, keypair=keypair # Your keypair for signing ) # Submit and wait for inclusion receipt = substrate.submit_extrinsic( extrinsic, wait_for_inclusion=True # Wait until the transaction is in a block ) if receipt.is_success: print( f""" Transaction successful: - Extrinsic Hash: {receipt.extrinsic_hash} - Block Hash: {receipt.block_hash} """ ) else: print(f"Transaction failed: {receipt.error_message}") ``` The `keypair` object is essential for signing transactions. See the [Keypair](https://jamdottech.github.io/py-polkadot-sdk/reference/keypair/){target=\_blank} documentation for more details. ## Where to Go Next Now that you understand the basics, you can: - Explore more complex queries and transactions - Learn about batch transactions and utility functions - Discover how to work with custom pallets and types For comprehensive reference materials and advanced features, see the [Python Substrate Interface](https://jamdottech.github.io/py-polkadot-sdk/){target=\_blank} documentation. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/api-libraries/sidecar/ --- BEGIN CONTENT --- --- title: Sidecar Rest API description: Learn about Substrate API Sidecar, a REST service that provides endpoints for interacting with Polkadot SDK-based chains and simplifies blockchain interactions. categories: Tooling, Dapps --- # Sidecar API ## Introduction The [Sidecar Rest API](https://github.com/paritytech/substrate-api-sidecar){target=\_blank} is a service that provides a REST interface for interacting with Polkadot SDK-based blockchains. With this API, developers can easily access a broad range of endpoints for nodes, accounts, transactions, parachains, and more. Sidecar functions as a caching layer between your application and a Polkadot SDK-based node, offering standardized REST endpoints that simplify interactions without requiring complex, direct RPC calls. This approach is especially valuable for developers who prefer REST APIs or build applications in languages with limited WebSocket support. 
Some of the key features of the Sidecar API include: - **REST API interface** - provides a familiar REST API interface for interacting with Polkadot SDK-based chains - **Standardized endpoints** - offers consistent endpoint formats across different chain implementations - **Caching layer** - acts as a caching layer to improve performance and reduce direct node requests - **Multiple chain support** - works with any Polkadot SDK-based chain, including Polkadot, Kusama, and custom chains ## Prerequisites Sidecar API requires Node.js version 18.14 LTS or higher. Verify your Node.js version: ```bash node --version ``` If you need to install or update Node.js, visit the [official Node.js website](https://nodejs.org/){target=\_blank} to download and install the latest LTS version. ## Installation To install Substrate API Sidecar, use one of the following commands: === "npm" ```bash npm install -g @substrate/api-sidecar ``` === "pnpm" ```bash pnpm install -g @substrate/api-sidecar ``` === "yarn" ```bash yarn global add @substrate/api-sidecar ``` You can confirm the installation by running: ```bash substrate-api-sidecar --version ``` For more information about the Sidecar API installation, see the [installation and usage](https://github.com/paritytech/substrate-api-sidecar?tab=readme-ov-file#npm-package-installation-and-usage){target=\_blank} section of the Sidecar API README. ## Usage To use the Sidecar API, you have two options: - **Local node** - run a node locally, which Sidecar will connect to by default, requiring no additional configuration. To start, run: ``` substrate-api-sidecar ``` - **Remote node** - connect Sidecar to a remote node by specifying the RPC endpoint for that chain. For example, to gain access to the Polkadot Asset Hub associated endpoints: ``` SAS_SUBSTRATE_URL=wss://polkadot-asset-hub-rpc.polkadot.io substrate-api-sidecar ``` For more configuration details, see the [Configuration](https://github.com/paritytech/substrate-api-sidecar?tab=readme-ov-file#configuration){target=\_blank} section of the Sidecar API documentation. Once the Sidecar API is running, you’ll see output similar to this:
```
SAS_SUBSTRATE_URL=wss://polkadot-asset-hub-rpc.polkadot.io substrate-api-sidecar

SAS:
  📦 LOG:
    ✅ LEVEL: "info"
    ✅ JSON: false
    ✅ FILTER_RPC: false
    ✅ STRIP_ANSI: false
    ✅ WRITE: false
    ✅ WRITE_PATH: "/opt/homebrew/lib/node_modules/@substrate/api-sidecar/build/src/logs"
    ✅ WRITE_MAX_FILE_SIZE: 5242880
    ✅ WRITE_MAX_FILES: 5
  📦 SUBSTRATE:
    ✅ URL: "wss://polkadot-asset-hub-rpc.polkadot.io"
    ✅ TYPES_BUNDLE: undefined
    ✅ TYPES_CHAIN: undefined
    ✅ TYPES_SPEC: undefined
    ✅ TYPES: undefined
    ✅ CACHE_CAPACITY: undefined
  📦 EXPRESS:
    ✅ BIND_HOST: "127.0.0.1"
    ✅ PORT: 8080
    ✅ KEEP_ALIVE_TIMEOUT: 5000
  📦 METRICS:
    ✅ ENABLED: false
    ✅ PROM_HOST: "127.0.0.1"
    ✅ PROM_PORT: 9100
    ✅ LOKI_HOST: "127.0.0.1"
    ✅ LOKI_PORT: 3100
    ✅ INCLUDE_QUERYPARAMS: false

2024-11-06 08:06:01 info: Version: 19.3.0
2024-11-06 08:06:02 warn: API/INIT: RPC methods not decorated: chainHead_v1_body, chainHead_v1_call, chainHead_v1_continue, chainHead_v1_follow, chainHead_v1_header, chainHead_v1_stopOperation, chainHead_v1_storage, chainHead_v1_unfollow, chainHead_v1_unpin, chainSpec_v1_chainName, chainSpec_v1_genesisHash, chainSpec_v1_properties, transactionWatch_v1_submitAndWatch, transactionWatch_v1_unwatch, transaction_v1_broadcast, transaction_v1_stop
2024-11-06 08:06:02 info: Connected to chain Polkadot Asset Hub on the statemint client at wss://polkadot-asset-hub-rpc.polkadot.io
2024-11-06 08:06:02 info: Listening on http://127.0.0.1:8080/
2024-11-06 08:06:02 info: Check the root endpoint (http://127.0.0.1:8080/) to see the available endpoints for the current node
```
With Sidecar running, you can access the exposed endpoints via a browser, [`Postman`](https://www.postman.com/){target=\_blank}, [`curl`](https://curl.se/){target=\_blank}, or your preferred tool. ### Endpoints Sidecar API provides a set of REST endpoints that allow you to query different aspects of the chain, including blocks, accounts, and transactions. Each endpoint offers specific insights into the chain’s state and activities. For example, to retrieve the version of the node, use the `/node/version` endpoint: ```bash curl -X 'GET' \ 'http://127.0.0.1:8080/node/version' \ -H 'accept: application/json' ``` Alternatively, you can access `http://127.0.0.1:8080/node/version` directly in a browser since it’s a `GET` request. In response, you’ll see output similar to this (assuming you’re connected to Polkadot Asset Hub):
```
curl -X 'GET' 'http://127.0.0.1:8080/node/version' -H 'accept: application/json'

{
  "clientVersion": "1.16.1-835e0767fe8",
  "clientImplName": "statemint",
  "chain": "Polkadot Asset Hub"
}
```
For a complete list of available endpoints and their documentation, visit the [Sidecar API endpoint list](https://paritytech.github.io/substrate-api-sidecar/dist/){target=\_blank}, where you can learn how to use each endpoint in your applications. ## Where to Go Next To dive deeper, refer to the [official Sidecar documentation](https://github.com/paritytech/substrate-api-sidecar?tab=readme-ov-file#substrateapi-sidecar){target=\_blank}. This provides a comprehensive guide to the available configurations and advanced usage. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/api-libraries/subxt/ --- BEGIN CONTENT --- --- title: Subxt Rust API description: Subxt is a Rust library for type-safe interaction with Polkadot SDK blockchains, enabling transactions, state queries, runtime API access, and more. categories: Tooling, Dapps --- # Subxt Rust API ## Introduction Subxt is a Rust library designed to interact with Polkadot SDK-based blockchains. It provides a type-safe interface for submitting transactions, querying on-chain state, and performing other blockchain interactions. By leveraging Rust's strong type system, subxt ensures that your code is validated at compile time, reducing runtime errors and improving reliability. ## Prerequisites Before using subxt, ensure you have the following requirements: - Rust and Cargo installed on your system. You can install them using [Rustup](https://rustup.rs/){target=\_blank} - A Rust project initialized. If you don't have one, create it with: ```bash cargo new my_project && cd my_project ``` ## Installation To use subxt in your project, you must install the necessary dependencies. Each plays a specific role in enabling interaction with the blockchain: 1. **Install the subxt CLI** - [`subxt-cli`](https://crates.io/crates/subxt-cli){target=\_blank} is a command-line tool that provides utilities for working with Polkadot SDK metadata. In the context of subxt, it is essential to download chain metadata, which is required to generate type-safe Rust interfaces for interacting with the blockchain. Install it using: ```bash cargo install subxt-cli@{{dependencies.crates.subxt_cli.version}} ``` 2. **Add core dependencies** - these dependencies are essential for interacting with the blockchain: - **[subxt](https://crates.io/crates/subxt){target=\_blank}** - the main library for communicating with Polkadot SDK nodes. It handles RPC requests, encoding/decoding, and type generation ```bash cargo add subxt@{{dependencies.crates.subxt.version}} ``` - **[subxt-signer](https://crates.io/crates/subxt-signer){target=\_blank}** - provides cryptographic functionality for signing transactions. Without this, you can only read data but cannot submit transactions ```bash cargo add subxt-signer@{{dependencies.crates.subxt_signer.version}} ``` - **[tokio](https://crates.io/crates/tokio){target=\_blank}** - an asynchronous runtime for Rust. Since blockchain operations are async, Tokio enables the efficient handling of network requests. The `rt` feature enables Tokio's runtime, including the current-thread single-threaded scheduler, which is necessary for async execution.
The `macros` feature provides procedural macros like `#[tokio::main]` to simplify runtime setup:

```bash
cargo add tokio@{{dependencies.crates.tokio.version}} --features rt,macros
```

After adding the dependencies, your `Cargo.toml` should look like this:

```toml
[package]
name = "my_project"
version = "0.1.0"
edition = "2021"

[dependencies]
subxt = "0.41.0"
subxt-signer = "0.41.0"
tokio = { version = "1.44.2", features = ["rt", "macros"] }
```

## Get Started

This guide will walk you through the fundamental operations of subxt, from setting up your environment to executing transactions and querying blockchain state.

### Download Chain Metadata

Before interacting with a blockchain, you need to retrieve its metadata. This metadata defines storage structures, extrinsics, and other runtime details. Use the `subxt-cli` tool to download the metadata, replacing `INSERT_NODE_URL` with the URL of the node you want to interact with:

```bash
subxt metadata --url INSERT_NODE_URL > polkadot_metadata.scale
```

### Generate Type-Safe Interfaces

Use the `#[subxt::subxt]` macro to generate a type-safe Rust interface from the downloaded metadata:

```rust
// Generate an interface that we can use from the node's metadata.
#[subxt::subxt(runtime_metadata_path = "./polkadot_metadata.scale")]
pub mod polkadot {}
```

Once subxt interfaces are generated, you can interact with your node in the following ways. You can use the links below to view the related subxt documentation:

- **[Transactions](https://docs.rs/subxt/latest/subxt/book/usage/transactions/index.html){target=\_blank}** - builds and submits transactions, monitors their inclusion in blocks, and retrieves associated events
- **[Storage](https://docs.rs/subxt/latest/subxt/book/usage/storage/index.html){target=\_blank}** - enables querying of node storage data
- **[Events](https://docs.rs/subxt/latest/subxt/book/usage/events/index.html){target=\_blank}** - retrieves events emitted from recent blocks
- **[Constants](https://docs.rs/subxt/latest/subxt/book/usage/constants/index.html){target=\_blank}** - accesses constant values stored in nodes that remain unchanged across a specific runtime version
- **[Blocks](https://docs.rs/subxt/latest/subxt/book/usage/blocks/index.html){target=\_blank}** - loads recent blocks or subscribes to new/finalized blocks, allowing examination of extrinsics, events, and storage at those blocks
- **[Runtime APIs](https://docs.rs/subxt/latest/subxt/book/usage/runtime_apis/index.html){target=\_blank}** - makes calls into pallet runtime APIs to fetch data
- **[Custom values](https://docs.rs/subxt/latest/subxt/book/usage/custom_values/index.html){target=\_blank}** - accesses "custom values" contained within metadata
- **[Raw RPC calls](https://docs.rs/subxt/latest/subxt/book/usage/rpc/index.html){target=\_blank}** - facilitates raw RPC requests to compatible nodes

### Initialize the Subxt Client

To interact with a blockchain node using subxt, create an asynchronous main function and initialize the client. Replace `INSERT_NODE_URL` with the URL of your target node:

```rust
use std::str::FromStr;
use subxt::utils::AccountId32;
use subxt::{OnlineClient, PolkadotConfig};
use subxt_signer::{bip39::Mnemonic, sr25519::Keypair};

// Generate an interface that we can use from the node's metadata.
#[subxt::subxt(runtime_metadata_path = "./polkadot_metadata.scale")]
pub mod polkadot {}

#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Define the node URL.
    const NODE_URL: &str = "INSERT_NODE_URL";

    // Initialize the Subxt client to interact with the blockchain.
    let api = OnlineClient::<PolkadotConfig>::from_url(NODE_URL).await?;

    // Your code here...

    Ok(())
}
```
### Read Chain Data

subxt provides multiple ways to access on-chain data:

- **Constants** - constants are predefined values in the runtime that remain unchanged unless modified by a runtime upgrade. For example, to retrieve the existential deposit, use:

```rust
// A query to obtain some constant.
let constant_query = polkadot::constants().balances().existential_deposit();

// Obtain the value.
let value = api.constants().at(&constant_query)?;

println!("Existential deposit: {:?}", value);
```

- **State** - state refers to the current chain data, which updates with each block. To fetch account information, replace `INSERT_ADDRESS` with the address you want to fetch data from and use:

```rust
// Define the target account address.
const ADDRESS: &str = "INSERT_ADDRESS";
let account = AccountId32::from_str(ADDRESS).unwrap();

// Build a storage query to access account information.
let storage_query = polkadot::storage().system().account(&account.into());

// Fetch the latest state for the account.
let result = api
    .storage()
    .at_latest()
    .await?
    .fetch(&storage_query)
    .await?
    .unwrap();

println!("Account info: {:?}", result);
```

### Submit Transactions

To submit a transaction, you must construct an extrinsic, sign it with your private key, and send it to the blockchain. Replace `INSERT_DEST_ADDRESS` with the recipient's address, `INSERT_AMOUNT` with the amount to transfer, and `INSERT_SECRET_PHRASE` with the sender's mnemonic phrase:

```rust
// Define the recipient address and transfer amount.
const DEST_ADDRESS: &str = "INSERT_DEST_ADDRESS";
const AMOUNT: u128 = INSERT_AMOUNT;

// Convert the recipient address into an `AccountId32`.
let dest = AccountId32::from_str(DEST_ADDRESS).unwrap();

// Build the balance transfer extrinsic.
let balance_transfer_tx = polkadot::tx()
    .balances()
    .transfer_allow_death(dest.into(), AMOUNT);

// Load the sender's keypair from a mnemonic phrase.
const SECRET_PHRASE: &str = "INSERT_SECRET_PHRASE";
let mnemonic = Mnemonic::parse(SECRET_PHRASE).unwrap();
let sender_keypair = Keypair::from_phrase(&mnemonic, None).unwrap();

// Sign and submit the extrinsic, then wait for it to be finalized.
let events = api
    .tx()
    .sign_and_submit_then_watch_default(&balance_transfer_tx, &sender_keypair)
    .await?
    .wait_for_finalized_success()
    .await?;

// Check for a successful transfer event.
if let Some(event) = events.find_first::<polkadot::balances::events::Transfer>()? {
    println!("Balance transfer successful: {:?}", event);
}
```
## Where to Go Next

Now that you've covered the basics, dive into the official [subxt documentation](https://docs.rs/subxt/latest/subxt/book/index.html){target=\_blank} for comprehensive reference materials and advanced features. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/ --- BEGIN CONTENT --- --- title: Toolkit description: Learn about Polkadot's core development toolkit, from blockchain construction tools to API libraries and cross-chain messaging capabilities. template: index-page.html --- # Toolkit Explore Polkadot's core development toolkit, designed to support a variety of developers and use cases within the ecosystem. Whether you're building blockchain infrastructure, developing cross-chain applications, or integrating with external services, this section offers essential tools and resources to help you succeed. Key tools for different audiences: - **Parachain developers** - leverage development tools for building and managing Polkadot SDK-based blockchains, optimizing the infrastructure of the ecosystem - **Application developers** - develop decentralized applications (dApps) that interact seamlessly with the Polkadot network, using APIs, SDKs, and integration tools for efficient application development - **All development paths** - use Polkadot's XCM and messaging tools to enable interoperability and asset transfers ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/integrations/ --- BEGIN CONTENT --- --- title: Integrations description: Explore fundamental integrations in the Polkadot ecosystem, including indexers for querying blockchain data, oracles for external data, and wallets. template: index-page.html --- # Integrations Polkadot offers a wide range of integrations that allow developers to enhance their decentralized applications (dApps) and leverage the full capabilities of the ecosystem. Whether you're looking to extend your application's functionality, integrate with other chains, or access specialized services, these integrations provide the tools and resources you need to build efficiently and effectively. Explore the available options to find the solutions that best suit your development needs. ## Key Integration Solutions Polkadot's ecosystem offers a variety of integrations designed to enhance dApp functionality, improve data management, and bridge the gap between on-chain and off-chain systems. These integrations provide the building blocks needed for creating more robust, efficient, and user-friendly decentralized applications. Some of the available integrations are explained [in this section](#in-this-section).
## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/integrations/indexers/ --- BEGIN CONTENT --- --- title: Indexers description: Discover blockchain indexers. Enhance data access, enable fast and complex queries, and optimize blockchain data for seamless app performance. categories: Tooling, Dapps --- # Indexers ## The Challenge of Blockchain Data Access Blockchain data is inherently sequential and distributed, with information stored chronologically across numerous blocks. While retrieving data from a single block through JSON-RPC API calls is straightforward, more complex queries that span multiple blocks present significant challenges: - Data is scattered and unorganized across the blockchain - Retrieving large datasets can take days or weeks to sync - Complex operations (like aggregations, averages, or cross-chain queries) require additional processing - Direct blockchain queries can impact dApp performance and responsiveness ## What is a Blockchain Indexer? A blockchain indexer is a specialized infrastructure tool that processes, organizes, and stores blockchain data in an optimized format for efficient querying. Think of it as a search engine for blockchain data that: - Continuously monitors the blockchain for new blocks and transactions - Processes and categorizes this data according to predefined schemas - Stores the processed data in an easily queryable database - Provides efficient APIs (typically [GraphQL](https://graphql.org/){target=\_blank}) for data retrieval ## Indexer Implementations
- __Subsquid__ --- Subsquid is a data network that allows rapid and cost-efficient retrieval of blockchain data from 100+ chains using Subsquid's decentralized data lake and open-source SDK. In simple terms, Subsquid can be considered an ETL (extract, transform, and load) tool with a GraphQL server included. It enables comprehensive filtering, pagination, and even full-text search capabilities. Subsquid has full native support for EVM and Substrate data, even within the same project. [:octicons-arrow-right-24: Reference](https://www.sqd.ai/){target=\_blank} - __Subquery__ --- SubQuery is a fast, flexible, and reliable open-source decentralized data infrastructure network that provides both RPC and indexed data to consumers worldwide. It provides custom APIs for your web3 project across multiple supported chains. [:octicons-arrow-right-24: Reference](https://subquery.network/){target=\_blank}
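To make the query model concrete, here is an illustrative sketch of how a dApp typically consumes an indexer: a single GraphQL request over HTTP. The endpoint URL and the `transfers` entity below are hypothetical placeholders; the real field and filter names depend entirely on the schema your Subsquid or SubQuery project defines:

```ts
// Hypothetical sketch: fetch the ten most recent transfers sent by an account
// from an indexer's GraphQL endpoint. URL and schema are placeholders.
const INDEXER_URL = 'https://indexer.example.com/graphql';

const query = `
  query RecentTransfers($from: String!) {
    transfers(where: { from_eq: $from }, orderBy: blockNumber_DESC, limit: 10) {
      blockNumber
      to
      amount
    }
  }
`;

async function fetchRecentTransfers(from: string): Promise<void> {
  const response = await fetch(INDEXER_URL, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ query, variables: { from } }),
  });
  const { data } = await response.json();
  console.log(data.transfers);
}

fetchRecentTransfers('INSERT_ACCOUNT_ADDRESS').catch(console.error);
```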
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/integrations/oracles/ --- BEGIN CONTENT --- --- title: Oracles description: Learn about blockchain oracles, the essential bridges connecting blockchains with real-world data for decentralized applications in the Polkadot ecosystem. categories: Tooling, Dapps --- # Oracles ## What is a Blockchain Oracle? Oracles enable blockchains to access external data sources. Since blockchains operate as isolated networks, they cannot natively interact with external systems - this limitation is known as the "blockchain oracle problem." Oracles solve this by extracting data from external sources (like APIs, IoT devices, or other blockchains), validating it, and submitting it on-chain. While simple oracle implementations may rely on a single trusted provider, more sophisticated solutions use decentralized networks where multiple providers stake assets and reach consensus on data validity. Typical applications include DeFi price feeds, weather data for insurance contracts, and cross-chain asset verification. ## Oracle Implementations
- __Acurast__ --- Acurast is a decentralized, serverless cloud platform that uses a distributed network of mobile devices for oracle services, addressing centralized trust and data ownership issues. In the Polkadot ecosystem, it allows developers to define off-chain data and computation needs, which are processed by these devices acting as decentralized oracle nodes, delivering results to Substrate (Wasm) and EVM environments. [:octicons-arrow-right-24: Reference](https://acurast.com/){target=\_blank}
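What consuming an oracle looks like depends entirely on the provider, but for EVM-style feeds the pattern is usually a single view call against a feed contract. The sketch below is purely illustrative: the RPC URL, contract address, and `latestPrice` ABI are hypothetical placeholders, not a real provider interface.

```ts
// Purely illustrative sketch: read a value from a hypothetical on-chain
// price feed contract. Address, ABI, and RPC URL are placeholders.
import { createPublicClient, http, parseAbi } from 'viem';

const client = createPublicClient({
  transport: http('INSERT_RPC_URL'),
});

// Hypothetical single-function feed interface.
const feedAbi = parseAbi(['function latestPrice() view returns (int256)']);

async function readPrice(): Promise<void> {
  const price = await client.readContract({
    address: '0xINSERT_FEED_ADDRESS',
    abi: feedAbi,
    functionName: 'latestPrice',
  });
  console.log(`Latest reported price: ${price}`);
}

readPrice().catch(console.error);
```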
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/integrations/wallets/ --- BEGIN CONTENT --- --- title: Wallets description: Explore blockchain wallets. Securely manage digital assets with hot wallets for online access or cold wallets for offline, enhanced security. categories: Tooling, Dapps --- # Wallets ## What is a Blockchain Wallet? A wallet serves as your gateway to interacting with blockchain networks. Rather than storing funds, wallets secure your private keys, controlling access to your blockchain assets. Your private key provides complete control over all permitted transactions on your blockchain account, making it essential to keep it secure. Wallet types fall into two categories based on their connection to the internet: - [**Hot wallets**](#hot-wallets) - online storage through websites, browser extensions or smartphone apps - [**Cold wallets**](#cold-wallets) - offline storage using hardware devices or air-gapped systems ## Hot Wallets
- __Nova Wallet__ --- A non-custodial, mobile-first wallet for managing assets and interacting with the Polkadot and Kusama ecosystems. It supports staking, governance, cross-chain transfers, and crowdloans. With advanced features, seamless multi-network support, and strong security, Nova Wallet empowers users to explore the full potential of Polkadot parachains on the go. [:octicons-arrow-right-24: Reference](https://novawallet.io/){target=\_blank} - __Talisman__ --- A non-custodial web browser extension that allows you to manage your portfolio and interact with Polkadot and Ethereum applications. It supports Web3 apps, asset storage, and account management across over 150 Polkadot SDK-based and EVM networks. Features include NFT management, Ledger support, fiat on-ramp, and portfolio tracking. [:octicons-arrow-right-24: Reference](https://talisman.xyz/){target=\_blank} - __Subwallet__ --- A non-custodial web browser extension and mobile wallet for Polkadot and Ethereum. It lets you track, send, receive, and monitor multi-chain assets on 150+ networks; import accounts with a seed phrase, private key, QR code, or JSON file; import tokens and NFTs; and attach read-only accounts. Features include XCM transfers, NFT management, Parity Signer and Ledger support, light client support, EVM dApp support, MetaMask compatibility, custom endpoints, fiat on-ramp, phishing detection, and transaction history. [:octicons-arrow-right-24: Reference](https://www.subwallet.app/){target=\_blank}
## Cold Wallets
- __Ledger__ --- A hardware wallet that securely stores cryptocurrency private keys offline, protecting them from online threats. Its secure chip, used together with the Ledger Live app, allows for safe transactions and asset management while keeping keys secure. [:octicons-arrow-right-24: Reference](https://www.ledger.com/){target=\_blank} - __Polkadot Vault__ --- A cold storage solution that turns any spare phone, tablet, or iOS/Android device into an air-gapped hardware wallet by keeping it in airplane mode. [:octicons-arrow-right-24: Reference](https://vault.novasama.io/){target=\_blank}
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/interoperability/asset-transfer-api/ --- BEGIN CONTENT --- --- title: Asset Transfer API description: Asset Transfer API is a library that simplifies the transfer of assets for Polkadot SDK-based chains. It provides methods for cross-chain and local transfers. template: index-page.html --- # Asset Transfer API The Asset Transfer API is a library designed to streamline asset transfers for Polkadot SDK-based chains, offering methods for both cross-chain and local transactions. ## What Can I Do with the Asset Transfer API? - Facilitate cross-chain transfers to and from the relay chain, system chains, and parachains - Facilitate local asset transfers - Initiate liquid pool asset transfers in Asset Hub - Claim trapped assets - Retrieve fee information ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/interoperability/asset-transfer-api/overview/ --- BEGIN CONTENT --- --- title: Asset Transfer API description: Asset Transfer API is a library that simplifies the transfer of assets for Polkadot SDK-based chains. It provides methods for cross-chain and local transfers. categories: Basics, Tooling, Dapps --- # Asset Transfer API ## Introduction [Asset Transfer API](https://github.com/paritytech/asset-transfer-api){target=\_blank}, a tool developed and maintained by [Parity](https://www.parity.io/){target=\_blank}, is a specialized library designed to streamline asset transfers for Polkadot SDK-based blockchains. This API provides a simplified set of methods for users to: - Execute asset transfers to other parachains or locally within the same chain - Facilitate transactions involving system parachains like Asset Hub (Polkadot and Kusama) Using this API, developers can manage asset transfers more efficiently, reducing the complexity of cross-chain transactions and enabling smoother operations within the ecosystem. For additional support and information, please reach out through [GitHub Issues](https://github.com/paritytech/asset-transfer-api/issues){target=\_blank}. ## Prerequisites Before you begin, ensure you have the following installed: - [Node.js](https://nodejs.org/en/){target=\_blank} (recommended version 21 or greater) - Package manager - [npm](https://www.npmjs.com/){target=\_blank} should be installed with Node.js by default. Alternatively, you can use other package managers like [Yarn](https://yarnpkg.com/){target=\_blank} This documentation covers version `{{dependencies.javascript_packages.asset_transfer_api.version}}` of Asset Transfer API. ## Install Asset Transfer API To use `asset-transfer-api`, you need a TypeScript project. If you don't have one, you can create a new one: 1. Create a new directory for your project: ```bash mkdir my-asset-transfer-project \ && cd my-asset-transfer-project ``` 2. Initialize a new TypeScript project: ```bash npm init -y \ && npm install typescript ts-node @types/node --save-dev \ && npx tsc --init ``` Once you have a project set up, you can install the `asset-transfer-api` package. 
Run the following command to install the package: ```bash npm install @substrate/asset-transfer-api@{{dependencies.javascript_packages.asset_transfer_api.version}} ``` ## Set Up Asset Transfer API To initialize the Asset Transfer API, you need three key components: - A Polkadot.js API instance - The `specName` of the chain - The XCM version to use ### Using Helper Function from Library Leverage the `constructApiPromise` helper function provided by the library for the simplest setup process. It not only constructs a Polkadot.js `ApiPromise` but also automatically retrieves the chain's `specName` and fetches a safe XCM version. By using this function, developers can significantly reduce boilerplate code and potential configuration errors, making the initial setup both quicker and more robust. ```ts import { AssetTransferApi, constructApiPromise, } from '@substrate/asset-transfer-api'; async function main() { const { api, specName, safeXcmVersion } = await constructApiPromise( 'INSERT_WEBSOCKET_URL', ); const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion); // Your code using assetsApi goes here } main(); ``` !!!warning The code example is enclosed in an async main function to provide the necessary asynchronous context. However, you can use the code directly if you're already working within an async environment. The key is to ensure you're in an async context when working with these asynchronous operations, regardless of your specific setup. ## Asset Transfer API Reference For detailed information on the Asset Transfer API, including available methods, data types, and functionalities, refer to the [Asset Transfer API Reference](/develop/toolkit/interoperability/asset-transfer-api/reference){target=\_blank} section. This resource provides in-depth explanations and technical specifications to help you integrate and utilize the API effectively. ## Examples ### Relay to System Parachain Transfer This example demonstrates how to initiate a cross-chain token transfer from a relay chain to a system parachain. Specifically, 1 WND will be transferred from a Westend (relay chain) account to a Westmint (system parachain) account. ```ts import { AssetTransferApi, constructApiPromise, } from '@substrate/asset-transfer-api'; async function main() { const { api, specName, safeXcmVersion } = await constructApiPromise( 'wss://westend-rpc.polkadot.io', ); const assetApi = new AssetTransferApi(api, specName, safeXcmVersion); let callInfo; try { callInfo = await assetApi.createTransferTransaction( '1000', '5EWNeodpcQ6iYibJ3jmWVe85nsok1EDG8Kk3aFg8ZzpfY1qX', ['WND'], ['1000000000000'], { format: 'call', xcmVersion: safeXcmVersion, }, ); console.log(`Call data:\n${JSON.stringify(callInfo, null, 4)}`); } catch (e) { console.error(e); throw Error(e as string); } const decoded = assetApi.decodeExtrinsic(callInfo.tx, 'call'); console.log(`\nDecoded tx:\n${JSON.stringify(JSON.parse(decoded), null, 4)}`); } main() .catch((err) => console.error(err)) .finally(() => process.exit()); ``` After running the script, you'll see the following output in the terminal, which shows the call data for the cross-chain transfer and its decoded extrinsic details:
```bash
ts-node relayToSystem.ts
```

```
Call data:
{ "origin": "westend", "dest": "westmint", "direction": "RelayToSystem", "xcmVersion": 3, "method": "transferAssets", "format": "call", "tx": "0x630b03000100a10f03000101006c0c32faf970eacb2d4d8e538ac0dab3642492561a1be6f241c645876c056c1d030400000000070010a5d4e80000000000" }

Decoded tx:
{ "args": { "dest": { "V3": { "parents": "0", "interior": { "X1": { "Parachain": "1,000" } } } }, "beneficiary": { "V3": { "parents": "0", "interior": { "X1": { "AccountId32": { "network": null, "id": "0x6c0c32faf970eacb2d4d8e538ac0dab3642492561a1be6f241c645876c056c1d" } } } } }, "assets": { "V3": [ { "id": { "Concrete": { "parents": "0", "interior": "Here" } }, "fun": { "Fungible": "1,000,000,000,000" } } ] }, "fee_asset_item": "0", "weight_limit": "Unlimited" }, "method": "transferAssets", "section": "xcmPallet" }
```
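The `call` format returned above still has to be signed and submitted separately. As a minimal sketch of one way to do that (assuming a funded account; the secret phrase placeholder follows this page's `INSERT_...` convention), you can request the `submittable` format instead and sign the resulting extrinsic with a Polkadot.js keyring pair:

```ts
import { Keyring } from '@polkadot/api';
import {
  AssetTransferApi,
  constructApiPromise,
} from '@substrate/asset-transfer-api';

async function main() {
  const { api, specName, safeXcmVersion } = await constructApiPromise(
    'wss://westend-rpc.polkadot.io',
  );
  const assetApi = new AssetTransferApi(api, specName, safeXcmVersion);

  // Build the same transfer as above, but as a submittable extrinsic.
  const callInfo = await assetApi.createTransferTransaction(
    '1000',
    '5EWNeodpcQ6iYibJ3jmWVe85nsok1EDG8Kk3aFg8ZzpfY1qX',
    ['WND'],
    ['1000000000000'],
    { format: 'submittable', xcmVersion: safeXcmVersion },
  );

  // Sign and submit with the sender's keypair, logging status updates.
  const keyring = new Keyring({ type: 'sr25519' });
  const sender = keyring.addFromUri('INSERT_SECRET_PHRASE');
  const unsub = await callInfo.tx.signAndSend(sender, ({ status }) => {
    console.log(`Transaction status: ${status.type}`);
    if (status.isFinalized) {
      unsub();
      process.exit(0);
    }
  });
}

main().catch(console.error);
```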
### Local Parachain Transfer The following example demonstrates a local GLMR transfer within Moonbeam, using the `balances` pallet. It transfers 1 GLMR token from one account to another account, where both the sender and recipient accounts are located on the same parachain. ```ts import { AssetTransferApi, constructApiPromise, } from '@substrate/asset-transfer-api'; async function main() { const { api, specName, safeXcmVersion } = await constructApiPromise( 'wss://wss.api.moonbeam.network', ); const assetApi = new AssetTransferApi(api, specName, safeXcmVersion); let callInfo; try { callInfo = await assetApi.createTransferTransaction( '2004', '0xF977814e90dA44bFA03b6295A0616a897441aceC', [], ['1000000000000000000'], { format: 'call', keepAlive: true, }, ); console.log(`Call data:\n${JSON.stringify(callInfo, null, 4)}`); } catch (e) { console.error(e); throw Error(e as string); } const decoded = assetApi.decodeExtrinsic(callInfo.tx, 'call'); console.log(`\nDecoded tx:\n${JSON.stringify(JSON.parse(decoded), null, 4)}`); } main() .catch((err) => console.error(err)) .finally(() => process.exit()); ``` Upon executing this script, the terminal will display the following output, illustrating the encoded extrinsic for the cross-chain message and its corresponding decoded format:
```bash
ts-node localParachainTx.ts
```

```
Call data:
{ "origin": "moonbeam", "dest": "moonbeam", "direction": "local", "xcmVersion": null, "method": "balances::transferKeepAlive", "format": "call", "tx": "0x0a03f977814e90da44bfa03b6295a0616a897441acec821a0600" }

Decoded tx:
{ "args": { "dest": "0xF977814e90dA44bFA03b6295A0616a897441aceC", "value": "1,000,000,000,000,000,000" }, "method": "transferKeepAlive", "section": "balances" }
```
### Parachain to Parachain Transfer This example demonstrates creating a cross-chain asset transfer between two parachains. It shows how to send vMOVR and vBNC from a Moonriver account to a Bifrost Kusama account using the safe XCM version. It connects to Moonriver, initializes the API, and uses the `createTransferTransaction` method to prepare a transaction. ```ts import { AssetTransferApi, constructApiPromise, } from '@substrate/asset-transfer-api'; async function main() { const { api, specName, safeXcmVersion } = await constructApiPromise( 'wss://moonriver.public.blastapi.io', ); const assetApi = new AssetTransferApi(api, specName, safeXcmVersion); let callInfo; try { callInfo = await assetApi.createTransferTransaction( '2001', '0xc4db7bcb733e117c0b34ac96354b10d47e84a006b9e7e66a229d174e8ff2a063', ['vMOVR', '72145018963825376852137222787619937732'], ['1000000', '10000000000'], { format: 'call', xcmVersion: safeXcmVersion, }, ); console.log(`Call data:\n${JSON.stringify(callInfo, null, 4)}`); } catch (e) { console.error(e); throw Error(e as string); } const decoded = assetApi.decodeExtrinsic(callInfo.tx, 'call'); console.log(`\nDecoded tx:\n${JSON.stringify(JSON.parse(decoded), null, 4)}`); } main() .catch((err) => console.error(err)) .finally(() => process.exit()); ``` After running this script, you'll see the following output in your terminal. This output presents the encoded extrinsic for the cross-chain message, along with its decoded format, providing a clear view of the transaction details.
```bash
ts-node paraToPara.ts
```

```
Call data:
{ "origin": "moonriver", "dest": "bifrost", "direction": "ParaToPara", "xcmVersion": 2, "method": "transferMultiassets", "format": "call", "tx": "0x6a05010800010200451f06080101000700e40b540200010200451f0608010a0002093d000000000001010200451f0100c4db7bcb733e117c0b34ac96354b10d47e84a006b9e7e66a229d174e8ff2a06300" }

Decoded tx:
{ "args": { "assets": { "V2": [ { "id": { "Concrete": { "parents": "1", "interior": { "X2": [ { "Parachain": "2,001" }, { "GeneralKey": "0x0101" } ] } } }, "fun": { "Fungible": "10,000,000,000" } }, { "id": { "Concrete": { "parents": "1", "interior": { "X2": [ { "Parachain": "2,001" }, { "GeneralKey": "0x010a" } ] } } }, "fun": { "Fungible": "1,000,000" } } ] }, "fee_item": "0", "dest": { "V2": { "parents": "1", "interior": { "X2": [ { "Parachain": "2,001" }, { "AccountId32": { "network": "Any", "id": "0xc4db7bcb733e117c0b34ac96354b10d47e84a006b9e7e66a229d174e8ff2a063" } } ] } } }, "dest_weight_limit": "Unlimited" }, "method": "transferMultiassets", "section": "xTokens" }
```
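Whichever direction a transfer takes, you can estimate its cost before submission with the API's `fetchFeeInfo` method, covered in the [Asset Transfer API Reference](/develop/toolkit/interoperability/asset-transfer-api/reference){target=\_blank}. A brief sketch, reusing the `assetApi` and `callInfo` values from any of the examples above:

```ts
// Estimate fees for a previously constructed `call`-formatted transaction.
const feeInfo = await assetApi.fetchFeeInfo(callInfo.tx, 'call');
if (feeInfo) {
  console.log(`Estimated partial fee: ${feeInfo.partialFee.toString()}`);
}
```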
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/interoperability/asset-transfer-api/reference/ --- BEGIN CONTENT --- --- title: Asset Transfer API Reference description: Explore the Asset Transfer API Reference for comprehensive details on methods, data types, and functionalities. Essential for cross-chain asset transfers. categories: Reference, Dapps --- # Asset Transfer API Reference
- :octicons-download-16:{ .lg .middle } __Install the Asset Transfer API__ --- Learn how to install [`asset-transfer-api`](https://github.com/paritytech/asset-transfer-api){target=\_blank} into a new or existing project.
[:octicons-arrow-right-24: Get started](/develop/toolkit/interoperability/asset-transfer-api/overview/#install-asset-transfer-api){target=\_blank} - :octicons-code-16:{ .lg .middle } __Dive in with a tutorial__ --- Ready to start coding? Follow along with a step-by-step tutorial.
[:octicons-arrow-right-24: How to use the Asset Transfer API](/develop/toolkit/interoperability/asset-transfer-api/overview/#examples)

## Asset Transfer API Class Holds open an API connection to a specified chain within the `ApiPromise` to help construct transactions for assets and estimate fees. For a more in-depth explanation of the Asset Transfer API class structure, check the [source code](https://github.com/paritytech/asset-transfer-api/blob/{{dependencies.repositories.asset_transfer_api.version}}/src/AssetTransferApi.ts#L128){target=\_blank}. ### Methods #### Create Transfer Transaction Generates an XCM transaction for transferring assets between chains. It simplifies the process by inferring what type of transaction is required given the inputs, ensuring that the assets are valid, and that the transaction details are correctly formatted. After obtaining the transaction, you must handle the signing and submission process separately.

```ts
public async createTransferTransaction<T extends Format>(
  destChainId: string,
  destAddr: string,
  assetIds: string[],
  amounts: string[],
  opts: TransferArgsOpts<T> = {},
): Promise<TxResult<T>> {
```

??? interface "Request parameters" `destChainId` ++"string"++ ++"required"++ ID of the destination chain (`'0'` for relay chain, other values for parachains). --- `destAddr` ++"string"++ ++"required"++ Address of the recipient account on the destination chain. --- `assetIds` ++"string[]"++ ++"required"++ Array of asset IDs to be transferred. When asset IDs are provided, the API dynamically selects the appropriate pallet for the current chain to handle these specific assets. If the array is empty, the API defaults to using the `balances` pallet. --- `amounts` ++"string[]"++ ++"required"++ Array of amounts corresponding to each asset in `assetIds`. --- `opts` ++"TransferArgsOpts<T>"++ Options for customizing the transfer transaction. These options allow you to specify the transaction format, fee payment details, weight limits, XCM versions, and more. ??? child "Show more" `format` ++"T extends Format"++ Specifies the format for returning a transaction. ??? child "Type `Format`" ```ts export type Format = 'payload' | 'call' | 'submittable'; ``` --- `paysWithFeeOrigin` ++"string"++ The Asset ID to pay fees on the current common good parachain. The defaults are as follows: - Polkadot Asset Hub - `'DOT'` - Kusama Asset Hub - `'KSM'` --- `paysWithFeeDest` ++"string"++ Asset ID to pay fees on the destination parachain. --- `weightLimit` ++"{ refTime?: string, proofSize?: string }"++ Custom weight limit option. If not provided, it will default to unlimited. --- `xcmVersion` ++"number"++ Sets the XCM version for message construction. If this is not present, a supported version will be queried; if there is no supported version, a safe version will be queried. --- `keepAlive` ++"boolean"++ Enables `transferKeepAlive` for local asset transfers. If `true`, local transfers use `transferKeepAlive` instead of `transfer`. --- `transferLiquidToken` ++"boolean"++ Declares if this will transfer liquidity tokens. Default is `false`. --- `assetTransferType` ++"string"++ The XCM transfer type used to transfer assets. The `AssetTransferType` type defines the possible values for this parameter. ??? child "Type `AssetTransferType`" ```ts export type AssetTransferType = LocalReserve | DestinationReserve | Teleport | RemoteReserve; ``` !!! note To use the `assetTransferType` parameter, which is a string, you should use the `AssetTransferType` type as if each of its variants are strings. For example: `assetTransferType = 'LocalReserve'`.
--- `remoteReserveAssetTransferTypeLocation` ++"string"++ The remote reserve location for the XCM transfer. Should be provided when specifying an `assetTransferType` of `RemoteReserve`. --- `feesTransferType` ++"string"++ XCM TransferType used to pay fees for the XCM transfer. The `AssetTransferType` type defines the possible values for this parameter. ??? child "Type `AssetTransferType`" ```ts export type AssetTransferType = LocalReserve | DestinationReserve | Teleport | RemoteReserve; ``` !!! note To use the `feesTransferType` parameter, which is a string, you should use the `AssetTransferType` type as if each of its variants are strings. For example: `feesTransferType = 'LocalReserve'`. --- `remoteReserveFeesTransferTypeLocation` ++"string"++ The remote reserve location for the XCM transfer fees. Should be provided when specifying a `feesTransferType` of `RemoteReserve`. --- `customXcmOnDest` ++"string"++ A custom XCM message to be executed on the destination chain. Should be provided if a custom XCM message is needed after transferring assets. Defaults to: ```rust Xcm(vec![DepositAsset { assets: Wild(AllCounted(assets.len())), beneficiary }]) ``` ??? interface "Response parameters" ++"Promise<TxResult<T>>"++ A promise containing the result of constructing the transaction. ??? child "Show more" `dest` ++"string"++ The destination `specName` of the transaction. --- `origin` ++"string"++ The origin `specName` of the transaction. --- `format` ++"Format | 'local'"++ The format type the transaction is outputted in. ??? child "Type `Format`" ```ts export type Format = 'payload' | 'call' | 'submittable'; ``` --- `xcmVersion` ++"number | null"++ The XCM version that was used to construct the transaction. --- `direction` ++"Direction | 'local'"++ The direction of the cross-chain transfer. ??? child "Enum `Direction` values" `Local` Local transaction. --- `SystemToPara` System parachain to parachain. --- `SystemToRelay` System parachain to relay chain. --- `SystemToSystem` System parachain to system parachain. --- `SystemToBridge` System parachain to an external `GlobalConsensus` chain. --- `ParaToPara` Parachain to parachain. --- `ParaToRelay` Parachain to relay chain. --- `ParaToSystem` Parachain to system parachain. --- `RelayToSystem` Relay chain to system parachain. --- `RelayToPara` Relay chain to parachain. --- `RelayToBridge` Relay chain to an external `GlobalConsensus` chain. --- `method` ++"Methods"++ The method used in the transaction. ??? child "Type `Methods`" ```ts export type Methods = | LocalTransferTypes | 'transferAssets' | 'transferAssetsUsingTypeAndThen' | 'limitedReserveTransferAssets' | 'limitedTeleportAssets' | 'transferMultiasset' | 'transferMultiassets' | 'transferMultiassetWithFee' | 'claimAssets'; ``` ??? child "Type `LocalTransferTypes`" ```ts export type LocalTransferTypes = | 'assets::transfer' | 'assets::transferKeepAlive' | 'assets::transferAll' | 'foreignAssets::transfer' | 'foreignAssets::transferKeepAlive' | 'foreignAssets::transferAll' | 'balances::transfer' | 'balances::transferKeepAlive' | 'balances::transferAll' | 'poolAssets::transfer' | 'poolAssets::transferKeepAlive' | 'poolAssets::transferAll' | 'tokens::transfer' | 'tokens::transferKeepAlive' | 'tokens::transferAll'; ``` --- `tx` ++"ConstructedFormat<T>"++ The constructed transaction. ??? child "Type `ConstructedFormat`" ```ts export type ConstructedFormat<T> = T extends 'payload' ? GenericExtrinsicPayload : T extends 'call' ? `0x${string}` : T extends 'submittable' ?
SubmittableExtrinsic<'promise', ISubmittableResult> : never; ``` The `ConstructedFormat` type is a conditional type that returns a specific type based on the value of the TxResult `format` field. - **Payload format** - if the format field is set to `'payload'`, the `ConstructedFormat` type will return a [`GenericExtrinsicPayload`](https://github.com/polkadot-js/api/blob/v15.8.1/packages/types/src/extrinsic/ExtrinsicPayload.ts#L87){target=\_blank} - **Call format** - if the format field is set to `'call'`, the `ConstructedFormat` type will return a hexadecimal string (`0x${string}`). This is the encoded representation of the extrinsic call - **Submittable format** - if the format field is set to `'submittable'`, the `ConstructedFormat` type will return a [`SubmittableExtrinsic`](https://github.com/polkadot-js/api/blob/v15.8.1/packages/api-base/src/types/submittable.ts#L56){target=\_blank}. This is a Polkadot.js type that represents a transaction that can be submitted to the blockchain ??? interface "Example" ***Request*** ```ts import { AssetTransferApi, constructApiPromise, } from '@substrate/asset-transfer-api'; async function main() { const { api, specName, safeXcmVersion } = await constructApiPromise( 'wss://wss.api.moonbeam.network', ); const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion); let callInfo; try { callInfo = await assetsApi.createTransferTransaction( '2004', '0xF977814e90dA44bFA03b6295A0616a897441aceC', [], ['1000000000000000000'], { format: 'call', keepAlive: true, }, ); console.log(`Call data:\n${JSON.stringify(callInfo, null, 4)}`); } catch (e) { console.error(e); throw Error(e as string); } } main() .catch((err) => console.error(err)) .finally(() => process.exit()); ``` ***Response***
```
Call data:
{ "origin": "moonbeam", "dest": "moonbeam", "direction": "local", "xcmVersion": null, "method": "balances::transferKeepAlive", "format": "call", "tx": "0x0a03f977814e90da44bfa03b6295a0616a897441acec821a0600" }
```
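Before signing, a `call`-formatted result can be round-tripped through the `decodeExtrinsic` method (documented below) to confirm what it encodes. A short sketch, reusing `assetsApi` and `callInfo` from the example request above:

```ts
// Decode the hex call data to inspect the method and arguments it encodes.
const decoded = assetsApi.decodeExtrinsic(callInfo.tx, 'call');
console.log(JSON.stringify(JSON.parse(decoded), null, 4));
```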
#### Claim Assets Creates a local XCM transaction to retrieve trapped assets. This function can be used to claim assets either locally on a system parachain, on the relay chain, or on any chain that supports the `claimAssets` runtime call.

```ts
public async claimAssets<T extends Format>(
  assetIds: string[],
  amounts: string[],
  beneficiary: string,
  opts: TransferArgsOpts<T>,
): Promise<TxResult<T>> {
```

??? interface "Request parameters" `assetIds` ++"string[]"++ ++"required"++ Array of asset IDs to be claimed from the `AssetTrap`. --- `amounts` ++"string[]"++ ++"required"++ Array of amounts corresponding to each asset in `assetIds`. --- `beneficiary` ++"string"++ ++"required"++ Address of the account to receive the trapped assets. --- `opts` ++"TransferArgsOpts<T>"++ Options for customizing the claim assets transaction. These options allow you to specify the transaction format, fee payment details, weight limits, XCM versions, and more. ??? child "Show more" `format` ++"T extends Format"++ Specifies the format for returning a transaction. ??? child "Type `Format`" ```ts export type Format = 'payload' | 'call' | 'submittable'; ``` --- `paysWithFeeOrigin` ++"string"++ The Asset ID to pay fees on the current common good parachain. The defaults are as follows: - Polkadot Asset Hub - `'DOT'` - Kusama Asset Hub - `'KSM'` --- `paysWithFeeDest` ++"string"++ Asset ID to pay fees on the destination parachain. --- `weightLimit` ++"{ refTime?: string, proofSize?: string }"++ Custom weight limit option. If not provided, it will default to unlimited. --- `xcmVersion` ++"number"++ Sets the XCM version for message construction. If this is not present, a supported version will be queried; if there is no supported version, a safe version will be queried. --- `keepAlive` ++"boolean"++ Enables `transferKeepAlive` for local asset transfers. If `true`, local transfers use `transferKeepAlive` instead of `transfer`. --- `transferLiquidToken` ++"boolean"++ Declares if this will transfer liquidity tokens. Default is `false`. --- `assetTransferType` ++"string"++ The XCM transfer type used to transfer assets. The `AssetTransferType` type defines the possible values for this parameter. ??? child "Type `AssetTransferType`" ```ts export type AssetTransferType = LocalReserve | DestinationReserve | Teleport | RemoteReserve; ``` !!! note To use the `assetTransferType` parameter, which is a string, you should use the `AssetTransferType` type as if each of its variants are strings. For example: `assetTransferType = 'LocalReserve'`. --- `remoteReserveAssetTransferTypeLocation` ++"string"++ The remote reserve location for the XCM transfer. Should be provided when specifying an `assetTransferType` of `RemoteReserve`. --- `feesTransferType` ++"string"++ XCM TransferType used to pay fees for the XCM transfer. The `AssetTransferType` type defines the possible values for this parameter. ??? child "Type `AssetTransferType`" ```ts export type AssetTransferType = LocalReserve | DestinationReserve | Teleport | RemoteReserve; ``` !!! note To use the `feesTransferType` parameter, which is a string, you should use the `AssetTransferType` type as if each of its variants are strings. For example: `feesTransferType = 'LocalReserve'`. --- `remoteReserveFeesTransferTypeLocation` ++"string"++ The remote reserve location for the XCM transfer fees. Should be provided when specifying a `feesTransferType` of `RemoteReserve`. --- `customXcmOnDest` ++"string"++ A custom XCM message to be executed on the destination chain.
Should be provided if a custom XCM message is needed after transferring assets. Defaults to: ```rust Xcm(vec![DepositAsset { assets: Wild(AllCounted(assets.len())), beneficiary }]) ``` ??? interface "Response parameters" ++"Promise<TxResult<T>>"++ A promise containing the result of constructing the transaction. ??? child "Show more" `dest` ++"string"++ The destination `specName` of the transaction. --- `origin` ++"string"++ The origin `specName` of the transaction. --- `format` ++"Format | 'local'"++ The format type the transaction is outputted in. ??? child "Type `Format`" ```ts export type Format = 'payload' | 'call' | 'submittable'; ``` --- `xcmVersion` ++"number | null"++ The XCM version that was used to construct the transaction. --- `direction` ++"Direction | 'local'"++ The direction of the cross-chain transfer. ??? child "Enum `Direction` values" `Local` Local transaction. --- `SystemToPara` System parachain to parachain. --- `SystemToRelay` System parachain to relay chain. --- `SystemToSystem` System parachain to system parachain. --- `SystemToBridge` System parachain to an external `GlobalConsensus` chain. --- `ParaToPara` Parachain to parachain. --- `ParaToRelay` Parachain to relay chain. --- `ParaToSystem` Parachain to system parachain. --- `RelayToSystem` Relay chain to system parachain. --- `RelayToPara` Relay chain to parachain. --- `RelayToBridge` Relay chain to an external `GlobalConsensus` chain. --- `method` ++"Methods"++ The method used in the transaction. ??? child "Type `Methods`" ```ts export type Methods = | LocalTransferTypes | 'transferAssets' | 'transferAssetsUsingTypeAndThen' | 'limitedReserveTransferAssets' | 'limitedTeleportAssets' | 'transferMultiasset' | 'transferMultiassets' | 'transferMultiassetWithFee' | 'claimAssets'; ``` ??? child "Type `LocalTransferTypes`" ```ts export type LocalTransferTypes = | 'assets::transfer' | 'assets::transferKeepAlive' | 'assets::transferAll' | 'foreignAssets::transfer' | 'foreignAssets::transferKeepAlive' | 'foreignAssets::transferAll' | 'balances::transfer' | 'balances::transferKeepAlive' | 'balances::transferAll' | 'poolAssets::transfer' | 'poolAssets::transferKeepAlive' | 'poolAssets::transferAll' | 'tokens::transfer' | 'tokens::transferKeepAlive' | 'tokens::transferAll'; ``` --- `tx` ++"ConstructedFormat<T>"++ The constructed transaction. ??? child "Type `ConstructedFormat`" ```ts export type ConstructedFormat<T> = T extends 'payload' ? GenericExtrinsicPayload : T extends 'call' ? `0x${string}` : T extends 'submittable' ? SubmittableExtrinsic<'promise', ISubmittableResult> : never; ``` The `ConstructedFormat` type is a conditional type that returns a specific type based on the value of the TxResult `format` field. - **Payload format** - if the format field is set to `'payload'`, the `ConstructedFormat` type will return a [`GenericExtrinsicPayload`](https://github.com/polkadot-js/api/blob/v15.8.1/packages/types/src/extrinsic/ExtrinsicPayload.ts#L87){target=\_blank} - **Call format** - if the format field is set to `'call'`, the `ConstructedFormat` type will return a hexadecimal string (`0x${string}`). This is the encoded representation of the extrinsic call - **Submittable format** - if the format field is set to `'submittable'`, the `ConstructedFormat` type will return a [`SubmittableExtrinsic`](https://github.com/polkadot-js/api/blob/v15.8.1/packages/api-base/src/types/submittable.ts#L56){target=\_blank}. This is a Polkadot.js type that represents a transaction that can be submitted to the blockchain ???
interface "Example" ***Request*** ```ts import { AssetTransferApi, constructApiPromise, } from '@substrate/asset-transfer-api'; async function main() { const { api, specName, safeXcmVersion } = await constructApiPromise( 'wss://westend-rpc.polkadot.io', ); const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion); let callInfo; try { callInfo = await assetsApi.claimAssets( [ `{"parents":"0","interior":{"X2":[{"PalletInstance":"50"},{"GeneralIndex":"1984"}]}}`, ], ['1000000000000'], '0xf5d5714c084c112843aca74f8c498da06cc5a2d63153b825189baa51043b1f0b', { format: 'call', xcmVersion: 2, }, ); console.log(`Call data:\n${JSON.stringify(callInfo, null, 4)}`); } catch (e) { console.error(e); throw Error(e as string); } } main() .catch((err) => console.error(err)) .finally(() => process.exit()); ``` ***Response***
```
Call data:
{ "origin": "0", "dest": "westend", "direction": "local", "xcmVersion": 2, "method": "claimAssets", "format": "call", "tx": "0x630c0104000002043205011f00070010a5d4e80100010100f5d5714c084c112843aca74f8c498da06cc5a2d63153b825189baa51043b1f0b" }
```
#### Decode Extrinsic Decodes the hex of an extrinsic into a human-readable string format.

```ts
public decodeExtrinsic<T extends Format>(
  encodedTransaction: string,
  format: T,
): string {
```

??? interface "Request parameters" `encodedTransaction` ++"string"++ ++"required"++ A hex encoded extrinsic. --- `format` ++"T extends Format"++ ++"required"++ Specifies the format for returning a transaction. ??? child "Type `Format`" ```ts export type Format = 'payload' | 'call' | 'submittable'; ``` ??? interface "Response parameters" ++"string"++ Decoded extrinsic in a human-readable string format. ??? interface "Example" ***Request***

```ts
import {
  AssetTransferApi,
  constructApiPromise,
} from '@substrate/asset-transfer-api';

async function main() {
  const { api, specName, safeXcmVersion } = await constructApiPromise(
    'wss://wss.api.moonbeam.network',
  );
  const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion);

  const encodedExt = '0x0a03f977814e90da44bfa03b6295a0616a897441acec821a0600';

  try {
    const decodedExt = assetsApi.decodeExtrinsic(encodedExt, 'call');
    console.log(
      `Decoded tx:\n ${JSON.stringify(JSON.parse(decodedExt), null, 4)}`,
    );
  } catch (e) {
    console.error(e);
    throw Error(e as string);
  }
}

main()
  .catch((err) => console.error(err))
  .finally(() => process.exit());
```

***Response***
```
Decoded tx:
{ "args": { "dest": "0xF977814e90dA44bFA03b6295A0616a897441aceC", "value": "100,000" }, "method": "transferKeepAlive", "section": "balances" }
```
#### Fetch Fee Info Fetch estimated fee information for an extrinsic.

```ts
public async fetchFeeInfo<T extends Format>(
  tx: ConstructedFormat<T>,
  format: T,
): Promise<RuntimeDispatchInfo | RuntimeDispatchInfoV1 | null> {
```

??? interface "Request parameters" `tx` ++"ConstructedFormat<T>"++ ++"required"++ The constructed transaction. ??? child "Type `ConstructedFormat`" ```ts export type ConstructedFormat<T> = T extends 'payload' ? GenericExtrinsicPayload : T extends 'call' ? `0x${string}` : T extends 'submittable' ? SubmittableExtrinsic<'promise', ISubmittableResult> : never; ``` The `ConstructedFormat` type is a conditional type that returns a specific type based on the value of the TxResult `format` field. - **Payload format** - if the format field is set to `'payload'`, the `ConstructedFormat` type will return a [`GenericExtrinsicPayload`](https://github.com/polkadot-js/api/blob/{{ dependencies.javascript_packages.asset_transfer_api.polkadot_js_api_version}}/packages/types/src/extrinsic/ExtrinsicPayload.ts#L87){target=\_blank} - **Call format** - if the format field is set to `'call'`, the `ConstructedFormat` type will return a hexadecimal string (`0x${string}`). This is the encoded representation of the extrinsic call - **Submittable format** - if the format field is set to `'submittable'`, the `ConstructedFormat` type will return a [`SubmittableExtrinsic`](https://github.com/polkadot-js/api/blob/{{dependencies.javascript_packages.asset_transfer_api.polkadot_js_api_version}}/packages/api-base/src/types/submittable.ts#L56){target=\_blank}. This is a Polkadot.js type that represents a transaction that can be submitted to the blockchain --- `format` ++"T extends Format"++ ++"required"++ Specifies the format for returning a transaction. ??? child "Type `Format`" ```ts export type Format = 'payload' | 'call' | 'submittable'; ``` ??? interface "Response parameters" ++"Promise<RuntimeDispatchInfo | RuntimeDispatchInfoV1 | null>"++ A promise containing the estimated fee information for the provided extrinsic. ??? child "Type `RuntimeDispatchInfo`" ```ts export interface RuntimeDispatchInfo extends Struct { readonly weight: Weight; readonly class: DispatchClass; readonly partialFee: Balance; } ``` For more information on the underlying types and fields of `RuntimeDispatchInfo`, check the [`RuntimeDispatchInfo`](https://github.com/polkadot-js/api/blob/{{ dependencies.javascript_packages.asset_transfer_api.polkadot_js_api_version}}/packages/types/src/interfaces/payment/types.ts#L21){target=\_blank} source code. ??? child "Type `RuntimeDispatchInfoV1`" ```ts export interface RuntimeDispatchInfoV1 extends Struct { readonly weight: WeightV1; readonly class: DispatchClass; readonly partialFee: Balance; } ``` For more information on the underlying types and fields of `RuntimeDispatchInfoV1`, check the [`RuntimeDispatchInfoV1`](https://github.com/polkadot-js/api/blob/{{dependencies.javascript_packages.asset_transfer_api.polkadot_js_api_version}}/packages/types/src/interfaces/payment/types.ts#L28){target=\_blank} source code. ???
interface "Example" ***Request*** ```ts import { AssetTransferApi, constructApiPromise, } from '@substrate/asset-transfer-api'; async function main() { const { api, specName, safeXcmVersion } = await constructApiPromise( 'wss://wss.api.moonbeam.network', ); const assetsApi = new AssetTransferApi(api, specName, safeXcmVersion); const encodedExt = '0x0a03f977814e90da44bfa03b6295a0616a897441acec821a0600'; try { const decodedExt = await assetsApi.fetchFeeInfo(encodedExt, 'call'); console.log(`Fee info:\n${JSON.stringify(decodedExt, null, 4)}`); } catch (e) { console.error(e); throw Error(e as string); } } main() .catch((err) => console.error(err)) .finally(() => process.exit()); ``` ***Response***
```
Fee info:
{ "weight": { "refTime": 163777000, "proofSize": 3581 }, "class": "Normal", "partialFee": 0 }
```
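The `partialFee` value is denominated in the chain's smallest unit (plancks). As a small sketch (assuming the `api` and `assetsApi` instances from the example above are in scope), `formatBalance` from `@polkadot/util` can render it with the chain's own decimals and token symbol:

```ts
import { formatBalance } from '@polkadot/util';

const feeInfo = await assetsApi.fetchFeeInfo(encodedExt, 'call');
if (feeInfo) {
  // Format the planck-denominated fee using on-chain decimals and symbol.
  console.log(
    formatBalance(feeInfo.partialFee, {
      decimals: api.registry.chainDecimals[0],
      withUnit: api.registry.chainTokens[0],
    }),
  );
}
```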
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/interoperability/ --- BEGIN CONTENT --- --- title: Interoperability description: Explore Polkadot's XCM tooling ecosystem, featuring the Asset Transfer API and other utilities for implementing cross-chain messaging and transfers. template: index-page.html --- # Interoperability Polkadot's XCM tooling ecosystem redefines the boundaries of cross-chain communication and asset movement. With unparalleled flexibility and scalability, these advanced tools empower developers to build decentralized applications that connect parachains, relay chains, and external networks. By bridging siloed blockchains, Polkadot paves the way for a unified, interoperable ecosystem that accelerates innovation and collaboration. From enabling cross-chain messaging to facilitating secure asset transfers and integrating with external blockchains, Polkadot's XCM tools serve as the cornerstone for next-generation blockchain solutions. These resources not only enhance developer workflows but also lower technical barriers, unlocking opportunities for scalable, interconnected systems. Whether you're a blockchain pioneer or an emerging builder, Polkadot's tools provide the foundation to create impactful, future-ready applications. ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/interoperability/xcm-tools/ --- BEGIN CONTENT --- --- title: XCM Tools description: Explore essential XCM tools across Polkadot, crafted to enhance cross-chain functionality and integration within the ecosystem. categories: Basics, Tooling, Dapps --- # XCM Tools ## Introduction As described in the [Interoperability](/develop/interoperability){target=\_blank} section, XCM (Cross-Consensus Messaging) is a protocol used in the Polkadot and Kusama ecosystems to enable communication and interaction between chains. It facilitates cross-chain communication, allowing assets, data, and messages to flow seamlessly across the ecosystem. As XCM is central to enabling communication between blockchains, developers need robust tools to help interact with, build, and test XCM messages. Several XCM tools simplify working with the protocol by providing libraries, frameworks, and utilities that enhance the development process, ensuring that applications built within the Polkadot ecosystem can efficiently use cross-chain functionalities. ## Popular XCM Tools ### Moonsong Labs XCM Tools [Moonsong Labs XCM Tools](https://github.com/Moonsong-Labs/xcm-tools){target=\_blank} provides a collection of scripts for managing and testing XCM operations between Polkadot SDK-based runtimes. These tools allow performing tasks like asset registration, channel setup, and XCM initialization. 
Key features include: - **Asset registration** - registers assets, setting units per second (up-front fees), and configuring error (revert) codes - **XCM initializer** - initializes XCM, sets default XCM versions, and configures revert codes for XCM-related precompiles - **HRMP manipulator** - manages HRMP channel actions, including opening, accepting, or closing channels - **XCM-Transactor-Info-Setter** - configures transactor information, including extra weight and fee settings - **Decode XCM** - decodes XCM messages on the relay chain or parachains to help interpret cross-chain communication To get started, clone the repository and install the required dependencies: ```bash git clone https://github.com/Moonsong-Labs/xcm-tools && cd xcm-tools && yarn install ``` For a full overview of each script, visit the [scripts](https://github.com/Moonsong-Labs/xcm-tools/tree/main/scripts){target=\_blank} directory or refer to the [official documentation](https://github.com/Moonsong-Labs/xcm-tools/blob/main/README.md){target=\_blank} on GitHub. ### ParaSpell [ParaSpell](https://paraspell.xyz/){target=\_blank} is a collection of open-source XCM tools designed to streamline cross-chain asset transfers and interactions within the Polkadot and Kusama ecosystems. It equips developers with an intuitive interface to manage and optimize XCM-based functionalities. Some key points included by ParaSpell are: - [**XCM SDK**](https://paraspell.xyz/#xcm-sdk){target=\_blank} - provides a unified layer to incorporate XCM into decentralized applications, simplifying complex cross-chain interactions - [**XCM API**](https://paraspell.xyz/#xcm-api){target=\_blank} - offers an efficient, package-free approach to integrating XCM functionality while offloading heavy computing tasks, minimizing costs and improving application performance - [**XCM router**](https://paraspell.xyz/#xcm-router){target=\_blank} - enables cross-chain asset swaps in a single command, allowing developers to send one asset type (such as DOT on Polkadot) and receive a different asset on another chain (like ASTR on Astar) - [**XCM analyser**](https://paraspell.xyz/#xcm-analyser){target=\_blank} - decodes and translates complex XCM multilocation data into readable information, supporting easier troubleshooting and debugging - [**XCM visualizator**](https://paraspell.xyz/#xcm-visualizator){target=\_blank} - a tool designed to give developers a clear, interactive view of XCM activity across the Polkadot ecosystem, providing insights into cross-chain communication flow ParaSpell's tools make it simple for developers to build, test, and deploy cross-chain solutions without needing extensive knowledge of the XCM protocol. With features like message composition, decoding, and practical utility functions for parachain interactions, ParaSpell is especially useful for debugging and optimizing cross-chain communications. ### Astar XCM Tools The [Astar parachain](https://github.com/AstarNetwork/Astar/tree/master){target=\_blank} offers a crate with a set of utilities for interacting with the XCM protocol. The [xcm-tools](https://github.com/AstarNetwork/Astar/tree/master/bin/xcm-tools){target=\_blank} crate provides a straightforward method for users to locate a sovereign account or calculate an XC20 asset ID. 
The xcm-tools crate includes commands that let users perform the following tasks: - **Sovereign accounts** - obtain the sovereign account address for any parachain, either on the Relay Chain or for sibling parachains, using a simple command - **XC20 EVM addresses** - generate XC20-compatible Ethereum addresses for assets by entering the asset ID, making it easy to integrate assets across Ethereum-compatible environments - **Remote accounts** - retrieve remote account addresses needed for multi-location compatibility, using flexible options to specify account types and parachain IDs To start using these tools, clone the [Astar repository](https://github.com/AstarNetwork/Astar){target=\_blank} and compile the xcm-tools package: ```bash git clone https://github.com/AstarNetwork/Astar && cd Astar && cargo build --release -p xcm-tools ``` After compiling, verify the setup with the following command: ```bash ./target/release/xcm-tools --help ``` For more details on using Astar xcm-tools, consult the [official documentation](https://docs.astar.network/docs/learn/interoperability/xcm/integration/tools/){target=\_blank}. ### Chopsticks The Chopsticks library supports testing XCM messages across networks by enabling you to fork multiple parachains along with a relay chain. For further details, see the [Chopsticks documentation](/tutorials/polkadot-sdk/testing/fork-live-chains/){target=\_blank} about XCM. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/e2e-testing/ --- BEGIN CONTENT --- --- title: E2E Testing on Polkadot SDK Chains description: Discover a suite of tools for E2E testing on Polkadot SDK-based blockchains, including configuration management, automation, and debugging utilities. template: index-page.html --- # E2E Testing ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/e2e-testing/moonwall/ --- BEGIN CONTENT --- --- title: E2E Testing with Moonwall description: Enhance blockchain end-to-end testing with Moonwall's standardized environment setup, comprehensive configuration management, and simple network interactions. categories: Parachains, Tooling --- # E2E Testing with Moonwall ## Introduction Moonwall is an end-to-end testing framework designed explicitly for Polkadot SDK-based blockchain networks. It addresses one of the most significant challenges in blockchain development: managing complex test environments and network configurations. Moonwall simplifies this complexity by providing the following: - A centralized configuration management system that explicitly defines all network parameters - A standardized approach to environment setup across different Substrate-based chains - Built-in utilities for common testing scenarios and network interactions Developers can focus on writing meaningful tests rather than managing infrastructure complexities or searching through documentation for configuration options. ## Prerequisites Before you begin, ensure you have the following installed: - [Node.js](https://nodejs.org/en/){target=\_blank} (version 20.10 or higher) - A package manager such as [npm](https://www.npmjs.com/){target=\_blank}, [yarn](https://yarnpkg.com/){target=\_blank}, or [pnpm](https://pnpm.io/){target=\_blank} ## Install Moonwall Moonwall can be installed globally for system-wide access or locally within specific projects. This section covers both installation methods. !!!
tip This documentation corresponds to Moonwall version `{{ dependencies.javascript_packages.moonwall.version }}`. To avoid compatibility issues with the documented features, ensure you're using the matching version. ### Global Installation Global installation provides system-wide access to the Moonwall CLI, making it ideal for developers working across multiple blockchain projects. Install it by running one of the following commands: === "npm" ```bash npm install -g @moonwall/cli@{{ dependencies.javascript_packages.moonwall.version }} ``` === "pnpm" ```bash pnpm -g install @moonwall/cli@{{ dependencies.javascript_packages.moonwall.version }} ``` === "yarn" ```bash yarn global add @moonwall/cli@{{ dependencies.javascript_packages.moonwall.version }} ``` Now, you can run the `moonwall` command from your terminal. ### Local Installation Local installation is recommended for better dependency management and version control within a specific project. First, initialize your project: ```bash mkdir my-moonwall-project cd my-moonwall-project npm init -y ``` Then, install it as a local dependency: === "npm" ```bash npm install @moonwall/cli@{{ dependencies.javascript_packages.moonwall.version }} ``` === "pnpm" ```bash pnpm install @moonwall/cli@{{ dependencies.javascript_packages.moonwall.version }} ``` === "yarn" ```bash yarn add @moonwall/cli@{{ dependencies.javascript_packages.moonwall.version }} ``` ## Initialize Moonwall The `moonwall init` command launches an interactive wizard to create your configuration file: ```bash moonwall init ``` During setup, you will see prompts for the following parameters: - **`label`** - identifies your test configuration - **`global timeout`** - maximum time (ms) for test execution - **`environment name`** - name for your testing environment - **`network foundation`** - type of blockchain environment to use - **`tests directory`** - location of your test files Press `Enter` to accept defaults or input custom values. You should see something like this:
moonwall init ✔ Provide a label for the config file moonwall_config ✔ Provide a global timeout value 30000 ✔ Provide a name for this environment default_env ✔ What type of network foundation is this? dev ✔ Provide the path for where tests for this environment are kept tests/ ? Would you like to generate this config? (no to restart from beginning) (Y/n)
The wizard generates a `moonwall.config` file: ```json { "label": "moonwall_config", "defaultTestTimeout": 30000, "environments": [ { "name": "default_env", "testFileDir": ["tests/"], "foundation": { "type": "dev" } } ] } ``` The default configuration requires specific details about your blockchain node and test requirements: - The `foundation` object defines how your test blockchain node will be launched and managed. The dev foundation, which runs a local node binary, is used for local development For more information about available options, check the [Foundations](https://moonsong-labs.github.io/moonwall/guide/intro/foundations.html){target=\_blank} section. - The `connections` array specifies how your tests will interact with the blockchain node. This typically includes provider configuration and endpoint details A provider is a tool that allows you or your application to connect to a blockchain network and simplifies the low-level details of the process. A provider handles submitting transactions, reading state, and more. For more information on available providers, check the [Providers supported](https://moonsong-labs.github.io/moonwall/guide/intro/providers.html#providers-supported){target=\_blank} page in the Moonwall documentation. Here's a complete configuration example for testing a local node using Polkadot.js as a provider: ```json { "label": "moonwall_config", "defaultTestTimeout": 30000, "environments": [ { "name": "default_env", "testFileDir": ["tests/"], "foundation": { "launchSpec": [ { "binPath": "./node-template", "newRpcBehaviour": true, "ports": { "rpcPort": 9944 } } ], "type": "dev" }, "connections": [ { "name": "myconnection", "type": "polkadotJs", "endpoints": ["ws://127.0.0.1:9944"] } ] } ] } ``` ## Writing Tests Moonwall uses the [`describeSuite`](https://github.com/Moonsong-Labs/moonwall/blob/7568048c52e9f7844f38fb4796ae9e1b9205fdaa/packages/cli/src/lib/runnerContext.ts#L65){target=\_blank} function to define test suites, like using [Mocha](https://mochajs.org/){target=\_blank}. Each test suite requires the following: - **`id`** - unique identifier for the suite - **`title`** - descriptive name for the suite - **`foundationMethods`** - specifies the testing environment (e.g., `dev` for local node testing) - **`testCases`** - a callback function that houses the individual test cases of this suite The following example shows how to test a balance transfer between two accounts: ```ts import '@polkadot/api-augment'; import { describeSuite, expect } from '@moonwall/cli'; import { Keyring } from '@polkadot/api'; describeSuite({ id: 'D1', title: 'Demo suite', foundationMethods: 'dev', testCases: ({ it, context, log }) => { it({ id: 'T1', title: 'Test Case', test: async () => { // Set up polkadot.js API and testing accounts let api = context.polkadotJs(); let alice = new Keyring({ type: 'sr25519' }).addFromUri('//Alice'); let charlie = new Keyring({ type: 'sr25519' }).addFromUri('//Charlie'); // Query Charlie's account balance before transfer const balanceBefore = (await api.query.system.account(charlie.address)) .data.free; // Before transfer, Charlie's account balance should be 0 expect(balanceBefore.toString()).toEqual('0'); log('Balance before: ' + balanceBefore.toString()); // Transfer from Alice to Charlie const amount = 1000000000000000; await api.tx.balances .transferAllowDeath(charlie.address, amount) .signAndSend(alice); // Wait for the transaction to be included in a block. // This is necessary because the balance is not updated immediately. 
// Block time is 6 seconds. await new Promise((resolve) => setTimeout(resolve, 6000)); // Query Charlie's account balance after transfer const balanceAfter = (await api.query.system.account(charlie.address)) .data.free; // After transfer, Charlie's account balance should be 1000000000000000 expect(balanceAfter.toString()).toEqual(amount.toString()); log('Balance after: ' + balanceAfter.toString()); }, }); }, }); ``` This test demonstrates several key concepts: - Initializing the Polkadot.js API through Moonwall's context and setting up test accounts - Querying on-chain state - Executing transactions - Waiting for block inclusion - Verifying results using assertions ## Running the Tests Execute your tests using the `test` Moonwall CLI command. For the default environment setup run: ```bash moonwall test default_env -c moonwall.config ``` The test runner will output detailed results showing: - Test suite execution status - Individual test case results - Execution time - Detailed logs and error messages (if any) Example output:
moonwall test default_env -c moonwall.config stdout | tests/test1.ts > 🗃️ D1 Demo suite > 📁 D1T1 Test Case 2025-01-21T19:27:55.624Z test:default_env Balance before: 0 stdout | tests/test1.ts > 🗃️ D1 Demo suite > 📁 D1T1 Test Case 2025-01-21T19:28:01.637Z test:default_env Balance after: 1000000000000000 ✓ default_env tests/test1.ts (1 test) 6443ms ✓ 🗃️ D1 Demo suite > 📁 D1T1 Test Case 6028ms Test Files 1 passed (1) Tests 1 passed (1) Start at 16:27:53 Duration 7.95s (transform 72ms, setup 0ms, collect 1.31s, tests 6.44s, environment 0ms, prepare 46ms) ✅ All tests passed
## Where to Go Next For a comprehensive guide to Moonwall's full capabilities, available configurations, and advanced usage, see the official [Moonwall](https://moonsong-labs.github.io/moonwall/){target=\_blank} documentation. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/fork-chains/chopsticks/get-started/ --- BEGIN CONTENT --- --- title: Get Started description: Simplify Polkadot SDK development with Chopsticks. Learn essential features, how to install Chopsticks, and how to configure local blockchain forks. categories: Parachains, Tooling --- # Get Started ## Introduction [Chopsticks](https://github.com/AcalaNetwork/chopsticks/){target=\_blank}, developed by the [Acala Foundation](https://github.com/AcalaNetwork){target=\_blank}, is a versatile tool tailored for developers working on Polkadot SDK-based blockchains. With Chopsticks, you can fork live chains locally, replay blocks to analyze extrinsics, and simulate complex scenarios like XCM interactions, all without deploying to a live network. This guide walks you through installing Chopsticks and provides information on configuring a local blockchain fork. By streamlining testing and experimentation, Chopsticks empowers developers to innovate and accelerate their blockchain projects within the Polkadot ecosystem. For additional support and information, please reach out through [GitHub Issues](https://github.com/AcalaNetwork/chopsticks/issues){target=_blank}. !!! warning Chopsticks uses the [Smoldot](https://github.com/smol-dot/smoldot){target=_blank} light client, which only supports the native Polkadot SDK API. Consequently, a Chopsticks-based fork doesn't support Ethereum JSON-RPC calls, meaning you cannot use it to fork your chain and connect MetaMask. ## Prerequisites Before you begin, ensure you have the following installed: - [Node.js](https://nodejs.org/en/){target=\_blank} - A package manager such as [npm](https://www.npmjs.com/){target=\_blank}, which should be installed with Node.js by default, or [Yarn](https://yarnpkg.com/){target=\_blank} ## Install Chopsticks You can install Chopsticks globally or locally in your project. Choose the option that best fits your development workflow. This documentation explains the features of Chopsticks version `{{ dependencies.javascript_packages.chopsticks.version }}`. Make sure you're using the correct version to match these instructions. ### Global Installation To install Chopsticks globally, allowing you to use it across multiple projects, run: ```bash npm i -g @acala-network/chopsticks@{{ dependencies.javascript_packages.chopsticks.version }} ``` Now, you should be able to run the `chopsticks` command from your terminal. ### Local Installation To use Chopsticks in a specific project, first create a new directory and initialize a Node.js project: ```bash mkdir my-chopsticks-project cd my-chopsticks-project npm init -y ``` Then, install Chopsticks as a local dependency: ```bash npm i @acala-network/chopsticks@{{ dependencies.javascript_packages.chopsticks.version }} ``` Finally, you can run Chopsticks using the `npx` command. To see all available options and commands, run it with the `--help` flag: ```bash npx @acala-network/chopsticks --help ``` ## Configure Chopsticks To run Chopsticks, you need to configure some parameters. These can be set either through a configuration file or via the command-line interface (CLI).
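For example, the following command (a minimal sketch; the endpoint is one of the public Polkadot endpoints shown in the configuration example below) forks Polkadot at its latest block and exposes the fork on port 8000:

```bash
npx @acala-network/chopsticks --endpoint wss://polkadot-rpc.dwellir.com --port 8000
```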
The parameters that can be configured are as follows: - `genesis` - the link to a parachain's raw genesis file to build the fork from, instead of an endpoint - `timestamp` - timestamp of the block to fork from - `endpoint` - the endpoint of the parachain to fork - `block` - used to specify at which block hash or number to replay the fork - `wasm-override` - path of the Wasm to use as the parachain runtime, instead of an endpoint's runtime - `db` - path to the file that stores or will store the parachain's database - `config` - path or URL of the config file - `port` - the port to expose an endpoint on - `build-block-mode` - how blocks should be built in the fork: batch, manual, instant - `import-storage` - a pre-defined JSON/YAML storage file to override in the parachain's storage - `allow-unresolved-imports` - whether to allow unresolved Wasm imports when using a Wasm to build the parachain - `html` - include to generate a storage diff preview between blocks - `mock-signature-host` - mock the signature host so that any signature that starts with `0xdeadbeef` and is filled with `0xcd` is considered valid ### Configuration File The Chopsticks source repository includes a collection of [YAML](https://yaml.org/){target=\_blank} files that can be used to set up various Polkadot SDK chains locally. You can download these configuration files from the [repository's `configs` folder](https://github.com/AcalaNetwork/chopsticks/tree/master/configs){target=\_blank}. An example of a configuration file for Polkadot is as follows: ```yaml endpoint: - wss://rpc.ibp.network/polkadot - wss://polkadot-rpc.dwellir.com mock-signature-host: true block: ${env.POLKADOT_BLOCK_NUMBER} db: ./db.sqlite runtime-log-level: 5 import-storage: System: Account: - - - 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY - providers: 1 data: free: '10000000000000000000' ParasDisputes: $removePrefix: ['disputes'] # these can make block building super slow ``` The configuration file allows you to modify the storage of the forked network by rewriting the pallet, state component, and value that you want to change. For example, Polkadot's file rewrites Alice's `system.Account` storage so that the free balance is set to `10000000000000000000`. ### CLI Flags Alternatively, all settings (except for genesis and timestamp) can be configured via command-line flags, providing a comprehensive method to set up the environment. ## WebSocket Commands Chopsticks' internal WebSocket server has special endpoints that allow the manipulation of the local Polkadot SDK chain. These are the methods that can be invoked and their parameters: - **dev_newBlock** (newBlockParams) — generates one or more new blocks === "Parameters" - `newBlockParams` ++"NewBlockParams"++ - the parameters to build the new block with.
The `NewBlockParams` interface includes the following properties: - `count` ++"number"++ - the number of blocks to build - `dmp` ++"{ msg: string, sentAt: number }[]"++ - the downward messages to include in the block - `hrmp` ++"Record"++ - the horizontal messages to include in the block - `to` ++"number"++ - the block number to build to - `transactions` ++"string[]"++ - the transactions to include in the block - `ump` ++"Record"++ - the upward messages to include in the block - `unsafeBlockHeight` ++"number"++ - build block using a specific block height (unsafe) === "Example" ```js import { ApiPromise, WsProvider } from '@polkadot/api'; async function main() { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; await api.rpc('dev_newBlock', { count: 1 }); } main(); ``` - **dev_setBlockBuildMode** (buildBlockMode) — sets block build mode === "Parameter" - `buildBlockMode` ++"BuildBlockMode"++ - the build mode. Can be any of the following modes: ```ts export enum BuildBlockMode { Batch = 'Batch', /** One block per batch (default) */ Instant = 'Instant', /** One block per transaction */ Manual = 'Manual', /** Only build when triggered */ } ``` === "Example" ```js import { ApiPromise, WsProvider } from '@polkadot/api'; async function main() { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; await api.rpc('dev_setBlockBuildMode', 'Instant'); } main(); ``` - **dev_setHead** (hashOrNumber) — sets the head of the blockchain to a specific hash or number === "Parameter" - `hashOrNumber` ++"string | number"++ - the block hash or number to set as head === "Example" ```js import { ApiPromise, WsProvider } from '@polkadot/api'; async function main() { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; await api.rpc('dev_setHead', 500); } main(); ``` - **dev_setRuntimeLogLevel** (runtimeLogLevel) — sets the runtime log level === "Parameter" - `runtimeLogLevel` ++"number"++ - the runtime log level to set === "Example" ```js import { ApiPromise, WsProvider } from '@polkadot/api'; async function main() { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; await api.rpc('dev_setRuntimeLogLevel', 1); } main(); ``` - **dev_setStorage** (values, blockHash) — creates or overwrites the value of any storage === "Parameters" - `values` ++"object"++ - JSON object resembling the path to a storage value - `blockHash` ++"string"++ - the block hash at which to set the storage value === "Example" ```js import { ApiPromise, WsProvider } from '@polkadot/api'; import { Keyring } from '@polkadot/keyring'; async function main() { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; const keyring = new Keyring({ type: 'ed25519' }); const bob = keyring.addFromUri('//Bob'); const storage = { System: { Account: [[[bob.address], { data: { free: 100000 }, nonce: 1 }]], }, }; await api.rpc('dev_setStorage', storage); } main(); ``` - **dev_timeTravel** (date) — sets the timestamp of the block to a specific date === "Parameter" - `date` ++"string"++ - timestamp or date string to set.
All future blocks will be sequentially created after this point in time === "Example" ```js import { ApiPromise, WsProvider } from '@polkadot/api'; async function main() { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; await api.rpc('dev_timeTravel', '2030-08-15T00:00:00'); } main(); ``` ## Where to Go Next
- Tutorial __Fork a Chain with Chopsticks__ --- Visit this guide for step-by-step instructions for configuring and interacting with your forked chain. [:octicons-arrow-right-24: Reference](/tutorials/polkadot-sdk/testing/fork-live-chains/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/fork-chains/chopsticks/ --- BEGIN CONTENT --- --- title: Chopsticks description: Learn how to install, configure, and use Chopsticks for debugging and forking Polkadot SDK-based networks in a local development environment. template: index-page.html --- # Fork Live Chains with Chopsticks Chopsticks is a powerful tool that lets you create local copies of running Polkadot SDK-based networks. By forking live chains locally, you can safely test features, analyze network behavior, and simulate complex scenarios without affecting production networks. ## What Can I Do with Chopsticks? - Create local forks of live networks - Replay blocks to analyze behavior - Test XCM interactions - Simulate complex scenarios - Modify network storage and state Whether you're debugging an issue, testing new features, or exploring cross-chain interactions, Chopsticks provides a safe environment for blockchain experimentation and validation. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/fork-chains/ --- BEGIN CONTENT --- --- title: Fork Chains for Testing description: Explore tools for forking live blockchain networks, enabling you to replicate real-world conditions in a local environment for accurate testing and debugging. template: index-page.html --- # Fork Live Chains for Testing Explore tools for forking live blockchain networks. These tools enable you to replicate real-world conditions in a local environment for accurate testing and debugging. They also allow you to analyze network behavior, test new features, and simulate complex scenarios in a controlled environment without affecting production systems. Ready to get started? Jump straight to the [Chopsticks getting started](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/) guide. ## Why Fork a Live Chain? Forking a live chain creates a controlled environment that mirrors live network conditions. This approach enables you to: - Test features safely before deployment - Debug complex interactions - Validate runtime changes - Experiment with network modifications ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/ --- BEGIN CONTENT --- --- title: Parachains description: Discover essential parachain development resources for building in the Polkadot ecosystem, highlighting tools to streamline your development process. template: index-page.html --- # Parachain Tools Within the Polkadot ecosystem, you'll find a robust set of development tools that empower developers to build, test, and deploy blockchain applications efficiently. Whether you're designing a custom parachain, testing new features, or validating network configurations, these tools streamline the development process and ensure your blockchain setup is secure and optimized. This section explores essential tools for blockchain testing, forking live networks, and interacting with the Polkadot ecosystem, giving you the resources needed to bring your blockchain project to life. 
## Quick Links - [Use Pop CLI to start your parachain project](/develop/toolkit/parachains/quickstart/pop-cli/) - [Use Zombienet to spawn a chain](/develop/toolkit/parachains/spawn-chains/zombienet/get-started/) - [Use Chopsticks to fork a chain](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/) - [Use Moonwall to execute E2E testing](/develop/toolkit/parachains/e2e-testing/moonwall/) ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/light-clients/ --- BEGIN CONTENT --- --- title: Light Clients description: Light clients enable secure and efficient blockchain interaction without running a full node. Learn everything you need to know about light clients on Polkadot. categories: Parachains, Tooling --- # Light Clients ## Introduction Light clients enable secure and efficient blockchain interaction without running a full node. They provide a trust-minimized alternative to JSON-RPC by verifying data through cryptographic proofs rather than blindly trusting remote nodes. This guide covers: - What light clients are and how they work - Their advantages compared to full nodes and JSON-RPC - Available implementations in the Polkadot ecosystem - How to use light clients in your applications Light clients are particularly valuable for resource-constrained environments and applications requiring secure, decentralized blockchain access without the overhead of maintaining full nodes. !!!note "Light node or light client?" The terms _light node_ and _light client_ are interchangeable. Both refer to a blockchain client that syncs without downloading the entire blockchain state. All nodes in a blockchain network are fundamentally clients, engaging in peer-to-peer communication. ## Light Clients Workflow Unlike JSON-RPC interfaces, where an application must maintain a list of providers or rely on a single node, light clients are not limited to or dependent on a single node. They use cryptographic proofs to verify the blockchain's state, ensuring it is up-to-date and accurate. By verifying only block headers, light clients avoid syncing the entire state, making them ideal for resource-constrained environments. ```mermaid flowchart LR DAPP([dApp])-- Query Account Info -->LC([Light Client]) LC -- Request --> FN(((Full Node))) LC -- Response --> DAPP FN -- Response (validated via Merkle proof) --> LC ``` In the diagram above, the decentralized application queries on-chain account information through the light client. The light client runs as part of the application and requires minimal memory and computational resources. It uses Merkle proofs to verify the state retrieved from a full node in a trust-minimized manner. Polkadot-compatible light clients utilize [warp syncing](https://spec.polkadot.network/sect-lightclient#sect-sync-warp-lightclient){target=\_blank}, which downloads only block headers. Light clients can quickly verify the blockchain's state, including [GRANDPA finality](/polkadot-protocol/glossary#grandpa){target=\_blank} justifications. !!!note "What does it mean to be trust-minimized?" _Trust-minimized_ means that the light client does not need to fully trust the full node from which it retrieves the state. This is achieved through the use of Merkle proofs, which allow the light client to verify the correctness of the state by checking the Merkle tree root.
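To make the idea concrete, here is a simplified, illustrative sketch of Merkle-proof verification in TypeScript. It is not Polkadot's actual state-proof format (Polkadot uses a radix-16 Merkle-Patricia trie with Blake2 hashing); it only shows how a value can be checked against a known root using sibling hashes, which is the principle a light client relies on:

```ts
import { createHash } from 'node:crypto';

// Illustrative only: a binary Merkle proof check using SHA-256.
// A real light client verifies trie-format proofs against the
// state root contained in a finalized block header.
function sha256(data: Buffer): Buffer {
  return createHash('sha256').update(data).digest();
}

function verifyProof(
  leaf: Buffer, // the claimed value (e.g., account data)
  proof: { hash: Buffer; left: boolean }[], // sibling hashes, bottom-up
  root: Buffer // trusted root, taken from a verified header
): boolean {
  let acc = sha256(leaf);
  for (const sibling of proof) {
    // Recompute the parent hash at each level of the tree
    acc = sibling.left
      ? sha256(Buffer.concat([sibling.hash, acc]))
      : sha256(Buffer.concat([acc, sibling.hash]));
  }
  return acc.equals(root);
}
```

In a real light client, the trusted root comes from a block header whose finality has been verified via GRANDPA justifications, as described above.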
## JSON-RPC and Light Client Comparison Another common method of communication between a user interface (UI) and a node is through the JSON-RPC protocol. Generally, the UI retrieves information from the node, fetches network or [pallet](/polkadot-protocol/glossary#pallet){target=\_blank} data, and interacts with the blockchain. This is typically done in one of two ways: - **User-controlled nodes** - the UI connects to a node client installed on the user's machine - These nodes are secure, but installation and maintenance can be inconvenient - **Publicly accessible nodes** - the UI connects to a third-party-owned publicly accessible node client - These nodes are convenient but centralized and less secure. Applications must maintain a list of backup nodes in case the primary node becomes unavailable While light clients still communicate with [full nodes](/polkadot-protocol/glossary#full-node), they offer significant advantages for applications requiring a secure alternative to running a full node: | Full Node | Light Client | | :---: | :---: | | Fully verifies all blocks of the chain | Verifies only the authenticity of blocks | | Stores previous block data and the chain's storage in a database | Does not require a database | | Installation, maintenance, and execution are resource-intensive and require technical expertise | No installation required; typically included as part of the application | ## Using Light Clients The [`smoldot`](https://github.com/smol-dot/smoldot){target=\_blank} client is the cornerstone of light client implementation for Polkadot SDK-based chains. It provides the primitives needed to build light clients and is also integrated into libraries such as [PAPI](#papi-light-client-support). ### PAPI Light Client Support The [Polkadot API (PAPI)](/develop/toolkit/api-libraries/papi){target=\_blank} library natively supports light client configurations powered by [`smoldot`](https://github.com/smol-dot/smoldot){target=\_blank}. This allows developers to connect to multiple chains simultaneously using a light client; a minimal connection sketch appears at the end of this section. ### Substrate Connect - Browser Extension The [Substrate Connect browser extension](https://www.npmjs.com/package/@substrate/connect-extension-protocol){target=\_blank} enables end-users to interact with applications connected to multiple blockchains or to connect their own blockchains to supported applications. Establishing a sufficient number of peers can be challenging due to browser limitations on WebSocket connections from HTTPS pages, as many nodes require TLS. The Substrate Connect browser extension addresses this limitation by keeping chains synced in the background, enabling faster application performance. Substrate Connect automatically detects whether the user has the extension installed. If not, an in-page Wasm light client is created for them.
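As referenced above, a minimal sketch of connecting to Polkadot through PAPI's smoldot-based light client might look like the following (assumptions: the `polkadot-api` package is installed and its bundled `polkadot` chain spec is used; method names follow the current PAPI documentation and may change between versions):

```ts
import { createClient } from 'polkadot-api';
import { getSmProvider } from 'polkadot-api/sm-provider';
import { chainSpec } from 'polkadot-api/chains/polkadot';
import { start } from 'polkadot-api/smoldot';

async function main() {
  // Start smoldot in-process and add the Polkadot chain from its chain spec
  const smoldot = start();
  const chain = await smoldot.addChain({ chainSpec });

  // Create a PAPI client backed by the light client provider
  const client = createClient(getSmProvider(chain));

  // Query the latest finalized block, verified locally by the light client
  const finalized = await client.getFinalizedBlock();
  console.log(`Finalized block #${finalized.number}`);
}

main();
```

Because the provider is smoldot rather than a WebSocket URL, the returned data is verified locally instead of being trusted from a remote RPC node.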
## Resources - [What is a light client and why you should care?](https://medium.com/paritytech/what-is-a-light-client-and-why-you-should-care-75f813ae2670){target=\_blank} - [Introducing Substrate Connect: Browser-Based Light Clients for Connecting to Substrate Chains](https://www.parity.io/blog/introducing-substrate-connect){target=\_blank} - [Substrate Connect GitHub Repository](https://github.com/paritytech/substrate-connect/tree/master/projects/extension){target=\_blank} - [Light Clients - Polkadot Specification](https://spec.polkadot.network/sect-lightclient){target=\_blank} --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/polkadot-omni-node/ --- BEGIN CONTENT --- --- title: Polkadot Omni Node description: Run parachain nodes easily with the polkadot-omni-node, a white-labeled binary that can run parachain nodes using a single pre-built solution. categories: Parachains, Tooling --- # Polkadot Omni Node ## Introduction The [`polkadot-omni-node`](https://crates.io/crates/polkadot-omni-node/{{dependencies.crates.polkadot_omni_node.version}}){target=\_blank} crate is a versatile, pre-built binary designed to simplify running parachains in the Polkadot ecosystem. Unlike traditional node binaries that are tightly coupled to specific runtime code, the `polkadot-omni-node` operates using an external [chain specification](/polkadot-protocol/glossary#chain-specification){target=\_blank} file, allowing it to adapt dynamically to different parachains. This approach enables it to act as a white-labeled node binary, capable of running most parachains that do not require custom node-level logic or extensions. Developers can leverage this flexibility to test, deploy, or operate parachain nodes without maintaining a dedicated codebase for each network. This guide provides step-by-step instructions for installing the `polkadot-omni-node`, obtaining a chain specification, and spinning up a parachain node. ## Prerequisites Before getting started, ensure you have the following prerequisites: - **[Rust](https://www.rust-lang.org/tools/install){target=\_blank}** - required to build and install the `polkadot-omni-node` binary Ensure Rust's `cargo` command is available in your terminal by running: ```bash cargo --version ``` ## Install Polkadot Omni Node To install `polkadot-omni-node` globally using `cargo`, run: ```bash cargo install --locked polkadot-omni-node@{{dependencies.crates.polkadot_omni_node.version}} ``` This command downloads and installs version {{dependencies.crates.polkadot_omni_node.version}} of the binary, making it available system-wide. To confirm the installation, run: ```bash polkadot-omni-node --version ``` You should see the installed version number printed to the terminal, confirming a successful installation. ## Obtain Chain Specifications The `polkadot-omni-node` binary uses a chain specification file to configure and launch a parachain node. This file defines the parachain's genesis state and network settings. The most common source for official chain specifications is the [`paritytech/chainspecs`](https://github.com/paritytech/chainspecs){target=\_blank} repository. These specifications are also browsable in a user-friendly format via the [Chainspec Collection](https://paritytech.github.io/chainspecs/){target=\_blank} website. To obtain a chain specification: 1. Visit the [Chainspec Collection](https://paritytech.github.io/chainspecs/){target=\_blank} website 2. Find the parachain you want to run 3. Click the chain spec to open it 4. 
Copy the JSON content and save it locally as a `.json` file, e.g., `chain_spec.json` ## Run a Parachain Full Node Once you've installed `polkadot-omni-node` and saved the appropriate chain specification file, you can start a full node for your chosen parachain. To see all available flags and configuration options, run: ```bash polkadot-omni-node --help ``` To launch the node, run the following command, replacing `./INSERT_PARACHAIN_CHAIN_SPEC.json` with the actual path to your saved chain spec file. This command will: - Load the chain specification - Initialize the node using the provided network configuration - Begin syncing with the parachain network ```bash polkadot-omni-node --chain ./INSERT_PARACHAIN_CHAIN_SPEC.json --sync warp ``` - The `--chain` flag tells the `polkadot-omni-node` which parachain to run by pointing to its chain specification file - The `--sync warp` flag enables warp sync, allowing the node to quickly catch up to the latest finalized state. Historical blocks are fetched in the background as the node continues operating Once started, the node will begin connecting to peers and syncing with the network. You’ll see logs in your terminal reflecting its progress. ## Interact with the Node By default, `polkadot-omni-node` exposes a WebSocket endpoint at `ws://localhost:9944`, which you can use to interact with the running node. You can connect using: - [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank} — a web-based interface for exploring and interacting with Polkadot SDK-based chains - Custom scripts using compatible [libraries](/develop/toolkit/api-libraries/){target=\_blank} Once connected, you can review blocks, call extrinsics, inspect storage, and interact with the runtime. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/quickstart/ --- BEGIN CONTENT --- --- title: Quickstart Parachain Development description: Bootstrap your development environment, scaffold template-based projects, deploy local networks, and simplify your workflow with tools for parachain developers. template: index-page.html --- # Quickstart Parachain Development ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/quickstart/pop-cli/ --- BEGIN CONTENT --- --- title: Quickstart Parachain Development with Pop CLI description: Quickly bootstrap parachain projects, scaffold templates, deploy local networks, and streamline development workflows using Pop CLI. categories: Parachains, Tooling --- # Quickstart Parachain Development With Pop CLI ## Introduction [Pop CLI](https://onpop.io/cli/){target=\_blank} is a powerful command-line tool designed explicitly for rapid parachain development within the Polkadot ecosystem. It addresses essential developer needs by providing streamlined commands to set up development environments, scaffold parachain templates, and manage local blockchain networks. Pop CLI simplifies parachain development with features like: - Quick initialization of parachain development environments - Project scaffolding from predefined parachain templates - Easy deployment and management of local development networks Developers can quickly begin coding and testing, significantly reducing setup overhead. 
### Install Pop CLI To install Pop CLI, run the following command: ```bash cargo install --force --locked pop-cli ``` Confirm that Pop CLI is installed by running `pop --help` in your terminal: ```bash pop --help ``` ### Set Up Your Development Environment To develop and build Polkadot SDK-based chains, preparing your local environment with the necessary tools and dependencies is essential. The [Install Polkadot SDK Dependencies](/develop/parachains/install-polkadot-sdk/){target=\_blank} guide walks you through this setup step-by-step. However, you can automate this entire process by running: ```bash pop install ``` This command provides an interactive experience that checks and installs all necessary dependencies for you. It’s the fastest and easiest way to prepare your development environment for building parachains with Pop CLI.
pop install ┌ Pop CLI : Install dependencies for development ⚙ ℹ️ Mac OS (Darwin) detected. ⚙ More information about the packages to be installed here: https://docs.substrate.io/install/macos/ ◆ 📦 Do you want to proceed with the installation of the following packages: homebrew, protobuf, openssl, rustup and cmake ? │ ● Yes / ○ No
### Initialize a Project Start a new project quickly using Pop CLI's `pop new parachain` command:
pop new
The command above scaffolds a new parachain project using the default template included with Pop CLI. For more specialized implementations, additional templates are available; you can explore them by running `pop new parachain --help`. Once the project is generated, move into the new directory and build your parachain: ```bash cd my-parachain pop build --release ``` !!! note Under the hood, `pop build --release` runs `cargo build --release`, but `pop build` adds functionality specific to Polkadot SDK projects, such as [deterministic runtime builds](/develop/parachains/deployment/build-deterministic-runtime/){target=\_blank} and automatic management of feature flags like `benchmark` or `try-runtime`. Pop CLI integrates the [Zombienet SDK](https://github.com/paritytech/zombienet-sdk){target=\_blank}, allowing you to easily launch ephemeral local networks for development and testing. To start a network, simply run the following (a sample `network.toml` sketch appears at the end of this section): ```bash pop up network -f ./network.toml ``` This command will automatically fetch the necessary binaries and spin up a Polkadot network with your configured parachains. You can also interact with your local network using Pop CLI's `pop call chain` command:
pop call
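For reference, the `./network.toml` file passed to `pop up network` above could look something like this minimal sketch (a Zombienet-style configuration; the chain name, binary path, and parachain ID are placeholders to adapt to your project):

```toml
[relaychain]
chain = "paseo-local"

[[relaychain.nodes]]
name = "alice"
validator = true

[[relaychain.nodes]]
name = "bob"
validator = true

[[parachains]]
id = 1000
default_command = "./target/release/parachain-template-node"

[[parachains.collators]]
name = "collator-01"
```

With a file like this, the spawned network consists of two relay chain validators and a single collator for parachain 1000.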
## Where to Go Next For a comprehensive guide to all Pop CLI features and advanced usage, see the official [Pop CLI](https://learn.onpop.io/appchains) documentation. !!! tip Pop CLI also offers powerful solutions for smart contract developers. If you're interested in that path, check out the [Pop CLI Smart Contracts](https://learn.onpop.io/contracts) documentation. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/rpc-calls/ --- BEGIN CONTENT --- --- title: RPC Calls to Polkadot SDK Chains description: Learn how to interact with Polkadot SDK-based chains using RPC calls. This guide covers essential methods and usage via curl. --- # RPC Calls ## Introduction [Remote Procedure Call](https://en.wikipedia.org/wiki/Remote_procedure_call){target=\_blank} (RPC) interfaces are the primary way to interact programmatically with Polkadot SDK-based parachains and relay chains. RPC calls allow you to query chain state, submit transactions, and monitor network health from external applications or scripts. This guide covers: - What RPC calls are and how they work in the Polkadot SDK. - How to make RPC calls using `curl` or similar tools. - The most useful and commonly used RPC methods. RPC endpoints are available on every node and can be accessed via HTTP and WebSocket. Most developer tools, dashboards, and libraries (like [Polkadot.js](/develop/toolkit/api-libraries/polkadot-js-api){target=\_blank}, [Subxt](/develop/toolkit/api-libraries/subxt){target=\_blank}, and others) utilize these endpoints internally. ## How Do RPC Calls Work? RPC (Remote Procedure Call) is a protocol that allows you to invoke functions on a remote server (in this case, a blockchain node) as if they were local. Polkadot SDK nodes implement the [JSON-RPC 2.0](https://www.jsonrpc.org/specification){target=\_blank} standard, making it easy to interact with them using standard HTTP requests. ```mermaid flowchart LR CLIENT([Client Application])-- JSON-RPC Request -->NODE([Node]) NODE -- JSON Response --> CLIENT ``` RPC calls are stateless and can be used to: - Query chain state (e.g., block number, storage values) - Submit extrinsics (transactions) - Monitor node and network health - Retrieve metadata and runtime information ## Making RPC Calls with Curl You can make RPC calls to a node using [`curl`](https://curl.se/){target=\_blank} or any HTTP client. RPC calls follow this general format: ```bash curl -H "Content-Type: application/json" \ -d '{"id":1, "jsonrpc":"2.0", "method": "INSERT_METHOD_NAME", "params": [INSERT_PARAMS]}' \ NODE_ENDPOINT ``` - **`method`**: The RPC method you want to call (e.g., `system_health`). - **`params`**: Parameters for the method (if any). - **`NODE_ENDPOINT`**: The HTTP endpoint of your node (e.g., `http://localhost:9933` or a public endpoint). Here's a simple example that retrieves the latest block of the Polkadot relay chain using a public endpoint: ```bash curl -H "Content-Type: application/json" \ -d '{"id":1, "jsonrpc":"2.0", "method": "chain_getBlock"}' \ https://rpc.polkadot.io ``` ## Essential RPC Methods Below are some of the most useful and commonly used RPC methods for Polkadot SDK-based chains. Each method includes a description, parameters, and an example request. --- ### system_health Checks the health of your node.
**Parameters:** None **Example:** ```bash title="system_health" curl -H "Content-Type: application/json" \ -d '{"id":1, "jsonrpc":"2.0", "method": "system_health", "params":[]}' \ http://localhost:9933 ``` --- ### chain_getBlock Returns the latest block or a specific block by hash. **Parameters:** - `blockHash` *(optional, string)* – The hash of the block to retrieve. If omitted, returns the latest block. **Example:** ```bash title="chain_getBlock" curl -H "Content-Type: application/json" \ -d '{"id":1, "jsonrpc":"2.0", "method": "chain_getBlock", "params":[]}' \ http://localhost:9933 ``` --- ### state_getStorage Queries on-chain storage by key (requires [SCALE-encoded](/polkadot-protocol/parachain-basics/data-encoding){target=_blank} storage key). **Parameters:** - `storageKey` *(string)* – The SCALE-encoded storage key to query. **Example:** ```bash title="state_getStorage" curl -H "Content-Type: application/json" \ -d '{"id":1, "jsonrpc":"2.0", "method": "state_getStorage", "params":["0x..."]}' \ http://localhost:9933 ``` --- ### author_submitExtrinsic Submits a signed extrinsic (transaction) to the node. **Parameters:** - `extrinsic` *(string)* – The SCALE-encoded, signed extrinsic (transaction). **Example:** ```bash title="author_submitExtrinsic" curl -H "Content-Type: application/json" \ -d '{"id":1, "jsonrpc":"2.0", "method": "author_submitExtrinsic", "params":["0x..."]}' \ http://localhost:9933 ``` --- ### state_getMetadata Fetches the runtime metadata (needed for decoding storage and extrinsics). **Parameters:** None **Example:** ```bash title="state_getMetadata" curl -H "Content-Type: application/json" \ -d '{"id":1, "jsonrpc":"2.0", "method": "state_getMetadata", "params":[]}' \ http://localhost:9933 ``` --- ## Check Available RPC Calls To check all the RPC methods exposed by your node, you can use the `rpc_methods` call to get a comprehensive list of available methods. This is particularly useful when working with different chain implementations or custom runtimes that may have additional RPC endpoints. You can do this via [`curl`](#using-curl) or the [Polkadot.js Apps](#using-polkadotjs-apps). ### Using curl To check the available RPC methods using `curl`, you can use the following command: ```bash curl -H "Content-Type: application/json" \ -d '{"id":1, "jsonrpc":"2.0", "method": "rpc_methods", "params":[]}' \ https://rpc.polkadot.io ``` You can replace `https://rpc.polkadot.io` with the node endpoint you need to query. ### Using Polkadot.js Apps 1. Go to the [Polkadot.js Apps UI](https://polkadot.js.org/apps){target=\_blank} and navigate to the RPC calls section. ![](/images/develop/toolkit/parachains/rpc-calls/rpc-calls-01.webp) 2. Select **`rpc`** from the dropdown menu. ![](/images/develop/toolkit/parachains/rpc-calls/rpc-calls-02.webp) 3. Choose the **`methods`** method. ![](/images/develop/toolkit/parachains/rpc-calls/rpc-calls-03.webp) 4. Submit the call to get a list of all available RPC methods. ![](/images/develop/toolkit/parachains/rpc-calls/rpc-calls-04.webp) This will return a JSON response containing all the RPC methods supported by your node. ![](/images/develop/toolkit/parachains/rpc-calls/rpc-calls-05.webp) From this interface, you can also query the RPC methods directly, as you would with curl.
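The same call can also be made from a script rather than the command line; here is a minimal sketch using the built-in `fetch` API (assumes Node.js 18+ or any modern browser):

```ts
async function main() {
  // Send a standard JSON-RPC 2.0 request over HTTP
  const response = await fetch('https://rpc.polkadot.io', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      id: 1,
      jsonrpc: '2.0',
      method: 'rpc_methods',
      params: [],
    }),
  });

  // The result of rpc_methods contains a `methods` array
  const { result } = await response.json();
  console.log(`${result.methods.length} RPC methods available`);
}

main();
```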
## Resources - [Polkadot JSON-RPC API Reference](https://polkadot.js.org/docs/substrate/rpc/){target=\_blank} - [Parity DevOps: Important Flags for Running an RPC Node](https://paritytech.github.io/devops-guide/guides/rpc_index.html?#important-flags-for-running-an-rpc-node){target=\_blank} - [Polkadot.js Apps RPC Explorer](https://polkadot.js.org/apps/#/rpc){target=\_blank} --- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/spawn-chains/ --- BEGIN CONTENT --- --- title: Spawn Networks for Testing description: Discover tools that enable you to spawn blockchains for testing, allowing for debugging, and validation of your blockchain setups in a controlled environment. template: index-page.html --- # Spawn Networks for Testing Testing blockchain networks in a controlled environment is essential for development and validation. The Polkadot ecosystem provides specialized tools that enable you to spawn test networks, helping you verify functionality and catch issues before deploying to production. Ready to get started? Jump straight to the [Zombienet getting started](/develop/toolkit/parachains/spawn-chains/zombienet/get-started/) guide. ## Why Spawn a Network? Spawning a network provides a controlled environment to test and validate various aspects of your blockchain. Use these tools to: - Validate network configurations - Test cross-chain messaging - Verify runtime upgrades - Debug complex interactions ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/spawn-chains/zombienet/get-started/ --- BEGIN CONTENT --- --- title: Get Started description: Quickly install and configure Zombienet to deploy and test Polkadot-based blockchain networks with this comprehensive getting-started guide. categories: Parachains, Tooling --- # Get Started ## Introduction Zombienet is a robust testing framework designed for Polkadot SDK-based blockchain networks. It enables developers to efficiently deploy and test ephemeral blockchain environments on platforms like Kubernetes, Podman, and native setups. With its simple and versatile CLI, Zombienet provides an all-in-one solution for spawning networks, running tests, and validating performance. This guide will outline the different installation methods for Zombienet, provide step-by-step instructions for setting up on various platforms, and highlight essential provider-specific features and requirements. By following this guide, Zombienet will be up and running quickly, ready to streamline your blockchain testing and development workflows. ## Install Zombienet Zombienet releases are available on the [Zombienet repository](https://github.com/paritytech/zombienet){target=\_blank}. Multiple options are available for installing Zombienet, depending on the user's preferences and the environment where it will be used. The following section will guide you through the installation process for each option. === "Use the executable" Install Zombienet using executables by visiting the [latest release](https://github.com/paritytech/zombienet/releases){target=\_blank} page and selecting the appropriate asset for your operating system. You can download the executable and move it to a directory in your PATH. Each release includes executables for Linux and macOS. Executables are generated using [pkg](https://github.com/vercel/pkg){target=\_blank}, which allows the Zombienet CLI to operate without requiring Node.js to be installed. Then, ensure the downloaded file is executable: ```bash chmod +x zombienet-{{ dependencies.repositories.zombienet.architecture }} ``` Finally, you can run the following command to check if the installation was successful. If so, it will display the version of the installed Zombienet: ```bash ./zombienet-{{ dependencies.repositories.zombienet.architecture }} version ``` If you want to add the `zombienet` executable to your PATH, you can move it to a directory in your PATH, such as `/usr/local/bin`: ```bash mv zombienet-{{ dependencies.repositories.zombienet.architecture }} /usr/local/bin/zombienet ``` Now you can refer to the `zombienet` executable directly. ```bash zombienet version ``` === "Use Nix" For Nix users, the Zombienet repository provides a [`flake.nix`](https://github.com/paritytech/zombienet/blob/main/flake.nix){target=\_blank} file to install Zombienet, making it easy to incorporate Zombienet into Nix-based projects. To install Zombienet using Nix, run the following command, which fetches the flake and installs the Zombienet package: ```bash nix run github:paritytech/zombienet/INSERT_ZOMBIENET_VERSION -- \ spawn INSERT_ZOMBIENET_CONFIG_FILE_NAME.toml ``` Replace the `INSERT_ZOMBIENET_VERSION` with the desired version of Zombienet and the `INSERT_ZOMBIENET_CONFIG_FILE_NAME` with the name of the configuration file you want to use. To run the command above, you need to have [Flakes](https://nixos.wiki/wiki/Flakes#Enable_flakes){target=\_blank} enabled.
Alternatively, you can also include the Zombienet binary in the PATH for the current shell using the following command: ```bash nix shell github:paritytech/zombienet/INSERT_ZOMBIENET_VERSION ``` === "Use Docker" Zombienet can also be run using Docker. The Zombienet repository provides a Docker image that can be used to run the Zombienet CLI. To run Zombienet using Docker, you can use the following command: ```bash docker run -it --rm \ -v $(pwd):/home/nonroot/zombie-net/host-current-files \ paritytech/zombienet ``` The command above will run the Zombienet CLI inside a Docker container and mount the current directory to the `/home/nonroot/zombie-net/host-current-files` directory. This allows Zombienet to access the configuration file and other files in the current directory. If you want to mount a different directory, replace `$(pwd)` with the desired directory path. Inside the Docker container, you can run the Zombienet CLI commands. First, you need to set up Zombienet to download the necessary binaries: ```bash npm run zombie -- setup polkadot polkadot-parachain ``` After that, you need to add those binaries to the PATH: ```bash export PATH=/home/nonroot/zombie-net:$PATH ``` Finally, you can run the Zombienet CLI commands. For example, to spawn a network using a specific configuration file, you can run the following command: ```bash npm run zombie -- -p native spawn host-current-files/minimal.toml ``` The command above spawns the network defined in `host-current-files/minimal.toml`, which is accessible inside the container through the directory mounted earlier. ## Providers Zombienet supports different backend providers for running the nodes. At this moment, [Kubernetes](https://kubernetes.io/){target=\_blank}, [Podman](https://podman.io/){target=\_blank}, and local providers are supported, which can be declared as `kubernetes`, `podman`, or `native`, respectively. To use a particular provider, you can specify it in the network file or use the `--provider` flag in the CLI: ```bash zombienet spawn network.toml --provider INSERT_PROVIDER ``` Alternatively, you can set the provider in the network file: ```toml [settings] provider = "INSERT_PROVIDER" ... ``` It's important to note that each provider has specific requirements and associated features. The following sections cover each provider's requirements and added features. ### Kubernetes Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. Zombienet is designed to be compatible with a variety of Kubernetes clusters, including: - [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine){target=\_blank} - [Docker Desktop](https://docs.docker.com/desktop/features/kubernetes/){target=\_blank} - [kind](https://kind.sigs.k8s.io/){target=\_blank} #### Requirements To effectively interact with your cluster, you'll need to ensure that [`kubectl`](https://kubernetes.io/docs/reference/kubectl/){target=\_blank} is installed on your system. This Kubernetes command-line tool allows you to run commands against Kubernetes clusters. If you don't have `kubectl` installed, you can follow the instructions provided in the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/#kubectl){target=\_blank}.
To create resources such as namespaces, pods, and CronJobs within the target cluster, you must grant your user or service account the appropriate permissions. These permissions are essential for managing and deploying applications effectively within Kubernetes. #### Features If available, Zombienet uses the Prometheus operator to oversee monitoring and visibility. This configuration ensures that only essential networking-related pods are deployed. Using the Prometheus operator, Zombienet improves its ability to monitor and manage network activities within the Kubernetes cluster efficiently. ### Podman Podman is a daemonless container engine for developing, managing, and running Open Container Initiative (OCI) containers and container images on Linux-based systems. Zombienet supports Podman rootless as a provider on Linux machines. Although Podman has support for macOS through an internal virtual machine (VM), the Zombienet provider code requires Podman to run natively on Linux. #### Requirements To use Podman as a provider, you need to have Podman installed on your system. You can install Podman by following the instructions provided on the [Podman website](https://podman.io/getting-started/installation){target=\_blank}. #### Features Using Podman, Zombienet deploys additional pods to enhance the monitoring and visibility of the active network. Specifically, pods for [Prometheus](https://prometheus.io/){target=\_blank}, [Tempo](https://grafana.com/docs/tempo/latest/operations/monitor/){target=\_blank}, and [Grafana](https://grafana.com/){target=\_blank} are included in the deployment. Grafana is configured with Prometheus and Tempo as data sources. Upon launching Zombienet, access to these monitoring services is facilitated through specific URLs provided in the output: - Prometheus - `http://127.0.0.1:34123` - Tempo - `http://127.0.0.1:34125` - Grafana - `http://127.0.0.1:41461` It's important to note that Grafana is deployed with default administrator access. When network operations cease, either from halting a running spawn with the `Ctrl+C` command or test completion, Zombienet automatically removes all associated pods. ### Local Provider The Zombienet local provider, also called native, enables you to run nodes as local processes in your environment. #### Requirements You must have the necessary binaries for your network (such as `polkadot` and `polkadot-parachain`). These binaries should be available in your PATH, allowing Zombienet to spawn the nodes as local processes. To install the necessary binaries, you can use the Zombienet CLI command: ```bash zombienet setup polkadot polkadot-parachain ``` This command will download and prepare the necessary binaries for Zombienet's use. If you need to use a custom binary, ensure the binary is available in your PATH. You can also specify the binary path in the network configuration file. 
The following example uses the custom [OpenZeppelin template](https://github.com/OpenZeppelin/polkadot-runtime-templates){target=\_blank}: First, clone the OpenZeppelin template repository using the following command: ```bash git clone https://github.com/OpenZeppelin/polkadot-runtime-templates \ && cd polkadot-runtime-templates/generic-template ``` Next, run the command to build the custom binary: ```bash cargo build --release ``` Finally, add the custom binary to your PATH as follows: ```bash export PATH=$PATH:INSERT_PATH_TO_RUNTIME_TEMPLATES/parachain-template-node/target/release ``` Alternatively, you can specify the binary path in the network configuration file. The local provider relies exclusively on the command configuration for nodes, which supports both relative and absolute paths. You can use the `default_command` setting to specify the binary for spawning all nodes in the relay chain. ```toml [relaychain] chain = "rococo-local" default_command = "./bin-v1.6.0/polkadot" [parachain] id = 1000 [[parachain.collators]] name = "collator01" command = "./target/release/parachain-template-node" ``` #### Features The local provider does not offer any additional features. ## Configure Zombienet Effective network configuration is crucial for deploying and managing blockchain systems. Zombienet simplifies this process by offering versatile configuration options in both JSON and TOML formats. Whether setting up a simple test network or a complex multi-node system, Zombienet's tools provide the flexibility to customize every aspect of your network's setup. The following sections will explore the structure and usage of Zombienet configuration files, explain key settings for network customization, and walk through CLI commands and flags to optimize your development workflow. ### Configuration Files The network configuration file can be in either JSON or TOML format. The Zombienet repository also provides a collection of [example configuration files](https://github.com/paritytech/zombienet/tree/main/examples){target=\_blank} that can be used as a reference. Each section may include provider-specific keys that aren't recognized by other providers. For example, if you use the local provider, any references to images for nodes will be disregarded. ### CLI Usage Zombienet provides a CLI that allows interaction with the tool. The CLI can receive commands and flags to perform different kinds of operations. These operations use the following syntax: ```bash zombienet <command> [options] ``` The following sections will guide you through the primary usage of the Zombienet CLI and the available commands and flags.
#### CLI Commands - **`spawn <network_config_file>`** - spawn the network defined in the [configuration file](#configuration-files) - **`test <test_file>`** - run tests on the spawned network using the assertions and tests defined in the [test file](/develop/toolkit/parachains/spawn-chains/zombienet/write-tests/#the-test-file){target=\_blank} - **`setup <binaries>`** - set up the Zombienet development environment to download and use the `polkadot` or `polkadot-parachain` executable - **`convert <input_file>`** - transforms a [polkadot-launch](https://github.com/paritytech/polkadot-launch){target=\_blank} configuration file with a `.js` or `.json` extension into a Zombienet configuration file - **`version`** - prints Zombienet version - **`help`** - prints help information #### CLI Flags You can use the following flags to customize the behavior of the CLI: - **`-p`, `--provider`** - override the [provider](#providers) to use - **`-d`, `--dir`** - specify a directory path for placing the network files instead of using the default temporary path - **`-f`, `--force`** - force override all prompt commands - **`-l`, `--logType`** - type of logging on the console. Defaults to `table` - **`-m`, `--monitor`** - start as monitor and don't auto clean up network - **`-c`, `--spawn-concurrency`** - number of concurrent spawning processes to launch. Defaults to `1` - **`-h`, `--help`** - display help for command ### Settings Through the keyword `settings`, it's possible to define the general settings for the network. The available keys are: - **`global_volumes?`** ++"GlobalVolume[]"++ - a list of global volumes to use ??? child "`GlobalVolume` interface definition" ```js export interface GlobalVolume { name: string; fs_type: string; mount_path: string; } ``` - **`bootnode`** ++"boolean"++ - add bootnode to network. Defaults to `true` - **`bootnode_domain?`** ++"string"++ - domain to use for bootnode - **`timeout`** ++"number"++ - global timeout to use for spawning the whole network - **`node_spawn_timeout?`** ++"number"++ - timeout to spawn pod/process - **`grafana?`** ++"boolean"++ - deploy an instance of Grafana - **`prometheus?`** ++"boolean"++ - deploy an instance of Prometheus - **`telemetry?`** ++"boolean"++ - enable telemetry for the network - **`jaeger_agent?`** ++"string"++ - the Jaeger agent endpoint passed to the nodes. Only available on Kubernetes - **`tracing_collator_url?`** ++"string"++ - the URL of the tracing collator used to query by the tracing assertion. Should be tempo query compatible - **`tracing_collator_service_name?`** ++"string"++ - service name for tempo query frontend. Only available on Kubernetes. Defaults to `tempo-tempo-distributed-query-frontend` - **`tracing_collator_service_namespace?`** ++"string"++ - namespace where tempo is running. Only available on Kubernetes. Defaults to `tempo` - **`tracing_collator_service_port?`** ++"number"++ - port of the query instance of tempo. Only available on Kubernetes. Defaults to `3100` - **`enable_tracing?`** ++"boolean"++ - enable the tracing system. Only available on Kubernetes. Defaults to `true` - **`provider`** ++"string"++ - provider to use. Defaults to `kubernetes` - **`polkadot_introspector?`** ++"boolean"++ - deploy an instance of polkadot-introspector. Only available on Podman and Kubernetes. Defaults to `false` - **`backchannel?`** ++"boolean"++ - deploy an instance of backchannel server. Only available on Kubernetes. Defaults to `false` - **`image_pull_policy?`** ++"string"++ - image pull policy to use in the network.
Possible values are `Always`, `IfNotPresent`, and `Never` - **`local_ip?`** ++"string"++ - IP used for exposing local services (rpc/metrics/monitors). Defaults to `"127.0.0.1"` - **`global_delay_network_global_settings?`** ++"number"++ - delay in seconds to apply to the network - **`node_verifier?`** ++"string"++ - specify how to verify node readiness or deactivate by using `None`. Possible values are `None` and `Metric`. Defaults to `Metric` For example, the following configuration file defines a minimal example for the settings: === "TOML" ```toml title="base-example.toml" [settings] timeout = 1000 bootnode = false provider = "kubernetes" backchannel = false # ... ``` === "JSON" ```json title="base-example.json" { "settings": { "timeout": 1000, "bootnode": false, "provider": "kubernetes", "backchannel": false, "...": {} }, "...": {} } ``` ### Relay Chain Configuration You can use the `relaychain` keyword to define further parameters for the relay chain at start-up. The available keys are: - **`default_command?`** ++"string"++ - the default command to run. Defaults to `polkadot` - **`default_image?`** ++"string"++ - the default Docker image to use - **`default_resources?`** ++"Resources"++ - represents the resource limits/reservations the nodes need by default. Only available on Kubernetes ??? child "`Resources` interface definition" ```js export interface Resources { resources: { requests?: { memory?: string; cpu?: string; }; limits?: { memory?: string; cpu?: string; }; }; } ``` - **`default_db_snapshot?`** ++"string"++ - the default database snapshot to use - **`default_prometheus_prefix`** ++"string"++ - a parameter for customizing the metric's prefix. Defaults to `substrate` - **`default_substrate_cli_args_version?`** ++"SubstrateCliArgsVersion"++ - set the Substrate CLI arguments version ??? child "`SubstrateCliArgsVersion` enum definition" ```js export enum SubstrateCliArgsVersion { V0 = 0, V1 = 1, V2 = 2, V3 = 3, } ``` - **`default_keystore_key_types?`** ++"string[]"++ - defines which keystore keys should be created - **`chain`** ++"string"++ - the chain name - **`chain_spec_path?`** ++"string"++ - path to the chain spec file. Should be the plain version to allow customizations - **`chain_spec_command?`** ++"string"++ - command to generate the chain spec. It can't be used in combination with `chain_spec_path` - **`default_args?`** ++"string[]"++ - an array of arguments to use as default to pass to the command - **`default_overrides?`** ++"Override[]"++ - an array of overrides to upload to the node ??? child "`Override` interface definition" ```js export interface Override { local_path: string; remote_name: string; } ``` - **`random_nominators_count?`** ++"number"++ - if set and the staking pallet is enabled, Zombienet will generate the input quantity of nominators and inject them into the genesis - **`max_nominations`** ++"number"++ - the max number of nominations allowed by a nominator. Should match the value set in the runtime. Defaults to `24` - **`nodes?`** ++"Node[]"++ - an array of nodes to spawn. It is further defined in the [Node Configuration](#node-configuration) section - **`node_groups?`** ++"NodeGroup[]"++ - an array of node groups to spawn. It is further defined in the [Node Group Configuration](#node-group-configuration) section - **`total_node_in_group?`** ++"number"++ - the total number of nodes in the group.
Defaults to `1` - **`genesis`** ++"JSON"++ - the genesis configuration - **`default_delay_network_settings?`** ++"DelayNetworkSettings"++ - sets the expected configuration to delay the network ??? child "`DelayNetworkSettings` interface definition" ```js export interface DelayNetworkSettings { latency: string; correlation?: string; // should be parsable as float by k8s jitter?: string; } ``` #### Node Configuration One specific key capable of receiving more subkeys is the `nodes` key. This key is used to define further parameters for the nodes. The available keys: - **`name`** ++"string"++ - name of the node. Any whitespace will be replaced with a dash (for example, `new alice` will be converted to `new-alice`) - **`image?`** ++"string"++ - override default Docker image to use for this node - **`command?`** ++"string"++ - override default command to run - **`command_with_args?`** ++"string"++ - override default command and arguments - **`args?`** ++"string[]"++ - arguments to be passed to the command - **`env?`** ++"envVars[]"++ - environment variables to set in the container ??? child "`envVars` interface definition" ```js export interface EnvVars { name: string; value: string; } ``` - **`prometheus_prefix?`** ++"string"++ - customizes the metric's prefix for the specific node. Defaults to `substrate` - **`db_snapshot?`** ++"string"++ - database snapshot to use - **`substrate_cli_args_version?`** ++"SubstrateCliArgsVersion"++ - set the Substrate CLI arguments version directly to skip binary evaluation overhead ??? child "`SubstrateCliArgsVersion` enum definition" ```js export enum SubstrateCliArgsVersion { V0 = 0, V1 = 1, V2 = 2, V3 = 3, } ``` - **`resources?`** ++"Resources"++ - represent the resources limits/reservations needed by the node ??? child "`Resources` interface definition" ```js export interface Resources { resources: { requests?: { memory?: string; cpu?: string; }; limits?: { memory?: string; cpu?: string; }; }; } ``` - **`keystore_key_types?`** ++"string[]"++ - defines which keystore keys should be created - **`validator`** ++"boolean"++ - pass the `--validator` flag to the command. Defaults to `true` - **`invulnerable`** ++"boolean"++ - if true, add the node to invulnerables in the chain spec. Defaults to `false` - **`balance`** ++"number"++ - balance to set in balances for node's account. Defaults to `2000000000000` - **`bootnodes?`** ++"string[]"++ - array of bootnodes to use - **`add_to_bootnodes?`** ++"boolean"++ - add this node to the bootnode list. Defaults to `false` - **`ws_port?`** ++"number"++ - WS port to use - **`rpc_port?`** ++"number"++ - RPC port to use - **`prometheus_port?`** ++"number"++ - Prometheus port to use - **`p2p_cert_hash?`** ++"string"++ - libp2p certhash to use with webRTC transport - **`delay_network_settings?`** ++"DelayNetworkSettings"++ - sets the expected configuration to delay the network ??? 
child "`DelayNetworkSettings` interface definition" ```js export interface DelayNetworkSettings { latency: string; correlation?: string; // should be parsable as float by k8s jitter?: string; } ``` The following configuration file defines a minimal example for the relay chain, including the `nodes` key: === "TOML" ```toml title="relaychain-example-nodes.toml" [relaychain] default_command = "polkadot" default_image = "polkadot-debug:master" chain = "rococo-local" chain_spec_path = "INSERT_PATH_TO_CHAIN_SPEC" default_args = ["--chain", "rococo-local"] [[relaychain.nodes]] name = "alice" validator = true balance = 1000000000000 [[relaychain.nodes]] name = "bob" validator = true balance = 1000000000000 # ... ``` === "JSON" ```json title="relaychain-example-nodes.json" { "relaychain": { "default_command": "polkadot", "default_image": "polkadot-debug:master", "chain": "rococo-local", "chain_spec_path": "INSERT_PATH_TO_CHAIN-SPEC.JSON", "default_args": ["--chain", "rococo-local"], "nodes": [ { "name": "alice", "validator": true, "balance": 1000000000000 }, { "name": "bob", "validator": true, "balance": 1000000000000 } ] } } ``` #### Node Group Configuration The `node_groups` key defines further parameters for the node groups. The available keys are: - **`name`** ++"string"++ - name of the node. Any whitespace will be replaced with a dash (for example, `new alice` will be converted to `new-alice`) - **`image?`** ++"string"++ - override default Docker image to use for this node - **`command?`** ++"string"++ - override default command to run - **`args?`** ++"string[]"++ - arguments to be passed to the command - **`env?`** ++"envVars[]"++ - environment variables to set in the container ??? child "`envVars` interface definition" ```js export interface EnvVars { name: string; value: string; } ``` - **`overrides?`** ++"Override[]"++ - array of overrides definitions ??? child "`Override` interface definition" ```js export interface Override { local_path: string; remote_name: string; } ``` - **`prometheus_prefix?`** ++"string"++ - customizes the metric's prefix for the specific node. Defaults to `substrate` - **`db_snapshot?`** ++"string"++ - database snapshot to use - **`substrate_cli_args_version?`** ++"SubstrateCliArgsVersion"++ - set the Substrate CLI arguments version directly to skip binary evaluation overhead ??? child "`SubstrateCliArgsVersion` enum definition" ```js export enum SubstrateCliArgsVersion { V0 = 0, V1 = 1, V2 = 2, V3 = 3, } ``` - **`resources?`** ++"Resources"++ - represent the resources limits/reservations needed by the node ??? child "`Resources` interface definition" ```js export interface Resources { resources: { requests?: { memory?: string; cpu?: string; }; limits?: { memory?: string; cpu?: string; }; }; } ``` - **`keystore_key_types?`** ++"string[]"++ - defines which keystore keys should be created - **`count`** ++"number | string"++ - number of nodes to launch for this group - **`delay_network_settings?`** ++"DelayNetworkSettings"++ - sets the expected configuration to delay the network ??? 
child "`DelayNetworkSettings` interface definition" ```js export interface DelayNetworkSettings { latency: string; correlation?: string; // should be parsable as float by k8s jitter?: string; } ``` The following configuration file defines a minimal example for the relay chain, including the `node_groups` key: === "TOML" ```toml title="relaychain-example-node-groups.toml" [relaychain] default_command = "polkadot" default_image = "polkadot-debug:master" chain = "rococo-local" chain_spec_path = "INSERT_PATH_TO_CHAIN_SPEC" default_args = ["--chain", "rococo-local"] [[relaychain.node_groups]] name = "group-1" count = 2 image = "polkadot-debug:master" command = "polkadot" args = ["--chain", "rococo-local"] # ... ``` === "JSON" ```json title="relaychain-example-node-groups.json" { "relaychain": { "default_command": "polkadot", "default_image": "polkadot-debug:master", "chain": "rococo-local", "chain_spec_path": "INSERT_PATH_TO_CHAIN-SPEC.JSON", "default_args": ["--chain", "rococo-local"], "node_groups": [ { "name": "group-1", "count": 2, "image": "polkadot-debug:master", "command": "polkadot", "args": ["--chain", "rococo-local"] } ], "...": {} }, "...": {} } ``` ### Parachain Configuration The `parachain` keyword defines further parameters for the parachain. The available keys are: - **`id`** ++"number"++ - the id to assign to this parachain. Must be unique - **`chain?`** ++"string"++ - the chain name - **`force_decorator?`** ++"string"++ - force the use of a specific decorator - **`genesis?`** ++"JSON"++ - the genesis configuration - **`balance?`** ++"number"++ - balance to set in balances for parachain's account - **`delay_network_settings?`** ++"DelayNetworkSettings"++ - sets the expected configuration to delay the network ??? child "`DelayNetworkSettings` interface definition" ```js export interface DelayNetworkSettings { latency: string; correlation?: string; // should be parsable as float by k8s jitter?: string; } ``` - **`add_to_genesis?`** ++"boolean"++ - flag to add parachain to genesis or register in runtime. Defaults to `true` - **`register_para?`** ++"boolean"++ - flag to specify whether the para should be registered. The `add_to_genesis` flag must be set to false for this flag to have any effect. Defaults to `true` - **`onboard_as_parachain?`** ++"boolean"++ - flag to specify whether the para should be onboarded as a parachain, rather than remaining a parathread. Defaults to `true` - **`genesis_wasm_path?`** ++"string"++ - path to the Wasm file to use - **`genesis_wasm_generator?`** ++"string"++ - command to generate the Wasm file - **`genesis_state_path?`** ++"string"++ - path to the state file to use - **`genesis_state_generator?`** ++"string"++ - command to generate the state file - **`chain_spec_path?`** ++"string"++ - path to the chain spec file - **`chain_spec_command?`** ++"string"++ - command to generate the chain spec - **`cumulus_based?`** ++"boolean"++ - flag to use cumulus command generation. Defaults to `true` - **`bootnodes?`** ++"string[]"++ - array of bootnodes to use - **`prometheus_prefix?`** ++"string"++ - parameter for customizing the metric's prefix for all parachain nodes/collators. Defaults to `substrate` - **`collator?`** ++"Collator"++ - further defined in the [Collator Configuration](#collator-configuration) section - **`collator_groups?`** ++"CollatorGroup[]"++ - an array of collator groups to spawn. 
It is further defined in the [Collator Groups Configuration](#collator-groups-configuration) section For example, the following configuration file defines a minimal example for the parachain: === "TOML" ```toml title="parachain-example.toml" [parachain] id = 100 add_to_genesis = true cumulus_based = true genesis_wasm_path = "INSERT_PATH_TO_WASM" genesis_state_path = "INSERT_PATH_TO_STATE" # ... ``` === "JSON" ```json title="parachain-example.json" { "parachain": { "id": 100, "add_to_genesis": true, "cumulus_based": true, "genesis_wasm_path": "INSERT_PATH_TO_WASM", "genesis_state_path": "INSERT_PATH_TO_STATE", "...": {} }, "...": {} } ``` #### Collator Configuration One specific key capable of receiving more subkeys is the `collator` key. This key defines further parameters for the nodes. The available keys are: - **`name`** ++"string"++ - name of the collator. Any whitespace will be replaced with a dash (for example, `new alice` will be converted to `new-alice`) - **`image?`** ++"string"++ - image to use for the collator - **`command_with_args?`** ++"string"++ - overrides both command and arguments for the collator - **`validator`** ++"boolean"++ - pass the `--validator` flag to the command. Defaults to `true` - **`invulnerable`** ++"boolean"++ - if true, add the collator to invulnerables in the chain spec. Defaults to `false` - **`balance`** ++"number"++ - balance to set in balances for collator's account. Defaults to `2000000000000` - **`bootnodes?`** ++"string[]"++ - array of bootnodes to use - **`add_to_bootnodes?`** ++"boolean"++ - add this collator to the bootnode list. Defaults to `false` - **`ws_port?`** ++"number"++ - WS port to use - **`rpc_port?`** ++"number"++ - RPC port to use - **`prometheus_port?`** ++"number"++ - Prometheus port to use - **`p2p_port?`** ++"number"++ - P2P port to use - **`p2p_cert_hash?`** ++"string"++ - libp2p certhash to use with webRTC transport - **`delay_network_settings?`** ++"DelayNetworkSettings"++ - sets the expected configuration to delay the network ??? child "`DelayNetworkSettings` interface definition" ```js export interface DelayNetworkSettings { latency: string; correlation?: string; // should be parsable as float by k8s jitter?: string; } ``` - **`command?`** ++"string"++ - override default command to run - **`args?`** ++"string[]"++ - arguments to be passed to the command - **`env?`** ++"envVars[]"++ - environment variables to set in the container ??? child "`envVars` interface definition" ```js export interface EnvVars { name: string; value: string; } ``` - **`overrides?`** ++"Override[]"++ - array of overrides definitions ??? child "`Override` interface definition" ```js export interface Override { local_path: string; remote_name: string; } ``` - **`prometheus_prefix?`** ++"string"++ - customizes the metric's prefix for the specific node. Defaults to `substrate` - **`db_snapshot?`** ++"string"++ - database snapshot to use - **`substrate_cli_args_version?`** ++"SubstrateCliArgsVersion"++ - set the Substrate CLI arguments version directly to skip binary evaluation overhead ??? child "`SubstrateCliArgsVersion` enum definition" ```js export enum SubstrateCliArgsVersion { V0 = 0, V1 = 1, V2 = 2, V3 = 3, } ``` - **`resources?`** ++"Resources"++ - represent the resources limits/reservations needed by the node ??? 
child "`Resources` interface definition" ```js export interface Resources { resources: { requests?: { memory?: string; cpu?: string; }; limits?: { memory?: string; cpu?: string; }; }; } ``` - **`keystore_key_types?`** ++"string[]"++ - defines which keystore keys should be created The configuration file below defines a minimal example for the collator: === "TOML" ```toml title="collator-example.toml" [parachain] id = 100 add_to_genesis = true cumulus_based = true genesis_wasm_path = "INSERT_PATH_TO_WASM" genesis_state_path = "INSERT_PATH_TO_STATE" [[parachain.collators]] name = "alice" image = "polkadot-parachain" command = "polkadot-parachain" # ... ``` === "JSON" ```json title="collator-example.json" { "parachain": { "id": 100, "add_to_genesis": true, "cumulus_based": true, "genesis_wasm_path": "INSERT_PATH_TO_WASM", "genesis_state_path": "INSERT_PATH_TO_STATE", "collators": [ { "name": "alice", "image": "polkadot-parachain", "command": "polkadot-parachain", "...": {} } ] }, "...": {} } ``` #### Collator Groups Configuration The `collator_groups` key defines further parameters for the collator groups. The available keys are: - **`name`** ++"string"++ - name of the node. Any whitespace will be replaced with a dash (for example, `new alice` will be converted to `new-alice`) - **`image?`** ++"string"++ - override default Docker image to use for this node - **`command?`** ++"string"++ - override default command to run - **`args?`** ++"string[]"++ - arguments to be passed to the command - **`env?`** ++"envVars[]"++ - environment variables to set in the container ??? child "`envVars` interface definition" ```js export interface EnvVars { name: string; value: string; } ``` - **`overrides?`** ++"Override[]"++ - array of overrides definitions ??? child "`Override` interface definition" ```js export interface Override { local_path: string; remote_name: string; } ``` - **`prometheus_prefix?`** ++"string"++ - customizes the metric's prefix for the specific node. Defaults to `substrate` - **`db_snapshot?`** ++"string"++ - database snapshot to use - **`substrate_cli_args_version?`** ++"SubstrateCliArgsVersion"++ - set the Substrate CLI arguments version directly to skip binary evaluation overhead ??? child "`SubstrateCliArgsVersion` enum definition" ```js export enum SubstrateCliArgsVersion { V0 = 0, V1 = 1, V2 = 2, V3 = 3, } ``` - **`resources?`** ++"Resources"++ - represent the resources limits/reservations needed by the node ??? child "`Resources` interface definition" ```js export interface Resources { resources: { requests?: { memory?: string; cpu?: string; }; limits?: { memory?: string; cpu?: string; }; }; } ``` - **`keystore_key_types?`** ++"string[]"++ - defines which keystore keys should be created - **`count`** ++"number | string"++ - number of nodes to launch for this group - **`delay_network_settings?`** ++"DelayNetworkSettings"++ - sets the expected configuration to delay the network ??? 
child "`DelayNetworkSettings` interface definition" ```js export interface DelayNetworkSettings { latency: string; correlation?: string; // should be parsable as float by k8s jitter?: string; } ``` For instance, the configuration file below defines a minimal example for the collator groups: === "TOML" ```toml title="collator-groups-example.toml" [parachain] id = 100 add_to_genesis = true cumulus_based = true genesis_wasm_path = "INSERT_PATH_TO_WASM" genesis_state_path = "INSERT_PATH_TO_STATE" [[parachain.collator_groups]] name = "group-1" count = 2 image = "polkadot-parachain" command = "polkadot-parachain" # ... ``` === "JSON" ```json title="collator-groups-example.json" { "parachain": { "id": 100, "add_to_genesis": true, "cumulus_based": true, "genesis_wasm_path": "INSERT_PATH_TO_WASM", "genesis_state_path": "INSERT_PATH_TO_STATE", "collator_groups": [ { "name": "group-1", "count": 2, "image": "polkadot-parachain", "command": "polkadot-parachain", "...": {} } ] }, "...": {} } ``` ### XCM Configuration You can use the `hrmp_channels` keyword to define further parameters for the XCM channels at start-up. The available keys are: - **`hrmp_channels`** ++"HrmpChannelsConfig[]"++ - array of Horizontal Relay-routed Message Passing (HRMP) channel configurations ??? child "`HrmpChannelsConfig` interface definition" ```js export interface HrmpChannelsConfig { sender: number; recipient: number; max_capacity: number; max_message_size: number; } ``` Each of the `HrmpChannelsConfig` keys is defined as follows: - `sender` ++"number"++ - parachain ID of the sender - `recipient` ++"number"++ - parachain ID of the recipient - `max_capacity` ++"number"++ - maximum capacity of the HRMP channel - `max_message_size` ++"number"++ - maximum message size allowed in the HRMP channel
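As a short sketch of how this might look in practice, the following TOML snippet uses illustrative parachain IDs and limits (not values from an official example) to open a channel in each direction between parachains 100 and 101. Each HRMP channel is unidirectional, so bidirectional messaging requires two:

```toml
# Channel from parachain 100 to parachain 101
[[hrmp_channels]]
sender = 100
recipient = 101
max_capacity = 8
max_message_size = 512

# Channel in the opposite direction
[[hrmp_channels]]
sender = 101
recipient = 100
max_capacity = 8
max_message_size = 512
```

## Where to Go Next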
- External __Zombienet Support__ --- [Parity Technologies](https://www.parity.io/){target=\_blank} has designed and developed this framework, now maintained by the Zombienet team. For further support and information, refer to the following contact points: [:octicons-arrow-right-24: Zombienet repository](https://github.com/paritytech/zombienet){target=\_blank} [:octicons-arrow-right-24: Element public channel](https://matrix.to/#/!FWyuEyNvIFygLnWNMh:parity.io?via=parity.io&via=matrix.org&via=web3.foundation){target=\_blank} - Tutorial __Spawn a Basic Chain with Zombienet__ --- Learn to spawn, connect to and monitor a basic blockchain network with Zombienet, using customizable configurations for streamlined development and debugging. [:octicons-arrow-right-24: Reference](/tutorials/polkadot-sdk/testing/spawn-basic-chain/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/spawn-chains/zombienet/ --- BEGIN CONTENT --- --- title: Zombienet description: Learn how to install, configure, and use Zombienet for testing and simulating Polkadot SDK-based networks in a local development environment. template: index-page.html --- # Test Networks with Zombienet Zombienet is a testing framework that lets you quickly spin up ephemeral blockchain networks for development and testing. With support for multiple deployment targets, such as Kubernetes, Podman, and native environments, Zombienet makes it easy to validate your blockchain implementation in a controlled environment. ## What Can I Do with Zombienet? - Deploy test networks with multiple nodes - Validate network behavior and performance - Monitor metrics and system events - Execute custom test scenarios Whether you're building a new parachain or testing runtime upgrades, Zombienet provides the tools needed to ensure your blockchain functions correctly before deployment to production. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/develop/toolkit/parachains/spawn-chains/zombienet/write-tests/ --- BEGIN CONTENT --- --- title: Write Tests description: Write and execute tests for blockchain networks with Zombienet's DSL. Learn to evaluate metrics, logs, events, and more for robust validation. categories: Parachains, Tooling --- # Write Tests ## Introduction Testing is a critical step in blockchain development, ensuring reliability, performance, and security. Zombienet simplifies this process with its intuitive Domain Specific Language (DSL), enabling developers to write natural-language test scripts tailored to their network needs. This guide provides an in-depth look at how to create and execute test scenarios using Zombienet's flexible testing framework. You’ll learn how to define tests for metrics, logs, events, and more, allowing for comprehensive evaluation of your blockchain network’s behavior and performance. ## Testing DSL Zombienet provides a Domain Specific Language (DSL) for writing tests. The DSL is designed to be human-readable and allows you to write tests using natural language expressions. You can define assertions and tests against the spawned network using this DSL. This way, users can evaluate different metrics, such as: - **On-chain storage** - the storage of each of the chains running via Zombienet - **Metrics** - the metrics provided by the nodes - **Histograms** - visual representations of metrics data - **Logs** - detailed records of system activities and events - **System events** - notifications of significant occurrences within the network - **Tracing** - detailed analysis of execution paths and operations - **Custom API calls (through Polkadot.js)** - personalized interfaces for interacting with the network - **Commands** - instructions or directives executed by the network These abstractions are expressed by sentences defined in a natural language style. Therefore, each test line will be mapped to a test to run. Also, the test file (`*.zndsl`) includes pre-defined header fields used to define information about the suite, such as network configuration and credentials location. For more details about the Zombienet DSL, see the [Testing DSL](https://paritytech.github.io/zombienet/cli/test-dsl-definition-spec.html){target=\_blank} specification. ## The Test File The test file is a text file with the extension `.zndsl`. It is divided into two parts: the header and the body. The header contains the network configuration and the credentials to use, while the body contains the tests to run. The header is defined by the following fields: - **`description`** ++"string"++ - long description of the test suite (optional) - **`network`** ++"string"++ - path to the network definition file, supported in both `.json` and `.toml` formats - **`creds`** ++"string"++ - credentials filename or path to use (available only with Kubernetes provider). Looks in the current directory or `$HOME/.kube/` if a filename is passed The body contains the tests to run. Each test is defined by a sentence in the DSL, which is mapped to a test to run. Each test line defines an assertion or a command to be executed against the spawned network. ### Name The test name in Zombienet is derived from the filename by removing any leading numeric characters before the first hyphen. For example, a file named `0001-zombienet-test.zndsl` will result in a test name of `zombienet-test`, which will be displayed in the test report output of the runner. 
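For instance, a hypothetical file named `0001-smoke-test.zndsl` would produce the test name `smoke-test`. A minimal header for such a file, assuming a network definition saved alongside it as `network.toml`, could look like the following:

``` title="0001-smoke-test.zndsl"
Description: Minimal smoke test (illustrative example)
Network: ./network.toml
Creds: config
```

The body would then follow, with one assertion or command per line, as described in the next section.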
### Assertions Assertions are defined by sentences in the DSL that evaluate different metrics, such as on-chain storage, metrics, histograms, logs, system events, tracing, and custom API calls. Each assertion is defined by a sentence in the DSL, which is mapped to a test to run. - **`Well known functions`** - an already-mapped test function === "Syntax" `node-name well-known_defined_test [within x seconds]` === "Examples" ```bash alice: is up alice: parachain 100 is registered within 225 seconds alice: parachain 100 block height is at least 10 within 250 seconds ``` - **`Histogram`** - get metrics from Prometheus, calculate the histogram, and assert on the target value === "Syntax" `node-name reports histogram metric_name has comparator target_value samples in buckets ["bucket","bucket",...] [within x seconds]` === "Example" ```bash alice: reports histogram polkadot_pvf_execution_time has at least 2 samples in buckets ["0.1", "0.25", "0.5", "+Inf"] within 100 seconds ``` - **`Metric`** - get metric from Prometheus and assert on the target value === "Syntax" `node-name reports metric_name comparator target_value (e.g., "is at least x", "is greater than x") [within x seconds]` === "Examples" ```bash alice: reports node_roles is 4 alice: reports sub_libp2p_is_major_syncing is 0 ``` - **`Log line`** - get logs from nodes and assert on the matching pattern === "Syntax" `node-name log line (contains|matches) (regex|glob) "pattern" [within x seconds]` === "Example" ```bash alice: log line matches glob "rted #1" within 10 seconds ``` - **`Count of log lines`** - get logs from nodes and assert on the number of lines matching pattern === "Syntax" `node-name count of log lines (containing|matching) (regex|glob) "pattern" [within x seconds]` === "Example" ```bash alice: count of log lines matching glob "rted #1" within 10 seconds ``` - **`System events`** - find a system event from subscription by matching a pattern === "Syntax" `node-name system event (contains|matches) (regex|glob) "pattern" [within x seconds]` === "Example" ```bash alice: system event matches ""paraId":[0-9]+" within 10 seconds ``` - **`Tracing`** - match an array of span names from the supplied `traceID` === "Syntax" `node-name trace with traceID contains ["name", "name2",...]` === "Example" ```bash alice: trace with traceID 94c1501a78a0d83c498cc92deec264d9 contains ["answer-chunk-request", "answer-chunk-request"] ``` - **`Custom JS scripts`** - run a custom JavaScript script and assert on the return value === "Syntax" `node-name js-script script_relative_path [return is comparator target_value] [within x seconds]` === "Example" ```bash alice: js-script ./0008-custom.js return is greater than 1 within 200 seconds ``` - **`Custom TS scripts`** - run a custom TypeScript script and assert on the return value === "Syntax" `node-name ts-script script_relative_path [return is comparator target_value] [within x seconds]` === "Example" ```bash alice: ts-script ./0008-custom-ts.ts return is greater than 1 within 200 seconds ``` - **`Backchannel`** - wait for a value and register it for later use === "Syntax" `node-name wait for var name and use as X [within x seconds]` === "Example" ```bash alice: wait for name and use as X within 30 seconds ``` ### Commands Commands allow interaction with the nodes and can run pre-defined commands or an arbitrary command in the node.
Commonly used commands are as follows: - **`restart`** - stop the process and start it again after `X` seconds, or immediately - **`pause`** - pause (SIGSTOP) the process - **`resume`** - resume (SIGCONT) the process - **`sleep`** - sleep the test-runner for `x` amount of seconds ## Running a Test To run a test against the spawned network, you can use the [Zombienet DSL](#testing-dsl) to define the test scenario. Follow these steps to create an example test: 1. Create a file named `spawn-a-basic-network-test.zndsl` ```bash touch spawn-a-basic-network-test.zndsl ``` 2. Add the following test definition to the file you just created. Note that `.zndsl` files use the natural-language DSL described above, with header fields followed by one assertion per line:
``` title="spawn-a-basic-network-test.zndsl"
Description: Test the basic functionality of the network (minimal example)
Network: ./spawn-a-basic-network.toml
Creds: config

# Alice's tasks
alice: is up
alice: parachain 100 is registered within 225 seconds
alice: parachain 100 block height is at least 10 within 250 seconds

# Bob's tasks
bob: is up
bob: parachain 100 is registered within 225 seconds
bob: parachain 100 block height is at least 10 within 250 seconds

# Metrics
alice: reports node_roles is 4
alice: reports sub_libp2p_is_major_syncing is 0
bob: reports node_roles is 4
collator01: reports node_roles is 4
```
This test scenario verifies the following: - Nodes are running - The parachain with ID 100 is registered within a certain timeframe (225 seconds in this example) - Parachain block height is at least a certain number within a timeframe (in this case, 10 within 250 seconds) - Nodes are reporting metrics You can define any test scenario you need following the Zombienet DSL syntax. To run the test, execute the following command: ```bash zombienet -p native test spawn-a-basic-network-test.zndsl ``` This command will execute the test scenario defined in the `spawn-a-basic-network-test.zndsl` file on the network. If successful, the terminal will display the test output, indicating whether the test passed or failed. ## Example Test Files The following example test files define two tests, a small network test and a big network test. Each test defines a network configuration file and credentials to use. The tests define assertions to evaluate the network's metrics and logs. The assertions are defined by sentences in the DSL, which are mapped to tests to run.
``` title="small-network-test.zndsl"
Description: Small Network test
Network: ./0000-test-config-small-network.toml
Creds: config

# Metrics
alice: reports node_roles is 4
alice: reports sub_libp2p_is_major_syncing is 0

# Logs
bob: log line matches glob "*rted #1*" within 10 seconds
bob: log line matches regex "Imported #[0-9]+" within 10 seconds
```
And the second test file:
``` title="big-network-test.zndsl"
Description: Big Network test
Network: ./0001-test-config-big-network.toml
Creds: config

# Metrics
alice: reports node_roles is 4
alice: reports sub_libp2p_is_major_syncing is 0

# Logs
bob: log line matches glob "*rted #1*" within 10 seconds
bob: log line matches regex "Imported #[0-9]+" within 10 seconds

# Custom JS script
alice: js-script ./0008-custom.js return is greater than 1 within 200 seconds

# Custom TS script
alice: ts-script ./0008-custom-ts.ts return is greater than 1 within 200 seconds

# Backchannel
alice: wait for name and use as X within 30 seconds

# Well-known functions
alice: is up
alice: parachain 100 is registered within 225 seconds
alice: parachain 100 block height is at least 10 within 250 seconds

# Histogram
alice: reports histogram polkadot_pvf_execution_time has at least 2 samples in buckets ["0.1", "0.25", "0.5", "+Inf"] within 100 seconds

# System events
alice: system event matches ""paraId":[0-9]+" within 10 seconds

# Tracing
alice: trace with traceID 94c1501a78a0d83c498cc92deec264d9 contains ["answer-chunk-request", "answer-chunk-request"]
```
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/get-support/ai-ready-docs/ --- BEGIN CONTENT --- --- title: AI Ready Docs description: Download LLM-optimized files of the Polkadot documentation, including full content and category-specific resources for AI agents. --- # AI Ready Docs Polkadot provides `.txt` files containing the documentation content and navigation structure, optimized for use with large language models (LLMs) and AI tools. These resources help build AI assistants, power code search, or enable custom tooling trained on Polkadot's documentation. Each category file includes foundational content from the **Basics** and **Reference** categories to ensure LLMs have the necessary context.
## Download LLM Files

| Category | Description | File | Actions |
|----------|-------------|------|---------|
| Index | Navigation index of all Polkadot documentation pages | `llms.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms.txt" } [:octicons-download-16:](/llms.txt){ download="llms.txt" } |
| Full Documentation | Full content of all documentation pages | `llms-full.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-full.txt" } [:octicons-download-16:](/llms-full.txt){ download="llms-full.txt" } |
| Basics | Polkadot general knowledge base to provide context around overview and beginner level content | `llms-basics.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-basics.txt" } [:octicons-download-16:](/llms-files/llms-basics.txt){ download="llms-basics.txt" } |
| Reference | Reference material including key functions and glossary | `llms-reference.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-reference.txt" } [:octicons-download-16:](/llms-files/llms-reference.txt){ download="llms-reference.txt" } |
| Smart Contracts | How to develop and deploy Solidity smart contracts on the Polkadot Hub | `llms-smart-contracts.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-smart-contracts.txt" } [:octicons-download-16:](/llms-files/llms-smart-contracts.txt){ download="llms-smart-contracts.txt" } |
| Parachains | How to guides related to building, customizing, deploying, and maintaining a parachain | `llms-parachains.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-parachains.txt" } [:octicons-download-16:](/llms-files/llms-parachains.txt){ download="llms-parachains.txt" } |
| DApps | Information and tutorials for application developers | `llms-dapps.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-dapps.txt" } [:octicons-download-16:](/llms-files/llms-dapps.txt){ download="llms-dapps.txt" } |
| Networks | Information about the various Polkadot networks (Polkadot, Kusama, Westend, Paseo), their purposes, and how they fit into the development workflow | `llms-networks.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-networks.txt" } [:octicons-download-16:](/llms-files/llms-networks.txt){ download="llms-networks.txt" } |
| Polkadot Protocol | Polkadot's core architecture, including the Relay Chain, Parachains, System Chains, Interoperability, and Main Actors | `llms-polkadot-protocol.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-polkadot-protocol.txt" } [:octicons-download-16:](/llms-files/llms-polkadot-protocol.txt){ download="llms-polkadot-protocol.txt" } |
| Infrastructure | Operational aspects of supporting the Polkadot network including how to run a node or validator and staking mechanics | `llms-infrastructure.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-infrastructure.txt" } [:octicons-download-16:](/llms-files/llms-infrastructure.txt){ download="llms-infrastructure.txt" } |
| Tooling | An overview of various development tools available for Polkadot development | `llms-tooling.txt` | [:octicons-copy-16:](){ .llms data-action="copy" data-value="llms-tooling.txt" } [:octicons-download-16:](/llms-files/llms-tooling.txt){ download="llms-tooling.txt" } |

!!! note
    The `llms-full.txt` file may exceed the input limits of some language models due to its size. If you encounter limitations, consider using the files by category.

--- END CONTENT --- Doc-Content: https://docs.polkadot.com/get-support/explore-resources/ --- BEGIN CONTENT --- --- title: Subscribe to Updates description: Find Polkadot developer resources, tutorials, forums, governance proposals, and community platforms like StackExchange, Reddit, and YouTube. hide: - footer --- # Ask the Community and Explore Resources Looking for answers beyond the documentation? These platforms are full of useful content and experienced developers sharing insights. ## 🧠 Stack Exchange - Browse commonly asked technical questions. - Ask your own and get detailed responses from experienced devs. 👉 **[Visit Polkadot Stack Exchange](https://substrate.stackexchange.com/){target=\_blank}** ## 🧵 Reddit: r/Polkadot - General Polkadot discussions and community perspectives. - Developer questions are welcome — just tag them appropriately. 👉 **[Visit r/Polkadot](https://www.reddit.com/r/Polkadot/){target=\_blank}** ## 💬 Discord (Community Threads Only) - Beyond the official support threads, most channels are community-driven. - Great place to connect with fellow builders and share insights. 👉 **[Join the Polkadot Discord](https://polkadot-discord.w3f.tools/){target=\_blank}** ## 🎥 YouTube: @PolkadotNetwork - Developer tutorials - Ecosystem interviews - Event recordings and walkthroughs 👉 **[Watch on YouTube](https://www.youtube.com/@PolkadotNetwork){target=\_blank}** ## Community-Led Platforms and Ecosystem Updates Stay in sync with what's happening across the Polkadot ecosystem — from official announcements to community-driven insights and governance activity. ### 🔷 X (Twitter): Official Accounts - [@PolkadotDevs](https://twitter.com/PolkadotDevs){target=\_blank}: Updates for developers - [@Polkadot](https://twitter.com/Polkadot){target=\_blank}: Network-wide news - [@Kusamanetwork](https://twitter.com/kusamanetwork){target=\_blank}: Kusama-specific updates - [@Web3Foundation](https://twitter.com/web3foundation){target=\_blank}: Grants, research, and ecosystem programs ### 🔁 X (Twitter): Community Accounts - [@PolkadotDeploy](https://twitter.com/PolkadotDeploy){target=\_blank}: News from the deployment portal and tooling updates ### 🗣️ Polkadot Forum - Join community discussions around the direction of the ecosystem. 👉 **[Visit the Polkadot Forum](https://forum.polkadot.network/){target=\_blank}** ### 🧑‍⚖️ Polkassembly: OpenGov - Explore and vote on governance proposals for Polkadot and Kusama. - Help shape the future of the network.
👉 **[Explore on Polkassembly](https://polkadot.polkassembly.io/){target=\_blank}** ### 📸 Instagram - **[@Polkadotnetwork](https://www.instagram.com/polkadotnetwork){target=\_blank}**: Visual highlights from the ecosystem _(Note: not developer-specific)_ --- END CONTENT --- Doc-Content: https://docs.polkadot.com/get-support/get-in-touch/ --- BEGIN CONTENT --- --- title: Get in Touch description: Developer support for Polkadot via Telegram, Matrix, and Discord. Get help with parachains, smart contracts, nodes, and ecosystem tools. hide: - footer --- # Get in Touch Directly ## Need Help Fast? Use one of the channels below to get live technical support or ask questions. Prefer to see all available channels? Below are your options. ## 📱 Telegram: Polkadot Developer Support The fastest way to get support. - **Who’s there:** DevRel team and active developer community. - **Response time:** Within **2 business days (usually faster)**. - **Topics:** Any developer-related question is welcome. 👉 **[Join Telegram](https://t.me/substratedevs){target=\_blank}** ## 🔌 Discord: Polkadot Official Server Focused support for smart contracts and general developer chat. - **Smart contracts:** Ask in `#solidity-smart-contracts` and `#ink_smart-contracts`. - **General developer support:** Ask in `#solidity-smart-contracts`. - **Response time:** Within **1 business day (usually faster)**. - **Other topics:** Community-led discussion only. 👉 **[Join Discord](https://polkadot-discord.w3f.tools/){target=\_blank}** ## 🧬 Matrix: Polkadot Developer Support This is the **support channel** staffed by engineers from **Parity**, **Web3 Foundation**, and **Polkadot DevRel**. - **Who’s there:** Parity, W3F, DevRel, and community contributors. - **Response time:** Within **1 business day (usually faster)**. - **Topics:** Full-spectrum developer support. - Bridged with Telegram (all messages synced). 👉 **[Join Matrix](https://matrix.to/#/#substratedevs:matrix.org){target=\_blank}** --- Not sure where to start? **Join [Telegram](#telegram-polkadot-developer-support)**: Let us know what you need, and we’ll help you get unstuck. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/get-support/ --- BEGIN CONTENT --- --- title: Support description: Start here to get developer support for Polkadot. Connect with the team, find help, and explore resources beyond the documentation. template: index-page.html hide: - footer --- # Need Help Fast? Use one of the channels below to get live technical support or ask questions. ## Need More than Just Documentation? You're already in the docs — solid start. But sometimes you need more: answers, real examples, someone to talk to. This support hub is here to help you move forward — faster. Whether you're building something new, integrating into the ecosystem, or running into blockers — **don't stay stuck**. ## What You Can Do Here - 📨 [**Get In Touch**](/support/get-in-touch/) Reach out to the Polkadot support team and community via Telegram, Matrix, or Discord. Ask technical questions, report blockers, or share feedback — and get a human response. - 🧠 [**Explore Available Resources**](/support/explore-resources/) Find answers beyond the documentation: developer forums, Stack Exchange, Reddit, YouTube, governance hubs, and more. This hub is evolving. More support tools and shortcuts are on the way, including enhanced onboarding, CLI helpers, development environments, and live feedback channels. ## Help Us Improve If something’s missing, unclear, or broken — **tell us**. 
Your feedback makes the whole ecosystem better for everyone. 👉 [**Get In Touch**](/support/get-in-touch/) and help shape the future of developer support. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/images/README/ --- BEGIN CONTENT --- # Images TODO --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/ --- BEGIN CONTENT --- --- title: Infrastructure description: Learn how to set up and manage various types of Polkadot infrastructure, from running nodes to operating validators and contributing to the network. template: index-page.html --- # Infrastructure Running infrastructure on Polkadot is essential to supporting the network’s performance and security. Operators must focus on reliability, ensure proper configuration, and meet the necessary hardware requirements to contribute effectively to the decentralized ecosystem. - Not sure where to start? Visit the [Choosing the Right Role](#choosing-the-right-role) section for guidance - Ready to get started? Jump to [In This Section](#in-this-section) to get started ## Choosing the Right Role Selecting your role within the Polkadot ecosystem depends on your goals, resources, and expertise. Below are detailed considerations for each role: - **Running a node**: - **Purpose** - a node provides access to network data and supports API queries. It is commonly used for: - **Development and testing** - offers a local instance to simulate network conditions and test applications - **Production use** - acts as a data source for dApps, clients, and other applications needing reliable access to the blockchain - **Requirements** - moderate hardware resources to handle blockchain data efficiently - **Responsibilities** - a node’s responsibilities vary based on its purpose: - **Development and testing** - enables developers to test features, debug code, and simulate network interactions in a controlled environment - **Production use** - provides consistent and reliable data access for dApps and other applications, ensuring minimal downtime - **Running a validator**: - **Purpose** - validators play a critical role in securing the Polkadot relay chain. 
They validate parachain block submissions, participate in consensus, and help maintain the network's overall integrity - **Requirements** - becoming a validator requires: - **Staking** - a variable amount of DOT tokens to secure the network and demonstrate commitment - **Hardware** - high-performing hardware resources capable of supporting intensive blockchain operations - **Technical expertise** - proficiency in setting up and maintaining nodes, managing updates, and understanding Polkadot's consensus mechanisms - **Community involvement** - building trust and rapport within the community to attract nominators willing to stake with your validator - **Responsibilities** - validators have critical responsibilities to ensure network health: - **Uptime** - maintain near-constant availability to avoid slashing penalties for downtime or unresponsiveness - **Network security** - participate in consensus and verify parachain transactions to uphold the network's security and integrity - **Availability** - monitor the network for events and respond to issues promptly, such as misbehavior reports or protocol updates ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-node/ --- BEGIN CONTENT --- --- title: Running a Node description: Learn how to run and connect to a Polkadot node, including setup, configuration, and best practices for connectivity and security. template: index-page.html --- # Running a Node Running a node on the Polkadot network enables you to access blockchain data, interact with the network, and support decentralized applications (dApps). This guide will walk you through the process of setting up and connecting to a Polkadot node, including essential configuration steps for ensuring connectivity and security. ## Full Nodes vs Bootnodes Full nodes and bootnodes serve different roles within the network, each contributing in unique ways to connectivity and data access: - **Full node** - stores blockchain data, validates transactions, and can serve as a source for querying data - **Bootnode** - assists new nodes in discovering peers and connecting to the network, but doesn’t store blockchain data The following sections describe the different types of full nodes—pruned, archive, and light nodes—and the unique features of each for various use cases. ## Types of Full Nodes The three main types of nodes are as follows: - **Pruned node** - prunes historical states of all finalized block states older than a specified number except for the genesis block's state - **Archive node** - preserves all the past blocks and their states, making it convenient to query the past state of the chain at any given time. Archive nodes use a lot of disk space, which means they should be limited to use cases that require easy access to past on-chain data, such as block explorers - **Light node** - has only the runtime and the current state but doesn't store past blocks, making them useful for resource-restricted devices Each node type can be configured to provide remote access to blockchain data via RPC endpoints, allowing external clients, like dApps or developers, to submit transactions, query data, and interact with the blockchain remotely. !!!tip On [Stakeworld](https://stakeworld.io/docs/dbsize){target=\_blank}, you can find a list of the database sizes of Polkadot and Kusama nodes. ### State vs. Block Pruning A pruned node retains only a subset of finalized blocks, discarding older data. 
The two main types of pruning are: - **State pruning** - removes the states of old blocks while retaining block headers - **Block pruning** - removes both the full content of old blocks and their associated states, but keeps the block headers Despite these deletions, pruned nodes are still capable of performing many essential functions, such as displaying account balances, making transfers, setting up session keys, and participating in staking. ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-node/setup-bootnode/ --- BEGIN CONTENT --- --- title: Set Up a Bootnode description: Learn how to configure and run a bootnode for Polkadot, including P2P, WS, and secure WSS connections with network key management and proxies. categories: Infrastructure --- # Set Up a Bootnode ## Introduction Bootnodes are essential for helping blockchain nodes discover peers and join the network. When a node starts, it needs to find other nodes, and bootnodes provide an initial point of contact. Once connected, a node can expand its peer connections and play its role in the network, like participating as a validator. This guide will walk you through setting up a Polkadot bootnode, configuring P2P, WebSocket (WS), secure WSS connections, and managing network keys. You'll also learn how to test your bootnode to ensure it is running correctly and accessible to other nodes. ## Prerequisites Before you start, you need to have the following prerequisites: - Verify a working Polkadot (`polkadot`) binary is available on your machine - Ensure you have nginx installed. Please refer to the [Installation Guide](https://nginx.org/en/docs/install.html){target=\_blank} for help with installation if needed - A VPS or other dedicated server setup ## Accessing the Bootnode Bootnodes must be accessible through three key channels to connect with other nodes in the network: - **P2P** - a direct peer-to-peer connection, set by: ```bash --listen-addr /ip4/0.0.0.0/tcp/INSERT_PORT ``` This is not enabled by default on non-validator nodes like archive RPC nodes. - **P2P/WS** - a WebSocket (WS) connection, also configured via `--listen-addr` - **P2P/WSS** - a secure WebSocket (WSS) connection using SSL, often required for light clients. An SSL proxy is needed, as the node itself cannot handle certificates ## Node Key A node key is the ED25519 key used by `libp2p` to assign your node an identity or peer ID. Generating a known node key for a bootnode is crucial, as it gives you a consistent key that can be placed in chain specifications as a known, reliable bootnode. Starting a node creates its node key in the `chains/INSERT_CHAIN/network/secret_ed25519` file. You can create a node key using: ```bash polkadot key generate-node-key ``` This key can be used in the startup command line. It is imperative that you back up the node key. If the key is hardcoded into the `polkadot` binary, the binary must be recompiled to change it. ## Running the Bootnode A bootnode can be run as follows: ```bash polkadot --chain polkadot \ --name dot-bootnode \ --listen-addr /ip4/0.0.0.0/tcp/30310 \ --listen-addr /ip4/0.0.0.0/tcp/30311/ws ``` This assigns P2P to port 30310 and P2P/WS to port 30311. For the P2P/WSS port, a proxy must be set up with a DNS name and a corresponding certificate.
The following example is for the popular nginx server and enables p2p/wss on port 30312 by adding a proxy to the p2p/ws port 30311: ``` conf title="/etc/nginx/sites-enabled/dot-bootnode"
server {
  listen 30312 ssl http2 default_server;
  server_name dot-bootnode.stakeworld.io;
  root /var/www/html;

  ssl_certificate "INSERT_YOUR_CERT";
  ssl_certificate_key "INSERT_YOUR_KEY";

  location / {
    proxy_buffers 16 4k;
    proxy_buffer_size 2k;
    proxy_pass http://localhost:30311;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
  }
}
``` ## Testing Bootnode Connection If the preceding node is running with the DNS name `dot-bootnode.stakeworld.io`, behind a proxy with a valid certificate, and with the node ID `12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg`, then the following commands should output `syncing 1 peers`. !!!tip You can add `-lsub-libp2p=trace` at the end to get libp2p trace logging for debugging purposes. ### P2P ```bash
polkadot --chain polkadot \
--base-path /tmp/node \
--name "Bootnode testnode" \
--reserved-only \
--reserved-nodes "/dns/dot-bootnode.stakeworld.io/tcp/30310/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg" \
--no-hardware-benchmarks
``` ### P2P/WS ```bash
polkadot --chain polkadot \
--base-path /tmp/node \
--name "Bootnode testnode" \
--reserved-only \
--reserved-nodes "/dns/dot-bootnode.stakeworld.io/tcp/30311/ws/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg" \
--no-hardware-benchmarks
``` ### P2P/WSS ```bash
polkadot --chain polkadot \
--base-path /tmp/node \
--name "Bootnode testnode" \
--reserved-only \
--reserved-nodes "/dns/dot-bootnode.stakeworld.io/tcp/30312/wss/p2p/12D3KooWAb5MyC1UJiEQJk4Hg4B2Vi3AJdqSUhTGYUqSnEqCFMFg" \
--no-hardware-benchmarks
``` --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-node/setup-full-node/ --- BEGIN CONTENT --- --- title: Set Up a Node description: Learn how to install, configure, and run Polkadot nodes, including setting up different node types and connecting to the network. categories: Infrastructure --- # Set Up a Node ## Introduction Running a node on Polkadot provides direct interaction with the network, enhanced privacy, and full control over RPC requests, transactions, and data queries. As the backbone of the network, nodes ensure decentralized data propagation, transaction validation, and seamless communication across the ecosystem. Polkadot supports multiple node types, including pruned, archive, and light nodes, each suited to specific use cases. During setup, you can use configuration flags to choose the node type you wish to run. This guide walks you through configuring, securing, and maintaining a node on Polkadot or any Polkadot SDK-based chain. It covers instructions for the different node types and how to safely expose your node's RPC server for external access. Whether you're building a local development environment, powering dApps, or supporting network decentralization, this guide provides all the essentials. ## Set Up a Node Now that you're familiar with the different types of nodes, this section will walk you through configuring, securing, and maintaining a node on Polkadot or any Polkadot SDK-based chain. 
### Prerequisites Before getting started, ensure the following prerequisites are met: - Ensure [Rust](https://www.rust-lang.org/tools/install){target=\_blank} is installed on your operating system - [Install the necessary dependencies for the Polkadot SDK](/develop/parachains/install-polkadot-sdk/){target=\_blank} !!! warning This setup is not recommended for validators. If you plan to run a validator, refer to the [Running a Validator](/infrastructure/running-a-validator/){target=\_blank} guide for proper instructions. ### Install and Build the Polkadot Binary This section will walk you through installing and building the Polkadot binary for different operating systems and methods. ??? interface "macOS" To get started, update and configure the Rust toolchain by running the following commands: ```bash
source ~/.cargo/env
rustup default stable
rustup update
rustup update nightly
rustup target add wasm32-unknown-unknown --toolchain nightly
rustup component add rust-src --toolchain stable-aarch64-apple-darwin
``` You can verify your installation by running: ```bash
rustup show
rustup +nightly show
``` You should see output similar to the following:
```
rustup show
rustup +nightly show

active toolchain
----------------
stable-aarch64-apple-darwin (default)
rustc 1.82.0 (f6e511eec 2024-10-15)

active toolchain
----------------
nightly-aarch64-apple-darwin (overridden by +toolchain on the command line)
rustc 1.84.0-nightly (03ee48451 2024-11-18)
```
Then, run the following commands to clone and build the Polkadot binary: ```bash
git clone https://github.com/paritytech/polkadot-sdk polkadot-sdk
cd polkadot-sdk
cargo build --release
``` Depending upon the specs of your machine, compiling the binary may take an hour or more. After building the Polkadot node from source, the executable binary will be located at `./target/release/polkadot`. ??? interface "Windows" To get started, make sure that you have [WSL and Ubuntu](https://learn.microsoft.com/en-us/windows/wsl/install){target=\_blank} installed on your Windows machine. Once installed, you have a couple of options for installing the Polkadot binary: - If Rust is installed, then `cargo` can be used as in the macOS instructions - Or, the instructions in the Linux section can be used ??? interface "Linux (pre-built binary)" To grab the [latest release of the Polkadot binary](https://github.com/paritytech/polkadot-sdk/releases){target=\_blank}, you can use `wget`: ```bash
wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION/polkadot
``` Ensure you note the executable binary's location, as you'll need to use it when running the start-up command. If you prefer, you can specify the output location of the executable binary with the `-O` flag, for example: ```bash
wget https://github.com/paritytech/polkadot-sdk/releases/download/polkadot-INSERT_VERSION/polkadot \
-O /var/lib/polkadot-data/polkadot
``` !!!tip The nature of pre-built binaries means that they may not work on your particular architecture or Linux distribution. If you see an error like `cannot execute binary file: Exec format error` it likely means the binary is incompatible with your system. You will either need to compile the binary or use [Docker](#use-docker). Ensure that you properly configure the permissions to make the Polkadot release binary executable: ```bash
sudo chmod +x polkadot
``` ??? interface "Linux (compile binary)" The most reliable (although perhaps not the fastest) way of launching a full node is to compile the binary yourself. Depending on your machine's specs, this may take an hour or more. To get started, run the following commands to configure the Rust toolchain: ```bash
rustup default stable
rustup update
rustup update nightly
rustup target add wasm32-unknown-unknown --toolchain nightly
rustup target add wasm32-unknown-unknown --toolchain stable-x86_64-unknown-linux-gnu
rustup component add rust-src --toolchain stable-x86_64-unknown-linux-gnu
``` You can verify your installation by running: ```bash
rustup show
``` You should see output similar to the following:
```
rustup show

active toolchain
----------------
stable-x86_64-unknown-linux-gnu (default)
rustc 1.82.0 (f6e511eec 2024-10-15)
```
Once Rust is configured, run the following commands to clone and build Polkadot: ```bash
git clone https://github.com/paritytech/polkadot-sdk polkadot-sdk
cd polkadot-sdk
cargo build --release
``` Compiling the binary may take an hour or more, depending on your machine's specs. After building the Polkadot node from source, the executable binary will be located at `./target/release/polkadot`. ??? interface "Linux (snap package)" Polkadot can be installed as a [snap package](https://snapcraft.io/polkadot){target=\_blank}. If you don't already have Snap installed, take the following steps to install it: ```bash
sudo apt update
sudo apt install snapd
``` Install the Polkadot snap package: ```bash
sudo snap install polkadot
``` Before continuing on with the following instructions, check out the [Configure and Run Your Node](#configure-and-run-your-node) section to learn more about the configuration options. To configure your Polkadot node with your desired options, you'll run a command similar to the following: ```bash
sudo snap set polkadot service-args="--name=MyName --chain=polkadot"
``` Then to start the node service, run: ```bash
sudo snap start polkadot
``` You can review the logs to check on the status of the node: ```bash
snap logs polkadot -f
``` And at any time, you can stop the node service: ```bash
sudo snap stop polkadot
``` You can optionally prevent the service from stopping when snap is updated with the following command: ```bash
sudo snap set polkadot endure=true
``` ### Use Docker As an additional option, you can use Docker to run your node in a container. Doing this is more advanced, so it's best left up to those already familiar with Docker or who have completed the other set-up instructions in this guide. You can review the latest versions on [DockerHub](https://hub.docker.com/r/parity/polkadot/tags){target=\_blank}. Be aware that when you run Polkadot in Docker, the process only listens on `localhost` by default. If you would like to connect to your node's services (RPC and Prometheus), you need to ensure that you run the node with the `--rpc-external` and `--prometheus-external` flags. ```bash
docker run -p 9944:9944 -p 9615:9615 parity/polkadot:v1.16.2 --name "my-polkadot-node-calling-home" --rpc-external --prometheus-external
``` If you're running Docker on an Apple Silicon machine (e.g. M4), you'll need to adapt the command slightly: ```bash
docker run --platform linux/amd64 -p 9944:9944 -p 9615:9615 parity/polkadot:v1.16.2 --name "kearsarge-calling-home" --rpc-external --prometheus-external
``` ## Configure and Run Your Node Now that you've installed and built the Polkadot binary, the next step is to configure the start-up command depending on the type of node that you want to run. You'll need to modify the start-up command accordingly based on the location of the binary. In some cases, it may be located within the `./target/release/` folder, so you'll need to replace `polkadot` with `./target/release/polkadot` in the following commands. Also, note that you can use the same binary for Polkadot as you would for Kusama or any other relay chain. You'll need to use the `--chain` flag to differentiate between chains. If you aren't sure which type of node to run, see the [Types of Full Nodes](/infrastructure/running-a-node/#types-of-nodes){target=\_blank} section. 
The base commands for running a Polkadot node are as follows: === "Default pruned node" This uses the default pruning value of the last 256 blocks: ```bash
polkadot --chain polkadot \
--name "INSERT_NODE_NAME"
``` === "Custom pruned node" You can customize the pruning value, for example, to the last 1000 finalized blocks: ```bash
polkadot --chain polkadot \
--name INSERT_YOUR_NODE_NAME \
--state-pruning 1000 \
--blocks-pruning archive \
--rpc-cors all \
--rpc-methods safe
``` === "Archive node" To support the full state, use the `archive` option: ```bash
polkadot --chain polkadot \
--name INSERT_YOUR_NODE_NAME \
--state-pruning archive \
--blocks-pruning archive
``` If you want to run an RPC node, please refer to the following [RPC Configurations](#rpc-configurations) section. To review a complete list of the available commands, flags, and options, you can use the `--help` flag: ```bash
polkadot --help
``` Once you've fully configured your start-up command, you can execute it in your terminal and your node will start [syncing](#sync-your-node). ### RPC Configurations The node startup settings allow you to choose what to expose, how many connections to expose, and which systems should be granted access through the RPC server. - You can limit the methods to use with `--rpc-methods`; an easy way to set this to a safe mode is `--rpc-methods safe` - You can set your maximum connections through `--rpc-max-connections`, for example, `--rpc-max-connections 200` - By default, localhost and Polkadot.js can access the RPC server. You can change this by setting `--rpc-cors`. To allow access from everywhere, you can use `--rpc-cors all` For a list of important flags when running RPC nodes, refer to the Parity DevOps documentation: [Important Flags for Running an RPC Node](https://paritytech.github.io/devops-guide/guides/rpc_index.html?#important-flags-for-running-an-rpc-node){target=\_blank}. ## Sync Your Node The syncing process will take a while, depending on your bandwidth, processing power, disk speed, and RAM. The process may be completed on a $10 DigitalOcean droplet in roughly 36 hours. While syncing, your node name should be visible in gray on [Polkadot Telemetry](https://telemetry.polkadot.io/#list/Polkadot){target=\_blank}, and once it is fully synced, your node name will appear in white. A healthy node syncing blocks will output logs like the following:
```
2024-11-19 23:49:57 Parity Polkadot
2024-11-19 23:49:57 ✌️ version 1.14.1-7c4cd60da6d
2024-11-19 23:49:57 ❤️ by Parity Technologies <admin@parity.io>, 2017-2024
2024-11-19 23:49:57 📋 Chain specification: Polkadot
2024-11-19 23:49:57 🏷 Node name: myPolkadotNode
2024-11-19 23:49:57 👤 Role: FULL
2024-11-19 23:49:57 💾 Database: RocksDb at /home/ubuntu/.local/share/polkadot/chains/polkadot/db/full
2024-11-19 23:50:00 🏷 Local node identity is: 12D3KooWDmhHEgPRJUJnUpJ4TFWn28EENqvKWH4dZGCN9TS51y9h
2024-11-19 23:50:00 Running libp2p network backend
2024-11-19 23:50:00 💻 Operating system: linux
2024-11-19 23:50:00 💻 CPU architecture: x86_64
2024-11-19 23:50:00 💻 Target environment: gnu
2024-11-19 23:50:00 💻 CPU: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz
2024-11-19 23:50:00 💻 CPU cores: 4
2024-11-19 23:50:00 💻 Memory: 32001MB
2024-11-19 23:50:00 💻 Kernel: 5.15.0-113-generic
2024-11-19 23:50:00 💻 Linux distribution: Ubuntu 22.04.5 LTS
2024-11-19 23:50:00 💻 Virtual machine: no
2024-11-19 23:50:00 📦 Highest known block at #9319
2024-11-19 23:50:00 〽️ Prometheus exporter started at 127.0.0.1:9615
2024-11-19 23:50:00 Running JSON-RPC server: addr=127.0.0.1:9944, allowed origins=["http://localhost:*", "http://127.0.0.1:*", "https://localhost:*", "https://127.0.0.1:*", "https://polkadot.js.org"]
2024-11-19 23:50:00 🏁 CPU score: 671.67 MiBs
2024-11-19 23:50:00 🏁 Memory score: 7.96 GiBs
2024-11-19 23:50:00 🏁 Disk score (seq. writes): 377.87 MiBs
2024-11-19 23:50:00 🏁 Disk score (rand. writes): 147.92 MiBs
2024-11-19 23:50:00 🥩 BEEFY gadget waiting for BEEFY pallet to become available...
2024-11-19 23:50:00 🔍 Discovered new external address for our node: /ip4/37.187.93.17/tcp/30333/ws/p2p/12D3KooWDmhHEgPRJUJnUpJ4TFWn28EENqvKWH4dZGCN9TS51y9h
2024-11-19 23:50:01 🔍 Discovered new external address for our node: /ip6/2001:41d0:a:3511::1/tcp/30333/ws/p2p/12D3KooWDmhHEgPRJUJnUpJ4TFWn28EENqvKWH4dZGCN9TS51y9h
2024-11-19 23:50:05 ⚙️ Syncing, target=#23486325 (5 peers), best: #12262 (0x8fb5…f310), finalized #11776 (0x9de1…32fb), ⬇ 430.5kiB/s ⬆ 17.8kiB/s
2024-11-19 23:50:10 ⚙️ Syncing 628.8 bps, target=#23486326 (6 peers), best: #15406 (0x9ce1…2d76), finalized #15360 (0x0e41…a064), ⬇ 255.0kiB/s ⬆ 1.8kiB/s
```
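You can also check sync progress programmatically instead of reading the logs. The following is a minimal sketch using the standard Substrate `system_health` and `system_syncState` JSON-RPC methods, assuming the default local RPC address of `127.0.0.1:9944`:

```bash
# Returns the peer count and an isSyncing flag
curl -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method":"system_health", "params":[]}' \
  http://127.0.0.1:9944

# Returns the startingBlock, currentBlock, and highestBlock numbers
curl -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method":"system_syncState", "params":[]}' \
  http://127.0.0.1:9944
```

Once `isSyncing` is `false` and `currentBlock` is close to `highestBlock`, the node has caught up with the network.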
Congratulations, you're now syncing a Polkadot full node! Remember that the process is identical when using any other Polkadot SDK-based chain, although individual chains may have chain-specific flag requirements. ### Connect to Your Node Open [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/explorer){target=\_blank} and click the logo in the top left to switch the node. Activate the **Development** toggle and input your node's domain or IP address. The default WebSocket endpoint for a local node is: ```bash
ws://127.0.0.1:9944
``` --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-node/setup-secure-wss/ --- BEGIN CONTENT --- --- title: Set Up Secure WebSocket description: Instructions on enabling SSL for your node and setting up a secure WebSocket proxy server using nginx for remote connections. categories: Infrastructure --- # Set Up Secure WebSocket ## Introduction Ensuring secure WebSocket communication is crucial for maintaining the integrity and security of a Polkadot or Kusama node when interacting with remote clients. This guide walks you through setting up a secure WebSocket (WSS) connection for your node by leveraging SSL encryption with popular web server proxies like nginx or Apache. By the end of this guide, you'll be able to secure your node's WebSocket port, enabling safe remote connections without exposing your node to unnecessary risks. The instructions in this guide are for UNIX-based systems. ## Secure a WebSocket Port You can convert a non-secured WebSocket port to a secure WSS port by placing it behind an SSL-enabled proxy. This approach can be used to secure a bootnode or RPC server. The SSL-enabled apache2/nginx/other proxy server redirects requests to the internal WebSocket and converts it to a secure (WSS) connection. You can use a service like [LetsEncrypt](https://letsencrypt.org/){target=\_blank} to obtain an SSL certificate. ### Obtain an SSL Certificate LetsEncrypt suggests using the [Certbot ACME client](https://letsencrypt.org/getting-started/#with-shell-access/){target=\_blank} for your respective web server implementation to get a free SSL certificate: - [nginx](https://certbot.eff.org/instructions?ws=nginx&os=ubuntufocal){target=\_blank} - [apache2](https://certbot.eff.org/instructions?ws=apache&os=ubuntufocal){target=\_blank} LetsEncrypt will auto-generate an SSL certificate and include it in your configuration. Alternatively, you can generate a self-signed certificate and rely on your node's raw IP address when connecting. However, self-signed certificates aren't optimal because you must include the certificate in an allowlist to access it from a browser. Use the following commands to generate a self-signed certificate using OpenSSL: ```bash
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/selfsigned.key -out /etc/ssl/certs/selfsigned.crt
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
``` ## Install a Proxy Server There are a lot of different implementations of a WebSocket proxy; some of the more widely used are [nginx](https://www.f5.com/go/product/welcome-to-nginx){target=\_blank} and [apache2](https://httpd.apache.org/){target=\_blank}, both of which are commonly used web server implementations. See the following section for configuration examples for both implementations. ### Use nginx 1. Install the `nginx` web server: ```bash
apt install nginx
``` 2. In an SSL-enabled virtual host, add: ```conf
server {
  (...)

  location / {
    proxy_buffers 16 4k;
    proxy_buffer_size 2k;
    proxy_pass http://localhost:9944;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
  }
}
``` 3. Optionally, you can introduce some form of rate limiting: ```conf
http {
  limit_req_zone "$http_x_forwarded_for" zone=zone:10m rate=2r/s;
  (...)
}

location / {
  limit_req zone=zone burst=5;
  (...)
}
``` ### Use Apache2 Apache2 can run in various modes, including `prefork`, `worker`, and `event`. In this example, the [`event`](https://httpd.apache.org/docs/2.4/mod/event.html){target=\_blank} mode is recommended for handling higher traffic loads, as it is optimized for performance in such environments. However, depending on the specific requirements of your setup, other modes like `prefork` or `worker` may also be appropriate. 1. Install the `apache2` web server: ```bash
apt install apache2
a2dismod mpm_prefork
a2enmod mpm_event proxy proxy_html proxy_http proxy_wstunnel rewrite ssl
``` 2. The [`mod_proxy_wstunnel`](https://httpd.apache.org/docs/2.4/mod/mod_proxy_wstunnel.html){target=\_blank} module provides support for the tunneling of WebSocket connections to a backend WebSocket server. The connection is automatically upgraded to a WebSocket connection. In an SSL-enabled virtual host, add: ```apacheconf
# (...)
SSLProxyEngine on
ProxyRequests off

ProxyPass / ws://localhost:9944
ProxyPassReverse / ws://localhost:9944
``` !!!warning Older versions of `mod_proxy_wstunnel` don't upgrade the connection automatically and will need the following config added: ```apacheconf
RewriteEngine on
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule /(.*) ws://localhost:9944/$1 [P,L]
RewriteRule /(.*) http://localhost:9944/$1 [P,L]
``` 3. Optionally, some form of rate limiting can be introduced by first running the following commands: ```bash
apt install libapache2-mod-qos
a2enmod qos
``` Then edit `/etc/apache2/mods-available/qos.conf` as follows: ```conf
# allows max 50 connections from a single IP address:
QS_SrvMaxConnPerIP 50
``` ## Connect to the Node 1. Open [Polkadot.js Apps interface](https://polkadot.js.org/apps){target=\_blank} and click the logo in the top left to switch the node 2. Activate the **Development** toggle and input either your node's domain or IP address. Remember to prefix with `wss://` and, if you're using port 443, append `:443` as follows: ```bash
wss://example.com:443
``` ![A sync-in-progress chain connected to Polkadot.js UI](/images/infrastructure/running-a-validator/running-a-node/setup-secure-wss/setup-secure-wss-1.webp) --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/ --- BEGIN CONTENT --- --- title: Running a Validator description: Learn the requirements for setting up a Polkadot validator node, along with detailed steps on how to install, run, upgrade, and maintain the node. template: index-page.html --- # Running a Validator Running a Polkadot validator is crucial for securing the network and maintaining its integrity. Validators play a key role in verifying parachain blocks, participating in consensus, and ensuring the reliability of the Polkadot relay chain. Learn the requirements for setting up a Polkadot validator node, along with detailed steps on how to install, run, upgrade, and maintain the node. 
## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/onboarding-and-offboarding/ --- BEGIN CONTENT --- --- title: Onboarding and Offboarding description: Get familiar with onboarding and offboarding a Polkadot validator node, including setup, bond and key management, and activation and deactivation processes. template: index-page.html --- # Onboarding and Offboarding Successfully onboarding and offboarding a Polkadot validator node is crucial to maintaining the security and integrity of the network. This process involves setting up, activating, deactivating, and securely managing your validator’s key and staking details. This section provides guidance on how to set up, activate, and deactivate your validator. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/onboarding-and-offboarding/key-management/ --- BEGIN CONTENT --- --- title: Validator Key Management description: Learn how to generate and manage validator keys, including session keys for consensus participation and node keys for maintaining a stable network identity. categories: Infrastructure --- # Key Management ## Introduction After setting up your node environment as shown in the [Setup](/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator){target=\_blank} section, you'll need to configure multiple keys for your validator to operate properly. This includes setting up session keys, which are essential for participating in the consensus process, and configuring a node key that maintains a stable network identity. This guide walks you through the key management process, showing you how to generate, store, and register these keys. ## Set Session Keys Setting up your validator's session keys is essential to associate your node with your stash account on the Polkadot network. Validators use session keys to participate in the consensus process. Your validator can only perform its role in the network by properly setting session keys, which consist of several key pairs for different parts of the protocol (e.g., GRANDPA, BABE). These keys must be registered on-chain and associated with your validator node to ensure it can participate in validating blocks. ### Generate Session Keys There are multiple ways to create the session keys. This can be done by interacting with the [Polkadot.js Apps UI](https://polkadot.js.org/apps/#/explorer){target=\_blank}, by using the `curl` command, or by using [Subkey](https://paritytech.github.io/polkadot-sdk/master/subkey/index.html){target=\_blank}. === "Polkadot.js Apps UI" 1. In Polkadot.js Apps, connect to your local node, navigate to the **Developer** dropdown, and select the **RPC Calls** option 2. Construct an `author_rotateKeys` RPC call and execute it 1. Select the **author** endpoint 2. Choose the **rotateKeys()** call 3. Click the **Submit RPC Call** button 4. Copy the hex-encoded public key from the response ![](/images/infrastructure/running-a-validator/onboarding-and-offboarding/key-management/key-management-1.webp) === "Curl" Generate session keys by running the following command on your validator node: ``` bash
curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "author_rotateKeys", "params":[]}' \
http://localhost:9944
``` This command will return a JSON object. 
The `result` key is the hex-encoded public part of the newly created session key. Save this for later use. ```json {"jsonrpc":"2.0","result":"0xda3861a45e0197f3ca145c2c209f9126e5053fas503e459af4255cf8011d51010","id":1} ``` === "Subkey" To create a keypair for your node's session keys, use the `subkey generate` command. This generates a set of cryptographic keys that must be stored in your node's keystore directory. When you run the command, it produces output similar to this example:
```
subkey generate
Secret phrase:       twist buffalo mixture excess device drastic vague mammal fitness punch match hammer
  Network ID:        substrate
  Secret seed:       0x5faa9e5defe42b201388d5c2b8202d6625a344abc9aa52943a71f12cb90b88a9
  Public key (hex):  0x28cc2fdb6e28835e2bbac9a16feb65c23d448c9314ef12fe083b61bab8fc2755
  Account ID:        0x28cc2fdb6e28835e2bbac9a16feb65c23d448c9314ef12fe083b61bab8fc2755
  Public key (SS58): 5CzCRpXzHYhuo6G3gYFR3cgV6X3qCNwVt51m8q14ZcChsSXQ
  SS58 Address:      5CzCRpXzHYhuo6G3gYFR3cgV6X3qCNwVt51m8q14ZcChsSXQ
```
To properly store these keys, create a file in your keystore directory with a specific naming convention. The filename must consist of the hex string `61757261` (which represents "aura" in hex) followed by the public key without its `0x` prefix. Using the example above, you would create a file named: ``` ./keystores/6175726128cc2fdb6e28835e2bbac9a16feb65c23d448c9314ef12fe083b61bab8fc2755 ``` And store only the secret phrase in the file: ``` "twist buffalo mixture excess device drastic vague mammal fitness punch match hammer" ``` ### Submit Transaction to Set Keys Now that you have generated your session keys, you must submit them to the chain. Follow these steps: 1. Go to the **Network > Staking > Accounts** section on Polkadot.js Apps 2. Select **Set Session Key** on the bonding account you generated earlier 3. Paste the hex-encoded session key string you generated (from either the UI or CLI) into the input field and submit the transaction ![](/images/infrastructure/running-a-validator/onboarding-and-offboarding/key-management/key-management-2.webp) Once the transaction is signed and submitted, your session keys will be registered on-chain. ### Verify Session Key Setup To verify that your session keys are properly set, you can use one of two RPC calls: - **`hasKey`** - checks if the node has a specific key by public key and key type - **`hasSessionKeys`** - verifies if your node has the full session key string associated with the validator For example, you can [check session keys on the Polkadot.js Apps](https://polkadot.js.org/apps/#/rpc){target=\_blank} interface or by running an RPC query against your node. Once this is done, your validator node is ready for its role. ## Set the Node Key Validators on Polkadot need a static network key (also known as the node key) to maintain a stable node identity. This key ensures that your validator can maintain a consistent peer ID, even across restarts, which is crucial for maintaining reliable network connections. Starting with Polkadot version 1.11, validators without a stable network key may encounter the following error on startup:
```
polkadot --validator --name "INSERT_NAME_FROM_TELEMETRY"

Error: 0: Starting an authority without network key

This is not a safe operation because other authorities in the network may depend on your node having a stable identity. Otherwise these other authorities may not being able to reach you.

If it is the first time running your node you could use one of the following methods:

1. [Preferred] Separately generate the key with: INSERT_NODE_BINARY key generate-node-key --base-path INSERT_YOUR_BASE_PATH
2. [Preferred] Separately generate the key with: INSERT_NODE_BINARY key generate-node-key --file INSERT_YOUR_PATH_TO_NODE_KEY
3. [Preferred] Separately generate the key with: INSERT_NODE_BINARY key generate-node-key --default-base-path
4. [Unsafe] Pass --unsafe-force-node-key-generation and make sure you remove it for subsequent node restarts
```
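Once a node key exists (see the generation steps in the following section), you can confirm the stable identity it encodes. As a quick sanity check, the `polkadot key inspect-node-key` subcommand prints the peer ID derived from a saved key; the file path below is an example placeholder:

```bash
# Print the peer ID corresponding to the node key stored in the given file
# (replace the path with the location you chose when generating the key)
polkadot key inspect-node-key --file /var/lib/polkadot/node-key
```

The printed peer ID should remain identical across restarts as long as the same key file is supplied to the node via `--node-key-file`.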
### Generate the Node Key Use one of the following methods to generate your node key: === "Save to file" The recommended solution is to generate a node key and save it to a file using the following command: ``` bash polkadot key generate-node-key --file INSERT_PATH_TO_NODE_KEY ``` === "Use default path" You can also generate the node key with the following command, which will automatically save the key to the base path of your node: ``` bash polkadot key generate-node-key --default-base-path ``` Save the file path for reference. You will need it in the next step to configure your node with a static identity. ### Set Node Key After generating the node key, configure your node to use it by specifying the path to the key file when launching your node. Add the following flag to your validator node's startup command: ``` bash polkadot --node-key-file INSERT_PATH_TO_NODE_KEY ``` Following these steps ensures that your node retains its identity, making it discoverable by peers without the risk of conflicting identities across sessions. For further technical background, see Polkadot SDK [Pull Request #3852](https://github.com/paritytech/polkadot-sdk/pull/3852){target=\_blank} for the rationale behind requiring static keys. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/ --- BEGIN CONTENT --- --- title: Set Up a Validator description: Set up a Polkadot validator node to secure the network and earn staking rewards. Follow this step-by-step guide to install, configure, and manage your node. categories: Infrastructure --- # Set Up a Validator ## Introduction Setting up a Polkadot validator node is essential for securing the network and earning staking rewards. This guide walks you through the technical steps to set up a validator, from installing the necessary software to managing keys and synchronizing your node with the chain. Running a validator requires a commitment to maintaining a stable, secure infrastructure. Validators are responsible for their own stakes and those of nominators who trust them with their tokens. Proper setup and ongoing management are critical to ensuring smooth operation and avoiding potential penalties such as slashing. ## Prerequisites To get the most from this guide, ensure you've done the following before going forward: - Read [Validator Requirements](/infrastructure/running-a-validator/requirements/){target=\_blank} and understand the recommended minimum skill level and hardware needs - Read [General Management](/infrastructure/running-a-validator/operational-tasks/general-management){target=\_blank}, [Upgrade Your Node](/infrastructure/running-a-validator/operational-tasks/upgrade-your-node/){target=\_blank}, and [Pause Validating](/infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/){target=\_blank} and understand the tasks required to keep your validator operational - Read [Rewards Payout](/infrastructure/staking-mechanics/rewards-payout/){target=\_blank} and understand how validator rewards are determined and paid out - Read [Offenses and Slashes](/infrastructure/staking-mechanics/offenses-and-slashes/){target=\_blank} and understand how validator performance and security can affect tokens staked by you or your nominators ## Initial Setup Before running your validator, you must configure your server environment to meet the operational and security standards required for validating. 
You must use a Linux-based operating system with Kernel 5.16 or later. Configuration includes setting up time synchronization, ensuring critical security features are active, and installing the necessary binaries. Proper setup at this stage is essential to prevent issues like block production errors or being penalized for downtime. Below are the essential steps to get your system ready. ### Install Network Time Protocol Client Accurate timekeeping is critical to ensure your validator is synchronized with the network. Validators need local clocks in sync with the blockchain to avoid missing block authorship opportunities. Using [Network Time Protocol (NTP)](https://en.wikipedia.org/wiki/Network_Time_Protocol){target=\_blank} is the standard solution to keep your system's clock accurate. If you are using Ubuntu version 18.04 or newer, the NTP Client should be installed by default. You can check whether you have the NTP client by running: ```sh timedatectl ``` If NTP is running, you should see a message like the following: ``` sh System clock synchronized: yes ``` If NTP is not installed or running, you can install it using: ```sh sudo apt-get install ntp ``` After installation, NTP will automatically start. To check its status: ```sh sudo ntpq -p ``` This command will return a message with the status of the NTP synchronization. Skipping this step could result in your validator node missing blocks due to minor clock drift, potentially affecting its network performance. ### Verify Landlock is Activated [Landlock](https://docs.kernel.org/userspace-api/landlock.html){target=\_blank} is an important security feature integrated into Linux kernels starting with version 5.13. It allows processes, even those without special privileges, to limit their access to the system to reduce the machine's attack surface. This feature is crucial for validators, as it helps ensure the security and stability of the node by preventing unauthorized access or malicious behavior. To use Landlock, ensure you use the reference kernel or newer versions. Most Linux distributions should already have Landlock activated. You can check if Landlock is activated on your machine by running the following command as root: ```sh dmesg | grep landlock || journalctl -kg landlock ``` If Landlock is not activated, your system logs won't show any related output. In this case, you will need to activate it manually or ensure that your Linux distribution supports it. Most modern distributions with the required kernel version should have Landlock activated by default. However, if your system lacks support, you may need to build the kernel with Landlock activated. For more information on doing so, refer to the [official kernel documentation](https://docs.kernel.org/userspace-api/landlock.html#kernel-support){target=\_blank}. Implementing Landlock ensures your node operates in a restricted, self-imposed sandbox, limiting potential damage from security breaches or bugs. While not a mandatory requirement, enabling this feature greatly improves the security of your validator setup. ## Install the Polkadot Binaries You must install the Polkadot binaries required to run your validator node. These binaries include the main `polkadot`, `polkadot-prepare-worker`, and `polkadot-execute-worker` binaries. All three are needed to run a fully functioning validator node. Depending on your preference and operating system setup, there are multiple methods to install these binaries. 
Below are the main options: ### Install from Official Releases The preferred, most straightforward method to install the required binaries is downloading the latest versions from the official releases. You can visit the [GitHub Releases](https://github.com/paritytech/polkadot-sdk/releases){target=\_blank} page for the most current versions of the `polkadot`, `polkadot-prepare-worker`, and `polkadot-execute-worker` binaries. You can also download the binaries by using the following direct links: === "`polkadot`" ``` bash
# Download the binary
curl -LO https://github.com/paritytech/polkadot-sdk/releases/download/{{ dependencies.repositories.polkadot_sdk.version }}/polkadot

# Verify signature
curl -LO https://github.com/paritytech/polkadot-sdk/releases/download/{{ dependencies.repositories.polkadot_sdk.version }}/polkadot.asc
gpg --keyserver hkps://keyserver.ubuntu.com --receive-keys 90BD75EBBB8E95CB3DA6078F94A4029AB4B35DAE
gpg --verify polkadot.asc
``` === "`polkadot-prepare-worker`" ``` bash
# Download the binary
curl -LO https://github.com/paritytech/polkadot-sdk/releases/download/{{ dependencies.repositories.polkadot_sdk.version }}/polkadot-prepare-worker

# Verify signature
curl -LO https://github.com/paritytech/polkadot-sdk/releases/download/{{ dependencies.repositories.polkadot_sdk.version }}/polkadot-prepare-worker.asc
gpg --keyserver hkps://keyserver.ubuntu.com --receive-keys 90BD75EBBB8E95CB3DA6078F94A4029AB4B35DAE
gpg --verify polkadot-prepare-worker.asc
``` === "`polkadot-execute-worker`" ``` bash
# Download the binary
curl -LO https://github.com/paritytech/polkadot-sdk/releases/download/{{ dependencies.repositories.polkadot_sdk.version }}/polkadot-execute-worker

# Verify signature
curl -LO https://github.com/paritytech/polkadot-sdk/releases/download/{{ dependencies.repositories.polkadot_sdk.version }}/polkadot-execute-worker.asc
gpg --keyserver hkps://keyserver.ubuntu.com --receive-keys 90BD75EBBB8E95CB3DA6078F94A4029AB4B35DAE
gpg --verify polkadot-execute-worker.asc
``` Signature verification uses GPG signing keys to cryptographically ensure the downloaded binaries are authentic and have not been tampered with. Polkadot releases use two different signing keys: - ParityReleases (release-team@parity.io) with key [`90BD75EBBB8E95CB3DA6078F94A4029AB4B35DAE`](https://keyserver.ubuntu.com/pks/lookup?search=90BD75EBBB8E95CB3DA6078F94A4029AB4B35DAE&fingerprint=on&op=index){target=\_blank} for current and new releases - Parity Security Team (security@parity.io) with key [`9D4B2B6EB8F97156D19669A9FF0812D491B96798`](https://keyserver.ubuntu.com/pks/lookup?search=9D4B2B6EB8F97156D19669A9FF0812D491B96798&fingerprint=on&op=index){target=\_blank} for old releases !!!warning When verifying a signature, a "Good signature" message indicates successful verification, while any other output signals a potential security risk. ### Install with Package Managers Users running Debian-based distributions like Ubuntu can install the binaries using the [APT](https://wiki.debian.org/Apt){target=\_blank} package manager. 
Execute the following commands as root to add the official repository and install the binaries: ```bash
# Import the security@parity.io GPG key
gpg --recv-keys --keyserver hkps://keys.mailvelope.com 9D4B2B6EB8F97156D19669A9FF0812D491B96798
gpg --export 9D4B2B6EB8F97156D19669A9FF0812D491B96798 > /usr/share/keyrings/parity.gpg

# Add the Parity repository and update the package index
echo 'deb [signed-by=/usr/share/keyrings/parity.gpg] https://releases.parity.io/deb release main' > /etc/apt/sources.list.d/parity.list
apt update

# Install the `parity-keyring` package - This will ensure the GPG key
# used by APT remains up-to-date
apt install parity-keyring

# Install polkadot
apt install polkadot
``` Once installation completes, verify the binaries are correctly installed by following the steps in the [verify installation](#verify-installation) section. ### Install with Ansible You can also manage Polkadot installations using Ansible. This approach can be beneficial for users managing multiple validator nodes or requiring automated deployment. The [Parity chain operations Ansible collection](https://github.com/paritytech/ansible-galaxy/){target=\_blank} provides a Substrate node role for this purpose. ### Install with Docker If you prefer using Docker or an OCI-compatible container runtime, the official Polkadot Docker image can be pulled directly from Docker Hub. To pull the latest stable image, run the following command: ```bash
docker pull parity/polkadot:{{ dependencies.repositories.polkadot_sdk.docker_image_version }}
``` ### Build from Sources You may build the binaries from source by following the instructions on the [Polkadot SDK repository](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/polkadot#building){target=\_blank}. ## Verify Installation Once the Polkadot binaries are installed, it's essential to verify that everything is set up correctly and that all the necessary components are in place. Follow these steps to ensure the binaries are installed and functioning as expected. 1. **Check the versions** - run the following commands to verify the versions of the installed binaries: ```bash
polkadot --version
polkadot-execute-worker --version
polkadot-prepare-worker --version
``` The output should show the version numbers for each of the binaries. Ensure that the versions match and are consistent, similar to the following example (the specific version may vary):
```
polkadot --version
polkadot-execute-worker --version
polkadot-prepare-worker --version

1.16.1-36264cb36db
1.16.1-36264cb36db
1.16.1-36264cb36db
```
If the versions do not match or if there is an error, double-check that all the binaries were correctly installed and are accessible within your `$PATH`. 2. **Ensure all binaries are in the same directory** - all the binaries must be in the same directory for the Polkadot validator node to function properly. If the binaries are not in the same location, move them to a unified directory and ensure this directory is added to your system's `$PATH` To verify the `$PATH`, run the following command: ```bash echo $PATH ``` If necessary, you can move the binaries to a shared location, such as `/usr/local/bin/`, and add it to your `$PATH`. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/onboarding-and-offboarding/start-validating/ --- BEGIN CONTENT --- --- title: Start Validating description: Learn how to start validating on Polkadot by choosing a network, syncing your node, bonding DOT tokens, and activating your validator. categories: Infrastructure --- # Start Validating ## Introduction After configuring your node keys as shown in the [Key Management](/infrastructure/running-a-validator/onboarding-and-offboarding/key-management){target=\_blank} section and ensuring your system is set up, you're ready to begin the validator setup process. This guide will walk you through choosing a network, synchronizing your node with the blockchain, bonding your DOT tokens, and starting your validator. ## Choose a Network Running your validator on a test network like Westend or Kusama is a smart way to familiarize yourself with the process and identify any setup issues in a lower-stakes environment before joining the Polkadot MainNet. - **Westend** - Polkadot's primary TestNet is open to anyone for testing purposes. Validator slots are intentionally limited to keep the network stable for the Polkadot release process, so it may not support as many validators at any given time - **Kusama** - often called Polkadot's "canary network," Kusama has real economic value but operates with a faster and more experimental approach. Running a validator here provides an experience closer to MainNet with the benefit of more frequent validation opportunities with an era time of 6 hours vs 24 hours for Polkadot - **Polkadot** - the main network, where validators secure the Polkadot relay chain. It has a slower era time of 24 hours and requires a higher minimum bond amount to participate ## Synchronize Chain Data The next step is to sync your node with the chosen blockchain network. Synchronization is necessary to download and validate the blockchain data, ensuring your node is ready to participate as a validator. Follow these steps to sync your node: 1. **Start syncing** - you can run a full or warp sync === "Full sync" Polkadot defaults to using a full sync, which downloads and validates the entire blockchain history from the genesis block. Start the syncing process by running the following command: ```sh polkadot ``` This command starts your Polkadot node in non-validator mode, allowing you to synchronize the chain data. === "Warp sync" You can opt to use warp sync which initially downloads only GRANDPA finality proofs and the latest finalized block's state. Use the following command to start a warp sync: ``` bash polkadot --sync warp ``` Warp sync ensures that your node quickly updates to the latest finalized state. The historical blocks are downloaded in the background as the node continues to operate. 
If you're planning to run a validator on a TestNet, you can specify the chain using the `--chain` flag. For example, the following will start a node on Kusama: ```sh
polkadot --chain=kusama
``` 2. **Monitor sync progress** - once the sync starts, you will see a stream of logs providing information about the node's status and progress. Here's an example of what the output might look like:
```
polkadot
2021-06-17 03:07:07 Parity Polkadot
2021-06-17 03:07:07 ✌️ version 0.9.5-95f6aa201-x86_64-linux-gnu
2021-06-17 03:07:07 ❤️ by Parity Technologies <admin@parity.io>, 2017-2021
2021-06-17 03:07:07 📋 Chain specification: Polkadot
2021-06-17 03:07:07 🏷 Node name: boiling-pet-7554
2021-06-17 03:07:07 👤 Role: FULL
2021-06-17 03:07:07 💾 Database: RocksDb at /root/.local/share/polkadot/chains/polkadot/db
2021-06-17 03:07:07 ⛓ Native runtime: polkadot-9050 (parity-polkadot-0.tx7.au0)
2021-06-17 03:07:10 🏷 Local node identity is: 12D3KooWLtXFWf1oGrnxMGmPKPW54xWCHAXHbFh4Eap6KXmxoi9u
2021-06-17 03:07:10 📦 Highest known block at #17914
2021-06-17 03:07:10 〽️ Prometheus server started at 127.0.0.1:9615
2021-06-17 03:07:10 Listening for new connections on 127.0.0.1:9944
...
```
The output logs provide information such as the current block number, node name, and network connections. Monitor the sync progress and any errors that might occur during the process. Look for information about the latest processed block and compare it with the current highest block using tools like [Telemetry](https://telemetry.polkadot.io/#list/Polkadot%20CC1){target=\_blank} or [Polkadot.js Apps Explorer](https://polkadot.js.org/apps/#/explorer){target=\_blank}. ### Database Snapshot Services If you'd like to speed up the process further, you can use a database snapshot. Snapshots are compressed backups of the blockchain's database directory and can significantly reduce the time required to sync a new node. Here are a few public snapshot providers: - [Stakeworld](https://stakeworld.io/snapshot){target=\_blank} - [Polkachu](https://polkachu.com/substrate_snapshots){target=\_blank} - [Polkashots](https://polkashots.io/){target=\_blank} !!!warning Although snapshots are convenient, syncing from scratch is recommended for security purposes. If snapshots become corrupted and most nodes rely on them, the network could inadvertently run on a non-canonical chain.
```
polkadot
2021-06-17 03:07:07 Idle (0 peers), best: #0 (0x3fd7...5baf), finalized #0 (0x3fd7...5baf), ⬇ 2.9kiB/s ⬆ 3.7kiB/s
2021-06-17 03:07:12 Idle (0 peers), best: #0 (0x3fd7...5baf), finalized #0 (0x3fd7...5baf), ⬇ 1.7kiB/s ⬆ 2.0kiB/s
2021-06-17 03:07:17 Idle (0 peers), best: #0 (0x3fd7...5baf), finalized #0 (0x3fd7...5baf), ⬇ 0.9kiB/s ⬆ 1.2kiB/s
2021-06-17 03:07:19 Libp2p => Random Kademlia query has yielded empty results
2021-06-17 03:08:00 Idle (0 peers), best: #0 (0x3fd7...5baf), finalized #0 (0x3fd7...5baf), ⬇ 1.6kiB/s ⬆ 1.9kiB/s
2021-06-17 03:08:05 Idle (0 peers), best: #0 (0x3fd7...5baf), finalized #0 (0x3fd7...5baf), ⬇ 0.6kiB/s ⬆ 0.9kiB/s
...
```
If you see terminal output similar to the preceding, and you are unable to synchronize the chain due to having zero peers, make sure the libp2p port (`30333` by default) is open. It will take some time to discover other peers over the network. ## Bond DOT Once your validator node is synced, the next step is bonding DOT. A bonded account, or stash, holds your staked tokens (DOT) that back your validator node. Bonding your DOT means locking it for a period, during which it cannot be transferred or spent but is used to secure your validator's role in the network. Visit the [Minimum Bond Requirement](/infrastructure/running-a-validator/requirements/#minimum-bond-requirement) section for details on how much DOT is required. The following sections will guide you through bonding DOT for your validator. ### Bonding DOT on Polkadot.js Apps Once you're ready to bond your DOT, head over to the [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank} staking page by clicking the **Network** dropdown at the top of the page and selecting [**Staking**](https://polkadot.js.org/apps/#/staking/actions){target=\_blank}. To get started with the bond submission, click on the **Accounts** tab, then the **+ Stash** button, and then enter the following information: 1. **Stash account** - select your stash account (which is the account with the DOT/KSM balance) 2. **Value bonded** - enter how much DOT from the stash account you want to bond/stake. You are not required to bond all of the DOT in that account and you may bond more DOT at a later time. Be aware that withdrawing any bonded amount requires waiting for the unbonding period. The unbonding period is seven days for Kusama and 28 days for Polkadot 3. **Payment destination** - add the recipient account for validator rewards. If you'd like to redirect payments to an account that is not the stash account, you can do it by entering the address here. Note that it is extremely unsafe to set an exchange address as the recipient of the staking rewards Once everything is filled in properly, select **Bond** and sign the transaction with your stash account. If successful, you should see an `ExtrinsicSuccess` message. Your bonded account will be available under **Stashes**. After refreshing the screen, you should now see a card with all your accounts. The bonded amount on the right corresponds to the funds bonded by the stash account. ## Validate Once your validator node is fully synced and ready, the next step is to ensure it's visible on the network and performing as expected. Below are steps for monitoring and managing your node on the Polkadot network. ### Verify Sync via Telemetry To confirm that your validator is live and synchronized with the Polkadot network, visit the [Telemetry](https://telemetry.polkadot.io/#list/Polkadot%20CC1){target=\_blank} page. Telemetry provides real-time information on node performance and can help you check if your validator is connected properly. Search for your node by name. You can search all nodes currently active on the network, which is why you should use a unique name for easy recognition. Now, confirm that your node is fully synced by comparing the block height of your node with the network's latest block. Nodes that are fully synced will appear white in the list, while nodes that are not yet fully synced will appear gray. 
In the following example, a node named `techedtest` is successfully located and synchronized, ensuring it's prepared to participate in the network: ![Polkadot telemetry dashboard](/images/infrastructure/running-a-validator/onboarding-and-offboarding/start-validating/start-validating-01.webp) ### Activate using Polkadot.js Apps Follow these steps to use Polkadot.js Apps to activate your validator: 1. Go to the **Validator** tab in the Polkadot.js Apps UI and locate the section where you input the keys generated from `rotateKeys`. Paste the output from `author_rotateKeys`, which is a hex-encoded key that links your validator with its session keys: ![](/images/infrastructure/running-a-validator/onboarding-and-offboarding/start-validating/start-validating-02.webp) 2. Set a reward commission percentage if desired. You can set a percentage of the rewards to pay to your validator, and the remainder is paid to your nominators. A 100% commission rate indicates the validator intends to keep all rewards and is seen as a signal that the validator is not seeking nominators 3. Toggle the **allows new nominations** option if your validator is open to more nominations from DOT holders 4. Once everything is configured, select **Bond & Validate** to activate your validator status ![](/images/infrastructure/running-a-validator/onboarding-and-offboarding/start-validating/start-validating-03.webp) 5. Edit the **commission** and the **blocked** option via the `staking.validate` extrinsic. By default, the blocked option is set to FALSE (i.e., the validator accepts nominations) ![](/images/infrastructure/running-a-validator/onboarding-and-offboarding/start-validating/start-validating-04.webp) ### Monitor Validation Status and Slots On the [**Staking**](https://polkadot.js.org/apps/#/staking){target=\_blank} tab in Polkadot.js Apps, you can see your validator's status, the number of available validator slots, and the nodes that have signaled their intent to validate. Your node may initially appear in the waiting queue, especially if the validator slots are full. The following is an example view of the **Staking** tab: ![staking queue](/images/infrastructure/running-a-validator/onboarding-and-offboarding/start-validating/start-validating-05.webp) The validator set refreshes each era. If there's an available slot in the next era, your node may be selected to move from the waiting queue to the active validator set, allowing it to start validating blocks. If your validator is not selected, it remains in the waiting queue. Increasing your stake or gaining more nominators may improve your chance of being selected in future eras. ## Run a Validator Using Systemd Running your Polkadot validator as a [systemd](https://en.wikipedia.org/wiki/Systemd){target=\_blank} service is an effective way to ensure its high uptime and reliability. Using systemd allows your validator to automatically restart after server reboots or unexpected crashes, significantly reducing the risk of slashing due to downtime. The following sections will walk you through creating and managing a systemd service for your validator, allowing you to seamlessly monitor and control it as part of your Linux system. 
Ensure the following requirements are met before proceeding with the systemd setup: - Confirm your system meets the [requirements](/infrastructure/running-a-validator/requirements/){target=\_blank} for running a validator - Ensure you meet the [minimum bond requirements](https://wiki.polkadot.network/general/chain-state-values/#minimum-validator-bond){target=\_blank} for validating - Verify the Polkadot binary is [installed](#install-the-polkadot-binaries) ### Create the Systemd Service File First, create a new unit file called `polkadot-validator.service` in `/etc/systemd/system/`: ```bash
touch /etc/systemd/system/polkadot-validator.service
``` In this unit file, you will write the commands that you want to run on server boot/restart: ```systemd title="/etc/systemd/system/polkadot-validator.service"
[Unit]
Description=Polkadot Node
After=network.target
Documentation=https://github.com/paritytech/polkadot-sdk

[Service]
EnvironmentFile=-/etc/default/polkadot
ExecStart=/usr/bin/polkadot $POLKADOT_CLI_ARGS
User=polkadot
Group=polkadot
Restart=always
RestartSec=120
CapabilityBoundingSet=
LockPersonality=true
NoNewPrivileges=true
PrivateDevices=true
PrivateMounts=true
PrivateTmp=true
PrivateUsers=true
ProtectClock=true
ProtectControlGroups=true
ProtectHostname=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectSystem=strict
RemoveIPC=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_NETLINK AF_UNIX
RestrictNamespaces=false
RestrictSUIDSGID=true
SystemCallArchitectures=native
SystemCallFilter=@system-service
SystemCallFilter=landlock_add_rule landlock_create_ruleset landlock_restrict_self seccomp mount umount2
SystemCallFilter=~@clock @module @reboot @swap @privileged
SystemCallFilter=pivot_root
UMask=0027

[Install]
WantedBy=multi-user.target
``` !!! warning "Restart delay and equivocation risk" It is recommended that a node's restart be delayed with `RestartSec` in the case of a crash. It's possible that when a node crashes, consensus votes in GRANDPA aren't persisted to disk. In this case, there is potential to equivocate when immediately restarting. Delaying the restart will allow the network to progress past potentially conflicting votes. ### Run the Service Activate the systemd service to start on system boot by running: ```bash
systemctl enable polkadot-validator.service
``` To start the service manually, use: ```bash
systemctl start polkadot-validator.service
``` Check the service's status to confirm it is running: ```bash
systemctl status polkadot-validator.service
``` To view the logs in real time, use [journalctl](https://www.freedesktop.org/software/systemd/man/latest/journalctl.html){target=\_blank} like so: ```bash
journalctl -f -u polkadot-validator
``` With these steps, you can effectively manage and monitor your validator as a systemd service. Once your validator is active, it's officially part of Polkadot's security infrastructure. For questions or further support, you can reach out to the [Polkadot Validator chat](https://matrix.to/#/!NZrbtteFeqYKCUGQtr:matrix.parity.io?via=matrix.parity.io&via=matrix.org&via=web3.foundation){target=\_blank} for tips and troubleshooting. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/onboarding-and-offboarding/stop-validating/ --- BEGIN CONTENT --- --- title: Stop Validating description: Learn to safely stop validating on Polkadot, including chilling, unbonding tokens, and purging validator keys. 
categories: Infrastructure --- # Stop Validating ## Introduction If you're ready to stop validating on Polkadot, there are essential steps to ensure a smooth transition while protecting your funds and account integrity. Whether you're taking a break for maintenance or unbonding entirely, you'll need to chill your validator, purge session keys, and unbond your tokens. This guide explains how to use Polkadot's tools and extrinsics to safely withdraw from validation activities, safeguarding your account's future usability. ## Pause Versus Stop If you wish to remain a validator or nominator (for example, stopping for planned downtime or server maintenance), submitting the `chill` extrinsic in the `staking` pallet should suffice. Additional steps are only needed to unbond funds or reap an account. Follow these steps to ensure a smooth stop to validation: - Chill the validator - Purge validator session keys - Unbond your tokens ## Chill Validator When stepping back from validating, the first step is to chill your validator status. This action stops your validator from being considered for the next era without fully unbonding your tokens, which can be useful for temporary pauses like maintenance or planned downtime. Use the `staking.chill` extrinsic to initiate this. For more guidance on chilling your node, refer to the [Pause Validating](/infrastructure/running-a-validator/operational-tasks/pause-validating/){target=\_blank} guide. You may also claim any pending staking rewards at this point. ## Purge Validator Session Keys Purging validator session keys is a critical step in removing the association between your validator account and its session keys, which ensures that your account is fully disassociated from validator activities. The `session.purgeKeys` extrinsic removes the reference to your session keys from the stash or staking proxy account that originally set them. Here are a couple of important things to know about purging keys: - **Account used to purge keys** - always purge keys using the same account you originally used to set them, usually your stash or staking proxy account. Using a different account may leave an unremovable reference to the session keys on the original account, preventing its reaping - **Account reaping issue** - failing to purge keys will prevent you from reaping (fully deleting) your stash account. If you attempt to transfer tokens without purging, you'll need to rebond, purge the session keys, unbond again, and wait through the unbonding period before any transfer ## Unbond Your Tokens After chilling your node and purging session keys, the final step is to unbond your staked tokens. This action removes them from staking and begins the unbonding period (usually 28 days for Polkadot and seven days for Kusama), after which the tokens will be transferable. To unbond tokens, go to **Network > Staking > Account Actions** on Polkadot.js Apps. Select your stash account, click on the dropdown menu, and choose **Unbond Funds**. Alternatively, you can use the `staking.unbond` extrinsic if you handle this via a staking proxy account. Once the unbonding period is complete, your tokens will be available for use in transactions or transfers outside of staking. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/operational-tasks/general-management/ --- BEGIN CONTENT --- --- title: General Management description: Optimize your Polkadot validator setup with advanced configuration techniques.
Learn how to boost performance, enhance security, and ensure seamless operations. categories: Infrastructure --- # General Management ## Introduction Validator performance is pivotal in maintaining the security and stability of the Polkadot network. As a validator, optimizing your setup ensures efficient transaction processing, minimizes latency, and maintains system reliability during high-demand periods. Proper configuration and proactive monitoring also help mitigate risks like slashing and service interruptions. This guide covers essential practices for managing a validator, including performance tuning techniques, security hardening, and tools for real-time monitoring. Whether you're fine-tuning CPU settings, configuring NUMA balancing, or setting up a robust alert system, these steps will help you build a resilient and efficient validator operation. ## Configuration Optimization For those seeking to optimize their validator's performance, the following configurations can improve responsiveness, reduce latency, and ensure consistent performance during high-demand periods. ### Deactivate Simultaneous Multithreading Polkadot validators operate primarily in single-threaded mode for critical tasks, so optimizing single-core CPU performance can reduce latency and improve stability. Deactivating simultaneous multithreading (SMT) can prevent virtual cores from affecting performance. SMT is called Hyper-Threading on Intel and 2-way SMT on AMD Zen. Take the following steps to deactivate every other (vCPU) core: 1. Loop through all the CPU cores and deactivate the virtual cores associated with them: ```bash for cpunum in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | \ cut -s -d, -f2- | tr ',' '\n' | sort -un) do echo 0 > /sys/devices/system/cpu/cpu$cpunum/online done ``` 2. To permanently save the changes, add `nosmt=force` to the `GRUB_CMDLINE_LINUX_DEFAULT` variable in `/etc/default/grub`: ```bash sudo nano /etc/default/grub # Add to GRUB_CMDLINE_LINUX_DEFAULT ``` ```config title="/etc/default/grub" -8<-- 'code/infrastructure/running-a-validator/operational-tasks/general-management/grub-config-01.js:1:7' ``` 3. Update GRUB to apply changes: ```bash sudo update-grub ``` 4. After the reboot, you should see that half of the cores are offline. To confirm, run: ```bash lscpu --extended ``` ### Deactivate Automatic NUMA Balancing Deactivating NUMA (Non-Uniform Memory Access) balancing for multi-CPU setups helps keep processes on the same CPU node, minimizing latency. Follow these steps: 1. Deactivate NUMA balancing in runtime: ```bash sysctl kernel.numa_balancing=0 ``` 2. Deactivate NUMA balancing permanently by adding `numa_balancing=disable` to the GRUB settings: ```bash sudo nano /etc/default/grub # Add to GRUB_CMDLINE_LINUX_DEFAULT ``` ```config title="/etc/default/grub" -8<-- 'code/infrastructure/running-a-validator/operational-tasks/general-management/grub-config-01.js:9:15' ``` 3. Update GRUB to apply changes: ```bash sudo update-grub ``` 4. Confirm the deactivation: ```bash sysctl -a | grep 'kernel.numa_balancing' ``` If you successfully deactivated NUMA balancing, the preceding command should return `0`. ### Spectre and Meltdown Mitigations [Spectre](https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)){target=\_blank} and [Meltdown](https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)){target=\_blank} are well-known CPU vulnerabilities that exploit speculative execution to access sensitive data.
These vulnerabilities have been patched in recent Linux kernels, but the mitigations can slightly impact performance, especially in high-throughput or containerized environments. If your security requirements allow it, you can deactivate specific mitigations, such as Spectre V2 and Speculative Store Bypass Disable (SSBD), to improve performance. To selectively deactivate the Spectre mitigations, take these steps: 1. Update the `GRUB_CMDLINE_LINUX_DEFAULT` variable in your `/etc/default/grub` configuration: ```bash sudo nano /etc/default/grub # Add to GRUB_CMDLINE_LINUX_DEFAULT ``` ```config title="/etc/default/grub" -8<-- 'code/infrastructure/running-a-validator/operational-tasks/general-management/grub-config-01.js:17:23' ``` 2. Update GRUB to apply changes and then reboot: ```bash sudo update-grub sudo reboot ``` This approach selectively deactivates the Spectre V2 and Spectre V4 mitigations, leaving other protections intact. For full security, keep mitigations activated unless there's a significant performance need, as disabling them could expose the system to potential attacks on affected CPUs. ## Monitor Your Node Monitoring your node's performance is critical for network reliability and security. Tools like the following provide valuable insights: - **[Prometheus](https://prometheus.io/){target=\_blank}** - an open-source monitoring toolkit for collecting and querying time-series data - **[Grafana](https://grafana.com/){target=\_blank}** - a visualization tool for real-time metrics, providing interactive dashboards - **[Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/){target=\_blank}** - a tool for managing and routing alerts based on Prometheus data. This section covers setting up these tools and configuring alerts to notify you of potential issues. ### Environment Setup Before installing Prometheus, ensure the environment is set up securely by running Prometheus with restricted user privileges. Follow these steps: 1. Create a Prometheus user to ensure Prometheus runs with minimal permissions: ```bash sudo useradd --no-create-home --shell /usr/sbin/nologin prometheus ``` 2. Create directories for configuration and data storage: ```bash sudo mkdir /etc/prometheus sudo mkdir /var/lib/prometheus ``` 3. Change directory ownership to ensure Prometheus has access: ```bash sudo chown -R prometheus:prometheus /etc/prometheus sudo chown -R prometheus:prometheus /var/lib/prometheus ``` ### Install and Configure Prometheus After setting up the environment, install and configure the latest version of Prometheus as follows: 1. Download Prometheus for your system architecture from the [releases page](https://github.com/prometheus/prometheus/releases/){target=\_blank}. Replace `INSERT_RELEASE_DOWNLOAD_LINK` with the release binary URL (e.g., `https://github.com/prometheus/prometheus/releases/download/v3.0.0/prometheus-3.0.0.linux-amd64.tar.gz`): ```bash sudo apt-get update && sudo apt-get upgrade wget INSERT_RELEASE_DOWNLOAD_LINK tar xfz prometheus-*.tar.gz cd prometheus-3.0.0.linux-amd64 ``` 2. Set up Prometheus: 1. Copy binaries: ```bash sudo cp ./prometheus /usr/local/bin/ sudo cp ./promtool /usr/local/bin/ ``` 2. Copy directories and assign ownership of these files to the `prometheus` user: ```bash sudo cp -r ./consoles /etc/prometheus sudo cp -r ./console_libraries /etc/prometheus sudo chown -R prometheus:prometheus /etc/prometheus/consoles sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries ``` 3.
Clean up the download directory: ```bash cd .. && rm -r prometheus* ``` 3. Create `prometheus.yml` to define global settings, rule files, and scrape targets: ```bash sudo nano /etc/prometheus/prometheus.yml ``` ```yaml title="prometheus-config.yml" global: scrape_interval: 15s evaluation_interval: 15s rule_files: # - "first.rules" # - "second.rules" scrape_configs: - job_name: 'prometheus' scrape_interval: 5s static_configs: - targets: ['localhost:9090'] - job_name: 'substrate_node' scrape_interval: 5s static_configs: - targets: ['localhost:9615'] ``` In this example configuration, Prometheus scrapes itself every 5 seconds, ensuring detailed internal metrics. Node metrics are scraped from port `9615` by default, with a customizable interval. 4. Verify the configuration with `promtool`, the validation tool bundled with Prometheus: ```bash promtool check config /etc/prometheus/prometheus.yml ``` 5. Save the configuration and change the ownership of the file to the `prometheus` user: ```bash sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml ``` ### Start Prometheus 1. Launch Prometheus with the appropriate configuration file, storage location, and necessary web resources, running it with restricted privileges for security: ```bash sudo -u prometheus /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml \ --storage.tsdb.path /var/lib/prometheus/ \ --web.console.templates=/etc/prometheus/consoles \ --web.console.libraries=/etc/prometheus/console_libraries ``` If you set the server up properly, you should see terminal output similar to the following: -8<-- 'code/infrastructure/running-a-validator/operational-tasks/general-management/terminal-ouput-01.html' 2. Verify you can access the Prometheus interface by navigating to: ```text http://SERVER_IP_ADDRESS:9090/graph ``` If the interface appears to work as expected, exit the process using `Control + C`. 3. Create a systemd service file to ensure Prometheus starts on boot: ```bash sudo nano /etc/systemd/system/prometheus.service ``` ```bash title="prometheus.service" [Unit] Description=Prometheus Monitoring Wants=network-online.target After=network-online.target [Service] User=prometheus Group=prometheus Type=simple ExecStart=/usr/local/bin/prometheus \ --config.file /etc/prometheus/prometheus.yml \ --storage.tsdb.path /var/lib/prometheus/ \ --web.console.templates=/etc/prometheus/consoles \ --web.console.libraries=/etc/prometheus/console_libraries ExecReload=/bin/kill -HUP $MAINPID [Install] WantedBy=multi-user.target ``` 4. Reload systemd and enable the service to start on boot: ```bash sudo systemctl daemon-reload && sudo systemctl enable prometheus && sudo systemctl start prometheus ``` 5. Verify the service is running by visiting the Prometheus interface again at: ```text http://SERVER_IP_ADDRESS:9090/ ``` ### Install and Configure Grafana This guide follows [Grafana's canonical installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/debian/#install-from-apt-repository){target=\_blank}. To install and configure Grafana, follow these steps: 1. Install Grafana prerequisites: ```bash sudo apt-get install -y apt-transport-https software-properties-common wget ``` 2. Import the [GPG key](https://gnupg.org/){target=\_blank}: ```bash sudo mkdir -p /etc/apt/keyrings/ wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null ``` 3.
Configure the stable release repo and update packages: ```bash echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list sudo apt-get update ``` 4. Install the latest stable version of Grafana: ```bash sudo apt-get install grafana ``` To configure Grafana, take these steps: 1. Configure Grafana to start automatically on boot and start the service: ```bash sudo systemctl daemon-reload sudo systemctl enable grafana-server.service sudo systemctl start grafana-server ``` 2. Check if Grafana is running: ```bash sudo systemctl status grafana-server ``` If necessary, you can stop or restart the service with the following commands: ```bash sudo systemctl stop grafana-server sudo systemctl restart grafana-server ``` 3. Access Grafana by navigating to the following URL and logging in with the default username and password (`admin`): ```text http://SERVER_IP_ADDRESS:3000/login ``` !!! tip "Change default port" To change Grafana's port, edit `/usr/share/grafana/conf/defaults.ini`: ```bash sudo vim /usr/share/grafana/conf/defaults.ini ``` Modify the `http_port` value, then restart Grafana: ```bash sudo systemctl restart grafana-server ``` ![Grafana login screen](/images/infrastructure/running-a-validator/operational-tasks/general-management/general-management-1.webp) To visualize node metrics, follow these steps: 1. Select the gear icon to access **Data Sources** settings 2. Select **Add data source** to define the data source ![Select Prometheus](/images/infrastructure/running-a-validator/operational-tasks/general-management/general-management-2.webp) 3. Select **Prometheus** ![Save and test](/images/infrastructure/running-a-validator/operational-tasks/general-management/general-management-3.webp) 4. Enter `http://localhost:9090` in the **URL** field and click **Save & Test**. If **"Data source is working"** appears, your connection is configured correctly ![Import dashboard](/images/infrastructure/running-a-validator/operational-tasks/general-management/general-management-4.webp) 5. Select **Import** from the left menu, choose **Prometheus** from the dropdown, and click **Import** 6. Start your Polkadot node by running `./polkadot`. You should now be able to monitor node performance, block height, network traffic, and tasks on the Grafana dashboard ![Live dashboard](/images/infrastructure/running-a-validator/operational-tasks/general-management/general-management-5.webp) The [Grafana dashboards](https://grafana.com/grafana/dashboards){target=\_blank} page features user-created dashboards made available for public use. For an example, see the [Substrate Node Metrics](https://grafana.com/grafana/dashboards/21715-substrate-node-metrics/){target=\_blank} dashboard. ### Install and Configure Alertmanager [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/){target=\_blank} is an optional component that complements Prometheus by managing alerts and notifying users about potential issues. Follow these steps to install and configure Alertmanager: 1. Download Alertmanager for your system architecture from the [releases page](https://github.com/prometheus/alertmanager/releases){target=\_blank}. Replace `INSERT_RELEASE_DOWNLOAD_LINK` with the release binary URL (e.g., `https://github.com/prometheus/alertmanager/releases/download/v0.28.0-rc.0/alertmanager-0.28.0-rc.0.linux-amd64.tar.gz`): ```bash wget INSERT_RELEASE_DOWNLOAD_LINK tar -xvzf alertmanager* ``` 2.
Copy the binaries to the system directory and set permissions: ```bash cd alertmanager-0.28.0-rc.0.linux-amd64 sudo cp ./alertmanager /usr/local/bin/ sudo cp ./amtool /usr/local/bin/ sudo chown prometheus:prometheus /usr/local/bin/alertmanager sudo chown prometheus:prometheus /usr/local/bin/amtool ``` 3. Create the `alertmanager.yml` configuration file under `/etc/alertmanager`: ```bash sudo mkdir /etc/alertmanager sudo nano /etc/alertmanager/alertmanager.yml ``` Generate an [app password in your Google account](https://support.google.com/accounts/answer/185833?hl=en){target=\_blank} to enable email notifications from Alertmanager. Then, add the following code to the configuration file to define email notifications using your email and app password: ```yml title="alertmanager.yml" -8<-- 'code/infrastructure/running-a-validator/operational-tasks/general-management/alertmanager.yml' ``` 4. Assign ownership of the Alertmanager configuration directory to the `prometheus` user: ```bash sudo chown -R prometheus:prometheus /etc/alertmanager ``` 5. Configure Alertmanager as a service by creating a systemd service file: ```bash sudo nano /etc/systemd/system/alertmanager.service ``` ```yml title="alertmanager.service" -8<-- 'code/infrastructure/running-a-validator/operational-tasks/general-management/systemd-alert-config.md' ``` 6. Reload and enable the service: ```bash sudo systemctl daemon-reload sudo systemctl enable alertmanager sudo systemctl start alertmanager ``` 7. Verify the service status: ```bash sudo systemctl status alertmanager ``` If you have configured Alertmanager properly, the **Active** field should display **active (running)** similar to the following: -8<-- 'code/infrastructure/running-a-validator/operational-tasks/general-management/alertmanager-status.html' #### Grafana Plugin There is an [Alertmanager plugin in Grafana](https://grafana.com/grafana/plugins/alertmanager/){target=\_blank} that can help you monitor alert information. Follow these steps to use the plugin: 1. Install the plugin: ```bash sudo grafana-cli plugins install camptocamp-prometheus-alertmanager-datasource ``` 2. Restart Grafana: ```bash sudo systemctl restart grafana-server ``` 3. Configure Alertmanager as a data source in your Grafana dashboard (`SERVER_IP:3000`): 1. Go to **Configuration** > **Data Sources** and search for **Prometheus Alertmanager** 2. Enter the server URL and port for the Alertmanager service, and select **Save & Test** to verify the connection 4. Import the [8010](https://grafana.com/grafana/dashboards/8010-prometheus-alertmanager/){target=\_blank} dashboard for Alertmanager, selecting **Prometheus Alertmanager** in the last column, then select **Import** #### Integrate Alertmanager Complete the integration by following these steps to enable communication between Prometheus and Alertmanager and configure detection and alert rules: 1. Update the `/etc/prometheus/prometheus.yml` configuration file to include the following code: ```yml title="prometheus.yml" rule_files: - 'rules.yml' alerting: alertmanagers: - static_configs: - targets: - localhost:9093 ``` Expand the following item to view the complete `prometheus.yml` file. ??? code "prometheus.yml" ```yml title="prometheus.yml" global: scrape_interval: 15s evaluation_interval: 15s rule_files: - 'rules.yml' alerting: alertmanagers: - static_configs: - targets: - localhost:9093 scrape_configs: - job_name: 'prometheus' scrape_interval: 5s static_configs: - targets: ['localhost:9090'] - job_name: 'substrate_node' scrape_interval: 5s static_configs: - targets: ['localhost:9615'] ``` 2.
Create the rules file for detection and alerts: ```bash sudo nano /etc/prometheus/rules.yml ``` Add a sample rule to trigger email notifications for node downtime over five minutes: ```yml title="rules.yml" -8<-- 'code/infrastructure/running-a-validator/operational-tasks/general-management/instance-down.yml' ``` If any of the conditions defined in the rules file are met, an alert will be triggered. For more on alert rules, refer to [Alerting Rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/){target=\_blank} and [additional alerts](https://samber.github.io/awesome-prometheus-alerts/rules.html){target=\_blank}. 3. Update the file ownership to `prometheus`: ```bash sudo chown prometheus:prometheus rules.yml ``` 4. Validate the rules syntax: ```bash sudo -u prometheus promtool check rules rules.yml ``` 5. Restart Prometheus and Alertmanager: ```bash sudo systemctl restart prometheus && sudo systemctl restart alertmanager ``` Now you will receive an email alert if one of your rule-triggering conditions is met. ## Secure Your Validator Validators in Polkadot's Proof of Stake (PoS) network play a critical role in maintaining network integrity and security by keeping the network in consensus and verifying state transitions. To ensure optimal performance and minimize risks, validators must adhere to strict guidelines around security and reliable operations. ### Key Management Though they don't transfer funds, session keys are essential for validators as they sign messages related to consensus and parachains. Securing session keys is crucial: if they are exploited or used across multiple nodes, this can lead to a loss of staked funds via [slashing](/infrastructure/staking-mechanics/offenses-and-slashes/){target=\_blank}. Given the current limitations in high-availability setups and the risks associated with double-signing, it’s recommended to run only a single validator instance. Keys should be securely managed, and processes automated to minimize human error. There are two approaches for generating session keys: - **Generate and store in node** - using the `author.rotateKeys` RPC call. For most users, generating keys directly within the client is recommended. You must submit a session certificate from your staking proxy to register new keys. See the [How to Validate](/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/){target=\_blank} guide for instructions on setting keys - **Generate outside node and insert** - using the `author.insertKey` RPC call. This flexibility accommodates advanced security setups and should only be used by experienced validator operators ### Signing Outside the Client Polkadot plans to support external signing, allowing session keys to reside in secure environments like Hardware Security Modules (HSMs). However, these modules can sign any payload they receive, potentially enabling an attacker to perform slashable actions. ### Secure-Validator Mode Polkadot's Secure-Validator mode offers an extra layer of protection through strict filesystem, networking, and process sandboxing. This secure mode is activated by default if the machine meets the following requirements: - **Linux (x86-64 architecture)** - usually Intel or AMD - **Enabled `seccomp`** - this kernel feature facilitates a more secure approach for process management on Linux.
Verify by running: ```bash cat /boot/config-`uname -r` | grep CONFIG_SECCOMP= ``` If `seccomp` is enabled, you should see output similar to the following: ```bash CONFIG_SECCOMP=y ``` !!! tip Optionally, **Linux 5.13** may also be used, as it provides access to even more strict filesystem protections. ### Linux Best Practices Follow these best practices to keep your validator secure: - Use a non-root user for all operations - Regularly apply OS security patches - Enable and configure a firewall - Use key-based SSH authentication; deactivate password-based login - Regularly back up data and harden your SSH configuration. Visit this [SSH guide](https://blog.stribik.technology/2015/01/04/secure-secure-shell.html){target=\_blank} for more details ### Validator Best Practices The following best practices can add another layer of security and operational reliability: - Only run the Polkadot binary, and only listen on the configured p2p port - Run on bare-metal machines, as opposed to virtual machines - Provisioning of the validator machine should be automated and defined in code which is kept in private version control, reviewed, audited, and tested - Generate and provide session keys in a secure way - Start Polkadot at boot and restart if stopped for any reason - Run Polkadot as a non-root user - Establish and maintain an on-call rotation for managing alerts - Establish and maintain a clear protocol with actions to perform for each level of each alert with an escalation policy ## Additional Resources - [Certus One's Knowledge Base](https://knowledgebase.certus.com/FAQ/){target=\_blank} - [EOS Block Producer Security List](https://github.com/slowmist/eos-bp-nodes-security-checklist){target=\_blank} - [HSM Policies and the Importance of Validator Security](https://medium.com/loom-network/hsm-policies-and-the-importance-of-validator-security-ec8a4cc1b6f){target=\_blank} For additional guidance, connect with other validators and the Polkadot engineering team in the [Polkadot Validator Lounge](https://matrix.to/#/#polkadotvalidatorlounge:web3.foundation){target=\_blank} on Element. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/operational-tasks/ --- BEGIN CONTENT --- --- title: Operational Tasks description: Learn how to manage your Polkadot validator node, including monitoring performance, running a backup validator for maintenance, and rotating keys. template: index-page.html --- # Operational Tasks Running a Polkadot validator node involves several key operational tasks to ensure secure and efficient participation in the network. In this section, you'll learn how to manage and maintain your validator node by monitoring its performance, conducting regular maintenance, and ensuring high availability through strategies like running a backup validator. You'll also find instructions on rotating your session keys to enhance security and minimize vulnerabilities. Mastering these tasks is essential for maintaining a reliable and trusted presence within your network. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/operational-tasks/pause-validating/ --- BEGIN CONTENT --- --- title: Pause Validating description: Learn how to temporarily pause staking activity in Polkadot using the chill extrinsic, with guidance for validators and nominators.
categories: Infrastructure --- # Pause Validating ## Introduction If you need to temporarily stop participating in Polkadot staking activities without fully unbonding your funds, chilling your account allows you to do so efficiently. Chilling removes your node from active validation or nomination in the next era while keeping your funds bonded, making it ideal for planned downtimes or temporary pauses. This guide covers the steps for chilling as a validator or nominator, using the `chill` and `chillOther` extrinsics, and how these affect your staking status and nominations. ## Chilling Your Node If you need to temporarily step back from staking without unbonding your funds, you can "chill" your account. Chilling pauses your active staking participation, setting your account to inactive in the next era while keeping your funds bonded. To chill your account, go to the **Network > Staking > Account Actions** page on [Polkadot.js Apps](https://polkadot.js.org/apps){target=\_blank}, and select **Stop**. Alternatively, you can call the [`chill`](https://paritytech.github.io/polkadot-sdk/master/pallet_staking/enum.Call.html#variant.chill){target=\_blank} extrinsic in the Staking pallet. ## Staking Election Timing Considerations When a node actively participates in staking but then chills, it will continue contributing for the remainder of the current era. However, its eligibility for the next election depends on the chill status at the start of the new era: - **Chilled during previous era** - will not participate in the current era election and will remain inactive until reactivated - **Chilled during current era** - will not be selected for the next era's election - **Chilled after current era** - may be selected if it was active during the previous era and is now chilled ## Chilling as a Nominator When you choose to chill as a nominator, your active nominations are reset. Upon re-entering the nominating process, you must reselect validators to support manually. Depending on preferences, these can be the same validators as before or a new set. Remember that your previous nominations won’t be saved or automatically reactivated after chilling. While chilled, your nominator account remains bonded, preserving your staked funds without requiring a full unbonding process. When you’re ready to start nominating again, you can issue a new nomination call to activate your bond with a fresh set of validators. This process bypasses the need for re-bonding, allowing you to maintain your stake while adjusting your involvement in active staking. ## Chilling as a Validator When you chill as a validator, your active validator status is paused. Although your nominators remain bonded to you, the validator bond will no longer appear as an active choice for new or revised nominations until reactivated. Any existing nominators who take no action will still have their stake linked to the validator, meaning they don’t need to reselect the validator upon reactivation. However, if nominators adjust their stakes while the validator is chilled, they will not be able to nominate the chilled validator until it resumes activity. Upon reactivating as a validator, you must also reconfigure your validator preferences, such as commission rate and other parameters. These can be set to match your previous configuration or updated as desired. This step is essential for rejoining the active validator set and regaining eligibility for nominations.
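For operators who manage staking programmatically, the following TypeScript sketch shows how the `staking.chill` and `staking.validate` extrinsics described above can be submitted with [`@polkadot/api`](https://polkadot.js.org/docs/api/){target=\_blank}. This is an illustrative sketch, not an official script; the endpoint and account URI are placeholders to adapt to your own setup:

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function main() {
  // Placeholder endpoint; point this at your own node or a trusted RPC provider
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });

  const keyring = new Keyring({ type: 'sr25519' });
  // Placeholder URI; sign with your stash or staking proxy account
  const account = keyring.addFromUri('//INSERT_SECRET_URI');

  // Pause: remove this validator from consideration starting with the next era
  await api.tx.staking.chill().signAndSend(account);

  // Resume later: validator preferences must be set again after chilling.
  // Commission is expressed in parts per billion (5% = 50_000_000 of 1_000_000_000).
  await api.tx.staking
    .validate({ commission: 50_000_000, blocked: false })
    .signAndSend(account);

  await api.disconnect();
}

main().catch(console.error);
```

The two calls are shown together only for brevity; in practice, you would chill, complete your downtime, and submit `staking.validate` once you are ready to resume.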
## Chill Other Historical constraints in the runtime prevented unlimited nominators and validators from being supported. These constraints created a need for checks to keep the size of the staking system manageable. One of these checks was the `chillOther` extrinsic, which allowed users to chill accounts that no longer met standards, such as minimum staking requirements set through on-chain governance. This control mechanism included a `ChillThreshold`, which defined how close to the maximum number of nominators or validators the staking system could get before users could start chilling one another. With the passage of [Referendum #90](https://polkadot.polkassembly.io/referendum/90){target=\_blank}, the value for `maxNominatorCount` on Polkadot was set to `None`, effectively removing the limit on how many nominators and validators can participate. This means the `ChillThreshold` will never be met; thus, `chillOther` no longer has any effect. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/operational-tasks/upgrade-your-node/ --- BEGIN CONTENT --- --- title: Upgrade a Validator Node description: Guide to seamlessly upgrading your Polkadot validator node, managing session keys, and executing server maintenance while avoiding downtime and slashing risks. categories: Infrastructure --- # Upgrade a Validator Node ## Introduction Upgrading a Polkadot validator node is essential for staying current with network updates and maintaining optimal performance. This guide covers routine and extended maintenance scenarios, including software upgrades and major server changes. Following these steps, you can manage session keys and transition smoothly between servers without risking downtime, slashing, or network disruptions. The process requires strategic planning, especially if you need to perform long-lead maintenance, ensuring your validator remains active and compliant. This guide shows how to seamlessly substitute an active validator server to allow for maintenance operations. The process can take several hours, so ensure you understand the instructions first and plan accordingly. ## Prerequisites Before beginning the upgrade process for your validator node, ensure the following: - You have a fully functional validator setup with all required binaries installed. See [Set Up a Validator](/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/){target=\_blank} and [Validator Requirements](/infrastructure/running-a-validator/requirements/){target=\_blank} for additional guidance - Your VPS infrastructure has enough capacity to run a secondary validator instance temporarily for the upgrade process ## Session Keys Session keys are used to sign validator operations and establish a connection between your validator node and your staking proxy account. These keys are stored in the client, and any change to them requires a waiting period. Specifically, if you modify your session keys, the change will take effect only after the current session is completed and two additional sessions have passed. Remembering this delayed effect when planning upgrades is crucial to ensure that your validator continues to function correctly and avoids interruptions. To learn more about session keys and their importance, visit the [Keys section](https://wiki.polkadot.network/learn/learn-cryptography/#keys){target=\_blank}.
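To make this flow concrete, here is a minimal TypeScript sketch using [`@polkadot/api`](https://polkadot.js.org/docs/api/){target=\_blank} that rotates session keys on a node and registers them on-chain. The local endpoint, the account URI, and the empty proof argument are assumptions to adapt to your own setup:

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function rotateAndSetKeys() {
  // Connect to the validator node itself, since rotateKeys writes the new
  // keys into that node's keystore (assumes a local WebSocket RPC endpoint)
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:9944') });

  // Generate a fresh set of session keys inside the node's keystore
  const newKeys = await api.rpc.author.rotateKeys();

  const keyring = new Keyring({ type: 'sr25519' });
  // Placeholder URI; sign with your stash or staking proxy account
  const proxy = keyring.addFromUri('//INSERT_SECRET_URI');

  // Register the new keys on-chain ('0x' is an empty proof)
  await api.tx.session.setKeys(newKeys, '0x').signAndSend(proxy);

  // Record the session in which the keys were set: they only become active
  // after the current session completes plus two additional sessions
  const sessionIndex = await api.query.session.currentIndex();
  console.log(`setKeys submitted during session ${sessionIndex.toString()}`);

  await api.disconnect();
}

rotateAndSetKeys().catch(console.error);
```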
## Keystore Your validator server's `keystore` folder holds the private keys needed for signing network-level transactions. It is important not to duplicate or transfer this folder between validator instances. Doing so could result in multiple validators signing with the duplicate keys, leading to severe consequences such as [equivocation slashing](/infrastructure/staking-mechanics/offenses-and-slashes/#equivocation-slash){target=\_blank}. Instead, always generate new session keys for each validator instance. The default path to the `keystore` is as follows: ```bash /home/polkadot/.local/share/polkadot/chains//keystore ``` Taking care to manage your keys securely ensures that your validator operates safely and without the risk of slashing penalties. ## Upgrade Using Backup Validator The following instructions outline how to temporarily switch between two validator nodes. The original active validator is referred to as Validator A and the backup node used for maintenance purposes as Validator B. ### Session `N` 1. **Start Validator B** - launch a secondary node and wait until it is fully synced with the network. Once synced, start it with the `--validator` flag. This node will now act as Validator B 2. **Generate session keys** - create new session keys specifically for Validator B 3. **Submit the `set_key` extrinsic** - use your staking proxy account to submit a `set_key` extrinsic, linking the session keys for Validator B to your staking setup 4. **Record the session** - make a note of the session in which you executed this extrinsic 5. **Wait for session changes** - allow the current session to end and then wait for two additional full sessions for the new keys to take effect !!! warning "Keep Validator A running" It is crucial to keep Validator A operational during this entire waiting period. Since `set_key` does not take effect immediately, turning off Validator A too early may result in chilling or even slashing. ### Session `N+3` At this stage, Validator B becomes your active validator. You can now safely perform any maintenance tasks on Validator A. Complete the following steps when you are ready to bring Validator A back online: 1. **Start Validator A** - launch Validator A, sync the blockchain database, and ensure it is running with the `--validator` flag 2. **Generate new session keys for Validator A** - create fresh session keys for Validator A 3. **Submit the `set_key` extrinsic** - using your staking proxy account, submit a `set_key` extrinsic with the new Validator A session keys 4. **Record the session** - again, make a note of the session in which you executed this extrinsic Keep Validator B active until the session in which you executed the `set_key` extrinsic ends, plus two additional full sessions. Once Validator A has successfully taken over, you can safely stop Validator B. This process helps ensure a smooth handoff between nodes and minimizes the risk of downtime or penalties. Verify the transition by checking for finalized blocks in the new session. The logs should indicate the successful change, similar to the example below:
```text
2019-10-28 21:44:13 Applying authority set change scheduled at block #450092
2019-10-28 21:44:13 Applying GRANDPA set change to new set with 20 authorities
```
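If you prefer to confirm the handoff programmatically rather than reading logs, a small sketch like the following (assuming `@polkadot/api` and a locally reachable node; the endpoint is a placeholder) watches finalized heads to verify the chain keeps finalizing after the switch:

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';

// Watch finalized blocks to confirm finality continues after the handoff
// (placeholder endpoint; point at your own node)
async function watchFinality() {
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:9944') });
  const unsub = await api.rpc.chain.subscribeFinalizedHeads((header) => {
    console.log(`Finalized block #${header.number.toString()}`);
  });
  // Stop watching after 60 seconds and disconnect
  setTimeout(async () => {
    unsub();
    await api.disconnect();
  }, 60_000);
}

watchFinality().catch(console.error);
```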
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/running-a-validator/requirements/ --- BEGIN CONTENT --- --- title: Validator Requirements description: Explore the technical and system requirements for running a Polkadot validator, including setup, hardware, staking prerequisites, and security best practices. categories: Infrastructure --- # Validator Requirements ## Introduction Running a validator in the Polkadot ecosystem is essential for maintaining network security and decentralization. Validators are responsible for validating transactions and adding new blocks to the chain, ensuring the system operates smoothly. In return for their services, validators earn rewards. However, the role comes with inherent risks, such as slashing penalties for misbehavior or technical failures. If you’re new to validation, starting on Kusama provides a lower-stakes environment to gain valuable experience before progressing to the Polkadot network. This guide covers everything you need to know about becoming a validator, including system requirements, staking prerequisites, and infrastructure setup. Whether you’re deploying on a VPS or running your node on custom hardware, you’ll learn how to optimize your validator for performance and security, ensuring compliance with network standards while minimizing risks. ## Prerequisites Running a validator requires solid system administration skills and a secure, well-maintained infrastructure. Below are the primary requirements you need to be aware of before getting started: - **System administration expertise** - handling technical anomalies and maintaining node infrastructure is critical. Validators must be able to troubleshoot and optimize their setup - **Security** - ensure your setup follows best practices for securing your node. Refer to the [Secure Your Validator](/infrastructure/running-a-validator/operational-tasks/general-management/#secure-your-validator){target=\_blank} section to learn about important security measures - **Network choice** - start with [Kusama](/infrastructure/running-a-validator/onboarding-and-offboarding/set-up-validator/#run-a-kusama-validator){target=\_blank} to gain experience. Look for "Adjustments for Kusama" throughout these guides for tips on adapting the provided instructions for the Kusama network - **Staking requirements** - a minimum amount of native token (KSM or DOT) is required to be elected into the validator set. The required stake can come from your own holdings or from nominators - **Risk of slashing** - any DOT you stake is at risk if your setup fails or your validator misbehaves. If you’re unsure of your ability to maintain a reliable validator, consider nominating your DOT to a trusted validator ## Minimum Hardware Requirements Polkadot validators rely on high-performance hardware to process blocks efficiently. The recommended minimum hardware requirements to ensure a fully functional and performant validator are as follows: - **CPU**: - x86-64 compatible - Eight physical cores @ 3.4 GHz - Processor: - Intel - Ice Lake or newer (Xeon or Core series) - AMD - Zen3 or newer (EPYC or Ryzen) - Simultaneous multithreading disabled: - Intel - Hyper-Threading - AMD - SMT - [Single-threaded performance](https://www.cpubenchmark.net/singleThread.html){target=\_blank} is prioritized over a higher core count - **Storage**: - NVMe SSD - at least 2 TB recommended for blockchain data (prioritize latency over throughput) - Storage requirements will increase as the chain grows.
For current estimates, see the [current chain snapshot](https://stakeworld.io/docs/dbsize){target=\_blank} - **Memory**: - 32 GB DDR4 ECC - **Network**: - Symmetric networking speed of 500 Mbit/s is required to handle large numbers of parachains and ensure congestion control during peak times ## VPS Provider List When selecting a VPS provider for your validator node, prioritize reliability, consistent performance, and adherence to the specific hardware requirements set for Polkadot validators. The following server types have been tested and showed acceptable performance in benchmark tests. However, this is not an endorsement, and actual performance may vary depending on your workload and VPS provider. Be aware that some providers may overprovision the underlying host and use shared storage such as NVMe over TCP, which appears as local storage. These setups might result in poor or inconsistent performance. Benchmark your infrastructure before deploying. - [**Google Cloud Platform (GCP)**](https://cloud.google.com/){target=\_blank} - `c2` and `c2d` machine families offer high-performance configurations suitable for validators - [**Amazon Web Services (AWS)**](https://aws.amazon.com/){target=\_blank} - `c6id` machine family provides strong performance, particularly for I/O-intensive workloads - [**OVH**](https://www.ovhcloud.com/en-au/){target=\_blank} - can be a budget-friendly solution if it meets your minimum hardware specifications - [**Digital Ocean**](https://www.digitalocean.com/){target=\_blank} - popular among developers, Digital Ocean's premium droplets offer configurations suitable for medium to high-intensity workloads - [**Vultr**](https://www.vultr.com/){target=\_blank} - offers flexibility with plans that may meet validator requirements, especially for high-bandwidth needs - [**Linode**](https://www.linode.com/){target=\_blank} - provides detailed documentation, which can be helpful for setup - [**Scaleway**](https://www.scaleway.com/en/){target=\_blank} - offers high-performance cloud instances that can be suitable for validator nodes - [**OnFinality**](https://onfinality.io/){target=\_blank} - specialized in blockchain infrastructure, OnFinality provides validator-specific support and configurations !!! warning "Acceptable use policies" Different VPS providers have varying acceptable use policies, and not all allow cryptocurrency-related activities. For example, Digital Ocean requires explicit permission to use servers for cryptocurrency mining and defines unauthorized mining as [network abuse](https://www.digitalocean.com/legal/acceptable-use-policy#network-abuse){target=\_blank} in their acceptable use policy. Review the terms for your VPS provider to avoid account suspension or server shutdown due to policy violations. ## Minimum Bond Requirement Before bonding DOT, ensure you meet the minimum bond requirement to start a validator instance. The minimum bond is the least DOT you need to stake to enter the validator set. To become eligible for rewards, your validator node must be backed by enough staked tokens. For example, on November 19, 2024, the minimum stake backing a validator in Polkadot's era 1632 was 1,159,434.248 DOT.
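These values can also be read directly from chain state. The following TypeScript sketch (assuming `@polkadot/api` and a placeholder public endpoint) queries the bond floors enforced by the staking pallet; note that these self-bond minimums are distinct from the much larger stake typically needed to actually enter the active set:

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';

// Read the staking pallet's bond thresholds from chain state.
// Values are returned in planck (1 DOT = 10^10 planck).
async function bondThresholds() {
  const api = await ApiPromise.create({ provider: new WsProvider('wss://rpc.polkadot.io') });
  const minValidatorBond = await api.query.staking.minValidatorBond();
  const minNominatorBond = await api.query.staking.minNominatorBond();
  console.log(`Minimum validator bond: ${minValidatorBond.toString()} planck`);
  console.log(`Minimum nominator bond: ${minNominatorBond.toString()} planck`);
  await api.disconnect();
}

bondThresholds().catch(console.error);
```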
You can check the current minimum stake required using these tools: - [**Chain State Values**](https://wiki.polkadot.network/general/chain-state-values/){target=\_blank} - [**Subscan**](https://polkadot.subscan.io/validator_list?status=validator){target=\_blank} - [**Staking Dashboard**](https://staking.polkadot.cloud/#/overview){target=\_blank} --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/staking-mechanics/ --- BEGIN CONTENT --- --- title: Staking Mechanics description: Explore the staking mechanics in Polkadot, focusing on how they relate to validators, including offenses and slashes, as well as reward payouts. template: index-page.html --- # Staking Mechanics Gain a deep understanding of the staking mechanics in Polkadot, with a focus on how they impact validators. In this section, you'll explore key concepts such as offenses, slashing, and reward payouts, and learn how these mechanisms influence the behavior and performance of validators within the network. Understanding these elements is crucial for optimizing your validator's participation and ensuring alignment with Polkadot's governance and security protocols. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/infrastructure/staking-mechanics/offenses-and-slashes/ --- BEGIN CONTENT --- --- title: Offenses and Slashes description: Learn about how Polkadot discourages validator misconduct via an offenses and slashing system, including details on offenses and their consequences. categories: Infrastructure --- # Offenses and Slashes ## Introduction In Polkadot's Nominated Proof of Stake (NPoS) system, validator misconduct is deterred through a combination of slashing, disabling, and reputation penalties. Validators and nominators who stake tokens face consequences for validator misbehavior, which range from token slashes to restrictions on network participation. This page outlines the types of offenses recognized by Polkadot, including block equivocations and invalid votes, as well as the corresponding penalties. While some parachains may implement additional custom slashing mechanisms, this guide focuses on the offenses tied to staking within the Polkadot ecosystem. ## Offenses Polkadot is a public permissionless network. As such, it has a mechanism to disincentivize offenses and incentivize good behavior. You can review the [parachain protocol](https://wiki.polkadot.network/learn/learn-parachains-protocol/#parachain-protocol){target=\_blank} to better understand the terminology used to describe offenses. Polkadot validator offenses fall into two categories: invalid votes and equivocations. ### Invalid Votes A validator will be penalized for inappropriate voting activity during the block inclusion and approval processes. The offenses related to invalid voting are as follows: - **Backing an invalid block** - a para-validator backs an invalid block for inclusion in a fork of the relay chain - **`ForInvalid` vote** - when acting as a secondary checker, the validator votes in favor of an invalid block - **`AgainstValid` vote** - when acting as a secondary checker, the validator votes against a valid block. This type of vote wastes network resources required to resolve the disparate votes and resulting dispute ### Equivocations Equivocation occurs when a validator produces statements that conflict with each other when producing blocks or voting.
Unintentional equivocations usually occur when duplicate signing keys reside on the validator host. If keys are never duplicated, the probability of an honest equivocation slash decreases to near zero. The equivocation-related offenses are as follows: - **Equivocation** - the validator produces two or more of the same block or vote - **GRANDPA and BEEFY equivocation** - the validator signs two or more votes in the same round on different chains - **BABE equivocation** - the validator produces two or more blocks on the relay chain in the same time slot - **Double seconded equivocation** - the validator attempts to second, or back, more than one block in the same round - **Seconded and valid equivocation** - the validator seconds, or backs, a block and then attempts to hide their role as the responsible backer by later placing a standard validation vote ## Penalties On Polkadot, offenses to the network incur different penalties depending on severity. There are three main penalties: slashing, disabling, and reputation changes. ### Slashing Validators engaging in malicious behavior in the network may be subject to slashing if they commit a qualifying offense. When a validator is slashed, they and their nominators lose a percentage of their staked DOT or KSM, from as little as 0.01% up to 100% based on the severity of the offense. Nominators are evaluated for slashing against their active validations at any given time. Validator nodes are evaluated as discrete entities, meaning an operator can't attempt to mitigate the offense on another node they operate in order to avoid a slash. Any slashed DOT or KSM will be added to the [Treasury](https://wiki.polkadot.network/learn/learn-polkadot-opengov-treasury/){target=\_blank} rather than burned or distributed as rewards. Moving slashed funds to the Treasury allows tokens to be quickly moved away from malicious validators while maintaining the ability to revert faulty slashes when needed. A nominator with a very large bond may nominate several validators in a single era. In this case, a slash is proportionate to the amount staked to the offending validator. Stake allocation and validator activation is controlled by the [Phragmén algorithm](https://wiki.polkadot.network/learn/learn-phragmen/#understanding-phragm%C3%A9n){target=\_blank}. A validator slash creates an `unapplied` state transition. You can view pending slashes on [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.polkadot.io#/staking/slashes){target=\_blank}. The UI will display the slash per validator, the affected nominators, and the slash amounts. The unapplied state includes a 27-day grace period during which a governance proposal can be made to reverse the slash. Once this grace period expires, the slash is applied. #### Equivocation Slash The Web3 Foundation's [Slashing mechanisms](https://research.web3.foundation/Polkadot/security/slashing/amounts){target=\_blank} page provides guidelines for evaluating the security threat level of different offenses and determining penalties proportionate to the threat level of the offense. Offenses requiring coordination between validators or extensive computational costs to the system will typically call for harsher penalties than those that are more likely unintentional than malicious.
A description of potential offenses for each threat level and the corresponding penalties is as follows: - **Level 1** - honest misconduct such as isolated cases of unresponsiveness - **Penalty** - validator can be kicked out or slashed up to 0.1% of stake in the validator slot - **Level 2** - misconduct that can occur honestly but is a sign of bad practices. Examples include repeated cases of unresponsiveness and isolated cases of equivocation - **Penalty** - slash of up to 1% of stake in the validator slot - **Level 3** - misconduct that is likely intentional but of limited effect on the performance or security of the network. This level will typically include signs of coordination between validators. Examples include repeated cases of equivocation or isolated cases of unjustified voting on GRANDPA - **Penalty** - reduction in networking reputation metrics, slash of up to 10% of stake in the validator slot - **Level 4** - misconduct that poses severe security or monetary risk to the system or mass collusion. Examples include signs of extensive coordination, creating a serious security risk to the system, or forcing the system to use extensive resources to counter the misconduct - **Penalty** - slash of up to 100% of stake in the validator slot See the next section to understand how slash amounts for equivocations are calculated. If you want to know more details about slashing, please look at the research page on [Slashing mechanisms](https://research.web3.foundation/Polkadot/security/slashing/amounts){target=\_blank}. #### Slash Calculation for Equivocation The slashing penalty for GRANDPA, BABE, and BEEFY equivocations is calculated using the formula below, where `x` represents the number of offenders and `n` is the total number of validators in the active set: ```text min((3 * x / n)^2, 1) ``` The following scenarios demonstrate how slash percentages can increase exponentially under this formula based on the number of offenders involved compared to the size of the validator pool: - **Minor offense** - assume 1 validator out of a 100-validator active set equivocates in a slot. A single validator committing an isolated offense is most likely a mistake rather than a malicious attack on the network. This offense results in a 0.09% slash to the stake in the validator slot ``` mermaid flowchart LR N["Total Validators = 100"] X["Offenders = 1"] F["min((3 * 1 / 100)^2, 1) = 0.0009"] G["0.09% slash of stake"] N --> F X --> F F --> G ``` - **Moderate offense** - assume 5 validators out of a 100-validator active set equivocate in a slot. This is a slightly more serious event as there may be some element of coordination involved. This offense results in a 2.25% slash to the stake in the validator slot ``` mermaid flowchart LR N["Total Validators = 100"] X["Offenders = 5"] F["min((3 * 5 / 100)^2, 1) = 0.0225"] G["2.25% slash of stake"] N --> F X --> F F --> G ``` - **Major offense** - assume 20 validators out of a 100-validator active set equivocate in a slot. This is a major security threat as it possibly represents a coordinated attack on the network. This offense results in a 36% slash and all slashed validators will also be chilled ``` mermaid flowchart LR N["Total Validators = 100"] X["Offenders = 20"] F["min((3 * 20 / 100)^2, 1) = 0.36"] G["36% slash of stake"] N --> F X --> F F --> G ``` The examples above show the risk of nominating or running many validators in the active set.
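To make the arithmetic concrete, here is a small TypeScript sketch of the same formula. It is illustrative only; the runtime performs this calculation on-chain:

```typescript
// Slash fraction for GRANDPA/BABE/BEEFY equivocations:
// min((3 * offenders / validators)^2, 1)
function slashFraction(offenders: number, validators: number): number {
  const ratio = (3 * offenders) / validators;
  return Math.min(ratio * ratio, 1);
}

console.log(slashFraction(1, 100));  // 0.0009 -> 0.09% slash
console.log(slashFraction(5, 100));  // 0.0225 -> 2.25% slash
console.log(slashFraction(20, 100)); // 0.36   -> 36% slash
```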
While rewards grow linearly (two validators will get you approximately twice as many staking rewards as one), slashing grows exponentially. Going from a single validator equivocating to two validators equivocating causes a slash four times as large as in the single-validator case. Validators may run their nodes on multiple machines to ensure they can still perform validation work if one of their nodes goes down. Still, validator operators should be cautious when setting these up. Equivocation is possible if they don't coordinate well in managing signing machines. #### Best Practices to Avoid Slashing Node operators are advised to follow these practices to ensure they obtain pristine binaries or source code and to keep their nodes secure: - Always download either source files or binaries from the official Parity repository - Verify the hash of downloaded files - Use the W3F secure validator setup or adhere to its principles - Ensure essential security items are checked, use a firewall, manage user access, use SSH certificates - Avoid using your server as a general-purpose system. Hosting a validator on your workstation or one that hosts other services increases the risk of compromise - Avoid cloning servers (copying all contents) when migrating to new hardware. If an image is needed, create it before generating keys - High Availability (HA) systems are generally not recommended, as equivocation may occur if concurrent operations happen, such as when a failed server restarts or two servers are falsely online simultaneously - Copying the keystore folder when moving a database between instances can cause equivocation. Even brief use of duplicated keystores can result in slashing Below are some examples of small equivocations that happened in the past: | Network | Era | Event Type | Details | Action Taken | |---------|-----|------------|---------|--------------| | Polkadot | 774 | Small Equivocation | [The validator](https://matrix.to/#/!NZrbtteFeqYKCUGQtr:matrix.parity.io/$165562246360408hKCfC:matrix.org?via=matrix.parity.io&via=corepaper.org&via=matrix.org){target=\_blank} migrated servers and cloned the keystore folder. The on-chain event can be viewed on [Subscan](https://polkadot.subscan.io/extrinsic/11190109-0?event=11190109-5){target=\_blank}. | The validator didn't submit a request for the slash to be canceled. | | Kusama | 3329 | Small Equivocation | The validator operated a test machine with cloned keys. The test machine was online simultaneously as the primary, which resulted in a slash. | The validator requested a slash cancellation, but the council declined. | | Kusama | 3995 | Small Equivocation | The validator noticed several errors, after which the client crashed, and a slash was applied. The validator recorded all events and opened GitHub issues to allow for technical opinions to be shared. | The validator requested to cancel the slash. The council approved the request as they believed the error wasn't operator-related. 
#### Slashing Across Eras

There are three main difficulties to account for with slashing in NPoS:

- A nominator can nominate multiple validators and be slashed as a result of actions taken by any of them
- Until slashed, the stake is reused from era to era
- Slashable offenses can be found after the fact and out of order

To balance this, the system applies only the maximum slash a participant can receive in a given time period, rather than the sum of all slashes. This protects participants from excessive slashing.

### Disabling

The disabling mechanism is triggered when validators commit serious infractions, such as backing invalid blocks or engaging in equivocations. Disabling stops validators from performing specific actions after they have committed an offense. Disabling is further divided into:

- **On-chain disabling** - lasts for a whole era and stops validators from authoring blocks, backing, and initiating a dispute
- **Off-chain disabling** - lasts for a session, is caused by losing a dispute, and stops validators from initiating a dispute

Off-chain disabling always takes lower priority than on-chain disabling, and it prioritizes disabling backers first, then approval checkers.

The material in this guide reflects the changes introduced in Stage 2. For more details, see the [State of Disabling issue](https://github.com/paritytech/polkadot-sdk/issues/4359){target=\_blank} on GitHub.

### Reputation Changes

Some minor offenses, such as spamming, are punished only by networking reputation changes. Validators use a reputation metric when choosing which peers to connect with. The system adds reputation if a peer provides valuable data and behaves appropriately. If they provide faulty or spam data, the system reduces their reputation. If a validator loses enough reputation, their peers will temporarily close their channels to them. This helps in fighting against Denial of Service (DoS) attacks. Performing validator tasks under reduced reputation will be harder, resulting in lower validator rewards.

### Penalties by Offense

Below, you can find a summary of penalties for specific offenses:

| Offense | [Slash (%)](#slashing) | [On-Chain Disabling](#disabling) | [Off-Chain Disabling](#disabling) | [Reputational Changes](#reputation-changes) |
|:------------------------------------:|:----------:|:---:|:-------------------:|:---:|
| Backing Invalid | 100% | Yes | Yes (High Priority) | No |
| ForInvalid Vote | - | No | Yes (Mid Priority) | No |
| AgainstValid Vote | - | No | Yes (Low Priority) | No |
| GRANDPA / BABE / BEEFY Equivocations | 0.01-100% | Yes | No | No |
| Seconded + Valid Equivocation | - | No | No | No |
| Double Seconded Equivocation | - | No | No | Yes |

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/infrastructure/staking-mechanics/rewards-payout/
--- BEGIN CONTENT ---
---
title: Rewards Payout
description: Learn how validator rewards work on the network, including era points, payout distribution, running multiple validators, and nominator payments.
categories: Infrastructure
---

# Rewards Payout

## Introduction

Understanding how rewards are distributed to validators and nominators is essential for network participants. In Polkadot and Kusama, validators earn rewards based on their era points, which are accrued through actions like block production and parachain validation.
This guide explains the payout scheme, the factors influencing rewards, and how running multiple validators affects returns. Validators can also share rewards with nominators, who contribute by staking behind them. By understanding the payout mechanics, validators can optimize their earnings and better engage with their nominators.

## Era Points

The Polkadot ecosystem measures its reward cycles in a unit called an era. Kusama eras are approximately 6 hours long, and Polkadot eras are 24 hours long. At the end of each era, validators are paid proportionally to the amount of [era points](/infrastructure/staking-mechanics/rewards-payout/#era-points){target=\_blank} they have collected. Era points are reward points earned for payable actions like:

- Issuing validity statements for [parachain blocks](/polkadot-protocol/parachain-basics/blocks-transactions-fees/blocks/){target=\_blank}
- Producing a non-uncle block in the relay chain
- Producing a reference to a previously unreferenced uncle block
- Producing a referenced uncle block

An uncle block is a relay chain block that is valid in every regard but has failed to become canonical. This can happen when two or more validators are block producers in a single slot, and the block produced by one validator reaches the next block producer before the others. The lagging blocks are called uncle blocks.

## Reward Variance

Rewards in the Polkadot and Kusama staking systems can fluctuate due to differences in the era points earned by para-validators and non-para-validators. Para-validators generally contribute more to the overall reward distribution due to their role in validating parachain blocks, thus influencing the variance in staking rewards.

To illustrate this relationship:

- Para-validator era points tend to have a higher impact on the expected value of staking rewards compared to non-para-validator points
- The variance in staking rewards increases as the total number of validators grows relative to the number of para-validators
- In simpler terms, when more validators are added to the active set without increasing the para-validator pool, the disparity in rewards between validators becomes more pronounced

However, despite this increased variance, rewards tend to even out over time due to the continuous rotation of para-validators across eras. The network's design ensures that over multiple eras, each validator has an equal opportunity to participate in para-validation, eventually leading to a balanced distribution of rewards.

??? interface "Probability in Staking Rewards"

    This should only serve as a high-level overview of the probabilistic nature of staking rewards.

    Let:

    - `pe` = para-validator era points
    - `ne` = non-para-validator era points
    - `EV` = expected value of staking rewards

    Then, `EV(pe)` has more influence on the `EV` than `EV(ne)`. Since `EV(pe)` carries more weight in the `EV`, the increase in variance against the `EV` becomes apparent between the different validator pools (i.e., validators in the active set versus those chosen to para-validate).

    Also, let:

    - `v` = the variance of staking rewards
    - `p` = number of para-validators
    - `w` = number of validators in the active set
    - `e` = era

    Then, `v` ↑ if `w` ↑, as this reduces the ratio `p` : `w` with respect to `e`. Increased `v` is expected, and initially keeping `p` ↓ by using the same para-validator set for all parachains ensures [availability](https://spec.polkadot.network/chapter-anv){target=\_blank} and [voting](https://wiki.polkadot.network/learn/learn-polkadot-opengov/){target=\_blank}.
    In addition, despite `v` ↑ on an era-to-era basis, over time, the amount of rewards each validator receives will equal out based on the continuous selection of para-validators. There are plans to scale the active para-validation set in the future.

## Payout Scheme

Validator rewards are distributed equally among all validators in the active set, regardless of the total stake behind each validator. However, individual payouts may differ based on the number of era points a validator has earned. Although factors like network connectivity can affect era points, well-performing validators should accumulate similar totals over time.

Validators can also receive tips from users, which incentivize them to include certain transactions in their blocks. Validators retain 100% of these tips. Rewards are paid out in the network's native token (DOT for Polkadot and KSM for Kusama).

The following example illustrates a four-member validator set, showing each validator's name, the amount they have staked, and how the payout of rewards is divided. This scenario assumes all validators earned the same amount of era points and no one received tips:

```mermaid
flowchart TD
    A["Alice (18 DOT)"]
    B["Bob (9 DOT)"]
    C["Carol (8 DOT)"]
    D["Dave (7 DOT)"]
    E["Payout (8 DOT total)"]
    E --"2 DOT"--> A
    E --"2 DOT"--> B
    E --"2 DOT"--> C
    E --"2 DOT"--> D
```

Note that this is different from most other Proof of Stake (PoS) systems. As long as a validator is in the validator set, it will receive the same block reward as every other validator. Validator Alice, who had 18 DOT staked, received the same 2 DOT reward in this era as Dave, who had only 7 DOT staked.

## Running Multiple Validators

Running multiple validators can offer a more favorable risk/reward ratio compared to running a single one. If you have sufficient DOT or nominators staking on your validators, maintaining multiple validators within the active set can yield higher rewards. In the preceding section, with 18 DOT staked and no nominators, Alice earned 2 DOT in one era. This example uses DOT, but the same principles apply to KSM on the Kusama network. By managing stake across multiple validators, you can potentially increase overall returns.

Recall the set of validators from the preceding section:

```mermaid
flowchart TD
    A["Alice (18 DOT)"]
    B["Bob (9 DOT)"]
    C["Carol (8 DOT)"]
    D["Dave (7 DOT)"]
    E["Payout (8 DOT total)"]
    E --"2 DOT"--> A
    E --"2 DOT"--> B
    E --"2 DOT"--> C
    E --"2 DOT"--> D
```

Now, assume Alice decides to split their stake and run two validators, each with a nine DOT stake. This validator set has only four spots, and priority is given to validators with a larger stake. In this example, Dave has the smallest stake and loses his spot in the validator set. Now, Alice will earn two shares of the total payout each era, as illustrated below:

```mermaid
flowchart TD
    A["Alice (9 DOT)"]
    F["Alice (9 DOT)"]
    B["Bob (9 DOT)"]
    C["Carol (8 DOT)"]
    E["Payout (8 DOT total)"]
    E --"2 DOT"--> A
    E --"2 DOT"--> B
    E --"2 DOT"--> C
    E --"2 DOT"--> F
```

With enough stake, you could run more than two validators. However, each validator must have enough stake behind it to maintain a spot in the validator set.

## Nominators and Validator Payments

A nominator's stake allows them to vote for validators and earn a share of the rewards without managing a validator node. Although staking rewards depend on validator activity during an era, validators themselves never control or own nominator rewards.
To trigger payouts, anyone can call the `staking.payoutStakers` or `staking.payoutStakersByPage` methods, which mint and distribute rewards directly to the recipients. This trustless process ensures nominators receive their earned rewards.

Validators set a commission rate as a percentage of the block reward, affecting how rewards are shared with nominators. A 0% commission means the validator keeps only the rewards from their self-stake, while a 100% commission means they retain all rewards, leaving none for nominators.

The following examples model splitting validator payments between validator and nominator at various commission percentages. For simplicity, these examples assume a Polkadot SDK-based relay chain that uses DOT as its native token and a single nominator per validator. Calculations of KSM reward payouts for Kusama follow the same formula.

Start with the original validator set from the previous section:

```mermaid
flowchart TD
    A["Alice (18 DOT)"]
    B["Bob (9 DOT)"]
    C["Carol (8 DOT)"]
    D["Dave (7 DOT)"]
    E["Payout (8 DOT total)"]
    E --"2 DOT"--> A
    E --"2 DOT"--> B
    E --"2 DOT"--> C
    E --"2 DOT"--> D
```

The preceding diagram shows each validator receiving a 2 DOT payout, but doesn't account for sharing rewards with nominators. The following diagram shows what the nominator payout might look like for validator Alice. Alice has a 20% commission rate and holds 50% of the stake for their validator:

```mermaid
flowchart TD
    A["Gross Rewards = 2 DOT"]
    E["Commission = 20%"]
    F["Alice Validator Payment = 0.4 DOT"]
    G["Total Stake Rewards = 1.6 DOT"]
    B["Alice Validator Stake = 18 DOT"]
    C["9 DOT Alice (50%)"]
    H["Alice Stake Reward = 0.8 DOT"]
    I["Total Alice Validator Reward = 1.2 DOT"]
    D["9 DOT Nominator (50%)"]
    J["Total Nominator Reward = 0.8 DOT"]
    A --> E
    E --(2 x 0.20)--> F
    F --(2 - 0.4)--> G
    B --> C
    B --> D
    C --(1.6 x 0.50)--> H
    H --(0.4 + 0.8)--> I
    D --(1.6 x 0.50)--> J
```

Notice that the validator commission rate is applied against the gross rewards for the era and is subtracted before anything else. After the commission is paid to the validator, the remaining amount is split among stake owners according to their percentage of the total stake. A validator's total rewards for an era include their commission plus their share of the stake rewards.

Now, consider a different scenario for validator Bob, where the commission rate is 40% and Bob holds 33% of the stake for their validator:

```mermaid
flowchart TD
    A["Gross Rewards = 2 DOT"]
    E["Commission = 40%"]
    F["Bob Validator Payment = 0.8 DOT"]
    G["Total Stake Rewards = 1.2 DOT"]
    B["Bob Validator Stake = 9 DOT"]
    C["3 DOT Bob (33%)"]
    H["Bob Stake Reward = 0.4 DOT"]
    I["Total Bob Validator Reward = 1.2 DOT"]
    D["6 DOT Nominator (67%)"]
    J["Total Nominator Reward = 0.8 DOT"]
    A --> E
    E --(2 x 0.4)--> F
    F --(2 - 0.8)--> G
    B --> C
    B --> D
    C --(1.2 x 0.33)--> H
    H --(0.8 + 0.4)--> I
    D --(1.2 x 0.67)--> J
```

Bob holds a smaller percentage of their node's total stake, making their stake reward smaller than Alice's. In this scenario, Bob makes up the difference by charging a 40% commission rate and ultimately ends up with the same total payment as Alice. Each validator needs to find their ideal balance between the amount of stake and the commission rate to attract nominators while still making running a validator worthwhile.
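The arithmetic in both diagrams can be captured in a few lines. The sketch below is illustrative only (the function name is hypothetical, and floating point is used for readability; the runtime uses fixed-point arithmetic and per-nominator exposure data):

```rust
/// Returns (validator_total, nominator_total) for one era, given gross
/// rewards, the commission rate, and the validator's share of the stake.
/// Illustrative sketch; not the runtime's actual payout logic.
fn split_rewards(gross: f64, commission: f64, self_stake: f64, total_stake: f64) -> (f64, f64) {
    let commission_payment = gross * commission; // commission taken off the top
    let stake_rewards = gross - commission_payment; // remainder split pro rata by stake
    let validator_stake_reward = stake_rewards * (self_stake / total_stake);
    let nominator_reward = stake_rewards - validator_stake_reward;
    (commission_payment + validator_stake_reward, nominator_reward)
}

fn main() {
    // Alice: 2 DOT gross, 20% commission, 9 of 18 DOT self-staked.
    let (alice, alice_nom) = split_rewards(2.0, 0.20, 9.0, 18.0);
    // Bob: 2 DOT gross, 40% commission, 3 of 9 DOT self-staked.
    let (bob, bob_nom) = split_rewards(2.0, 0.40, 3.0, 9.0);
    println!("Alice: {alice:.1} DOT, nominator: {alice_nom:.1} DOT"); // 1.2 and 0.8
    println!("Bob:   {bob:.1} DOT, nominator: {bob_nom:.1} DOT"); // 1.2 and 0.8
}
```

Despite very different commission rates, both validators take home 1.2 DOT, illustrating the trade-off between stake share and commission described above.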
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/
--- BEGIN CONTENT ---
---
title: Architecture
description: Explore Polkadot's architecture, including the relay chain, parachains, and system chains, and discover the role each component plays in the broader ecosystem.
template: index-page.html
---

# Architecture

Explore Polkadot's architecture, including the relay chain, parachains, and system chains, and discover the role each component plays in the broader ecosystem.

## A Brief Look at Polkadot’s Chain Ecosystem

The following provides a brief overview of the role of each chain:

- [**Polkadot relay chain**](/polkadot-protocol/architecture/polkadot-chain/) - the central hub and main chain responsible for the overall security, consensus, and interoperability between all connected chains
- [**System chains**](/polkadot-protocol/architecture/system-chains/) - specialized chains that provide essential services to the ecosystem, like the Asset Hub, Bridge Hub, and Coretime chain
- [**Parachains**](/polkadot-protocol/architecture/parachains/) - individual, specialized blockchains that run parallel to the relay chain and are connected to it

Learn more about these components by checking out the articles in this section.

## In This Section

:::INSERT_IN_THIS_SECTION:::

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/parachains/consensus/
--- BEGIN CONTENT ---
---
title: Parachain Consensus
description: Understand how the blocks authored by parachain collators are secured by the relay chain validators and how the parachain transactions achieve finality.
categories: Polkadot Protocol, Parachains
---

# Parachain Consensus

## Introduction

Parachains are independent blockchains built with the Polkadot SDK, designed to leverage Polkadot’s relay chain for shared security and transaction finality. These specialized chains operate as part of Polkadot’s execution sharding model, where each parachain manages its own state and transactions while relying on the relay chain for validation and consensus.

At the core of parachain functionality are collators, specialized nodes that sequence transactions into blocks and maintain the parachain’s state. Collators optimize Polkadot’s architecture by offloading state management from the relay chain, allowing relay chain validators to focus solely on validating parachain blocks.

This guide explores how parachain consensus works, including the roles of collators and validators, and the steps involved in securing parachain blocks within Polkadot’s scalable and decentralized framework.

## The Role of Collators

Collators are responsible for sequencing end-user transactions into blocks and maintaining the current state of their respective parachains. Their role is akin to that of sequencers in Ethereum rollups, but optimized for Polkadot's architecture. Key responsibilities include:

- **Transaction sequencing** - organizing transactions into [Proof of Validity (PoV)](https://wiki.polkadot.network/general/glossary/){target=\_blank} blocks
- **State management** - maintaining parachain states without burdening the relay chain validators
- **Consensus participation** - sending PoV blocks to relay chain validators for approval

## Consensus and Validation

Parachain consensus operates in tandem with the relay chain, leveraging Nominated Proof of Stake (NPoS) for shared security. The process ensures parachain transactions achieve finality through the following steps:

1. **Packaging transactions** - collators bundle transactions into PoV blocks (parablocks)
2. **Submission to validators** - parablocks are submitted to a randomly selected subset of relay chain validators, known as paravalidators
3. **Validation of PoV blocks** - paravalidators use the parachain’s state transition function (already available on the relay chain) to verify transaction validity
4. **Backing and inclusion** - if a sufficient number of positive validations is received, the parablock is backed and included via a para-header on the relay chain

The following sections describe the actions taking place during each stage of the process.

### Path of a Parachain Block

Polkadot achieves scalability through execution sharding, where each parachain operates as an independent shard with its own blockchain and state. Shared security for all parachains is provided by the relay chain, powered by [Nominated Proof of Stake (NPoS)](/polkadot-protocol/glossary/#nominated-proof-of-stake-npos){target=\_blank}. This framework allows parachains to focus on transaction processing and state management, while the relay chain ensures validation and finality. The journey parachain transactions follow to reach consensus and finality can be described as follows:

- **Collators and parablocks:**

    - Collators, specialized nodes on parachains, package network transactions into Proof of Validity (PoV) blocks, also called parablocks
    - These parablocks are sent to a subset of relay chain validators, known as paravalidators, for validation
    - The parachain's state transition function (Wasm blob) is not re-sent, as it is already stored on the relay chain

    ```mermaid
    flowchart TB
        %% Subgraph: Parachain
        subgraph Parachain
            direction LR
            Txs[Network Transactions]
            Collator[Collator Node]
            ParaBlock[ParaBlock + PoV]
            Txs -->|Package Transactions| Collator
            Collator -->|Create| ParaBlock
        end
        subgraph Relay["Relay Chain"]
            ParaValidator
        end
        %% Main Flow
        Parachain -->|Submit To| Relay
    ```

- **Validation by paravalidators:**

    - Paravalidators are groups of approximately five relay chain validators, randomly assigned to parachains and shuffled every minute
    - Each paravalidator downloads the parachain's Wasm blob and validates the parablock by ensuring all transactions comply with the parachain’s state transition rules
    - Paravalidators sign positive or negative validation statements based on the block’s validity
- **Backing and approval:**

    - If a parablock receives sufficient positive validation statements, it is backed and included on the relay chain via a para-header
    - An additional approval process resolves disputes: if a parablock contains invalid transactions, additional validators are tasked with verification
    - Validators who back invalid parablocks are penalized through slashing, creating strong incentives for honest behavior

    ```mermaid
    flowchart
        subgraph RelayChain["Relay Chain"]
            direction TB
            subgraph InitialValidation["Initial Validation"]
                direction LR
                PValidators[ParaValidators]
                Backing[Backing\nProcess]
                Header[Submit Para-header\non Relay Chain]
            end
            subgraph Secondary["Secondary Validation"]
                Approval[Approval\nProcess]
                Dispute[Dispute\nResolution]
                Slashing[Slashing\nMechanism]
            end
        end
        %% Validation Process
        PValidators -->|Download\nWasm\nValidate Block| Backing
        Backing -->|If Valid\nSignatures| Header
        InitialValidation -->|Additional\nVerification| Secondary
        %% Dispute Flow
        Approval -->|If Invalid\nDetected| Dispute
        Dispute -->|Penalize\nDishonest\nValidators| Slashing
    ```

It is important to understand that relay chain blocks do not store full parachain blocks (parablocks). Instead, they include para-headers, which serve as summaries of the backed parablocks. The complete parablock remains within the parachain network, maintaining its autonomy while relying on the relay chain for validation and finality.

## Where to Go Next

For more technical details, refer to the [Parachain Wiki](https://wiki.polkadot.network/learn/learn-parachains/){target=\_blank} page.

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/parachains/
--- BEGIN CONTENT ---
---
title: Parachains
description: Explore how parachains achieve consensus and leverage shared security through Polkadot’s relay chain and validators within the network’s architecture.
template: index-page.html
---

# Parachains

Discover how parachains secure their networks and reach consensus by harnessing Polkadot’s relay chain and its robust validator framework. This integrated architecture ensures shared security and seamless coordination across the entire ecosystem.

Parachains serve as the foundation of Polkadot’s multichain ecosystem, enabling diverse, application-specific blockchains to operate in parallel. By connecting to the relay chain, parachains gain access to Polkadot’s shared security, interoperability, and decentralized governance. This design allows developers to focus on building innovative features while benefiting from a secure and scalable infrastructure.

## In This Section

:::INSERT_IN_THIS_SECTION:::

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/parachains/overview/
--- BEGIN CONTENT ---
---
title: Overview
description: Learn about the role, functionality, and implementation of parachains as a developer in the wider Polkadot architecture.
categories: Basics, Polkadot Protocol, Parachains
---

# Overview

## Introduction

A [_parachain_](/polkadot-protocol/glossary#parachain){target=\_blank} is a coherent, application-specific blockchain that derives security from its respective relay chain. Parachains on Polkadot are each their own separate, fully functioning blockchain. The primary difference between a parachain and a regular, "solo" blockchain is that the relay chain verifies the state of all parachains that are connected to it. In many ways, parachains can be thought of as a ["cynical" rollup](#cryptoeconomic-security-elves-protocol), as the crypto-economic protocol used (ELVES) assumes the worst-case scenario, rather than the typical optimistic approach that many rollup mechanisms take.
Once enough validators attest that a block is valid, the probability of that block being valid is high. As each parachain’s state is validated by the relay chain, the relay chain represents the collective state of all parachains.

```mermaid
flowchart TB
    subgraph "Relay Chain"
        RC[Relay Chain Validators]
        State[Collective State Validation]
    end
    PA[Parachain A]
    PB[Parachain B]
    PC[Parachain C]
    RC -->|Validate State| PA
    RC -->|Validate State| PB
    RC -->|Validate State| PC
    State -->|Represents Collective\nParachain State| RC
    note["ELVES Protocol:\n- Crypto-economic security\n- Assumes worst-case scenario\n- High probability validation"]
```

## Coherent Systems

Coherency refers to the degree of synchronization, consistency, and interoperability between different components or chains within a system. It encompasses both the internal coherence of individual chains and the external coherence between chains regarding how they interact. A single-state machine like Ethereum is very coherent, as all of its components (smart contracts, dApps/applications, staking, consensus) operate within a single environment, with the downside of less scalability. Multi-protocol state machines, such as Polkadot, offer less coherency due to their sharded nature but more scalability due to the parallelization of their architecture. Parachains are coherent, as they are self-contained environments with domain-specific functionality.

## Flexible Ecosystem

Parachains enable parallelization of different services within the same network. However, unlike most layer-two rollups, parachains don't suffer from the interoperability pitfalls common to rollups. [Cross-Consensus Messaging (XCM)](/develop/interoperability/intro-to-xcm/){target=\_blank} provides a common communication format for each parachain and can be configured to allow a parachain to communicate with just the relay chain or certain parachains.

The diagram below highlights the flexibility of the Polkadot ecosystem, where each parachain specializes in a distinct domain. This example illustrates how parachains, like DeFi and GameFi, leverage XCM for cross-chain operations such as asset transfers and credential verification.

```mermaid
flowchart TB
    subgraph "Polkadot Relay Chain"
        RC[Relay Chain\nCross-Consensus\nRouting]
    end
    subgraph "Parachain Ecosystem"
        direction TB
        DeFi[DeFi Parachain\nFinancial Services]
        GameFi[GameFi Parachain\nGaming Ecosystem]
        NFT[NFT Parachain\nDigital Collectibles]
        Identity[Identity Parachain\nUser Verification]
    end
    DeFi <-->|XCM: Asset Transfer| GameFi
    GameFi <-->|XCM: Token Exchange| NFT
    Identity <-->|XCM: Credential Verification| DeFi
    RC -->|Validate & Route XCM| DeFi
    RC -->|Validate & Route XCM| GameFi
    RC -->|Validate & Route XCM| NFT
    RC -->|Validate & Route XCM| Identity
    note["XCM Features:\n- Standardized Messaging\n- Cross-Chain Interactions\n- Secure Asset/Data Transfer"]
```

Most parachains are built using the Polkadot SDK, which provides all the tools to create a fully functioning parachain. However, it is possible to construct a parachain that inherits the security of the relay chain as long as it implements the mechanisms expected by the relay chain.

## State Transition Functions (Runtimes)

Determinism is a fundamental property where, given the same input, a system will consistently produce identical outputs. In blockchain systems, this predictable behavior is essential for state machines, which are algorithms that transition between different states based on specific inputs to generate a new state.
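To illustrate the shape of a state transition function, here is a minimal toy sketch. The state and extrinsic types are invented purely for this example; a real runtime manages far richer state, but follows the same deterministic pattern of `new_state = stf(previous_state, extrinsics)`:

```rust
/// Toy state and extrinsic types, invented for illustration only.
#[derive(Clone, Debug, PartialEq)]
struct State {
    block_number: u64,
    total_issuance: u128,
}

enum Extrinsic {
    Mint(u128),
    Burn(u128),
}

/// A deterministic state transition function: the same previous state and
/// the same list of extrinsics always produce the same next state.
fn stf(previous: &State, extrinsics: &[Extrinsic]) -> State {
    let mut next = previous.clone();
    next.block_number += 1;
    for extrinsic in extrinsics {
        match extrinsic {
            Extrinsic::Mint(amount) => next.total_issuance += amount,
            Extrinsic::Burn(amount) => {
                next.total_issuance = next.total_issuance.saturating_sub(*amount)
            }
        }
    }
    next
}

fn main() {
    let genesis = State { block_number: 0, total_issuance: 100 };
    let block_1 = [Extrinsic::Mint(50), Extrinsic::Burn(20)];
    // Determinism: applying the same block twice yields identical states.
    assert_eq!(stf(&genesis, &block_1), stf(&genesis, &block_1));
    println!("{:?}", stf(&genesis, &block_1)); // block_number: 1, total_issuance: 130
}
```

This determinism is what lets any relay chain validator re-execute a parachain block and arrive at the same result as the collator that authored it.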
At their core, parachains, like most blockchains, are deterministic, finite-state machines that are often backed by game theory and economics. The previous state of the parachain, combined with external input in the form of [extrinsics](/polkadot-protocol/glossary#extrinsic){target=\_blank}, allows the state machine to progress forward, one block at a time.

```mermaid
stateDiagram-v2
    direction LR
    [*] --> StateA : Initial State
    StateA --> STF : Extrinsics/Transactions
    STF --> StateB : Deterministic Transformation
    StateB --> [*] : New State
```

The primary driver of this progression is the state transition function (STF), commonly referred to as a runtime. Each time a block is submitted, it represents the next proposed state for a parachain. By applying the state transition function to the previous state and including a new block that contains the proposed changes in the form of a list of extrinsics/transactions, the runtime defines exactly how the parachain advances from state A to state B.

The STF in a Polkadot SDK-based chain is compiled to Wasm and uploaded on the relay chain. This STF is crucial for the relay chain to validate the state changes coming from the parachain, as it is used to ensure that all proposed state transitions are happening correctly as part of the validation process. For more information on the Wasm meta protocol that powers runtimes, see the [WASM Meta Protocol](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/wasm_meta_protocol/index.html){target=\_blank} page in the Polkadot SDK Rust Docs.

## Shared Security: Validated by the Relay Chain

The relay chain provides a layer of economic security for its parachains. Parachains submit proof of validity (PoV) data to the relay chain for validation through [collators](/polkadot-protocol/glossary/#collator), upon which the relay chain's validators ensure the validity of this data in accordance with the STF for that particular parachain. In other words, the consensus for a parachain follows the relay chain. While parachains choose how a block is authored, what it contains, and who authors it, the relay chain ultimately provides finality and consensus for those blocks.

For more information about the parachain and relay chain validation process, see the [Parachains' Protocol Overview: Protocols' Summary](https://wiki.polkadot.network/learn/learn-parachains-protocol/#protocols-summary){target=\_blank} entry in the Polkadot Wiki.

Parachains need at least one honest collator to submit PoV data to the relay chain. Without this, the parachain can't progress. The mechanisms that facilitate this are found in the Cumulus portion of the Polkadot SDK, some of which are found in the [`cumulus_pallet_parachain_system`](https://paritytech.github.io/polkadot-sdk/master/cumulus_pallet_parachain_system/index.html){target=\_blank} pallet.

### Cryptoeconomic Security: ELVES Protocol

The [ELVES (Economic Last Validation Enforcement System)](https://eprint.iacr.org/2024/961){target=\_blank} protocol forms the foundation of Polkadot's cryptoeconomic security model. ELVES assumes a worst-case scenario by enforcing strict validation rules before any state transitions are finalized. Unlike optimistic approaches that rely on post-facto dispute resolution, ELVES ensures that validators collectively confirm the validity of a block before it becomes part of the parachain's state. Validators are incentivized through staking and penalized for malicious or erroneous actions, ensuring adherence to the protocol.
This approach minimizes the probability of invalid states being propagated across the network, providing robust security for parachains.

## Interoperability

Polkadot's interoperability framework allows parachains to communicate with each other, fostering a diverse ecosystem of interconnected blockchains. Through [Cross-Consensus Messaging (XCM)](/develop/interoperability/intro-to-xcm/){target=\_blank}, parachains can transfer assets, share data, and invoke functionalities on other chains securely. This standardized messaging protocol ensures that parachains can interact with the relay chain and each other, supporting efficient cross-chain operations.

The XCM protocol mitigates common interoperability challenges in isolated blockchain networks, such as fragmented ecosystems and limited collaboration. By enabling decentralized applications to leverage resources and functionality across parachains, Polkadot promotes a scalable, cooperative blockchain environment that benefits all participants.

## Where to Go Next

For further information about the consensus protocol used by parachains, see the [Consensus](/polkadot-protocol/architecture/parachains/consensus/) page.
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/polkadot-chain/agile-coretime/
--- BEGIN CONTENT ---
---
title: Agile Coretime
description: Explore the efficient scheduling mechanisms to access Polkadot cores to produce blockspace continuously or on-demand.
categories: Polkadot Protocol
---

# Agile Coretime

## Introduction

Agile Coretime is the [scheduling](https://en.wikipedia.org/wiki/Scheduling_(computing)){target=\_blank} framework on Polkadot that lets parachains efficiently access cores, each of which comprises a subset of the active validator set tasked with parablock validation. As the first blockchain to enable a flexible scheduling system for blockspace production, Polkadot offers unparalleled adaptability for parachains.

```mermaid
graph TB
    A[Cores Designation]
    B[Bulk Coretime]
    C[On-Demand Coretime]
    A --continuous--> B
    A --flexible--> C
```

Cores can be designated to a parachain either continuously through [bulk coretime](#bulk-coretime) or dynamically via [on-demand coretime](#on-demand-coretime). Additionally, Polkadot supports scheduling multiple cores in parallel through [elastic scaling](https://wiki.polkadot.network/learn/learn-elastic-scaling/){target=\_blank}, a feature under active development on Polkadot. This flexibility empowers parachains to optimize their resource usage and block production according to their unique needs.

In this guide, you'll learn how bulk coretime enables continuous core access with features like interlacing and splitting, and how on-demand coretime provides flexible, pay-per-use scheduling for parachains. For a deep dive into Agile Coretime and its terminology, refer to the [Wiki doc](https://wiki.polkadot.network/learn/learn-agile-coretime/#introduction-to-agile-coretime){target=\_blank}.

## Bulk Coretime

Bulk coretime is a fixed duration of continuous coretime, represented by an NFT, that can be purchased in DOT through [coretime sales](#coretime-sales) and can be split, shared, or resold. Currently, the duration of bulk coretime is set to 28 days.

Coretime purchased in bulk and assigned to a single parachain is eligible for a price-capped renewal, providing a form of rent-controlled access that is important for predicting running costs in the near future. If the bulk coretime is [interlaced](#coretime-interlacing) or [split](#coretime-splitting), or is kept idle without being assigned to a parachain, it becomes ineligible for the price-capped renewal.

### Coretime Interlacing

Interlacing is the action of dividing bulk coretime across multiple parachains that produce blocks spaced uniformly in time. Think of multiple parachains taking turns producing blocks, which demonstrates a simple form of interlacing. This feature suits parachains that have low transaction volume and don't need to produce blocks continuously.

### Coretime Splitting

Splitting is the action of dividing bulk coretime into multiple contiguous regions. This feature suits parachains that need to produce blocks continuously but don't require the whole 28 days of bulk coretime, only part of it (a toy sketch of this region model appears after the next section).

## On-Demand Coretime

Polkadot has dedicated cores assigned to provide coretime on demand. These cores are excluded from coretime sales and are reserved for on-demand parachains, which pay in DOT per block.
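As a mental model for splitting, the following toy sketch represents a bulk purchase as a region over a contiguous range of timeslices and divides it at a pivot. All type names and numbers here are illustrative assumptions, not the real API; the actual logic lives in the broker pallet on the Coretime system chain:

```rust
/// A toy model of a bulk coretime region: a contiguous span of timeslices
/// on a single core. Types and field names are invented for illustration.
#[derive(Clone, Debug)]
struct Region {
    core: u16,
    begin: u32, // first timeslice (inclusive)
    end: u32,   // last timeslice (exclusive)
}

impl Region {
    /// Splitting: divide one region into two contiguous regions at `pivot`.
    /// Returns `None` if the pivot falls outside the region.
    fn split(self, pivot: u32) -> Option<(Region, Region)> {
        if pivot <= self.begin || pivot >= self.end {
            return None;
        }
        let first = Region { end: pivot, ..self.clone() };
        let second = Region { begin: pivot, ..self };
        Some((first, second))
    }
}

fn main() {
    // A 28-day bulk region, assuming one timeslice every 8 minutes (5040 total).
    let bulk = Region { core: 7, begin: 0, end: 5040 };
    // Keep the first half and resell the rest. Note that once split, neither
    // region remains eligible for the price-capped renewal.
    if let Some((kept, resold)) = bulk.split(2520) {
        println!("kept: {kept:?}");
        println!("resold: {resold:?}");
    }
}
```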
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/polkadot-chain/elastic-scaling/ --- BEGIN CONTENT --- --- title: Elastic Scaling description: Learn how elastic scaling in Polkadot boosts parachain throughput, reduces latency, and supports dynamic, cost-efficient resource allocation. categories: Polkadot Protocol --- # Elastic Scaling ## Introduction Polkadot's architecture delivers scalability and security through its shared security model, where the relay chain coordinates and validates multiple parallel chains. Elastic scaling enhances this architecture by allowing parachains to utilize multiple computational cores simultaneously, breaking the previous 1:1 relationship between parachain and relay chain blocks. This technical advancement enables parachains to process multiple blocks within a single relay chain block, significantly increasing throughput capabilities. By leveraging [Agile Coretime](/polkadot-protocol/architecture/polkadot-chain/agile-coretime){target=\_blank}, parachains can dynamically adjust their processing capacity based on demand, creating an efficient and responsive infrastructure for high-throughput applications. ## How Elastic Scaling Works Elastic scaling enables parachains to process multiple blocks in parallel by utilizing additional cores on the relay chain. This section provides a technical analysis of the performance advantages and details of the implementation. Consider a parachain that needs to process four consecutive parablocks. With traditional single-core allocation, the validation process follows a strictly sequential pattern. Each parablock undergoes a two-phase process on the relay chain: 1. **Backing phase** - validators create and distribute validity statements 2. **Inclusion phase** - the parablock is included in the relay chain after availability verification Throughout the following diagrams, specific notation is used to represent different components of the system: - R1, R2, ... - relay chain blocks (produced at ~6-second intervals) - P1, P2, ... - parachain blocks that need validation and inclusion - C1, C2, ... 
- cores on the relay chain In the single-core scenario (assuming a 6-second relay chain block time), processing four parablocks requires approximately 30 seconds: ```mermaid sequenceDiagram participant R1 as R1 participant R2 as R2 participant R3 as R3 participant R4 as R4 participant R5 as R5 Note over R1,R5: Single Core Scenario rect rgb(200, 220, 240) Note right of R1: Core C1 R1->>R1: Back P1 R2->>R2: Include P1 R2->>R2: Back P2 R3->>R3: Include P2 R3->>R3: Back P3 R4->>R4: Include P3 R4->>R4: Back P4 R5->>R5: Include P4 end ``` With elastic scaling utilizing two cores simultaneously, the same four parablocks can be processed in approximately 18 seconds: ```mermaid sequenceDiagram participant R1 as R1 participant R2 as R2 participant R3 as R3 participant R4 as R4 participant R5 as R5 Note over R1,R3: Multi-Core Scenario rect rgb(200, 220, 240) Note right of R1: Core C1 R1->>R1: Back P1 R2->>R2: Include P1 R2->>R2: Back P2 R3->>R3: Include P2 end rect rgb(220, 200, 240) Note right of R1: Core C2 R1->>R1: Back P3 R2->>R2: Include P3 R2->>R2: Back P4 R3->>R3: Include P4 end ``` To help interpret the sequence diagrams above, note the following key elements: - The horizontal axis represents time progression through relay chain blocks (R1-R5) - Each colored rectangle shows processing on a specific core (C1 or C2) - In the single-core scenario, all blocks must be processed sequentially on one core - In the multi-core scenario, blocks are processed in parallel across multiple cores, reducing total time The relay chain processes these multiple parablocks as independent validation units during the backing, availability, and approval phases. However, during inclusion, it verifies that their state roots align properly to maintain chain consistency. From an implementation perspective: - **Parachain side** - collators must increase their block production rate to utilize multiple cores fully - **Validation process** - each core operates independently, but with coordinated state verification - **Resource management** - cores are dynamically allocated based on parachain requirements - **State consistency** - while backed and processed in parallel, the parablocks maintain sequential state transitions ## Benefits of Elastic Scaling - **Increased throughput** - multiple concurrent cores enable parachains to process transactions at multiples of their previous capacity. By allowing multiple parachain blocks to be validated within each relay chain block cycle, applications can achieve significantly higher transaction volumes - **Lower latency** - transaction finality improves substantially with multi-core processing. Parachains currently achieve 2-second latency with three cores, with projected improvements to 500ms using 12 cores, enabling near-real-time application responsiveness - **Resource efficiency** - applications acquire computational resources precisely matched to their needs, eliminating wasteful over-provisioning. Coretime can be purchased at granular intervals (blocks, hours, days), creating cost-effective operations, particularly for applications with variable transaction patterns - **Scalable growth** - new applications can launch with minimal initial resource commitment and scale dynamically as adoption increases. 
This eliminates the traditional paradox of either over-allocating resources (increasing costs) or under-allocating (degrading performance) during growth phases - **Workload distribution** - parachains intelligently distribute workloads across cores during peak demand periods and release resources when traffic subsides. Paired with secondary coretime markets, this ensures maximum resource utilization across the entire network ecosystem - **Reliable performance** - end-users experience reliable application performance regardless of network congestion levels. Applications maintain responsiveness even during traffic spikes, eliminating performance degradation that commonly impacts blockchain applications during high-demand periods ## Use Cases Elastic scaling enables applications to dynamically adjust their resource consumption based on real-time demand. This is especially valuable for decentralized applications where usage patterns can be highly variable. The following examples illustrate common scenarios where elastic scaling delivers significant performance and cost-efficiency benefits. ### Handling Sudden Traffic Spikes Many decentralized applications experience unpredictable, high-volume traffic bursts, especially in gaming, DeFi protocols, NFT auctions, messaging platforms, and social media. Elastic scaling allows these systems to acquire additional coretime during peak usage and release it during quieter periods, ensuring responsiveness without incurring constant high infrastructure costs. ### Supporting Early-Stage Growth Startups and new projects often begin with uncertain or volatile demand. With elastic scaling, teams can launch with minimal compute resources (e.g., a single core) and gradually scale as adoption increases. This prevents overprovisioning and enables cost-efficient growth until the application is ready for more permanent or horizontal scaling. ### Scaling Massive IoT Networks Internet of Things (IoT) applications often involve processing data from millions of devices in real time. Elastic scaling supports this need by enabling high-throughput transaction processing as demand fluctuates. Combined with Polkadot’s shared security model, it provides a reliable and privacy-preserving foundation for large-scale IoT deployments. ### Powering Real-Time, Low-Latency Systems Applications like payment processors, trading platforms, gaming engines, or real-time data feeds require fast, consistent performance. Elastic scaling can reduce execution latency during demand spikes, helping ensure low-latency, reliable service even under heavy load. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/polkadot-chain/ --- BEGIN CONTENT --- --- title: The Polkadot Relay Chain description: Explore the relay chain’s role in Polkadot, providing shared security, consensus, and enabling agile coretime for parachains to purchase blockspace on-demand. template: index-page.html --- # The Polkadot Relay Chain Discover the central role of the Polkadot relay chain in securing the network and fostering interoperability. As the backbone of Polkadot, the relay chain provides shared security and ensures consensus across the ecosystem. It empowers parachains with flexible coretime allocation, enabling them to purchase blockspace on demand, ensuring efficiency and scalability for diverse blockchain applications. 
## In This Section

:::INSERT_IN_THIS_SECTION:::

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/polkadot-chain/overview/
--- BEGIN CONTENT ---
---
title: Overview of the Polkadot Relay Chain
description: Explore Polkadot's core architecture, including its multi-chain vision, shared security, and the DOT token's governance and staking roles.
categories: Basics, Polkadot Protocol, Parachains
---

# Overview

## Introduction

Polkadot is a next-generation blockchain protocol designed to support a multi-chain future by enabling secure communication and interoperability between different blockchains. Built as a Layer-0 protocol, Polkadot introduces innovations like application-specific Layer-1 chains ([parachains](/polkadot-protocol/architecture/parachains/){target=\_blank}), shared security through [Nominated Proof of Stake (NPoS)](/polkadot-protocol/glossary/#nominated-proof-of-stake-npos){target=\_blank}, and cross-chain interactions via its native [Cross-Consensus Messaging Format (XCM)](/develop/interoperability/intro-to-xcm/){target=\_blank}.

This guide covers key aspects of Polkadot’s architecture, including its high-level protocol structure, blockspace commoditization, and the role of its native token, DOT, in governance, staking, and resource allocation.

## Polkadot 1.0

Polkadot 1.0 represents the state of Polkadot as of 2023, coinciding with the release of [Polkadot runtime v1.0.0](https://github.com/paritytech/polkadot/releases/tag/v1.0.0){target=\_blank}. This section will focus on Polkadot 1.0, along with philosophical insights into network resilience and blockspace.

As a Layer-0 blockchain, Polkadot contributes to the multi-chain vision through several key innovations and initiatives, including:

- **Application-specific Layer-1 blockchains (parachains)** - Polkadot's sharded network allows for parallel transaction processing, with shards that can have unique state transition functions, enabling custom-built L1 chains optimized for specific applications
- **Shared security and scalability** - L1 chains connected to Polkadot benefit from its [Nominated Proof of Stake (NPoS)](/polkadot-protocol/architecture/polkadot-chain/pos-consensus/#nominated-proof-of-stake){target=\_blank} system, providing security out-of-the-box without the need to bootstrap their own
- **Secure interoperability** - Polkadot's native interoperability enables seamless data and value exchange between parachains. This interoperability can also be used outside of the ecosystem for bridging with external networks
- **Resilient infrastructure** - decentralized and scalable, Polkadot ensures ongoing support for development and community initiatives via its on-chain [treasury](https://wiki.polkadot.network/learn/learn-polkadot-opengov-treasury/){target=\_blank} and governance
- **Rapid L1 development** - the [Polkadot SDK](/develop/parachains/intro-polkadot-sdk/){target=\_blank} allows fast, flexible creation and deployment of Layer-1 chains
- **Cultivating the next generation of Web3 developers** - Polkadot supports the growth of Web3 core developers through initiatives such as:
    - [Polkadot Blockchain Academy](https://polkadot.com/blockchain-academy){target=\_blank}
    - [Polkadot Alpha Program](https://polkadot.com/alpha-program){target=\_blank}
    - [EdX courses](https://www.edx.org/school/web3x){target=\_blank}
    - Rust and Substrate courses (coming soon)

### High-Level Architecture

Polkadot features a chain that serves as the central component of the system.
This chain is depicted as a ring encircled by several parachains that are connected to it. According to Polkadot's design, any blockchain that can compile to WebAssembly (Wasm) and adheres to the Parachains Protocol becomes a parachain on the Polkadot network.

Here’s a high-level overview of the Polkadot protocol architecture:

![](/images/polkadot-protocol/architecture/polkadot-chain/overview/overview-1.webp)

Parachains propose blocks to Polkadot validators, who check for availability and validity before finalizing them. With the relay chain providing security, collators (full nodes of parachains) can focus on their tasks without needing strong incentives. The [Cross-Consensus Messaging Format (XCM)](/develop/interoperability/intro-to-xcm/){target=\_blank} allows parachains to exchange messages freely, leveraging the chain's security for trust-free communication.

In order to interact with chains that want to use their own finalization process (e.g., Bitcoin), Polkadot has [bridges](/polkadot-protocol/parachain-basics/interoperability/#bridges-connecting-external-networks){target=\_blank} that offer two-way compatibility, meaning that transactions can be made between Polkadot parachains and external networks.

### Polkadot's Additional Functionalities

Historically, obtaining core slots on the Polkadot chain relied upon crowdloans and auctions. Chain cores were leased through auctions for three-month periods, up to a maximum of two years. Crowdloans enabled users to securely lend funds to teams for lease deposits in exchange for pre-sale tokens, which was the only way to access slots on Polkadot 1.0. Auctions are now deprecated in favor of [coretime](/polkadot-protocol/architecture/system-chains/coretime/){target=\_blank}.

Additionally, the chain handles [staking](https://wiki.polkadot.network/learn/learn-staking/){target=\_blank}, [accounts](/polkadot-protocol/basics/accounts/){target=\_blank}, balances, and [governance](/polkadot-protocol/onchain-governance/){target=\_blank}.

#### Agile Coretime

The new and more efficient way of obtaining a core on Polkadot is to purchase coretime. [Agile coretime](/polkadot-protocol/architecture/polkadot-chain/agile-coretime/){target=\_blank} improves the efficient use of Polkadot's network resources and offers economic flexibility for developers, extending Polkadot's capabilities far beyond the original vision outlined in the [whitepaper](https://polkadot.com/papers/Polkadot-whitepaper.pdf){target=\_blank}.

It enables parachains to purchase monthly "bulk" allocations of coretime (the time allocated for utilizing a core, measured in Polkadot relay chain blocks), ensuring that heavy-duty parachains that can author a block every six seconds with [Asynchronous Backing](https://wiki.polkadot.network/learn/learn-async-backing/#asynchronous-backing){target=\_blank} can reliably renew their coretime each month. Although six-second block times are now the default, parachains have the option of producing blocks less frequently. Renewal orders are prioritized over new orders, offering stability against price fluctuations and helping parachains budget more effectively for project costs.
### Polkadot's Resilience

Decentralization is a vital component of blockchain networks, but it comes with trade-offs:

- An overly decentralized network may face challenges in reaching consensus and require significant energy to operate
- A network that achieves consensus quickly risks centralization, making it easier to manipulate or attack

A network should be decentralized enough to prevent manipulative or malicious influence. In this sense, decentralization is a tool for achieving resilience. Polkadot 1.0 currently achieves resilience through several strategies:

- **Nominated Proof of Stake (NPoS)** - ensures that the stake per validator is maximized and evenly distributed among validators
- **Decentralized Nodes** - a program designed to encourage new operators to join the network, expanding and diversifying the validator set; participants aim to become independent of the program during their term. Feel free to explore more about the program on the official [Decentralized Nodes](https://nodes.web3.foundation/){target=\_blank} page
- **On-chain treasury and governance** - known as [OpenGov](/polkadot-protocol/onchain-governance/overview/){target=\_blank}, this system allows every decision to be made through public referenda, enabling any token holder to cast a vote

### Polkadot's Blockspace

Polkadot 1.0’s design allows for the commoditization of blockspace. Blockspace is a blockchain's capacity to finalize and commit operations, encompassing its security, computing, and storage capabilities. Its characteristics can vary across different blockchains, affecting security, flexibility, and availability.

- **Security** - measures the robustness of blockspace in Proof of Stake (PoS) networks; it is linked to the stake locked on validator nodes, the variance in stake among validators, and the total number of validators. It also considers social centralization (how many validators are owned by single operators) and physical centralization (how many validators run on the same service provider)
- **Flexibility** - reflects the functionalities and types of data that can be stored, with high-quality data essential to avoid bottlenecks in critical processes
- **Availability** - indicates how easily users can access blockspace. It should be easily accessible, allowing diverse business models to thrive, ideally regulated by a marketplace based on demand and supplemented by options for "second-hand" blockspace

Polkadot is built on core blockspace principles, but there's room for improvement. Tasks like balance transfers, staking, and governance are managed on the relay chain. Delegating these responsibilities to [system chains](/polkadot-protocol/architecture/system-chains/){target=\_blank} could enhance flexibility and allow the relay chain to concentrate on providing shared security and interoperability.

For more information about blockspace, watch [Robert Habermeier’s interview](https://www.youtube.com/watch?v=e1vISppPwe4){target=\_blank} or read his [technical blog post](https://www.rob.tech/blog/polkadot-blockspace-over-blockchains/){target=\_blank}.

## DOT Token

DOT is the native token of the Polkadot network, much like BTC for Bitcoin and Ether for the Ethereum blockchain. DOT has 10 decimals, uses the Planck base unit, and has a balance type of `u128`. The same is true for Kusama's KSM token, with the exception of having 12 decimals.

### Redenomination of DOT

Polkadot conducted a community poll, which ended on 27 July 2020 at block 888,888, to decide whether to redenominate the DOT token.
The stakeholders chose to redenominate the token, changing the value of 1 DOT from 1e12 plancks to 1e10 plancks. Importantly, this did not affect the network's total number of base units (plancks); it only affected how a single DOT is represented. The redenomination became effective 72 hours after transfers were enabled, occurring at block 1,248,328 on 21 August 2020 at around 16:50 UTC.

### The Planck Unit

The smallest unit of account balance on Polkadot SDK-based blockchains (such as Polkadot and Kusama) is called _Planck_, named after the Planck length, the smallest measurable distance in the physical universe. Similar to how BTC's smallest unit is the Satoshi and ETH's is the Wei, Polkadot's native token DOT equals 1e10 Planck, while Kusama's native token KSM equals 1e12 Planck.

### Uses for DOT

DOT serves three primary functions within the Polkadot network:

- **Governance** - used to participate in the governance of the network
- **Staking** - staked to support the network's operation and security
- **Buying coretime** - used to purchase coretime in bulk or on-demand and access the chain to benefit from Polkadot's security and interoperability

Additionally, DOT can serve as a transferable token. For example, DOT held in the treasury can be allocated to teams developing projects that benefit the Polkadot ecosystem.

## JAM and the Road Ahead

The Join-Accumulate Machine (JAM) represents a transformative redesign of Polkadot's core architecture, envisioned as the successor to the current relay chain. Unlike traditional blockchain architectures, JAM introduces a unique computational model that processes work through two primary functions:

- **Join** - handles data integration
- **Accumulate** - folds computations into the chain's state

JAM removes many of the opinions and constraints of the current relay chain while maintaining its core security properties. Expected improvements include:

- **Permissionless code execution** - JAM is designed to be more generic and flexible, allowing for permissionless code execution through services that can be deployed without governance approval
- **More effective block time utilization** - JAM's efficient pipeline processing model places the prior state root in block headers instead of the posterior state root, enabling more effective utilization of block time for computations

This architectural evolution promises to enhance Polkadot's scalability and flexibility while maintaining robust security guarantees. JAM is planned to be rolled out to Polkadot as a single, complete upgrade rather than a stream of smaller updates. This approach seeks to minimize the developer overhead required to address any breaking changes.

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/polkadot-chain/pos-consensus/
--- BEGIN CONTENT ---
---
title: Proof of Stake Consensus
description: Explore Polkadot's consensus protocols for secure, scalable, and decentralized network operation, including NPoS, BABE, GRANDPA, and BEEFY.
categories: Polkadot Protocol
---

# Proof of Stake Consensus

## Introduction

Polkadot's Proof of Stake consensus model leverages a unique hybrid approach designed to promote decentralized and secure network operations. In traditional Proof of Stake (PoS) systems, a node's ability to validate transactions is tied to its token holdings, which can lead to centralization risks and limited validator participation.
Polkadot addresses these concerns through its [Nominated Proof of Stake (NPoS)](/polkadot-protocol/glossary/#nominated-proof-of-stake-npos){target=\_blank} model and a combination of advanced consensus mechanisms to ensure efficient block production and strong finality guarantees. This combination enables the Polkadot network to scale while maintaining security and decentralization. ## Nominated Proof of Stake Polkadot uses Nominated Proof of Stake (NPoS) to select the validator set and secure the network. This model is designed to maximize decentralization and security by balancing the roles of [validators](https://wiki.polkadot.network/learn/learn-validator/){target=\_blank} and [nominators](https://wiki.polkadot.network/learn/learn-nominator/){target=\_blank}. - **Validators** - play a key role in maintaining the network's integrity. They produce new blocks, validate parachain blocks, and ensure the finality of transactions across the relay chain - **Nominators** - support the network by selecting validators to back with their stake. This mechanism allows users who don't want to run a validator node to still participate in securing the network and earn rewards based on the validators they support In Polkadot's NPoS system, nominators back trusted validators with their stake, influencing which validators enter the active set while spreading security responsibility across the network. ## Hybrid Consensus Polkadot employs a hybrid consensus model that combines two key protocols: a finality gadget called [GRANDPA](#finality-gadget-grandpa) and a block production mechanism known as [BABE](#block-production-babe). This hybrid approach enables the network to benefit from both rapid block production and provable finality, ensuring security and performance. The hybrid consensus model has some key advantages: - **Probabilistic finality** - with BABE constantly producing new blocks, Polkadot ensures that the network continues to make progress, even when a final decision has not yet been reached on which chain is the true canonical chain - **Provable finality** - GRANDPA guarantees that once a block is finalized, it can never be reverted, ensuring that all network participants agree on the finalized chain By using separate protocols for block production and finality, Polkadot can achieve rapid block creation and strong guarantees of finality while avoiding the typical trade-offs seen in traditional consensus mechanisms. ## Block Production - BABE Blind Assignment for Blockchain Extension (BABE) is Polkadot's block production mechanism, working with GRANDPA to ensure blocks are produced consistently across the network. As validators participate in BABE, they are assigned block production slots through a randomness-based lottery system. This helps determine which validator is responsible for producing a block at a given time. BABE shares similarities with [Ouroboros Praos](https://eprint.iacr.org/2017/573.pdf){target=\_blank} but differs in key aspects like chain selection rules and slot timing. Key features of BABE include: - **Epochs and slots** - BABE operates in phases called epochs, each of which is divided into slots (around 6 seconds per slot). Validators are assigned slots at the beginning of each epoch based on stake and randomness - **Randomized block production** - validators enter a lottery to determine which will produce a block in a specific slot.
This randomness is sourced from the relay chain's [randomness cycle](/polkadot-protocol/parachain-basics/randomness/){target=\_blank} - **Multiple block producers per slot** - in some cases, more than one validator might win the lottery for the same slot, resulting in multiple blocks being produced. These blocks are broadcast, and the network's fork choice rule helps decide which chain to follow - **Handling empty slots** - if no validators win the lottery for a slot, a secondary selection algorithm ensures that a block is still produced. Validators selected through this method always produce a block, ensuring no slots are skipped BABE's combination of randomness and slot allocation creates a secure, decentralized system for consistent block production while also allowing for fork resolution when multiple validators produce blocks for the same slot. ### Validator Participation In BABE, validators participate in a lottery for every slot to determine whether they are responsible for producing a block during that slot. The randomness of this process keeps block production decentralized and unpredictable. There are two lottery outcomes for any given slot that initiate additional processes: - **Multiple validators in a slot** - due to the randomness, multiple validators can be selected to produce a block for the same slot. When this happens, each validator produces a block and broadcasts it to the network, resulting in a race condition. The network's topology and latency then determine which block reaches the majority of nodes first. BABE allows both chains to continue building until the finalization process resolves which one becomes canonical. The [Fork Choice](#fork-choice) rule is then used to decide which chain the network should follow - **No validators in a slot** - on occasions when no validator is selected by the lottery, a [secondary validator selection algorithm](https://spec.polkadot.network/sect-block-production#defn-babe-secondary-slots){target=\_blank} steps in. This backup ensures that a block is still produced, preventing skipped slots. However, if the primary block produced by a verifiable random function [(VRF)-selected](/polkadot-protocol/parachain-basics/randomness/#vrf){target=\_blank} validator exists for that slot, the secondary block will be ignored. As a result, every slot will have either a primary or a secondary block This design ensures continuous block production, even in cases of multiple competing validators or an absence of selected validators. ### Additional Resources For further technical insights about BABE, including cryptographic details and formal proofs, see the [BABE paper](https://research.web3.foundation/Polkadot/protocols/block-production/Babe){target=\_blank} from Web3 Foundation. For BABE technical definitions, constants, and formulas, see the [Block Production Lottery](https://spec.polkadot.network/sect-block-production#sect-block-production-lottery){target=\_blank} section of the Polkadot Protocol Specification. ## Finality Gadget - GRANDPA GRANDPA (GHOST-based Recursive ANcestor Deriving Prefix Agreement) serves as the finality gadget for Polkadot's relay chain. Operating alongside the BABE block production mechanism, it ensures provable finality, giving participants confidence that blocks finalized by GRANDPA cannot be reverted.
Key features of GRANDPA include: - **Independent finality service** - GRANDPA runs separately from the block production process, operating in parallel to ensure seamless finalization - **Chain-based finalization** - instead of finalizing one block at a time, GRANDPA finalizes entire chains, speeding up the process significantly - **Batch finalization** - can finalize multiple blocks in a single round, enhancing efficiency and minimizing delays in the network - **Partial synchrony tolerance** - GRANDPA works effectively in a partially synchronous network environment, managing both asynchronous and synchronous conditions - **Byzantine fault tolerance** - can handle up to 1/5 Byzantine (malicious) nodes, ensuring the system remains secure even when faced with adversarial behavior ??? note "What is GHOST?" [GHOST (Greedy Heaviest-Observed Subtree)](https://eprint.iacr.org/2018/104.pdf){target=\_blank} is a consensus protocol used in blockchain networks to select the heaviest branch in a block tree. Unlike traditional longest-chain rules, GHOST can more efficiently handle high block production rates by considering the weight of subtrees rather than just the chain length. ### Probabilistic vs. Provable Finality In traditional Proof of Work (PoW) blockchains, finality is probabilistic. As blocks are added to the chain, the probability that a block is final increases, but it can never be guaranteed. Eventual consensus means that all nodes will agree on a single version of the blockchain over time, but this process can be unpredictable and slow. Conversely, GRANDPA provides provable finality, which means that once a block is finalized, it is irreversible. By using Byzantine fault-tolerant agreements, GRANDPA finalizes blocks more efficiently and securely than probabilistic mechanisms like Nakamoto consensus. Like Ethereum's Casper the Friendly Finality Gadget (FFG), GRANDPA ensures that finalized blocks cannot be reverted, offering stronger consensus guarantees. ### Additional Resources For technical insights, including formal proofs and detailed algorithms, see the [GRANDPA paper](https://github.com/w3f/consensus/blob/master/pdf/grandpa.pdf){target=\_blank} from Web3 Foundation. For a deeper look at the code behind GRANDPA, see the following GitHub repositories: - [GRANDPA Rust implementation](https://github.com/paritytech/finality-grandpa){target=\_blank} - [GRANDPA Pallet](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame/grandpa/src/lib.rs){target=\_blank} ## Fork Choice The fork choice of the relay chain combines BABE and GRANDPA: 1. BABE must always build on the chain that GRANDPA has finalized 2. When there are forks after the finalized head, BABE builds on the chain with the most primary blocks to provide probabilistic finality ![Fork choice diagram](/images/polkadot-protocol/architecture/polkadot-chain/pos-consensus/consensus-protocols-1.webp) In the preceding diagram, finalized blocks are black, and non-finalized blocks are yellow. Primary blocks are labeled '1' and secondary blocks are labeled '2'. The topmost chain is the longest chain originating from the last finalized block, but it is not selected because it only has one primary block at the time of evaluation. In comparison, the chain below it also originates from the last finalized block but has three primary blocks, so it is selected.
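To make the rule concrete, here is a minimal, illustrative TypeScript sketch of the fork-choice logic described above. The `Block` type and helper functions are hypothetical and exist only for this example; the real logic lives inside the Polkadot node implementation.

```typescript
// Illustrative types only; not part of any client codebase.
type Block = { isPrimary: boolean; parent: Block | null };

// Count the primary (VRF-selected) blocks along a fork, walking back
// from a candidate head to the last GRANDPA-finalized block.
function primaryWeight(head: Block, finalized: Block): number {
  let weight = 0;
  for (let b: Block | null = head; b !== null && b !== finalized; b = b.parent) {
    if (b.isPrimary) weight += 1;
  }
  return weight;
}

// Rule 1: only heads descending from the finalized block are candidates
// (assumed true of every element of `heads`, which must be non-empty).
// Rule 2: among those, build on the chain with the most primary blocks.
function bestChain(heads: Block[], finalized: Block): Block {
  return heads.reduce((best, head) =>
    primaryWeight(head, finalized) > primaryWeight(best, finalized) ? head : best
  );
}
```

Note that secondary blocks extend a chain without adding weight, which is why the longest chain in the diagram loses to the shorter chain containing three primary blocks.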
### Additional Resources To learn more about how BABE and GRANDPA work together to produce and finalize blocks on Kusama, see this [Block Production and Finalization in Polkadot](https://youtu.be/FiEAnVECa8c){target=\_blank} talk from Web3 Foundation's Bill Laboon. For an in-depth academic discussion about Polkadot's hybrid consensus model, see this [Block Production and Finalization in Polkadot: Understanding the BABE and GRANDPA Protocols](https://www.youtube.com/watch?v=1CuTSluL7v4&t=4s){target=\_blank} MIT Cryptoeconomic Systems 2020 talk by Web3 Foundation's Bill Laboon. ## Bridging - BEEFY Bridge Efficiency Enabling Finality Yielder (BEEFY) is a specialized protocol that extends the finality guarantees provided by GRANDPA. It is specifically designed to facilitate efficient bridging between Polkadot relay chains (such as Polkadot and Kusama) and external blockchains like Ethereum. While GRANDPA is well-suited for finalizing blocks within Polkadot, it has limitations when bridging external chains that weren't built with Polkadot's interoperability features in mind. BEEFY addresses these limitations by ensuring other networks can efficiently verify finality proofs. Key features of BEEFY include: - **Efficient finality proof verification** - BEEFY enables external networks to easily verify Polkadot finality proofs, ensuring seamless communication between chains - **Merkle Mountain Ranges (MMR)** - this data structure is used to efficiently store and transmit proofs between chains, optimizing data storage and reducing transmission overhead - **ECDSA signature schemes** - BEEFY uses ECDSA signatures, which are widely supported on Ethereum and other EVM-based chains, making integration with these ecosystems smoother - **Light client optimization** - BEEFY reduces the computational burden on light clients by allowing them to check for a super-majority of validator votes rather than needing to process all validator signatures, improving performance ### Additional Resources For BEEFY technical definitions, constants, and formulas, see the [Bridge design (BEEFY)](https://spec.polkadot.network/sect-finality#sect-grandpa-beefy){target=\_blank} section of the Polkadot Protocol Specification. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/asset-hub/ --- BEGIN CONTENT --- --- title: Asset Hub description: Learn about Asset Hub in Polkadot, managing on-chain assets, foreign asset integration, and using XCM for cross-chain asset transfers. categories: Polkadot Protocol --- # Asset Hub ## Introduction The Asset Hub is a critical component in the Polkadot ecosystem, enabling the management of fungible and non-fungible assets across the network. Since the relay chain focuses on maintaining security and consensus without direct asset management, Asset Hub provides a streamlined platform for creating, managing, and using on-chain assets in a fee-efficient manner. This guide outlines the core features of Asset Hub, including how it handles asset operations, cross-chain transfers, and asset integration using XCM, as well as essential tools like [API Sidecar](#api-sidecar) and [`TxWrapper`](#txwrapper) for developers working with on-chain assets. ## Assets Basics In the Polkadot ecosystem, the relay chain does not natively support additional assets beyond its native token (DOT for Polkadot, KSM for Kusama). The Asset Hub parachain on Polkadot and Kusama provides a fungible and non-fungible assets framework. 
Asset Hub allows developers and users to create, manage, and use assets across the ecosystem. Asset creators can use Asset Hub to track their asset issuance across multiple parachains and manage assets through operations such as minting, burning, and transferring. Projects that need a standardized method of handling on-chain assets will find this particularly useful. The fungible asset interface provided by Asset Hub closely resembles Ethereum's ERC-20 standard but is directly integrated into Polkadot's runtime, making it more efficient in terms of speed and transaction fees. Integrating with Asset Hub offers several key benefits, particularly for infrastructure providers and users: - **Support for non-native on-chain assets** - Asset Hub enables seamless asset creation and management, allowing projects to develop tokens or assets that can interact with the broader ecosystem - **Lower transaction fees** - Asset Hub offers significantly lower transaction costs, approximately one-tenth of the fees on the relay chain, providing cost-efficiency for regular operations - **Reduced deposit requirements** - depositing assets in Asset Hub is more accessible, with deposit requirements that are around one one-hundredth of those on the relay chain - **Payment of transaction fees with non-native assets** - users can pay transaction fees in assets other than the native token (DOT or KSM), offering more flexibility for developers and users Assets created on the Asset Hub are stored as part of a map, where each asset has a unique ID that links to information about the asset, including details like: - The management team - The total supply - The number of accounts holding the asset - Sufficiency for account existence - whether the asset alone is enough to maintain an account without a native token balance - The metadata of the asset, including its name, symbol, and the number of decimals for representation Some assets can be regarded as sufficient to maintain an account's existence, meaning that users can create accounts on the network without needing a native token balance (i.e., no existential deposit required). Developers can also set minimum balances for their assets. If an account's balance drops below the minimum, the balance is considered dust and may be cleared. ## Assets Pallet The Polkadot SDK's Assets pallet is a powerful module designed for creating and managing fungible asset classes with a fixed supply. It offers a secure and flexible way to issue, transfer, freeze, and destroy assets. The pallet supports various operations and includes permissioned and non-permissioned functions to cater to simple and advanced use cases. Visit the [Assets Pallet Rust docs](https://paritytech.github.io/polkadot-sdk/master/pallet_assets/index.html){target=\_blank} for more in-depth information.
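As a concrete, hedged illustration of driving these operations from a client, the sketch below uses the [Polkadot.js API](https://polkadot.js.org/docs/){target=\_blank} against an Asset Hub node. The endpoint, asset ID, accounts, and amounts are placeholder values, and the exact extrinsic set can vary by runtime version (for example, Asset Hub exposes separate `create` and `mint` calls).

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function main() {
  // Placeholder endpoint; any Asset Hub RPC node works.
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });
  const alice = new Keyring({ type: 'sr25519' }).addFromUri('//Alice'); // dev account
  const ASSET_ID = 1234; // hypothetical asset ID
  const BOB = '5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty';

  // Register a new asset class (reserves a deposit); the creator receives
  // the Admin, Issuer, Freezer, and Owner roles by default.
  await api.tx.assets.create(ASSET_ID, alice.address, 1_000).signAndSend(alice, { nonce: -1 });

  // Mint units to a beneficiary (requires the Issuer role).
  await api.tx.assets.mint(ASSET_ID, alice.address, 1_000_000).signAndSend(alice, { nonce: -1 });

  // Transfer units between accounts.
  await api.tx.assets.transfer(ASSET_ID, BOB, 10_000).signAndSend(alice, { nonce: -1 });

  // Query asset details (including total supply) and a holder's balance.
  const details = await api.query.assets.asset(ASSET_ID);
  const balance = await api.query.assets.account(ASSET_ID, BOB);
  console.log(details.toHuman(), balance.toHuman());

  await api.disconnect();
}

main().catch(console.error);
```

The `{ nonce: -1 }` option simply queues the extrinsics with consecutive nonces; in production code you would typically await finalization of each extrinsic and check for dispatch errors before proceeding.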
### Key Features Key features of the Assets pallet include: - **Asset issuance** - allows the creation of a new asset, where the total supply is assigned to the creator's account - **Asset transfer** - enables transferring assets between accounts while maintaining a balance in both accounts - **Asset freezing** - prevents transfers of a specific asset from one account, locking it from further transactions - **Asset destruction** - allows accounts to burn or destroy their holdings, removing those assets from circulation - **Non-custodial transfers** - a mechanism that lets one account approve another to transfer assets on its behalf ### Main Functions The Assets pallet provides a broad interface for managing fungible assets. Some of the main dispatchable functions include: - **`create()`** - create a new asset class by placing a deposit, applicable when asset creation is permissionless - **`issue()`** - mint a fixed supply of a new asset and assign it to the creator's account - **`transfer()`** - transfer a specified amount of an asset between two accounts - **`approve_transfer()`** - approve a non-custodial transfer, allowing a third party to move assets between accounts - **`destroy()`** - destroy an entire asset class, removing it permanently from the chain - **`freeze()` and `thaw()`** - administrators or privileged users can lock or unlock assets from being transferred For a full list of dispatchable and privileged functions, see the [dispatchables Rust docs](https://docs.rs/pallet-assets/latest/pallet_assets/pallet/enum.Call.html){target=\_blank}. ### Querying Functions The Assets pallet exposes several key querying functions that developers can interact with programmatically. These functions allow you to query asset information and perform operations essential for managing assets across accounts. The two main querying functions are: - **`balance(asset_id, account)`** - retrieves the balance of a given asset for a specified account. Useful for checking the holdings of an asset class across different accounts - **`total_supply(asset_id)`** - returns the total supply of the asset identified by `asset_id`. Allows users to verify how much of the asset exists on-chain In addition to these basic functions, other utility functions are available for querying asset metadata and performing asset transfers. You can view the complete list of querying functions in the [Struct Pallet Rust docs](https://docs.rs/pallet-assets/latest/pallet_assets/pallet/struct.Pallet.html){target=\_blank}. ### Permission Models and Roles The Assets pallet incorporates a robust permission model, enabling control over who can perform specific operations like minting, transferring, or freezing assets. The key roles within the permission model are: - **Admin** - can freeze (preventing transfers) and forcibly transfer assets between accounts. Admins also have the power to reduce the balance of an asset class across arbitrary accounts. They manage the more sensitive and administrative aspects of the asset class - **Issuer** - responsible for minting new tokens. When new assets are created, the Issuer is the account that controls their distribution to other accounts - **Freezer** - can lock the transfer of assets from an account, preventing the account holder from moving their balance. This function is useful for freezing accounts involved in disputes or fraud - **Owner** - has overarching control, including destroying an entire asset class.
Owners can also set or update the Issuer, Freezer, and Admin roles These permissions provide fine-grained control over assets, enabling developers and asset managers to ensure secure, controlled operations. Each of these roles is crucial for managing asset lifecycles and ensuring that assets are used appropriately across the network. ### Asset Freezing The Assets pallet allows you to freeze assets. This feature prevents transfers or spending from a specific account, effectively locking the balance of an asset class until it is explicitly unfrozen. Asset freezing is useful when assets must be restricted due to security concerns or disputes. Freezing assets is controlled by the Freezer role, as mentioned earlier. Only the account with the Freezer privilege can perform these operations. Here are the key freezing functions: - **`freeze(asset_id, account)`** - locks the specified asset of the account. While the asset is frozen, no transfers can be made from the frozen account - **`thaw(asset_id, account)`** - corresponding function for unfreezing, allowing the asset to be transferred again This approach enables secure and flexible asset management, providing administrators the tools to control asset movement in special circumstances. ### Non-Custodial Transfers (Approval API) The Assets pallet also supports non-custodial transfers through the Approval API. This feature allows one account to approve another account to transfer a specific amount of its assets to a third-party recipient without granting full control over the account's balance. Non-custodial transfers enable secure transactions between multiple parties that do not need to fully trust one another. Here's a brief overview of the key functions for non-custodial asset transfers: - **`approve_transfer(asset_id, delegate, amount)`** - approves a delegate to transfer up to a certain amount of the asset on behalf of the original account holder - **`cancel_approval(asset_id, delegate)`** - cancels a previous approval for the delegate. Once canceled, the delegate no longer has permission to transfer the approved amount - **`transfer_approved(asset_id, owner, recipient, amount)`** - executes the approved asset transfer from the owner’s account to the recipient. The delegate account can call this function once approval is granted These delegated operations make it easier to manage multi-step transactions and dApps that require complex asset flows between participants. ## Foreign Assets Foreign assets in Asset Hub refer to assets originating from external blockchains or parachains that are registered in the Asset Hub. These assets are typically native tokens from other parachains within the Polkadot ecosystem or bridged tokens from external blockchains such as Ethereum. Once a foreign asset is registered in the Asset Hub by its originating blockchain's root origin, users are able to send these tokens to the Asset Hub and interact with them as they would any other asset within the Polkadot ecosystem. ### Handling Foreign Assets The Foreign Assets pallet, an instance of the Assets pallet, manages these assets. Since foreign assets are integrated into the same interface as native assets, developers can use the same functionalities, such as transferring and querying balances. However, there are important distinctions when dealing with foreign assets. - **Asset identifier** - unlike native assets, foreign assets are identified using an XCM Multilocation rather than a simple numeric `AssetId`.
This multilocation identifier represents the cross-chain location of the asset and provides a standardized way to reference it across different parachains and relay chains - **Transfers** - once registered in the Asset Hub, foreign assets can be transferred between accounts, just like native assets. Users can also send these assets back to their originating blockchain if supported by the relevant cross-chain messaging mechanisms ## Integration Asset Hub supports a variety of integration tools that make it easy for developers to manage assets and interact with the blockchain in their applications. The tools and libraries provided by Parity Technologies enable streamlined operations, such as querying asset information, building transactions, and monitoring cross-chain asset transfers. Developers can integrate Asset Hub into their projects using these core tools: ### API Sidecar [API Sidecar](https://github.com/paritytech/substrate-api-sidecar){target=\_blank} is a RESTful service that can be deployed alongside Polkadot and Kusama nodes. It provides endpoints to retrieve real-time blockchain data, including asset information. When used with Asset Hub, Sidecar allows querying: - **Asset look-ups** - retrieve specific assets using `AssetId` - **Asset balances** - view the balance of a particular asset on Asset Hub Public instances of API Sidecar connected to Asset Hub are available, such as: - [Polkadot Asset Hub Sidecar](https://polkadot-asset-hub-public-sidecar.parity-chains.parity.io/){target=\_blank} - [Kusama Asset Hub Sidecar](https://kusama-asset-hub-public-sidecar.parity-chains.parity.io/){target=\_blank} These public instances are primarily for ad-hoc testing and quick checks. ### TxWrapper [`TxWrapper`](https://github.com/paritytech/txwrapper-core){target=\_blank} is a library that simplifies constructing and signing transactions for Polkadot SDK-based chains, including Polkadot and Kusama. This tool includes support for working with Asset Hub, enabling developers to: - Construct offline transactions - Leverage asset-specific functions such as minting, burning, and transferring assets `TxWrapper` provides the flexibility needed to integrate asset operations into custom applications while maintaining the security and efficiency of Polkadot's transaction model. ### Asset Transfer API [Asset Transfer API](https://github.com/paritytech/asset-transfer-api){target=\_blank} is a library focused on simplifying the construction of asset transfers for Polkadot SDK-based chains that involve system parachains like Asset Hub. It exposes a reduced set of methods that make it straightforward for users to send transfers to other parachains or locally within the same chain. Refer to the [cross-chain support table](https://github.com/paritytech/asset-transfer-api/tree/main#current-cross-chain-support){target=\_blank} for the current status of cross-chain support development. Key features include: - Support for cross-chain transfers between parachains - Streamlined transaction construction with support for the necessary parachain metadata The API supports various asset operations, such as paying transaction fees with non-native tokens and managing asset liquidity. ### Parachain Node To fully leverage the Asset Hub's functionality, developers will need to run a system parachain node. Setting up an Asset Hub node allows users to interact with the parachain in real time, syncing data and participating in the broader Polkadot ecosystem.
Guidelines for setting up an [Asset Hub node](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/cumulus#asset-hub-){target=\_blank} are available in the Parity documentation. Using these integration tools, developers can manage assets seamlessly and integrate Asset Hub functionality into their applications, leveraging Polkadot's powerful infrastructure. ## XCM Transfer Monitoring Since Asset Hub facilitates cross-chain asset transfers across the Polkadot ecosystem, XCM transfer monitoring becomes an essential practice for developers and infrastructure providers. This section outlines how to monitor the cross-chain movement of assets between parachains, the relay chain, and other systems. ### Monitor XCM Deposits As assets move between chains, tracking the cross-chain transfers in real time is crucial. Whether assets are transferred via a teleport from system parachains or through a reserve-backed transfer from any other parachain, each transfer emits a relevant event (such as the `balances.minted` event). To ensure accurate monitoring of these events: - **Track XCM deposits** - query every new block created in the relay chain or Asset Hub, loop through the events array, and filter for any `balances.minted` events which confirm the asset was successfully transferred to the account - **Track event origins** - each `balances.minted` event points to a specific address. By monitoring this, service providers can verify that assets have arrived in the correct account ### Track XCM Information Back to the Source While the `balances.minted` event confirms the arrival of assets, there may be instances where you need to trace the origin of the cross-chain message that triggered the event. In such cases, you can: 1. Query the relevant chain at the block where the `balances.minted` event was emitted 2. Look for a `messageQueue(Processed)` event within that block's initialization. This event contains a parameter (`Id`) that identifies the cross-chain message received by the relay chain or Asset Hub. You can use this `Id` to trace the message back to its origin chain, offering full visibility of the asset transfer's journey ### Practical Monitoring Examples The preceding sections outline the process of monitoring XCM deposits to specific accounts and then tracing back the origin of these deposits. The process of tracking an XCM transfer and the specific events to monitor may vary based on the direction of the XCM message. Here are some examples to showcase the slight differences: - **Transfer from parachain to relay chain** - track `parachainsystem(UpwardMessageSent)` on the parachain and `messagequeue(Processed)` on the relay chain - **Transfer from relay chain to parachain** - track `xcmPallet(sent)` on the relay chain and `dmpqueue(ExecutedDownward)` on the parachain - **Transfer between parachains** - track `xcmpqueue(XcmpMessageSent)` on the system parachain and `xcmpqueue(Success)` on the destination parachain ### Monitor for Failed XCM Transfers Sometimes, XCM transfers may fail due to liquidity or other errors. Failed transfers emit specific error events, which are key to resolving issues in asset transfers. Monitoring for these failure events helps catch issues before they affect asset balances. 
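The sketch below is a minimal, assumption-laden example of such a monitoring loop using the Polkadot.js API: it subscribes to system events and filters for `balances.Minted` deposits and `xcmpQueue.Fail` failures. The endpoint and watched address are placeholders, and exact event names depend on the runtime version.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';

const WATCHED = '5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty'; // account to monitor

async function main() {
  // Placeholder endpoint; point this at the chain you are monitoring.
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-asset-hub-rpc.polkadot.io'),
  });

  // Subscribe to all events in each new block and filter the relevant ones.
  await api.query.system.events((records) => {
    for (const { event } of records) {
      // Deposit tracking: balances.Minted confirms funds arrived in an account.
      if (api.events.balances?.Minted?.is(event)) {
        const [who, amount] = event.data;
        if (who.toString() === WATCHED) {
          console.log(`XCM deposit: ${amount.toString()} minted to ${who.toString()}`);
        }
      }
      // Failure tracking: xcmpQueue.Fail signals a failed inbound XCM message
      // (only present on runtimes that include the XCMP queue pallet).
      if (api.events.xcmpQueue?.Fail?.is(event)) {
        console.log('Failed XCM message:', event.data.toHuman());
      }
    }
  });
}

main().catch(console.error);
```

The direction-specific failure signatures to filter for are listed below.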
- **Relay chain to system parachain** - look for the `dmpqueue(ExecutedDownward)` event on the parachain with an `Incomplete` outcome and an error type such as `UntrustedReserveLocation` - **Parachain to parachain** - monitor for `xcmpqueue(Fail)` on the destination parachain with error types like `TooExpensive` For detailed error management in XCM, see Gavin Wood's blog post on [XCM Execution and Error Management](https://polkadot.com/blog/xcm-part-three-execution-and-error-management/){target=\_blank}. ## Where to Go Next
- Tutorial __Register a Local Asset__ --- Comprehensive guide to registering a local asset on the Asset Hub system parachain, including step-by-step instructions. [:octicons-arrow-right-24: Reference](/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/) - Tutorial __Register a Foreign Asset__ --- An in-depth guide to registering a foreign asset on the Asset Hub parachain, providing clear, step-by-step instructions. [:octicons-arrow-right-24: Reference](/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/) - Tutorial __Convert Assets__ --- A guide detailing the step-by-step process of converting assets on Asset Hub, helping users efficiently navigate asset management on the platform. [:octicons-arrow-right-24: Reference](/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/bridge-hub/ --- BEGIN CONTENT --- --- title: Bridge Hub description: Learn about the Bridge Hub system parachain, a parachain that facilitates the interactions from Polkadot to the rest of Web3. categories: Polkadot Protocol --- # Bridge Hub ## Introduction The Bridge Hub system parachain plays a crucial role in facilitating trustless interactions between Polkadot, Kusama, Ethereum, and other blockchain ecosystems. By implementing on-chain light clients and supporting protocols like BEEFY and GRANDPA, Bridge Hub ensures seamless message transmission and state verification across chains. It also provides essential [pallets](/polkadot-protocol/glossary/#pallet){target=\_blank} for sending and receiving messages, making it a cornerstone of Polkadot’s interoperability framework. With built-in support for XCM (Cross-Consensus Messaging), Bridge Hub enables secure, efficient communication between diverse blockchain networks. This guide covers the architecture, components, and deployment of the Bridge Hub system. You'll explore its trustless bridging mechanisms, key pallets for various blockchains, and specific implementations like Snowbridge and the Polkadot <> Kusama bridge. By the end, you'll understand how Bridge Hub enhances connectivity within the Polkadot ecosystem and beyond. ## Trustless Bridging Bridge Hub provides a mode of trustless bridging through its implementation of on-chain light clients and trustless relayers. Trustless bridges are essentially two one-way bridges, where each chain has a method of verifying the state of the other in a trustless manner through consensus proofs. In this context, "trustless" refers to the lack of need to trust a human when interacting with various system components. Trustless systems are based instead on trusting mathematics, cryptography, and code. The target chain and source chain both provide ways of verifying one another's state and actions (such as a transfer) based on the consensus and finality of both chains rather than an external mechanism controlled by a third party. [BEEFY (Bridge Efficiency Enabling Finality Yielder)](/polkadot-protocol/architecture/polkadot-chain/pos-consensus/#bridging-beefy){target=\_blank} is instrumental in this solution. It provides a more efficient way to verify the consensus on the relay chain. It allows the participants in a network to verify finality proofs, meaning a remote chain like Ethereum can verify the state of Polkadot at a given block height. For example, the Ethereum and Polkadot bridging solution that [Snowbridge](https://docs.snowbridge.network/){target=\_blank} implements involves two light clients: one that verifies the state of Polkadot and one that verifies the state of Ethereum. The light client that verifies Ethereum is implemented as a pallet in the Bridge Hub runtime, whereas the light client that verifies Polkadot is implemented as a smart contract deployed on Ethereum.
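As a purely conceptual sketch of this "two one-way bridges" structure, the TypeScript interface below names the two checks each side performs. All type and method names here are invented for illustration; the production components are the pallets and contracts described in the next section.

```typescript
// Conceptual sketch only; all names are invented for illustration.
// A trustless bridge is two one-way light clients, each tracking the
// other chain's consensus.
interface FinalityProof {
  // Consensus-specific justification, e.g. BEEFY commitments for Polkadot
  // or sync-committee signatures for Ethereum.
  bytes: Uint8Array;
}

interface OneWayLightClient {
  // Accept and verify a finality proof produced by the remote chain's
  // consensus, advancing the locally tracked finalized head.
  importFinalityProof(proof: FinalityProof): boolean;

  // Verify that a claimed event (such as a transfer or message) is part
  // of a finalized remote block, using a state or Merkle proof checked
  // against the tracked head.
  verifyInclusion(stateRoot: Uint8Array, proof: Uint8Array[]): boolean;
}
```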
## Bridging Components In any given Bridge Hub implementation (Kusama, Polkadot, or other relay chains), there are a few primary pallets that are utilized: - [**Pallet Bridge GRANDPA**](https://paritytech.github.io/polkadot-sdk/master/pallet_bridge_grandpa/index.html){target=\_blank} - an on-chain GRANDPA light client for Substrate-based chains - [**Pallet Bridge Parachains**](https://paritytech.github.io/polkadot-sdk/master/pallet_bridge_parachains/index.html){target=\_blank} - a finality module for parachains - [**Pallet Bridge Messages**](https://paritytech.github.io/polkadot-sdk/master/pallet_bridge_messages/index.html){target=\_blank} - a pallet which allows sending, receiving, and tracking of inbound and outbound messages - [**Pallet XCM Bridge**](https://paritytech.github.io/polkadot-sdk/master/pallet_xcm_bridge_hub/index.html){target=\_blank} - a pallet which, together with the Bridge Messages pallet, adds XCM support to bridge pallets ### Ethereum-Specific Support Bridge Hub also has a set of components and pallets that support a bridge between Polkadot and Ethereum through [Snowbridge](https://github.com/Snowfork/snowbridge){target=\_blank}. To view the complete list of which pallets are included in Bridge Hub, visit the Subscan [Runtime Modules](https://bridgehub-polkadot.subscan.io/runtime){target=\_blank} page. Alternatively, the source code for those pallets can be found in the Polkadot SDK [Snowbridge Pallets](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/bridges/snowbridge/pallets){target=\_blank} repository. ## Deployed Bridges - [**Snowbridge**](https://wiki.polkadot.network/learn/learn-snowbridge/){target=\_blank} - a general-purpose, trustless bridge between Polkadot and Ethereum - [**Hyperbridge**](https://wiki.polkadot.network/learn/learn-hyperbridge/){target=\_blank} - a cross-chain solution built as an interoperability coprocessor, providing state-proof-based interoperability across all blockchains - [**Polkadot <> Kusama Bridge**](https://wiki.polkadot.network/learn/learn-dot-ksm-bridge/){target=\_blank} - a bridge that utilizes relayers to bridge the Polkadot and Kusama relay chains trustlessly ## Where to Go Next - Go over the Bridge Hub README in the Polkadot SDK [Bridge-hub Parachains](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/cumulus/parachains/runtimes/bridge-hubs/README.md){target=\_blank} repository - Take a deeper dive into bridging architecture in the Polkadot SDK [High-Level Bridge](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/bridges/docs/high-level-overview.md){target=\_blank} documentation - Read more about BEEFY and Bridging in the Polkadot Wiki: [Bridging: BEEFY](/polkadot-protocol/architecture/polkadot-chain/pos-consensus/#bridging-beefy){target=\_blank} --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/collectives/ --- BEGIN CONTENT --- --- title: Collectives Chain description: Learn how the Collectives chain provides infrastructure for governance organizations, enabling decentralized network stewardship and decision-making. categories: Polkadot Protocol --- ## Introduction Established through [Referendum 81](https://polkadot.polkassembly.io/referendum/81){target=\_blank}, the Collectives chain operates as a dedicated parachain exclusive to the Polkadot network with no counterpart on Kusama.
This specialized infrastructure provides a foundation for various on-chain governance groups essential to Polkadot's ecosystem. The architecture enables entire networks to function as unified entities, allowing them to present cohesive positions and participate in cross-network governance through [Bridge Hub](/polkadot-protocol/architecture/system-chains/bridge-hub){target=\_blank}. This capability represents a fundamental advancement in Web3 principles, eliminating dependencies on traditional third-party intermediaries such as legal systems or jurisdictional authorities. ## Key Collectives The Collectives chain hosts several important governance bodies: - **[Polkadot Technical Fellowship](https://wiki.polkadot.network/learn/learn-polkadot-technical-fellowship/){target=\_blank}** - a self-governing assembly of protocol experts and developers who oversee technical aspects of the Polkadot and Kusama networks. The Fellowship operates both on-chain through the collectives system and off-chain via GitHub repositories, public discussion forums, and monthly development calls that are publicly accessible. - **[Polkadot Alliance](https://wiki.polkadot.network/general/glossary/#polkadot-alliance){target=\_blank}** - a consortium founded by seven leading parachain projects (Acala, Astar, Interlay, Kilt, Moonbeam, Phala, and Subscan) to establish development standards and ethical guidelines within the ecosystem. This ranked collective, comprised of "Fellows" and "Allies," focuses on promoting best practices and identifying potential bad actors. Membership is primarily designed for organizations, projects, and other networks rather than individuals. These collectives serve as pillars of Polkadot's decentralized governance model, enabling community-driven decision-making and establishing technical standards that shape the network's evolution. Through structured on-chain representation, they provide transparent mechanisms for ecosystem development while maintaining the core Web3 principles of trustlessness and decentralization. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/coretime/ --- BEGIN CONTENT --- --- title: Coretime Chain description: Learn about the role of the Coretime system parachain, which facilitates the sale, purchase, assignment, and mechanisms of bulk coretime. categories: Polkadot Protocol --- ## Introduction The Coretime system chain facilitates the allocation, procurement, sale, and scheduling of bulk [coretime](/polkadot-protocol/glossary/#coretime){target=\_blank}, enabling tasks (such as [parachains](/polkadot-protocol/glossary/#parachain){target=\_blank}) to utilize the computation and security provided by Polkadot. The [Broker pallet](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/index.html){target=\_blank}, along with [Cross Consensus Messaging (XCM)](/develop/interoperability/intro-to-xcm/){target=\_blank}, enables this functionality to be delegated to the system chain rather than the relay chain. Using [XCMP's Upward Message Passing (UMP)](https://wiki.polkadot.network/learn/learn-xcm-transport/#ump-upward-message-passing){target=\_blank} to the relay chain allows for core assignments to take place for a task registered on the relay chain. The Fellowship RFC [RFC-1: Agile Coretime](https://github.com/polkadot-fellows/RFCs/blob/main/text/0001-agile-coretime.md){target=\_blank} contains the specification for the Coretime system chain and coretime as a concept. 
Besides core management, its responsibilities include tracking: - The number of cores that should be made available - Which tasks should be running on which cores and in what ratios - Accounting information for the on-demand pool From the relay chain, it expects the following via [Downward Message Passing (DMP)](https://wiki.polkadot.network/learn/learn-xcm-transport/#dmp-downward-message-passing){target=\_blank}: - The number of cores available to be scheduled - Account information on on-demand scheduling The details for this interface can be found in [RFC-5: Coretime Interface](https://github.com/polkadot-fellows/RFCs/blob/main/text/0005-coretime-interface.md){target=\_blank}. ## Bulk Coretime Assignment The Coretime chain allocates coretime before its usage. It also manages the ownership of a core. As cores are made up of regions (by default, one core is a single region), a region is recognized as a non-fungible asset. The Coretime chain exposes regions over XCM as NFTs. Users can transfer individual regions, partition or interlace them, or allocate them to a task. Regions describe how a task may use a core. A core can be considered a logical representation of an active validator set on the relay chain, where these validators commit to verifying the state changes for a particular task running on that region. With partitioning, having more than one region per core is possible, allowing for different computational schemes. Therefore, running more than one task on a single core is possible. Regions can be managed in the following manner on the Coretime chain: - **Assigning region** - regions can be assigned to a task on the relay chain, such as a parachain/rollup using the [`assign`](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/dispatchables/fn.assign.html){target=\_blank} dispatchable - **Transferring regions** - regions may be transferred on the Coretime chain, upon which the [`transfer`](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/dispatchables/fn.transfer.html){target=\_blank} [dispatchable](/polkadot-protocol/glossary/#dispatchable){target=\_blank} in the Broker pallet would assign a new owner to that specific region - **Partitioning regions** - using the [`partition`](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/dispatchables/fn.partition.html){target=\_blank} dispatchable, regions may be partitioned into two non-overlapping subregions within the same core. A partition involves specifying a *pivot* point at which the two new subregions are defined and made available for use - **Interlacing regions** - using the [`interlace`](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/pallet/dispatchables/fn.interlace.html){target=\_blank} dispatchable, interlacing regions allows a core to have alternative-compute strategies. Whereas partitioned regions are mutually exclusive, interlaced regions overlap because multiple tasks may utilize a single core in an alternating manner When bulk coretime is obtained, block production is not immediately available. It becomes available to produce blocks for a task in the next Coretime cycle. To view the status of the current or next Coretime cycle, see the [Subscan Coretime Dashboard](https://coretime-polkadot.subscan.io/coretime_dashboard){target=\_blank}. For more information regarding these mechanisms, see the coretime page on the Polkadot Wiki: [Introduction to Agile Coretime](https://wiki.polkadot.network/learn/learn-agile-coretime/){target=\_blank}.
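As a rough sketch of how these region operations might be invoked from a client, the example below constructs the Broker pallet calls with the Polkadot.js API. The endpoint, region ID fields, task ID, pivot, and mask are illustrative placeholders, and the calls are alternatives rather than a sequence (a region consumed by one operation cannot be reused); consult the dispatchable docs linked above for the exact types.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function main() {
  // Placeholder endpoint for the Coretime chain.
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-coretime-rpc.polkadot.io'),
  });
  const owner = new Keyring({ type: 'sr25519' }).addFromUri('//Alice'); // dev account

  // A region identifier as returned from a bulk purchase; the begin
  // timeslice, core index, and 80-bit core mask are illustrative values.
  const regionId = { begin: 1000, core: 0, mask: '0xFFFFFFFFFFFFFFFFFFFF' };

  // Alternative operations on the region (submit only one per region):
  const assignTx = api.tx.broker.assign(regionId, 2000, 'Final'); // dedicate to task/para 2000
  const transferTx = api.tx.broker.transfer(regionId, 'INSERT_NEW_OWNER_ADDRESS');
  const partitionTx = api.tx.broker.partition(regionId, 500); // split at a pivot timeslice
  const interlaceTx = api.tx.broker.interlace(regionId, '0xAAAAAAAAAAAAAAAAAAAA'); // alternate core usage

  await assignTx.signAndSend(owner);
  await api.disconnect();
}

main().catch(console.error);
```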
## On Demand Coretime As of this writing, on-demand coretime is deployed on the relay chain and will eventually be deployed to the Coretime chain. On-demand coretime allows parachains (previously known as parathreads) to utilize available cores per block. The Coretime chain also handles coretime sales, details of which can be found on the Polkadot Wiki: [Agile Coretime: Coretime Sales](https://wiki.polkadot.network/learn/learn-agile-coretime/#coretime-sales){target=\_blank}. ## Where to Go Next - Learn about [Agile Coretime](https://wiki.polkadot.network/learn/learn-agile-coretime/#introduction-to-agile-coretime){target=\_blank} on the Polkadot Wiki --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/ --- BEGIN CONTENT --- --- title: System Chains description: Discover the unique role and functionality each of Polkadot’s system chains, including the Asset Hub, Bridge Hub, and Coretime chain, provides to the ecosystem. template: index-page.html --- # System Chains Explore the critical roles Polkadot’s system chains play in enhancing the network’s efficiency and scalability. From managing on-chain assets with the Asset Hub to enabling seamless Web3 integration through the Bridge Hub and facilitating coretime operations with the Coretime chain, each system chain is designed to offload specialized tasks from the relay chain, optimizing the entire ecosystem. These system chains are integral to Polkadot's architecture, ensuring that the relay chain remains focused on consensus and security while system chains handle vital functions like asset management, cross-chain communication, and resource allocation. By distributing responsibilities across specialized chains, Polkadot maintains high performance, scalability, and flexibility, enabling developers to build more efficient and interconnected blockchain solutions. ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/overview/ --- BEGIN CONTENT --- --- title: Overview of Polkadot's System Chains description: Discover how system parachains enhance Polkadot's scalability and performance by offloading tasks like governance, asset management, and bridging from the relay chain. categories: Basics, Polkadot Protocol --- ## Introduction Polkadot's relay chain is designed to secure parachains and facilitate seamless inter-chain communication. However, resource-intensive tasks like governance, asset management, and bridging are more efficiently handled by system parachains. These specialized chains offload functionality from the relay chain, leveraging Polkadot's parallel execution model to improve performance and scalability. By distributing key functionalities across system parachains, Polkadot can maximize its relay chain's blockspace for its core purpose of securing and validating parachains. This guide will explore how system parachains operate within Polkadot and Kusama, detailing their critical roles in network governance, asset management, and bridging. You'll learn about the currently deployed system parachains, their unique functions, and how they enhance Polkadot's decentralized ecosystem. ## System Chains System parachains contain core Polkadot protocol features, but in parachains rather than the relay chain.
Execution cores for system chains are allocated via network [governance](/polkadot-protocol/onchain-governance/overview/){target=\_blank} rather than purchasing coretime on a marketplace. System parachains defer to on-chain governance to manage their upgrades and other sensitive actions as they do not have native tokens or governance systems separate from DOT or KSM. It is not uncommon to see a system parachain implemented specifically to manage network governance. !!!note You may see system parachains called common good parachains in articles and discussions. This nomenclature caused confusion as the network evolved, so system parachains is preferred. For more details on this evolution, review this [parachains forum discussion](https://forum.polkadot.network/t/polkadot-protocol-and-common-good-parachains/866){target=\_blank}. ## Existing System Chains

```mermaid
---
title: System Parachains at a Glance
---
flowchart TB
    subgraph POLKADOT["Polkadot"]
        direction LR
        PAH["Polkadot Asset Hub"]
        PCOL["Polkadot Collectives"]
        PBH["Polkadot Bridge Hub"]
        PPC["Polkadot People Chain"]
        PCC["Polkadot Coretime Chain"]
    end
    subgraph KUSAMA["Kusama"]
        direction LR
        KAH["Kusama Asset Hub"]
        KBH["Kusama Bridge Hub"]
        KPC["Kusama People Chain"]
        KCC["Kusama Coretime Chain"]
        E["Encointer"]
    end
```

All system parachains are on both Polkadot and Kusama with the following exceptions: - [**Collectives**](#collectives) - only on Polkadot - [**Encointer**](#encointer) - only on Kusama ### Asset Hub The [Asset Hub](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/cumulus#asset-hub-){target=\_blank} is an asset portal for the entire network. It helps asset creators, such as reserve-backed stablecoin issuers, track the total issuance of an asset in the network, including amounts transferred to other parachains. It also serves as the hub where asset creators can perform on-chain operations, such as minting and burning, to manage their assets effectively. This asset management logic is encoded directly in the runtime of the chain rather than in smart contracts. The efficiency of executing logic in a parachain allows for fees and deposits that are about 1/10th of what is required on the relay chain. These low fees mean that the Asset Hub is well suited for handling the frequent transactions required when managing balances, transfers, and on-chain assets. The Asset Hub also supports non-fungible assets (NFTs) via the [Uniques pallet](https://polkadot.js.org/docs/substrate/extrinsics#uniques){target=\_blank} and [NFTs pallet](https://polkadot.js.org/docs/substrate/extrinsics#nfts){target=\_blank}. For more information about NFTs, see the Polkadot Wiki section on [NFT Pallets](https://wiki.polkadot.network/learn/learn-nft-pallets/){target=\_blank}. ### Collectives The Polkadot Collectives parachain was added in [Referendum 81](https://polkadot.polkassembly.io/referendum/81){target=\_blank} and exists on Polkadot but not on Kusama. The Collectives chain hosts on-chain collectives that serve the Polkadot network, including the following: - [**Polkadot Alliance**](https://polkadot.polkassembly.io/referendum/94){target=\_blank} - provides a set of ethics and standards for the community to follow.
Includes an on-chain means to call out bad actors - [**Polkadot Technical Fellowship**](https://wiki.polkadot.network/learn/learn-polkadot-technical-fellowship/){target=\_blank} - a rules-based social organization to support and incentivize highly-skilled developers to contribute to the technical stability, security, and progress of the network These on-chain collectives will play essential roles in the future of network stewardship and decentralized governance. Networks can use a bridge hub to help them act as collectives and express their legislative voices as single opinions within other networks. ### Bridge Hub Before parachains, the only way to design a bridge was to put the logic onto the relay chain. Since both networks now support parachains and the isolation they provide, each network can have a parachain dedicated to bridges. The Bridge Hub system parachain operates on the relay chain and is responsible for facilitating bridges to the wider Web3 space. It contains the required bridge [pallets](/polkadot-protocol/glossary/#pallet){target=\_blank} in its runtime, which enable trustless bridging with other blockchain networks like Polkadot, Kusama, and Ethereum. The Bridge Hub uses the native token of the relay chain. See the [Bridge Hub](/polkadot-protocol/architecture/system-chains/bridge-hub/){target=\_blank} documentation for additional information. ### People Chain The People Chain provides a naming system that allows users to manage and verify their account [identity](https://wiki.polkadot.network/learn/learn-identity/){target=\_blank}. ### Coretime Chain The Coretime system chain lets users buy coretime to access Polkadot's computation. [Coretime marketplaces](https://wiki.polkadot.network/learn/learn-guides-coretime-marketplaces/){target=\_blank} run on top of the Coretime chain. Visit [Introduction to Agile Coretime](https://wiki.polkadot.network/learn/learn-agile-coretime/#introduction-to-agile-coretime){target=\_blank} in the Polkadot Wiki for more information. ### Encointer Kusama does not use the Collectives system chain. Instead, Kusama relies on the Encointer system chain, which provides Sybil resistance as a service to the entire Kusama ecosystem. [Encointer](https://encointer.org/encointer-for-web3/){target=\_blank} is a blockchain platform for self-sovereign ID and a global [universal basic income (UBI)](https://book.encointer.org/economics-ubi.html){target=\_blank}. The Encointer protocol uses a novel Proof of Personhood (PoP) system to create unique identities and resist Sybil attacks. PoP is based on the notion that a person can only be in one place at any given time. Encointer offers a framework that allows for any group of real people to create, distribute, and use their own digital community tokens. Participants are requested to attend physical key-signing ceremonies with small groups of random people at randomized locations. These local meetings are part of one global signing ceremony occurring at the same time. Participants use the Encointer wallet app to participate in these ceremonies and manage local community currencies.
Referendums marking key Encointer adoption milestones include: - [**Referendum 158 - Register Encointer As a Common Good Chain**](https://kusama.polkassembly.io/referendum/158){target=\_blank} - registered Encointer as the second system parachain on Kusama's network - [**Referendum 187 - Encointer Runtime Upgrade to Full Functionality**](https://kusama.polkassembly.io/referendum/187){target=\_blank} - introduced a runtime upgrade bringing governance and full functionality for communities to use the protocol To learn more about Encointer, see the official [Encointer book](https://book.encointer.org/introduction.html){target=\_blank} or watch an [Encointer ceremony](https://www.youtube.com/watch?v=tcgpCCYBqko){target=\_blank} in action. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/architecture/system-chains/people/ --- BEGIN CONTENT --- --- title: People Chain description: Learn how People chain secures decentralized identity management, empowering users to control and verify digital identities without central authorities. categories: Polkadot Protocol --- # People Chain ## Introduction People chain is a specialized parachain within the Polkadot ecosystem dedicated to secure, decentralized identity management. This solution empowers users to create, control, and verify their digital identities without reliance on centralized authorities. By prioritizing user sovereignty and data privacy, People chain establishes a foundation for trusted interactions throughout the Polkadot ecosystem while returning control of personal information to individuals. ## Identity Management System People chain provides a comprehensive identity framework allowing users to: - Establish verifiable on-chain identities - Control disclosure of personal information - Receive verification from trusted registrars - Link multiple accounts under a unified identity Users must reserve funds in a bond to store their information on chain. These funds are locked, not spent, and returned when the identity is cleared. ### Sub-Identities The platform supports hierarchical identity structures through sub-accounts: - Primary accounts can establish up to 100 linked sub-accounts - Each sub-account maintains its own distinct identity - All sub-accounts require a separate bond deposit ## Verification Process ### Judgment Requests After establishing an on-chain identity, users can request verification from [registrars](#registrars): 1. Users specify the maximum fee they're willing to pay for judgment 2. Only registrars whose fees fall below this threshold can provide verification 3. Registrars assess the provided information and issue a judgment ### Judgment Classifications Registrars can assign the following confidence levels to identity information: - **Unknown** - default status; no judgment rendered yet - **Reasonable** - data appears valid but without formal verification (standard for most verified identities) - **Known good** - information certified correct through formal verification (requires documentation; limited to registrars) - **Out of date** - previously verified information that requires updating - **Low quality** - imprecise information requiring correction - **Erroneous** - incorrect information, potentially indicating fraudulent intent A temporary "Fee Paid" status indicates judgment in progress. Both "Fee Paid" and "Erroneous" statuses lock identity information from modification until resolved. 
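For a simplified, illustrative view of the request flow above, the sketch below uses the Polkadot.js API to set an identity and request judgment from a registrar. The endpoint, registrar index, and maximum fee are placeholder values.

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function main() {
  // Placeholder endpoint; connect to a People chain node.
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://polkadot-people-rpc.polkadot.io'),
  });
  const account = new Keyring({ type: 'sr25519' }).addFromUri('//Alice'); // dev account

  // Set an on-chain identity; unspecified fields default to None, and the
  // bond deposit is reserved automatically when the extrinsic executes.
  await api.tx.identity
    .setIdentity({ display: { Raw: 'Alice' }, email: { Raw: 'alice@example.com' } })
    .signAndSend(account, { nonce: -1 });

  // Request judgment from registrar index 0, capping the fee (in plancks)
  // we are willing to pay; only registrars at or below this fee can judge.
  const MAX_FEE = 10_000_000_000n; // 1 DOT, illustrative
  await api.tx.identity.requestJudgement(0, MAX_FEE).signAndSend(account, { nonce: -1 });

  await api.disconnect();
}

main().catch(console.error);
```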
### Registrars Registrars serve as trusted verification authorities within the People chain ecosystem. These entities validate user identities and provide attestations that build trust in the network. - Registrars set specific fees for their verification services - They can specialize in verifying particular identity fields - Verification costs vary based on complexity and thoroughness When requesting verification, users specify their maximum acceptable fee. Only registrars whose fees fall below this threshold can provide judgment. Upon completing the verification process, the user pays the registrar's fee, and the registrar issues an appropriate confidence level classification based on their assessment. Multiple registrars operate across the Polkadot and People chain ecosystems, each with unique specializations and fee structures. To request verification: 1. Research available registrars and their verification requirements 2. Contact your chosen registrar directly through their specified channels 3. Submit required documentation according to their verification process 4. Pay the associated verification fee You must contact specific registrars individually to request judgment. Each registrar maintains its own verification procedures and communication channels. ## Where to Go Next
- External __Polkadot.js Guides about Identity__ --- Step-by-step instructions for managing identities through the Polkadot.js interface, with practical examples and visual guides. [:octicons-arrow-right-24: Reference](https://wiki.polkadot.network/docs/learn-guides-identity) - External __How to Set and Clear an Identity__ --- Practical walkthrough covering identity setup and removal process on People chain. [:octicons-arrow-right-24: Reference](https://support.polkadot.network/support/solutions/articles/65000181981-how-to-set-and-clear-an-identity) - External __People Chain Runtime Implementation__ --- Source code for the People chain runtime, detailing the technical architecture of decentralized identity management. [:octicons-arrow-right-24: Reference](https://github.com/polkadot-fellows/runtimes/tree/main/system-parachains/people)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/glossary/ --- BEGIN CONTENT --- --- title: Glossary description: Glossary of terms used within the Polkadot ecosystem, Polkadot SDK, its subsequent libraries, and other relevant Web3 terminology. template: root-subdirectory-page.html categories: Reference --- # Glossary Key definitions, concepts, and terminology specific to the Polkadot ecosystem are included here. Additional glossaries from around the ecosystem you might find helpful: - [Polkadot Wiki Glossary](https://wiki.polkadot.network/general/glossary/){target=\_blank} - [Polkadot SDK Glossary](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/glossary/index.html){target=\_blank} ## Authority The role in a blockchain that can participate in consensus mechanisms. - [GRANDPA](#grandpa) - the authorities vote on chains they consider final - [Blind Assignment of Blockchain Extension](#blind-assignment-of-blockchain-extension-babe) (BABE) - the authorities are also [block authors](#block-author) Authority sets can be used as a basis for consensus mechanisms such as the [Nominated Proof of Stake (NPoS)](#nominated-proof-of-stake-npos) protocol. ## Authority Round (Aura) A deterministic [consensus](#consensus) protocol where block production is limited to a rotating list of [authorities](#authority) that take turns creating blocks. In authority round (Aura) consensus, most online authorities are assumed to be honest. It is often used in combination with [GRANDPA](#grandpa) as a [hybrid consensus](#hybrid-consensus) protocol. Learn more by reading the official [Aura consensus algorithm](https://openethereum.github.io/Aura){target=\_blank} wiki article. ## Blind Assignment of Blockchain Extension (BABE) A [block authoring](#block-author) protocol similar to [Aura](#authority-round-aura), except [authorities](#authority) win [slots](#slot) based on a Verifiable Random Function (VRF) instead of the round-robin selection method. The winning authority can select a chain and submit a new block. Learn more by reading the official Web3 Foundation [BABE research document](https://research.web3.foundation/Polkadot/protocols/block-production/Babe){target=\_blank}. ## Block Author The node responsible for the creation of a block; such nodes are also called _block producers_. In a Proof of Work (PoW) blockchain, these nodes are called _miners_. ## Byzantine Fault Tolerance (BFT) The ability of a distributed computer network to remain operational if a certain proportion of its nodes or [authorities](#authority) are defective or behaving maliciously. A distributed network is typically considered Byzantine fault tolerant if it can remain functional, with up to one-third of nodes assumed to be defective, offline, actively malicious, or part of a coordinated attack. ### Byzantine Failure The loss of a network service due to node failures that exceed the proportion of nodes required to reach consensus. ### Practical Byzantine Fault Tolerance (pBFT) An early approach to Byzantine fault tolerance (BFT), practical Byzantine fault tolerance (pBFT) systems tolerate Byzantine behavior from up to one-third of participants. The communication overhead for such systems is `O(n²)`, where `n` is the number of nodes (participants) in the system. ### Preimage A preimage is the data that is input into a hash function to calculate a hash.
Since a hash function is a [one-way function](https://en.wikipedia.org/wiki/One-way_function){target=\_blank}, the output, the hash, cannot be used to reveal the input, the preimage. ## Call In the context of pallets containing functions to be dispatched to the runtime, `Call` is an enumeration data type that describes the functions that can be dispatched with one variant per pallet. A `Call` represents a [dispatch](#dispatchable) data structure object. ## Chain Specification A chain specification file defines the properties required to run a node in an active or new Polkadot SDK-built network. It often contains the initial genesis runtime code, network properties (such as the network's name), the initial state for some pallets, and the boot node list. The chain specification file makes it easy to use a single Polkadot SDK codebase as the foundation for multiple independently configured chains. ## Collator An [author](#block-author) of a [parachain](#parachain) network. They aren't [authorities](#authority) in themselves, as they require a [relay chain](#relay-chain) to coordinate [consensus](#consensus). More details are found on the [Polkadot Collator Wiki](https://wiki.polkadot.network/learn/learn-collator/){target=\_blank}. ## Collective Most often used to refer to an instance of the Collective pallet on Polkadot SDK-based networks such as [Kusama](#kusama) or [Polkadot](#polkadot) if the Collective pallet is part of the FRAME-based runtime for the network. ## Consensus Consensus is the process blockchain nodes use to agree on a chain's canonical fork. It is composed of [authorship](#block-author), finality, and [fork-choice rule](#fork-choice-rulestrategy). In the Polkadot ecosystem, these three components are usually separate and the term consensus often refers specifically to authorship. See also [hybrid consensus](#hybrid-consensus). ## Consensus Algorithm Ensures a set of [actors](#authority)—who don't necessarily trust each other—can reach an agreement about the state as the result of some computation. Most consensus algorithms assume that up to one-third of the actors or nodes can be faulty or malicious (see [Byzantine fault tolerance](#byzantine-fault-tolerance-bft)). Consensus algorithms are generally concerned with ensuring two properties: - **Safety** - indicating that all honest nodes eventually agree on the state of the chain - **Liveness** - indicating the ability of the chain to keep progressing ## Consensus Engine The node subsystem responsible for consensus tasks. For detailed information about the consensus strategies of the [Polkadot](#polkadot) network, see the [Polkadot Consensus](/polkadot-protocol/architecture/polkadot-chain/pos-consensus/){target=\_blank} blog series. See also [hybrid consensus](#hybrid-consensus). ## Coretime The time allocated for utilizing a core, measured in relay chain blocks. There are two types of coretime: *on-demand* and *bulk*. On-demand coretime refers to coretime acquired through bidding in near real-time for the validation of a single parachain block on one of the cores reserved specifically for on-demand orders. These cores form the on-demand coretime pool, a set of cores available on demand. Cores reserved through bulk coretime can also be made available in the on-demand coretime pool, in part or in their entirety. Bulk coretime is a fixed duration of continuous coretime represented by an NFT that can be split, shared, or resold.
It is managed by the [Broker pallet](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/index.html){target=\_blank}. ## Development Phrase A [mnemonic phrase](https://en.wikipedia.org/wiki/Mnemonic#For_numerical_sequences_and_mathematical_operations){target=\_blank} that is intentionally made public. Well-known development accounts, such as Alice, Bob, Charlie, Dave, Eve, and Ferdie, are generated from the same secret phrase: ``` bottom drive obey lake curtain smoke basket hold race lonely fit walk ``` Many tools in the Polkadot SDK ecosystem, such as [`subkey`](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/bin/utils/subkey){target=\_blank}, allow you to implicitly specify an account using a derivation path such as `//Alice`. ## Digest An extensible field of the [block header](#header) that encodes information needed by several actors in a blockchain network, including: - [Light clients](#light-client) for chain synchronization - Consensus engines for block verification - The runtime itself, in the case of pre-runtime digests ## Dispatchable Function objects that act as the entry points in FRAME [pallets](#pallet). Internal or external entities can call them to interact with the blockchain’s state. They are a core aspect of the runtime logic, handling [transactions](#transaction) and other state-changing operations. ## Events A means of recording that some particular [state](#state) transition happened. In the context of [FRAME](#frame-framework-for-runtime-aggregation-of-modularized-entities), events are composable data types that each [pallet](#pallet) can individually define. Events in FRAME are implemented as a set of transient storage items inspected immediately after a block has been executed and reset during block initialization. ## Executor A means of executing a function call in a given [runtime](#runtime) with a set of dependencies. There are two orchestration engines in Polkadot SDK, _WebAssembly_ and _native_. - The _native executor_ uses a natively compiled runtime embedded in the node to execute calls. This is a performance optimization available to up-to-date nodes - The _WebAssembly executor_ uses a [Wasm](#webassembly-wasm) binary and a Wasm interpreter to execute calls. The binary is guaranteed to be up-to-date regardless of the version of the blockchain node because it is persisted in the [state](#state) of the Polkadot SDK-based chain ## Existential Deposit The minimum balance an account is allowed to have in the [Balances pallet](https://paritytech.github.io/polkadot-sdk/master/pallet_balances/index.html){target=\_blank}. Accounts cannot be created with a balance less than the existential deposit amount. If an account balance drops below this amount, the Balances pallet uses [a FRAME System API](https://paritytech.github.io/substrate/master/frame_system/pallet/struct.Pallet.html#method.dec_ref){target=\_blank} to drop its references to that account. If the Balances pallet reference to an account is dropped, the account can be [reaped](https://paritytech.github.io/substrate/master/frame_system/pallet/struct.Pallet.html#method.allow_death){target=\_blank}. ## Extrinsic A general term for data that originates outside the runtime, is included in a block, and leads to some action. This includes user-initiated transactions and inherent transactions placed into the block by the block builder. 
It is a SCALE-encoded array typically consisting of a version number, signature, and varying data types indicating the resulting runtime function to be called. Extrinsics can take two forms: [inherents](#inherent-transactions) and [transactions](#transaction). For more technical details, see the [Polkadot spec](https://spec.polkadot.network/id-extrinsics){target=\_blank}. ## Fork Choice Rule/Strategy A fork choice rule or strategy helps determine which chain is valid when reconciling several network forks. A common fork choice rule is the [longest chain](https://paritytech.github.io/polkadot-sdk/master/sc_consensus/struct.LongestChain.html){target=\_blank}, in which the chain with the most blocks is selected. ## FRAME (Framework for Runtime Aggregation of Modularized Entities) Enables developers to create blockchain [runtime](#runtime) environments from a modular set of components called [pallets](#pallet). It utilizes a set of procedural macros to construct runtimes. [Visit the Polkadot SDK docs for more details on FRAME.](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/frame_runtime/index.html){target=\_blank} ## Full Node A node that prunes historical states, keeping only recently finalized block states to reduce storage needs. Full nodes provide current chain state access and allow direct submission and validation of [extrinsics](#extrinsic), maintaining network decentralization. ## Genesis Configuration A mechanism for specifying the initial state of a blockchain. By convention, this initial state or first block is commonly referred to as the genesis state or genesis block. The genesis configuration for Polkadot SDK-based chains is accomplished by way of a [chain specification](#chain-specification) file. ## GRANDPA A deterministic finality mechanism for blockchains that is implemented in the [Rust](https://www.rust-lang.org/){target=\_blank} programming language. The [formal specification](https://github.com/w3f/consensus/blob/master/pdf/grandpa-old.pdf){target=\_blank} is maintained by the [Web3 Foundation](https://web3.foundation/){target=\_blank}. ## Header A structure that aggregates the information used to summarize a block. Primarily, it consists of cryptographic information used by [light clients](#light-client) to get minimally secure but very efficient chain synchronization. ## Hybrid Consensus A blockchain consensus protocol that consists of independent or loosely coupled mechanisms for [block production](#block-author) and finality. Hybrid consensus allows the chain to grow as fast as probabilistic consensus protocols, such as [Aura](#authority-round-aura), while maintaining the same level of security as deterministic finality consensus protocols, such as [GRANDPA](#grandpa). ## Inherent Transactions A special type of unsigned transaction, referred to as _inherents_, that enables a block authoring node to insert information that doesn't require validation directly into a block. Only the block-authoring node that calls the inherent transaction function can insert data into its block. In general, validators assume the data inserted using an inherent transaction is valid and reasonable even if it can't be deterministically verified. ## JSON-RPC A stateless, lightweight remote procedure call protocol encoded in JavaScript Object Notation (JSON). JSON-RPC provides a standard way to call functions on a remote system by using JSON. 
For Polkadot SDK, this protocol is implemented through the [Parity JSON-RPC](https://github.com/paritytech/jsonrpc){target=\_blank} crate. ## Keystore A subsystem for managing keys for the purpose of producing new blocks. ## Kusama [Kusama](https://kusama.network/){target=\_blank} is a Polkadot SDK-based blockchain that implements a design similar to the [Polkadot](#polkadot) network. Kusama is a [canary](https://en.wiktionary.org/wiki/canary_in_a_coal_mine){target=\_blank} network and is referred to as [Polkadot's "wild cousin."](https://wiki.polkadot.network/learn/learn-comparisons-kusama/){target=\_blank} As a canary network, Kusama is expected to be more stable than a test network like [Westend](#westend) but less stable than a production network like [Polkadot](#polkadot). Kusama is controlled by its network participants and is intended to be stable enough to encourage meaningful experimentation. ## libp2p A peer-to-peer networking stack that allows the use of many transport mechanisms, including WebSockets (usable in a web browser). Polkadot SDK uses the [Rust implementation](https://github.com/libp2p/rust-libp2p){target=\_blank} of the `libp2p` networking stack. ## Light Client A type of blockchain node that doesn't store the [chain state](#state) or produce blocks. A light client can verify cryptographic primitives and provides a [remote procedure call (RPC)](https://en.wikipedia.org/wiki/Remote_procedure_call){target=\_blank} server, enabling blockchain users to interact with the network. ## Metadata Data that provides information about one or more aspects of a system. The metadata that exposes information about a Polkadot SDK blockchain enables you to interact with that system. ## Nominated Proof of Stake (NPoS) A method for determining [validators](#validator) or _[authorities](#authority)_ based on a willingness to commit their stake to the proper functioning of one or more block-producing nodes. ## Oracle An entity that connects a blockchain to a non-blockchain data source. Oracles enable the blockchain to access and act upon information from existing data sources and incorporate data from non-blockchain systems and services. ## Origin A [FRAME](#frame-framework-for-runtime-aggregation-of-modularized-entities) primitive that identifies the source of a [dispatched](#dispatchable) function call into the [runtime](#runtime). The FRAME System pallet defines three built-in [origins](#origin). As a [pallet](#pallet) developer, you can also define custom origins, such as those defined by the [Collective pallet](https://paritytech.github.io/substrate/master/pallet_collective/enum.RawOrigin.html){target=\_blank}. ## Pallet A module that can be used to extend the capabilities of a [FRAME](#frame-framework-for-runtime-aggregation-of-modularized-entities)-based [runtime](#runtime). Pallets bundle domain-specific logic with runtime primitives like [events](#events) and [storage items](#storage-item). ## Parachain A parachain is a blockchain that derives shared infrastructure and security from a _[relay chain](#relay-chain)_. You can learn more about parachains on the [Polkadot Wiki](https://wiki.polkadot.network/docs/en/learn-parachains){target=\_blank}. ## Paseo Paseo TestNet provisions testing on Polkadot's "production" runtime, which means less chance of feature or code mismatch when developing parachain apps. 
Specifically, after the [Polkadot Technical Fellowship](https://wiki.polkadot.network/learn/learn-polkadot-technical-fellowship/){target=\_blank} proposes a runtime upgrade for Polkadot, this TestNet is updated, giving a period where the TestNet will be ahead of Polkadot to allow for testing. ## Polkadot The [Polkadot network](https://polkadot.com/){target=\_blank} is a blockchain that serves as the central hub of a heterogeneous blockchain network. It serves the role of the [relay chain](#relay-chain) and provides shared infrastructure and security to support [parachains](#parachain). ## Polkadot Cloud Polkadot Cloud is a platform for deploying resilient, customizable, and scalable Web3 applications through Polkadot's functionality. It encompasses the wider Polkadot network infrastructure and security layer where parachains operate. The platform enables users to launch Ethereum-compatible chains, build specialized blockchains, and flexibly manage computing resources through on-demand or bulk coretime purchases. Initially launched with basic parachain functionality, Polkadot Cloud has evolved to offer enhanced flexibility with features like coretime, elastic scaling, and async backing for improved performance. ## Polkadot Hub Polkadot Hub is a Layer 1 platform that serves as the primary entry point to the Polkadot ecosystem, providing essential functionality without requiring parachain deployment. It offers core services including smart contracts, identity management, staking, governance, and interoperability with other ecosystems, making it simple and fast for both builders and users to get started in Web3. ## PolkaVM PolkaVM is a custom virtual machine optimized for performance, leveraging a RISC-V-based architecture to support Solidity and any language that compiles to RISC-V. It is specifically designed for the Polkadot ecosystem, enabling smart contract deployment and execution. ## Relay Chain Relay chains are blockchains that provide shared infrastructure and security to the [parachains](#parachain) in the network. In addition to providing [consensus](#consensus) capabilities, relay chains allow parachains to communicate and exchange digital assets without needing to trust one another. ## Rococo A [parachain](#parachain) test network for the Polkadot network. The [Rococo](#rococo) network is a Polkadot SDK-based blockchain with an October 14, 2024 deprecation date. Development teams are encouraged to use the Paseo TestNet instead. ## Runtime The runtime represents the [state transition function](#state-transition-function-stf) for a blockchain. In Polkadot SDK, the runtime is stored as a [Wasm](#webassembly-wasm) binary in the chain state. The runtime is stored under a unique state key and can be modified during the execution of the state transition function. ## Slot A fixed, equal interval of time used by consensus engines such as [Aura](#authority-round-aura) and [BABE](#blind-assignment-of-blockchain-extension-babe). In each slot, a subset of [authorities](#authority) is permitted, or obliged, to [author](#block-author) a block. ## Sovereign Account The unique account identifier for each chain in the relay chain ecosystem. It is often used in cross-consensus (XCM) interactions to sign XCM messages sent to the relay chain or other chains in the ecosystem. The sovereign account for each chain is a root-level account that can only be accessed using the Sudo pallet or through governance.
The account identifier is calculated by concatenating the Blake2 hash of a specific text string and the registered parachain identifier. ## SS58 Address Format A public key address based on the Bitcoin [`Base-58-check`](https://en.bitcoin.it/wiki/Base58Check_encoding){target=\_blank} encoding. Each Polkadot SDK SS58 address uses a `base-58` encoded value to identify a specific account on a specific Polkadot SDK-based chain. The [canonical `ss58-registry`](https://github.com/paritytech/ss58-registry){target=\_blank} provides additional details about the address format used by different Polkadot SDK-based chains, including the network prefix and website used for different networks. (A short encoding sketch follows the WebAssembly entry below.) ## State Transition Function (STF) The logic of a blockchain that determines how the state changes when a block is processed. In Polkadot SDK, the state transition function is effectively equivalent to the [runtime](#runtime). ## Storage Item [FRAME](#frame-framework-for-runtime-aggregation-of-modularized-entities) primitives that provide type-safe data persistence capabilities to the [runtime](#runtime). Learn more in the [storage items](https://paritytech.github.io/polkadot-sdk/master/frame_support/storage/types/index.html){target=\_blank} reference document in the Polkadot SDK. ## Substrate A flexible framework for building modular, efficient, and upgradeable blockchains. Substrate is written in the [Rust](https://www.rust-lang.org/){target=\_blank} programming language and is maintained by [Parity Technologies](https://www.parity.io/){target=\_blank}. ## Transaction An [extrinsic](#extrinsic) that includes a signature that can be used to verify the account authorizing it inherently or via [signed extensions](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/signed_extensions/index.html){target=\_blank}. ## Transaction Era A definable period expressed as a range of block numbers during which a transaction can be included in a block. Transaction eras are used to protect against transaction replay attacks if an account is reaped and its replay-protecting nonce is reset to zero. ## Trie (Patricia Merkle Tree) A data structure used to represent sets of key-value pairs that enables the items in the data set to be stored and retrieved using a cryptographic hash. Because incremental changes to the data set result in a new hash, retrieving data is efficient even if the data set is very large. With this data structure, you can also prove whether the data set includes any particular key-value pair without access to the entire data set. In Polkadot SDK-based blockchains, state is stored in a trie data structure that supports the efficient creation of incremental digests. This trie is exposed to the [runtime](#runtime) as [a simple key/value map](#storage-item) where both keys and values can be arbitrary byte arrays. ## Validator A validator is a node that participates in the consensus mechanism of the network. Its roles include block production, transaction validation, network integrity, and security maintenance. ## WebAssembly (Wasm) An execution architecture that allows for the efficient, platform-neutral expression of deterministic, machine-executable logic. [Wasm](https://webassembly.org/){target=\_blank} can be compiled from many languages, including the [Rust](https://www.rust-lang.org/){target=\_blank} programming language. Polkadot SDK-based chains use a Wasm binary to provide portable [runtimes](#runtime) that can be included as part of the chain's state.
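As a concrete illustration of the SS58 entry above, here is a minimal sketch using `@polkadot/util-crypto`: decoding an address yields the raw public key, which can then be re-encoded with any registered network prefix (0 for Polkadot, 2 for Kusama). The address used is the well-known `//Alice` development account:

```ts
import { decodeAddress, encodeAddress } from '@polkadot/util-crypto';
import { u8aToHex } from '@polkadot/util';

// The //Alice development account in the generic Substrate
// format (network prefix 42).
const genericAddress = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY';

// Decoding strips the prefix and checksum, leaving the raw
// 32-byte public key shared across all SS58 encodings.
const publicKey = decodeAddress(genericAddress);
console.log('public key:', u8aToHex(publicKey));

// Re-encode the same key with chain-specific prefixes:
// same account, different network representation.
console.log('polkadot:', encodeAddress(publicKey, 0));
console.log('kusama:  ', encodeAddress(publicKey, 2));
```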
## Weight A convention used in Polkadot SDK-based blockchains to measure and manage the time it takes to validate a block. Polkadot SDK defines one unit of weight as one picosecond of execution time on reference hardware. The maximum block weight should be equivalent to one-third of the target block time with an allocation of one-third each for: - Block construction - Network propagation - Import and verification By defining weights, you can trade off the number of transactions per second and the hardware required to maintain the target block time appropriate for your use case. Weights are defined in the runtime, meaning you can tune them using runtime updates to keep up with hardware and software improvements. ## Westend Westend is a Parity-maintained, Polkadot SDK-based blockchain that serves as a test network for the [Polkadot](#polkadot) network. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/ --- BEGIN CONTENT --- --- title: Learn About the Polkadot Protocol description: Gain a comprehensive understanding of Polkadot through this technical overview, exploring its architecture, fundamental concepts, and essential components. template: index-page.html --- # Learn About the Polkadot Protocol The Polkadot protocol is designed to enable scalable, secure, and interoperable networks. It introduces a unique multichain architecture that allows independent blockchains, known as parachains, to operate seamlessly while benefiting from the shared security of the relay chain. Polkadot’s decentralized governance ensures that network upgrades and decisions are community-driven, while its cross-chain messaging and interoperability features make it a hub for multichain applications. This section offers a comprehensive technical overview of the Polkadot Protocol, delving into its multichain architecture, foundational principles, cryptographic underpinnings, and on-chain governance system. These key components constitute the core building blocks that power Polkadot, enabling seamless collaboration between parachains, efficient network operation, and decentralized decision-making through OpenGov. Whether you're new to blockchain or an experienced developer, you'll gain insights into how the Polkadot Protocol enables scalable, interoperable, and decentralized networks. ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/onchain-governance/ --- BEGIN CONTENT --- --- title: On-Chain Governance description: Explore Polkadot's decentralized on-chain governance system, OpenGov, including how it works, the proposal process, and key info for developers. template: index-page.html --- # On-Chain Governance Polkadot's on-chain governance system, OpenGov, enables decentralized decision-making across the network. It empowers stakeholders to propose, vote on, and enact changes with transparency and efficiency. This system ensures that governance is both flexible and inclusive, allowing developers to integrate custom governance solutions and mechanisms within the network. Understanding how OpenGov functions is crucial for anyone looking to engage with Polkadot’s decentralized ecosystem, whether you’re proposing upgrades, managing referenda, or exploring voting structures. At the core of Polkadot’s governance system are three key pallets: Preimage, Referenda, and Conviction Voting.
These components enable flexible, decentralized decision-making, providing developers with the tools to create tailored governance solutions. This modular approach ensures governance remains dynamic, secure, and adaptable, fostering deeper participation and alignment with the network’s goals. By leveraging these pallets, developers can build custom governance models that shape the evolution of the Polkadot ecosystem. ## Start Building Governance Solutions To develop solutions related to Polkadot's governance system, it’s essential to understand three key pallets: - [**Preimage**](https://paritytech.github.io/polkadot-sdk/master/pallet_preimage/index.html){target=\_blank} - stores and manages the content or the detailed information of a referendum proposal before it is voted on - [**Referenda**](https://paritytech.github.io/polkadot-sdk/master/pallet_referenda/index.html){target=\_blank} - manages the lifecycle of a referendum, including proposal submission, voting, and execution. Once a referendum is proposed and voted on, it can be enacted if it passes the required threshold - [**Conviction Voting**](https://paritytech.github.io/polkadot-sdk/master/pallet_conviction_voting/index.html){target=\_blank} - manages the voting power based on the "conviction" or commitment of voters, providing a more flexible and nuanced voting mechanism ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/onchain-governance/origins-tracks/ --- BEGIN CONTENT --- --- title: Origins and Tracks description: Explore Polkadot's OpenGov origins and tracks system, defining privilege levels, decision processes, and tailored pathways for network proposals. categories: Polkadot Protocol --- # Origins and Tracks ## Introduction Polkadot's OpenGov system empowers decentralized decision-making and active community participation by tailoring the governance process to the impact of proposed changes. Through a system of origins and tracks, OpenGov ensures that every referendum receives the appropriate scrutiny, balancing security, inclusivity, and efficiency. This guide will help you understand the role of origins in classifying proposals by privilege and priority. You will learn how tracks guide proposals through tailored stages like voting, confirmation, and enactment and how to select the correct origin for your referendum to align with community expectations and network governance. Origins and tracks are vital in streamlining the governance workflow and maintaining Polkadot's resilience and adaptability. ## Origins Origins are the foundation of Polkadot's OpenGov governance system. They categorize proposals by privilege and define their decision-making rules. Each origin corresponds to a specific level of importance and risk, guiding how referendums progress through the governance process. - High-privilege origins like Root Origin govern critical network changes, such as core software upgrades - Lower-privilege origins like Small Spender handle minor requests, such as community project funding under 10,000 DOT Proposers select an origin based on the nature of their referendum. Origins determine parameters like approval thresholds, required deposits, and timeframes for voting and confirmation. Each origin is paired with a track, which acts as a roadmap for the proposal's lifecycle, including preparation, voting, and enactment. 
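To illustrate how a proposer targets an origin in practice, here is a minimal sketch using `@polkadot/api`, assuming a runtime that exposes the Preimage and Referenda pallets; the endpoint, the proposed treasury call, and the `SmallSpender` origin variant are illustrative and runtime-specific:

```ts
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://example-rpc-endpoint'),
  });
  const proposer = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // The call the referendum would execute (illustrative treasury spend).
  const call = api.tx.treasury.spendLocal(1_000_000_000_000, proposer.address);

  // 1. Note the preimage so voters can inspect the full proposal.
  await api.tx.preimage
    .notePreimage(call.method.toHex())
    .signAndSend(proposer, { nonce: -1 });

  // 2. Submit the referendum under a low-privilege spending origin,
  //    which routes it onto the matching track.
  await api.tx.referenda
    .submit(
      { Origins: 'SmallSpender' }, // origin selects the track
      { Lookup: { hash: call.method.hash, len: call.method.encodedLength } },
      { After: 100 }, // enact 100 blocks after approval
    )
    .signAndSend(proposer, { nonce: -1 });
}

main().catch(console.error);
```

A decision deposit still needs to be placed before the referendum can enter its deciding phase, and the deposit and timing parameters depend on the track the chosen origin maps to.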
For a detailed list of origins and their associated parameters, see the [Polkadot OpenGov Origins](https://wiki.polkadot.network/learn/learn-polkadot-opengov-origins/){target=\_blank} entry in the Polkadot Wiki. ## Tracks Tracks define a referendum's journey from submission to enactment, tailoring governance parameters to the impact of proposed changes. Each track operates independently and includes several key stages: - **Preparation** - time for community discussion before voting begins - **Voting** - period for token holders to cast their votes - **Decision** - finalization of results and determination of the proposal's outcome - **Confirmation** - period to verify sustained community support before enactment - **Enactment** - final waiting period before the proposal takes effect Tracks customize these stages with parameters like decision deposit requirements, voting durations, and approval thresholds, ensuring proposals from each origin receive the required scrutiny and process. For example, a runtime upgrade in the Root Origin track will have longer timeframes and stricter thresholds than a treasury request in the Small Spender track. ## Additional Resources - For a list of origins and tracks for Polkadot and Kusama, including associated parameters, see the [Origins and Tracks Info](https://wiki.polkadot.network/learn/learn-polkadot-opengov-origins/#origins-and-tracks-info){target=\_blank} entry in the Polkadot Wiki. - For a deeper dive into the approval and support system, see the [Approval and Support](https://wiki.polkadot.network/learn/learn-polkadot-opengov/#approval-and-support){target=\_blank} entry of the Polkadot Wiki. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/onchain-governance/overview/ --- BEGIN CONTENT --- --- title: On-Chain Governance Overview description: Discover Polkadot’s cutting-edge OpenGov system, enabling transparent, decentralized decision-making through direct democracy and flexible governance tracks. categories: Basics, Polkadot Protocol --- # On-Chain Governance ## Introduction Polkadot’s governance system exemplifies decentralized decision-making, empowering its community of stakeholders to shape the network’s future through active participation. The latest evolution, OpenGov, builds on Polkadot’s foundation by providing a more inclusive and efficient governance model. This guide will explain the principles and structure of OpenGov and walk you through its key components, such as Origins, Tracks, and Delegation. You will learn about improvements over earlier governance systems, including streamlined voting processes and enhanced stakeholder participation. With OpenGov, Polkadot achieves a flexible, scalable, and democratic governance framework that allows multiple proposals to proceed simultaneously, ensuring the network evolves in alignment with its community's needs. ## Governance Evolution Polkadot’s governance journey began with [Governance V1](https://wiki.polkadot.network/learn/learn-polkadot-opengov/#governance-summary){target=\_blank}, a system that proved effective in managing treasury funds and protocol upgrades. However, it faced limitations, such as: - Slow voting cycles, causing delays in decision-making - Inflexibility in handling multiple referendums, restricting scalability To address these challenges, Polkadot introduced OpenGov, a governance model designed for greater inclusivity, efficiency, and scalability. 
OpenGov replaces the centralized structures of Governance V1, such as the Council and Technical Committee, with a fully decentralized and dynamic framework. For a full comparison of the historic and current governance models, visit the [Gov1 vs. Polkadot OpenGov](https://wiki.polkadot.network/learn/learn-polkadot-opengov/#gov1-vs-polkadot-opengov){target=\_blank} section of the Polkadot Wiki. ## OpenGov Key Features OpenGov transforms Polkadot’s governance into a decentralized, stakeholder-driven model, eliminating centralized decision-making bodies like the Council. Key enhancements include: - **Decentralization** - shifts all decision-making power to the public, ensuring a more democratic process - **Enhanced delegation** - allows users to delegate their votes to trusted experts across specific governance tracks - **Simultaneous referendums** - multiple proposals can progress at once, enabling faster decision-making - **Polkadot Technical Fellowship** - a broad, community-driven group replacing the centralized Technical Committee This new system ensures Polkadot governance remains agile and inclusive, even as the ecosystem grows. ## Origins and Tracks In OpenGov, origins and tracks are central to managing proposals and votes. - **Origin** - determines the authority level of a proposal (e.g., Treasury, Root), which decides the track of all referendums from that origin - **Track** - defines the procedural flow of a proposal, such as voting duration, approval thresholds, and enactment timelines Developers must be aware that referendums from different origins and tracks will take varying amounts of time to reach approval and enactment. The [Polkadot Technical Fellowship](https://wiki.polkadot.network/learn/learn-polkadot-technical-fellowship/){target=\_blank} has the option to shorten this timeline by whitelisting a proposal and allowing it to be enacted through the [Whitelist Caller](https://wiki.polkadot.network/learn/learn-polkadot-opengov-origins/#whitelisted-caller){target=\_blank} origin. Visit [Origins and Tracks Info](https://wiki.polkadot.network/learn/learn-polkadot-opengov/#origins-and-tracks){target=\_blank} for details on current origins and tracks, associated terminology, and parameters. ## Referendums In OpenGov, anyone can submit a referendum, fostering an open and participatory system. The timeline for a referendum depends on the privilege level of the origin, with more significant changes offering more time for community voting and participation before enactment. The timeline for an individual referendum includes four distinct periods: - **Lead-in** - a minimum amount of time to allow for community participation, available room in the origin, and payment of the decision deposit. Voting is open during this period - **Decision** - voting continues - **Confirmation** - referendum must meet [approval and support](https://wiki.polkadot.network/learn/learn-polkadot-opengov/#approval-and-support){target=\_blank} criteria during the entire period to avoid rejection - **Enactment** - changes approved by the referendum are executed ### Vote on Referendums Voters can vote with their tokens on each referendum. Polkadot uses a voluntary token locking mechanism, called conviction voting, as a way for voters to increase their voting power. A token holder signals they have a stronger preference for approving a proposal based upon their willingness to lock up tokens. Longer voluntary token locks are seen as a signal of continual approval and translate to increased voting weight.
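As a concrete sketch of conviction voting with `@polkadot/api` (the endpoint and referendum index are hypothetical; `Locked2x` doubles the vote weight in exchange for a longer post-referendum lock):

```ts
import { ApiPromise, WsProvider } from '@polkadot/api';
import { Keyring } from '@polkadot/keyring';

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://example-rpc-endpoint'),
  });
  const voter = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  const referendumIndex = 123; // hypothetical ongoing referendum

  // Vote aye with 10 DOT (10 decimals) at 2x conviction: the
  // balance counts double toward the tally, but stays locked
  // for the 2x conviction period after the referendum ends.
  await api.tx.convictionVoting
    .vote(referendumIndex, {
      Standard: {
        vote: { aye: true, conviction: 'Locked2x' },
        balance: 100_000_000_000, // 10 DOT in plancks
      },
    })
    .signAndSend(voter);
}

main().catch(console.error);
```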
See [Voting on a Referendum](https://wiki.polkadot.network/learn/learn-polkadot-opengov/#voting-on-a-referendum){target=\_blank} for a deeper look at conviction voting and related token locks. ### Delegate Voting Power The OpenGov system also supports multi-role delegations, allowing token holders to assign their voting power on different tracks to entities with expertise in those areas. For example, if a token holder lacks the technical knowledge to evaluate proposals on the [Root track](https://wiki.polkadot.network/learn/learn-polkadot-opengov-origins/#root){target=\_blank}, they can delegate their voting power for that track to an expert they trust to vote in the best interest of the network. This ensures informed decision-making across tracks while maintaining flexibility for token holders. Visit [Multirole Delegation](https://wiki.polkadot.network/learn/learn-polkadot-opengov/#multirole-delegation){target=\_blank} for more details on delegating voting power. ### Cancel a Referendum Polkadot OpenGov has two origins for rejecting ongoing referendums: - [**Referendum Canceller**](https://wiki.polkadot.network/learn/learn-polkadot-opengov-origins/#referendum-canceller){target=\_blank} - cancels an active referendum when non-malicious errors occur and refunds the deposits to the originators - [**Referendum Killer**](https://wiki.polkadot.network/learn/learn-polkadot-opengov-origins/#referendum-killer){target=\_blank} - used for urgent, malicious cases; this origin instantly terminates an active referendum and slashes deposits See [Cancelling, Killing, and Blacklisting](https://wiki.polkadot.network/learn/learn-polkadot-opengov/#cancelling-killing--blacklisting){target=\_blank} for additional information on rejecting referendums. ## Additional Resources - [**Democracy pallet**](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame/democracy/src){target=\_blank} - handles administration of general stakeholder voting - [**Gov2: Polkadot’s Next Generation of Decentralised Governance**](https://medium.com/polkadot-network/gov2-polkadots-next-generation-of-decentralised-governance-4d9ef657d11b){target=\_blank} - Medium article by Gavin Wood - [**Polkadot Direction**](https://matrix.to/#/#Polkadot-Direction:parity.io){target=\_blank} - Matrix Element client - [**Polkassembly**](https://polkadot.polkassembly.io/){target=\_blank} - OpenGov dashboard and UI - [**Polkadot.js Apps Governance**](https://polkadot.js.org/apps/#/referenda){target=\_blank} - overview of active referendums --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/accounts/ --- BEGIN CONTENT --- --- title: Polkadot SDK Accounts description: Learn about account structures, balances, and address formats in the Polkadot SDK, including how to manage lifecycle, references, and balances. categories: Basics, Polkadot Protocol --- # Accounts ## Introduction In the Polkadot SDK, accounts are essential for managing identity, transactions, and governance on the network. Understanding these components is critical for seamless development and operation on the network, whether you're building or interacting with Polkadot-based chains. This page will guide you through the essential aspects of accounts, including their data structure, balance types, reference counters, and address formats. You’ll learn how accounts are managed within the runtime, how balances are categorized, and how addresses are encoded and validated.
## Account Data Structure Accounts are foundational to any blockchain, and the Polkadot SDK provides a flexible management system. This section explains how the Polkadot SDK defines accounts and manages their lifecycle through data structures within the runtime. ### Account The [`Account` data type](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/type.Account.html){target=\_blank} is a storage map within the [System pallet](https://paritytech.github.io/polkadot-sdk/master/src/frame_system/lib.rs.html){target=\_blank} that links an account ID to its corresponding data. This structure is fundamental for mapping account-related information within the chain. The code snippet below shows how accounts are defined:

```rs
/// The full account information for a particular account ID.
#[pallet::storage]
#[pallet::getter(fn account)]
pub type Account<T: Config> = StorageMap<
    _,
    Blake2_128Concat,
    T::AccountId,
    AccountInfo<T::Nonce, T::AccountData>,
    ValueQuery,
>;
```

The preceding code block defines a storage map named `Account`. The `StorageMap` is a type of on-chain storage that maps keys to values. In the `Account` map, the key is an account ID, and the value is the account's information. Here, `T` represents the generic parameter for the runtime configuration, which is defined by the pallet's configuration trait (`Config`). The `StorageMap` consists of the following parameters: - **`_`** - used in macro expansion and acts as a placeholder for the storage prefix type. Tells the macro to insert the default prefix during expansion - **`Blake2_128Concat`** - the hashing function applied to keys in the storage map - **`T::AccountId`** - represents the key type, which corresponds to the account’s unique ID - **`AccountInfo`** - the value type stored in the map. For each account ID, the map stores an `AccountInfo` struct containing: - **`T::Nonce`** - a nonce for the account, which is incremented with each transaction to ensure transaction uniqueness - **`T::AccountData`** - custom account data defined by the runtime configuration, which could include balances, locked funds, or other relevant information - **`ValueQuery`** - defines how queries to the storage map behave when no value is found; returns a default value instead of `None` For a detailed explanation of storage maps, see the [`StorageMap`](https://paritytech.github.io/polkadot-sdk/master/frame_support/storage/types/struct.StorageMap.html){target=\_blank} entry in the Rust docs. ### Account Info The `AccountInfo` structure is another key element within the [System pallet](https://paritytech.github.io/polkadot-sdk/master/src/frame_system/lib.rs.html){target=\_blank}, providing more granular details about each account's state. This structure tracks vital data, such as the number of transactions and the account’s relationships with other modules.

```rs
/// Information of an account.
#[derive(Clone, Eq, PartialEq, Default, RuntimeDebug, Encode, Decode, TypeInfo, MaxEncodedLen)]
pub struct AccountInfo<Nonce, AccountData> {
    /// The number of transactions this account has sent.
    pub nonce: Nonce,
    /// The number of other modules that currently depend on this account's existence. The account
    /// cannot be reaped until this is zero.
    pub consumers: RefCount,
    /// The number of other modules that allow this account to exist. The account may not be reaped
    /// until this and `sufficients` are both zero.
    pub providers: RefCount,
    /// The number of modules that allow this account to exist for their own purposes only. The
    /// account may not be reaped until this and `providers` are both zero.
    pub sufficients: RefCount,
    /// The additional data that belongs to this account. Used to store the balance(s) in a lot of
    /// chains.
    pub data: AccountData,
}
```

The `AccountInfo` structure includes the following components: - **`nonce`** - tracks the number of transactions initiated by the account, which ensures transaction uniqueness and prevents replay attacks - **`consumers`** - counts how many other modules or pallets rely on this account’s existence. The account cannot be removed from the chain (reaped) until this count reaches zero - **`providers`** - tracks how many modules permit this account’s existence. An account can only be reaped once both `providers` and `sufficients` are zero - **`sufficients`** - represents the number of modules that allow the account to exist for internal purposes, independent of any other modules - **`AccountData`** - a flexible data structure that can be customized in the runtime configuration, usually containing balances or other user-specific data This structure helps manage an account's state and prevents its premature removal while it is still referenced by other on-chain data or modules. The [`AccountInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_system/struct.AccountInfo.html){target=\_blank} structure can vary as long as it satisfies the trait bounds defined by the `AccountData` associated type in the [`frame-system::pallet::Config`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/trait.Config.html){target=\_blank} trait. ### Account Reference Counters Polkadot SDK uses reference counters to track an account’s dependencies across different runtime modules. These counters ensure that accounts remain active while data is associated with them. The reference counters include: - **`consumers`** - prevents account removal while other pallets still rely on the account - **`providers`** - ensures an account is active before other pallets store data related to it - **`sufficients`** - indicates the account’s independence, ensuring it can exist even without a native token balance, such as when holding sufficient alternative assets #### Providers Reference Counters The `providers` counter ensures that an account is ready to be depended upon by other runtime modules. For example, it is incremented when an account has a balance above the existential deposit, which marks the account as active. The system requires this reference counter to be greater than zero for the `consumers` counter to be incremented, ensuring the account is stable before any dependencies are added. #### Consumers Reference Counters The `consumers` counter ensures that the account cannot be reaped until all references to it across the runtime have been removed. This check prevents the accidental deletion of accounts that still have active on-chain data. It is the user’s responsibility to clear out any data from other runtime modules if they wish to remove their account and reclaim their existential deposit. #### Sufficients Reference Counter The `sufficients` counter tracks accounts that can exist independently without relying on a native account balance. This is useful for accounts holding other types of assets, like tokens, without needing a minimum balance in the native token. For instance, the [Assets pallet](https://paritytech.github.io/polkadot-sdk/master/pallet_assets/index.html){target=\_blank} may increment this counter for an account holding sufficient tokens.
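These counters can be inspected directly from a client. A minimal sketch with `@polkadot/api` (the endpoint is hypothetical; the address is the `//Alice` development account) reads the `AccountInfo` struct shown earlier, including the nonce, the three reference counters, and the balance data:

```ts
import { ApiPromise, WsProvider } from '@polkadot/api';

async function main() {
  const api = await ApiPromise.create({
    provider: new WsProvider('wss://example-rpc-endpoint'),
  });

  // Any SS58 address; Alice's well-known dev address here.
  const address = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY';

  // system.account returns the AccountInfo struct described above.
  const info = await api.query.system.account(address);

  console.log('nonce:      ', info.nonce.toNumber());
  console.log('consumers:  ', info.consumers.toNumber());
  console.log('providers:  ', info.providers.toNumber());
  console.log('sufficients:', info.sufficients.toNumber());
  console.log('free:       ', info.data.free.toString());
  console.log('reserved:   ', info.data.reserved.toString());
}

main().catch(console.error);
```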
#### Account Deactivation In Polkadot SDK-based chains, an account is deactivated when its reference counters (such as `providers`, `consumers`, and `sufficients`) reach zero. These counters ensure the account remains active as long as other runtime modules or pallets reference it. When all dependencies are cleared and the counters drop to zero, the account becomes deactivated and may be removed from the chain (reaped). This is particularly important in Polkadot SDK-based blockchains, where accounts with balances below the existential deposit threshold are pruned from storage to conserve state resources. Each pallet that references an account has cleanup functions that decrement these counters when the pallet no longer depends on the account. Once these counters reach zero, the account is marked for deactivation. #### Updating Counters The Polkadot SDK provides runtime developers with various methods to manage account lifecycle events, such as deactivation or incrementing reference counters. These methods ensure that accounts cannot be reaped while still in use. The following helper functions manage these counters: - **`inc_consumers()`** - increments the `consumer` reference counter for an account, signaling that another pallet depends on it - **`dec_consumers()`** - decrements the `consumer` reference counter, signaling that a pallet no longer relies on the account - **`inc_providers()`** - increments the `provider` reference counter, ensuring the account remains active - **`dec_providers()`** - decrements the `provider` reference counter, allowing for account deactivation when no longer in use - **`inc_sufficients()`** - increments the `sufficient` reference counter for accounts that hold sufficient assets - **`dec_sufficients()`** - decrements the `sufficient` reference counter To ensure proper account cleanup and lifecycle management, a corresponding decrement should be made for each increment action. The `System` pallet offers three query functions to assist developers in tracking account states: - [**`can_inc_consumer()`**](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.can_inc_consumer){target=\_blank} - checks if the account can safely increment the consumer reference - [**`can_dec_provider()`**](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.can_dec_provider){target=\_blank} - ensures that no consumers exist before allowing the decrement of the provider counter - [**`is_provider_required()`**](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.is_provider_required){target=\_blank} - verifies whether the account still has any active consumer references This modular and flexible system of reference counters tightly controls the lifecycle of accounts in Polkadot SDK-based blockchains, preventing the accidental removal or retention of unneeded accounts. You can refer to the [System pallet Rust docs](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html){target=\_blank} for more details. ## Account Balance Types In the Polkadot ecosystem, account balances are categorized into different types based on how the funds are utilized and their availability. These balance types determine the actions that can be performed, such as transferring tokens, paying transaction fees, or participating in governance activities.
Understanding these balance types helps developers manage user accounts and implement balance-dependent logic. !!! note "A more efficient distribution of account balance types is in development" Soon, pallets in the Polkadot SDK will implement the [`Fungible` trait](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/tokens/fungible/index.html){target=\_blank} (see the [tracking issue](https://github.com/paritytech/polkadot-sdk/issues/226){target=\_blank} for more details). For example, the [`transaction-storage`](https://paritytech.github.io/polkadot-sdk/master/pallet_transaction_storage/index.html){target=\_blank} pallet changed its implementation from the [`Currency`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/tokens/currency/index.html){target=\_blank} trait (see the [Refactor transaction storage pallet to use fungible traits](https://github.com/paritytech/polkadot-sdk/pull/1800){target=\_blank} PR for further details):

```rust
type BalanceOf<T> =
    <<T as Config>::Currency as Currency<<T as frame_system::Config>::AccountId>>::Balance;
```

to the [`Fungible`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/tokens/fungible/index.html){target=\_blank} trait:

```rust
type BalanceOf<T> =
    <<T as Config>::Currency as FnInspect<<T as frame_system::Config>::AccountId>>::Balance;
```

This update will enable more efficient use of account balances, allowing the free balance to be utilized for on-chain activities such as setting proxies and managing identities. ### Balance Types The five main balance types are: - **Free balance** - represents the total tokens available to the account for any on-chain activity, including staking, governance, and voting. However, it may not be fully spendable or transferrable if portions of it are locked or reserved - **Locked balance** - portions of the free balance that cannot be spent or transferred because they are tied up in specific activities like [staking](https://wiki.polkadot.network/learn/learn-staking/#nominating-validators){target=\_blank}, [vesting](https://wiki.polkadot.network/learn/learn-guides-transfers/#vested-transfers-with-the-polkadot-js-ui){target=\_blank}, or participating in [governance](https://wiki.polkadot.network/learn/learn-polkadot-opengov/#voting-on-a-referendum){target=\_blank}. While the tokens remain part of the free balance, they are non-transferable for the duration of the lock - **Reserved balance** - funds locked by specific system actions, such as setting up an [identity](https://wiki.polkadot.network/learn/learn-identity/){target=\_blank}, creating [proxies](https://wiki.polkadot.network/learn/learn-proxies/){target=\_blank}, or submitting [deposits for governance proposals](https://wiki.polkadot.network/learn/learn-guides-polkadot-opengov/#claiming-opengov-deposits){target=\_blank}. These tokens are not part of the free balance and cannot be spent unless they are unreserved - **Spendable balance** - the portion of the free balance that is available for immediate spending or transfers. It is calculated by subtracting the maximum of locked or reserved amounts from the free balance, ensuring that existential deposit limits are met - **Untouchable balance** - funds that cannot be directly spent or transferred but may still be utilized for on-chain activities, such as governance participation or staking. These tokens are typically tied to certain actions or locked for a specific period The spendable balance is calculated as follows:

```text
spendable = free - max(locked - reserved, ED)
```

Here, `free`, `locked`, and `reserved` are defined above.
You may find you can't see all balance types when looking at your account via a wallet. Wallet providers often display only spendable, locked, and reserved balances.

### Locks

Locks are applied to an account's free balance, preventing that portion from being spent or transferred. Locks are automatically placed when an account participates in specific on-chain activities, such as staking or governance. Although multiple locks may be applied simultaneously, they do not stack. Instead, the largest lock determines the total amount of locked tokens.

Locks follow these basic rules:

- If different locks apply to varying amounts, the largest lock amount takes precedence
- If multiple locks apply to the same amount, the lock with the longest duration governs when the balance can be unlocked

#### Locks Example

Consider an example where an account has 80 DOT locked for both staking and governance purposes like so:

- 80 DOT is staked with a 28-day lock period
- 24 DOT is locked for governance with a 1x conviction and a 7-day lock period
- 4 DOT is locked for governance with a 6x conviction and a 224-day lock period

In this case, the total locked amount is 80 DOT because only the largest lock (80 DOT from staking) governs the locked balance. These 80 DOT will be released at different times based on the lock durations. In this example, the 24 DOT locked for governance will be released first since the shortest lock period is seven days. The 80 DOT stake with a 28-day lock period is released next. Now, all that remains locked is the 4 DOT for governance. After 224 days, all 80 DOT (minus the existential deposit) will be free and transferrable.

![Illustration of Lock Example](/images/polkadot-protocol/parachain-basics/accounts/locks-example-2.webp)

#### Edge Cases for Locks

In scenarios where multiple convictions and lock periods are active, the lock duration and amount are determined by the longest period and largest amount. For example, if you delegate with different convictions and attempt to undelegate during an active lock period, the lock may be extended for the full amount of tokens. For a detailed discussion on edge case lock behavior, see this [Stack Exchange post](https://substrate.stackexchange.com/questions/5067/delegating-and-undelegating-during-the-lock-period-extends-it-for-the-initial-am){target=\_blank}.
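Because locks overlap rather than stack, the effective locked amount is simply the maximum across all active locks. A small illustrative sketch, with figures mirroring the example above:

```rust
/// Locks do not stack: the largest lock governs the locked balance.
fn effective_locked(locks: &[u128]) -> u128 {
    locks.iter().copied().max().unwrap_or(0)
}

fn main() {
    // 80 DOT staking, 24 DOT governance (1x), 4 DOT governance (6x)
    let locks = [80, 24, 4];
    assert_eq!(effective_locked(&locks), 80);
}
```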
### Balance Types on Polkadot.js

Polkadot.js provides a user-friendly interface for managing and visualizing various account balances on Polkadot and Kusama networks. When interacting with Polkadot.js, you will encounter multiple balance types that are critical for understanding how your funds are distributed and restricted. This section explains how different balances are displayed in the Polkadot.js UI and what each type represents.

![](/images/polkadot-protocol/parachain-basics/accounts/account-balance-types-1.webp)

The most common balance types displayed on Polkadot.js are:

- **Total balance** - the total number of tokens available in the account. This includes all tokens, whether they are transferable, locked, reserved, or vested. However, the total balance does not always reflect what can be spent immediately. In this example, the total balance is 0.6274 KSM
- **Transferrable balance** - shows how many tokens are immediately available for transfer. It is calculated by subtracting the locked and reserved balances from the total balance. For example, if an account has a total balance of 0.6274 KSM and a transferrable balance of 0.0106 KSM, only the latter amount can be sent or spent freely
- **Vested balance** - tokens that are allocated to the account but released according to a specific schedule. Vested tokens remain locked and cannot be transferred until fully vested. For example, an account with a vested balance of 0.2500 KSM means that this amount is owned but not yet transferable
- **Locked balance** - tokens that are temporarily restricted from being transferred or spent. These locks typically result from participating in staking, governance, or vested transfers. In Polkadot.js, locked balances do not stack—only the largest lock is applied. For instance, if an account has 0.5500 KSM locked for governance and staking, the locked balance would display 0.5500 KSM, not the sum of all locked amounts
- **Reserved balance** - refers to tokens locked for specific on-chain actions, such as setting an identity, creating a proxy, or making governance deposits. Reserved tokens are not part of the free balance, but can be freed by performing certain actions. For example, removing an identity would unreserve those funds
- **Bonded balance** - the tokens locked for staking purposes. Bonded tokens are not transferrable until they are unbonded after the unbonding period
- **Redeemable balance** - the number of tokens that have completed the unbonding period and are ready to be unlocked and transferred again. For example, if an account has a redeemable balance of 0.1000 KSM, those tokens are now available for spending
- **Democracy balance** - reflects the number of tokens locked for governance activities, such as voting on referenda. These tokens are locked for the duration of the governance action and are only released after the lock period ends

By understanding these balance types and their implications, developers and users can better manage their funds and engage with on-chain activities more effectively.

## Address Formats

The SS58 address format is a core component of the Polkadot SDK that enables accounts to be uniquely identified across Polkadot-based networks. This format is a modified version of Bitcoin's Base58Check encoding, specifically designed to accommodate the multi-chain nature of the Polkadot ecosystem. SS58 encoding allows each chain to define its own set of addresses while maintaining compatibility and checksum validation for security.

### Basic Format

SS58 addresses consist of three main components:

```text
base58encode(concat(<address-type>, <address>, <checksum>))
```
- **Address type** - a byte or set of bytes that define the network (or chain) for which the address is intended. This ensures that addresses are unique across different Polkadot SDK-based chains
- **Address** - the public key of the account encoded as bytes
- **Checksum** - a hash-based checksum which ensures that addresses are valid and unaltered. The checksum is derived from the concatenated address type and address components, ensuring integrity

The encoding process transforms the concatenated components into a Base58 string, providing a compact and human-readable format that avoids easily confused characters (e.g., zero '0', capital 'O', lowercase 'l'). This encoding function ([`encode`](https://docs.rs/bs58/latest/bs58/fn.encode.html){target=\_blank}) is implemented exactly as defined in the Bitcoin and IPFS specifications, using the same alphabet as both implementations. For more details about the SS58 address format implementation, see the [`Ss58Codec`](https://paritytech.github.io/polkadot-sdk/master/sp_core/crypto/trait.Ss58Codec.html){target=\_blank} trait in the Rust Docs.

### Address Type

The address type defines how an address is interpreted and to which network it belongs. The Polkadot SDK uses different prefixes to distinguish between various chains and address formats:

- **Address types `0-63`** - simple addresses, commonly used for network identifiers
- **Address types `64-127`** - full addresses that support a wider range of network identifiers
- **Address types `128-255`** - reserved for future address format extensions

For example, Polkadot’s main network uses an address type of 0, while Kusama uses 2. This ensures that addresses can be used without confusion between networks. The address type is always encoded as part of the SS58 address, making it easy to quickly identify the network. Refer to the [SS58 registry](https://github.com/paritytech/ss58-registry){target=\_blank} for the canonical listing of all address type identifiers and how they map to Polkadot SDK-based networks.

### Address Length

SS58 addresses can have different lengths depending on the specific format. Address lengths range from as short as 3 bytes to 35 bytes, depending on the complexity of the address and network requirements. This flexibility allows SS58 addresses to adapt to different chains while providing a secure encoding mechanism.

| Total | Type | Raw account | Checksum |
|-------|------|-------------|----------|
| 3     | 1    | 1           | 1        |
| 4     | 1    | 2           | 1        |
| 5     | 1    | 2           | 2        |
| 6     | 1    | 4           | 1        |
| 7     | 1    | 4           | 2        |
| 8     | 1    | 4           | 3        |
| 9     | 1    | 4           | 4        |
| 10    | 1    | 8           | 1        |
| 11    | 1    | 8           | 2        |
| 12    | 1    | 8           | 3        |
| 13    | 1    | 8           | 4        |
| 14    | 1    | 8           | 5        |
| 15    | 1    | 8           | 6        |
| 16    | 1    | 8           | 7        |
| 17    | 1    | 8           | 8        |
| 35    | 1    | 32          | 2        |

SS58 addresses also support different payload sizes, allowing a flexible range of account identifiers.

### Checksum Types

A checksum is applied to validate SS58 addresses. The Polkadot SDK uses a Blake2b-512 hash function to calculate the checksum, which is appended to the address before encoding. The checksum length can vary depending on the address format (e.g., 1-byte, 2-byte, or longer), providing varying levels of validation strength. The checksum ensures that an address is not modified or corrupted, adding an extra layer of security for account management.
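For a quick feel of how encoding, decoding, and checksum validation fit together, here is a minimal sketch using the `Ss58Codec` trait from the `sp-core` crate (the address is the Polkadot example used later on this page; the crate setup is assumed):

```rust
use sp_core::crypto::{AccountId32, Ss58AddressFormat, Ss58Codec};

fn main() {
    let addr = "12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU";

    // Decoding validates the checksum and network prefix, then yields
    // the raw 32-byte account identifier.
    let account = AccountId32::from_ss58check(addr).expect("valid SS58 address");

    // Re-encoding with address type 0 (Polkadot) reproduces the address.
    let encoded = account.to_ss58check_with_version(Ss58AddressFormat::custom(0));
    assert_eq!(encoded, addr);
}
```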
### Validating Addresses

SS58 addresses can be validated using the `subkey` command-line interface or the Polkadot.js API. These tools help ensure an address is correctly formatted and valid for the intended network. The following sections provide an overview of how validation works with each tool.

#### Using Subkey

[Subkey](https://paritytech.github.io/polkadot-sdk/master/subkey/index.html){target=\_blank} is a CLI tool provided by the Polkadot SDK for generating and managing keys. It can inspect and validate SS58 addresses. The `inspect` command gets a public key and an SS58 address from the provided secret URI. The basic syntax for the `subkey inspect` command is:

```bash
subkey inspect [flags] [options] uri
```

For the `uri` command-line argument, you can specify the secret seed phrase, a hex-encoded private key, or an SS58 address. If the input is a valid address, the `subkey` program displays the corresponding hex-encoded public key, account identifier, and SS58 addresses. For example, to inspect the public keys derived from a secret seed phrase, you can run a command similar to the following:

```bash
subkey inspect "caution juice atom organ advance problem want pledge someone senior holiday very"
```

The command displays output similar to the following:
subkey inspect "caution juice atom organ advance problem want pledge someone senior holiday very" Secret phrase `caution juice atom organ advance problem want pledge someone senior holiday very` is account: Secret seed: 0xc8fa03532fb22ee1f7f6908b9c02b4e72483f0dbd66e4cd456b8f34c6230b849 Public key (hex): 0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746 Public key (SS58): 5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR Account ID: 0xd6a3105d6768e956e9e5d41050ac29843f98561410d3a47f9dd5b3b227ab8746 SS58 Address: 5Gv8YYFu8H1btvmrJy9FjjAWfb99wrhV3uhPFoNEr918utyR
The `subkey` program assumes an address is based on a public/private key pair. If you inspect an address, the command returns the 32-byte account identifier. However, not all addresses in Polkadot SDK-based networks are based on keys. Depending on the command-line options you specify and the input you provide, the command output might also display the network for which the address has been encoded. For example:

```bash
subkey inspect "12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU"
```

The command displays output similar to the following:
subkey inspect "12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU" Public Key URI `12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU` is account: Network ID/Version: polkadot Public key (hex): 0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a Account ID: 0x46ebddef8cd9bb167dc30878d7113b7e168e6f0646beffd77d69d39bad76b47a Public key (SS58): 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU SS58 Address: 12bzRJfh7arnnfPPUZHeJUaE62QLEwhK48QnH9LXeK2m1iZU
#### Using Polkadot.js API

To verify an address in JavaScript or TypeScript projects, you can use the functions built into the [Polkadot.js API](https://polkadot.js.org/docs/){target=\_blank}. For example:

```js
// Import Polkadot.js API dependencies
const { decodeAddress, encodeAddress } = require('@polkadot/keyring');
const { hexToU8a, isHex } = require('@polkadot/util');

// Specify an address to test
const address = 'INSERT_ADDRESS_TO_TEST';

// Check address
const isValidSubstrateAddress = () => {
  try {
    encodeAddress(isHex(address) ? hexToU8a(address) : decodeAddress(address));
    return true;
  } catch (error) {
    return false;
  }
};

// Query result
const isValid = isValidSubstrateAddress();
console.log(isValid);
```

If the function returns `true`, the specified address is a valid address.

#### Other SS58 Implementations

Support for encoding and decoding Polkadot SDK SS58 addresses has been implemented in several other languages and libraries:

- **Crystal** - [`wyhaines/base58.cr`](https://github.com/wyhaines/base58.cr){target=\_blank}
- **Go** - [`itering/subscan-plugin`](https://github.com/itering/subscan-plugin){target=\_blank}
- **Python** - [`polkascan/py-scale-codec`](https://github.com/polkascan/py-scale-codec){target=\_blank}
- **TypeScript** - [`subsquid/squid-sdk`](https://github.com/subsquid/squid-sdk){target=\_blank}

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/blocks-transactions-fees/blocks/
--- BEGIN CONTENT ---
---
title: Blocks
description: Understand how blocks are produced, validated, and imported in Polkadot SDK-based blockchains, covering initialization, finalization, and authoring processes.
categories: Basics, Polkadot Protocol
---

# Blocks

## Introduction

In the Polkadot SDK, blocks are fundamental to the functioning of the blockchain, serving as containers for [transactions](/polkadot-protocol/parachain-basics/blocks-transactions-fees/transactions/){target=\_blank} and changes to the chain's state. Blocks consist of headers and an array of transactions, ensuring the integrity and validity of operations on the network. This guide explores the essential components of a block, the process of block production, and how blocks are validated and imported across the network. By understanding these concepts, developers can better grasp how blockchains maintain security, consistency, and performance within the Polkadot ecosystem.

## What is a Block?

In the Polkadot SDK, a block is a fundamental unit that encapsulates both the header and an array of transactions. The block header includes critical metadata to ensure the integrity and sequence of the blockchain. Here's a breakdown of its components:

- **Block height** - indicates the number of blocks created in the chain so far
- **Parent hash** - the hash of the previous block, providing a link to maintain the blockchain's immutability
- **Transaction root** - cryptographic digest summarizing all transactions in the block
- **State root** - a cryptographic digest representing the post-execution state
- **Digest** - additional information that can be attached to a block, such as consensus-related messages

Each transaction is part of a series that is executed according to the runtime's rules. The transaction root is a cryptographic digest of this series, which prevents alterations and enables succinct verification by light clients.
This verification process allows light clients to confirm whether a transaction exists in a block with only the block header, avoiding downloading the entire block.

## Block Production

When an authoring node is authorized to create a new block, it selects transactions from the transaction queue based on priority. This step, known as block production, relies heavily on the executive module to manage the initialization and finalization of blocks. The process is summarized as follows:

### Initialize Block

The block initialization process begins with a series of function calls that prepare the block for transaction execution:

1. **Call `on_initialize`** - the executive module calls the [`on_initialize`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.Hooks.html#method.on_initialize){target=\_blank} hook from the system pallet and other runtime pallets to prepare for the block's transactions
2. **Coordinate runtime calls** - coordinates function calls in the order defined by the transaction queue
3. **Verify information** - once [`on_initialize`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.Hooks.html#method.on_initialize){target=\_blank} functions are executed, the executive module checks the parent hash in the block header and the trie root to verify information is consistent

### Finalize Block

Once transactions are processed, the block must be finalized before being broadcast to the network. The finalization steps are as follows:

1. **Call `on_finalize`** - the executive module calls the [`on_finalize`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.Hooks.html#method.on_finalize){target=\_blank} hooks in each pallet to ensure any remaining state updates or checks are completed before the block is sealed and published
2. **Verify information** - the block's digest and storage root in the header are checked against the initialized block to ensure consistency
3. **Call `on_idle`** - the [`on_idle`](https://paritytech.github.io/polkadot-sdk/master/frame_support/traits/trait.Hooks.html#method.on_idle){target=\_blank} hook is triggered to process any remaining tasks using the leftover weight from the block
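The hooks named above are the same ones a pallet can implement. As a rough sketch (a hypothetical, empty pallet rather than a complete runtime), the lifecycle entry points look like this:

```rust
#[frame_support::pallet]
pub mod pallet {
    use frame_support::pallet_prelude::*;
    use frame_system::pallet_prelude::*;

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    #[pallet::config]
    pub trait Config: frame_system::Config {}

    #[pallet::hooks]
    impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {
        // Runs at the start of block execution; returns the weight consumed.
        fn on_initialize(_n: BlockNumberFor<T>) -> Weight {
            Weight::zero()
        }

        // Runs after all transactions, before the block is sealed.
        fn on_finalize(_n: BlockNumberFor<T>) {}

        // Runs with whatever weight is left over in the block.
        fn on_idle(_n: BlockNumberFor<T>, _remaining: Weight) -> Weight {
            Weight::zero()
        }
    }
}
```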
## Block Authoring and Import

Once the block is finalized, it is gossiped to other nodes in the network. Nodes follow this procedure:

1. **Receive transactions** - the authoring node collects transactions from the network
2. **Validate** - transactions are checked for validity
3. **Queue** - valid transactions are placed in the transaction pool for execution
4. **Execute** - state changes are made as the transactions are executed
5. **Publish** - the finalized block is broadcast to the network

### Block Import Queue

After a block is published, other nodes on the network can import it into their chain state. The block import queue is part of the outer node in every Polkadot SDK-based node and ensures incoming blocks are valid before adding them to the node's state.

In most cases, you don't need to know details about how transactions are gossiped or how other nodes on the network import blocks. The following traits are relevant, however, if you plan to write any custom consensus logic or want a deeper dive into the block import queue:

- [**`ImportQueue`**](https://paritytech.github.io/polkadot-sdk/master/sc_consensus/import_queue/trait.ImportQueue.html){target=\_blank} - the trait that defines the block import queue
- [**`Link`**](https://paritytech.github.io/polkadot-sdk/master/sc_consensus/import_queue/trait.Link.html){target=\_blank} - the trait that defines the link between the block import queue and the network
- [**`BasicQueue`**](https://paritytech.github.io/polkadot-sdk/master/sc_consensus/import_queue/struct.BasicQueue.html){target=\_blank} - a basic implementation of the block import queue
- [**`Verifier`**](https://paritytech.github.io/polkadot-sdk/master/sc_consensus/import_queue/trait.Verifier.html){target=\_blank} - the trait that defines the block verifier
- [**`BlockImport`**](https://paritytech.github.io/polkadot-sdk/master/sc_consensus/block_import/trait.BlockImport.html){target=\_blank} - the trait that defines the block import process

These traits govern how blocks are validated and imported across the network, ensuring consistency and security.

## Additional Resources

To learn more about the block structure in the Polkadot SDK runtime, see the [`Block` reference](https://paritytech.github.io/polkadot-sdk/master/sp_runtime/traits/trait.Block.html){target=\_blank} entry in the Rust Docs.

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/blocks-transactions-fees/fees/
--- BEGIN CONTENT ---
---
title: Transactions Weights and Fees
description: Overview of transaction weights and fees in Polkadot SDK chains, detailing how fees are calculated using a defined formula and runtime specifics.
categories: Basics, Polkadot Protocol
---

# Transactions Weights and Fees

## Introduction

When transactions are executed, or data is stored on-chain, the activity changes the chain's state and consumes blockchain resources. Because the resources available to a blockchain are limited, managing how operations on-chain consume them is important. In addition to being limited in practical terms, such as storage capacity, blockchain resources represent a potential attack vector for malicious users. For example, a malicious user might attempt to overload the network with messages to stop the network from producing new blocks. To protect blockchain resources from being drained or overloaded, you need to manage how they are made available and how they are consumed. The resources to be aware of include:

- Memory usage
- Storage input and output
- Computation
- Transaction and block size
- State database size

The Polkadot SDK provides block authors with several ways to manage access to resources and to prevent individual components of the chain from consuming too much of any single resource. Two of the most important mechanisms available to block authors are weights and transaction fees.

[Weights](/polkadot-protocol/glossary/#weight){target=\_blank} manage the time it takes to validate a block and characterize the time it takes to execute the calls in the block's body. By controlling the execution time a block can consume, weights set limits on storage input, output, and computation. Some of the weight allowed for a block is consumed as part of the block's initialization and finalization. The weight might also be used to execute mandatory inherent extrinsic calls.
To help ensure blocks don’t consume too much execution time and to prevent malicious users from overloading the system with unnecessary calls, weights are combined with transaction fees. [Transaction fees](/polkadot-protocol/parachain-basics/blocks-transactions-fees/transactions/#transaction-fees){target=\_blank} provide an economic incentive to limit execution time, computation, and the number of calls required to perform operations. Transaction fees are also used to make the blockchain economically sustainable because they are typically applied to transactions initiated by users and deducted before a transaction request is executed.

## How Fees are Calculated

The final fee for a transaction is calculated using the following parameters:

- **`base fee`** - the minimum amount a user pays for a transaction. It is declared as a base weight in the runtime and converted to a fee using the [`WeightToFee`](https://docs.rs/pallet-transaction-payment/latest/pallet_transaction_payment/pallet/trait.Config.html#associatedtype.WeightToFee){target=\_blank} conversion
- **`weight fee`** - a fee proportional to the execution time (input and output and computation) that a transaction consumes
- **`length fee`** - a fee proportional to the encoded length of the transaction
- **`tip`** - an optional tip to increase the transaction’s priority, giving it a higher chance to be included in the transaction queue

The base fee and the proportional weight and length fees constitute the inclusion fee. The inclusion fee is the minimum fee that must be available for a transaction to be included in a block.

```text
inclusion fee = base fee + weight fee + length fee
```

Transaction fees are withdrawn before the transaction is executed. After the transaction is executed, the weight can be adjusted to reflect the resources actually used. If a transaction uses fewer resources than expected, the transaction fee is corrected, and the adjusted transaction fee is deposited.

## Using the Transaction Payment Pallet

The [Transaction Payment pallet](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame/transaction-payment){target=\_blank} provides the basic logic for calculating the inclusion fee. You can also use the Transaction Payment pallet to:

- Convert a weight value into a deductible fee based on a currency type using [`Config::WeightToFee`](https://docs.rs/pallet-transaction-payment/latest/pallet_transaction_payment/pallet/trait.Config.html#associatedtype.WeightToFee){target=\_blank}
- Update the fee for the next block by defining a multiplier based on the chain’s final state at the end of the previous block using [`Config::FeeMultiplierUpdate`](https://docs.rs/pallet-transaction-payment/latest/pallet_transaction_payment/pallet/trait.Config.html#associatedtype.FeeMultiplierUpdate){target=\_blank}
- Manage the withdrawal, refund, and deposit of transaction fees using [`Config::OnChargeTransaction`](https://docs.rs/pallet-transaction-payment/latest/pallet_transaction_payment/pallet/trait.Config.html#associatedtype.OnChargeTransaction){target=\_blank}

You can learn more about these configuration traits in the [Transaction Payment documentation](https://paritytech.github.io/polkadot-sdk/master/pallet_transaction_payment/index.html){target=\_blank}.
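As a hedged sketch of what a weight-to-fee conversion can look like, the following implements the `WeightToFee` trait from `frame_support::weights` with a simple linear rate (the `LinearWeightToFee` name and the rate itself are made up for illustration):

```rust
use frame_support::weights::{Weight, WeightToFee};

pub type Balance = u128;

/// Charges 1 balance unit per 10_000 units of ref-time (illustrative rate).
pub struct LinearWeightToFee;

impl WeightToFee for LinearWeightToFee {
    type Balance = Balance;

    fn weight_to_fee(weight: &Weight) -> Self::Balance {
        Balance::from(weight.ref_time()) / 10_000
    }
}
```

A runtime would then point the Transaction Payment pallet's `WeightToFee` associated type at a converter like this one.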
### Understanding the Inclusion Fee

The formula for calculating the inclusion fee is as follows:

```text
inclusion_fee = base_fee + length_fee + [targeted_fee_adjustment * weight_fee]
```

And then, for calculating the final fee:

```text
final_fee = inclusion_fee + tip
```

In the first formula, the `targeted_fee_adjustment` is a multiplier that can tune the final fee based on the network’s congestion.

- The `base_fee` derived from the base weight covers inclusion overhead like signature verification
- The `length_fee` is a per-byte fee that is multiplied by the length of the encoded extrinsic
- The `weight_fee` is calculated using two parameters:
    - The `ExtrinsicBaseWeight` that is declared in the runtime and applies to all extrinsics
    - The `#[pallet::weight]` annotation that accounts for an extrinsic's complexity

To convert the weight to `Currency`, the runtime must define a `WeightToFee` struct that implements a conversion function, [`Convert`](https://docs.rs/pallet-transaction-payment/latest/pallet_transaction_payment/pallet/struct.Pallet.html#method.weight_to_fee){target=\_blank}.

Note that the extrinsic sender is charged the inclusion fee before the extrinsic is invoked. The fee is deducted from the sender's balance even if the transaction fails upon execution.
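Putting the two formulas together, here is a minimal arithmetic sketch (all values are hypothetical; real runtimes derive them from the runtime configuration and the current fee multiplier):

```rust
/// final_fee = inclusion_fee + tip, where
/// inclusion_fee = base_fee + length_fee + targeted_fee_adjustment * weight_fee
fn final_fee(
    base_fee: u128,
    length_fee: u128,
    weight_fee: u128,
    targeted_fee_adjustment: u128,
    tip: u128,
) -> u128 {
    let inclusion_fee = base_fee + length_fee + targeted_fee_adjustment * weight_fee;
    inclusion_fee + tip
}

fn main() {
    // base 100 + length 20 + (3 * weight fee 50) + tip 10 = 280
    assert_eq!(final_fee(100, 20, 50, 3, 10), 280);
}
```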
### Accounts with an Insufficient Balance

If an account does not have a sufficient balance to pay the inclusion fee and remain alive—that is, enough to pay the inclusion fee and maintain the minimum existential deposit—then you should ensure the transaction is canceled so that no fee is deducted and the transaction does not begin execution. The Polkadot SDK doesn't enforce this rollback behavior. However, this scenario would be rare because the transaction queue and block-making logic perform checks to prevent it before adding an extrinsic to a block.

### Fee Multipliers

The inclusion fee formula always results in the same fee for the same input. However, weight can be dynamic and—based on how [`WeightToFee`](https://docs.rs/pallet-transaction-payment/latest/pallet_transaction_payment/pallet/trait.Config.html#associatedtype.WeightToFee){target=\_blank} is defined—the final fee can include some degree of variability. The Transaction Payment pallet provides the [`FeeMultiplierUpdate`](https://docs.rs/pallet-transaction-payment/latest/pallet_transaction_payment/pallet/trait.Config.html#associatedtype.FeeMultiplierUpdate){target=\_blank} configurable parameter to account for this variability.

The default update function is inspired by the Polkadot network and implements a targeted adjustment in which a target saturation level of block weight is defined. If the previous block is more saturated, the fees increase slightly. Similarly, if the last block has fewer transactions than the target, fees are decreased by a small amount. For more information about fee multiplier adjustments, see the [Web3 Research Page](https://research.web3.foundation/Polkadot/overview/token-economics#relay-chain-transaction-fees-and-per-block-transaction-limits){target=\_blank}.

## Transactions with Special Requirements

Inclusion fees must be computable before execution and can only represent fixed logic. Some transactions warrant limiting resources with other strategies. For example:

- Bonds are a type of fee that might be returned or slashed after some on-chain event. For example, you might want to require users to place a bond to participate in a vote. The bond might then be returned at the end of the referendum or slashed if the voter attempted malicious behavior
- Deposits are fees that might be returned later. For example, you might require users to pay a deposit to execute an operation that uses storage. The user’s deposit could be returned if a subsequent operation frees up storage
- Burn operations are used to pay for a transaction based on its internal logic. For example, a transaction might burn funds from the sender if the transaction creates new storage items to pay for the increased state size
- Limits enable you to enforce constant or configurable limits on specific operations. For example, the default [Staking pallet](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame/staking){target=\_blank} only allows nominators to nominate 16 validators to limit the complexity of the validator election process

It is important to note that if you query the chain for a transaction fee, it only returns the inclusion fee.

## Default Weight Annotations

All dispatchable functions in the Polkadot SDK must specify a weight. This is done using an annotation-based system that lets you combine fixed values for database read/write weight and/or fixed values based on benchmarks. The most basic example would look like this:

```rust
#[pallet::weight(100_000)]
fn my_dispatchable() {
    // ...
}
```

Note that the [`ExtrinsicBaseWeight`](https://crates.parity.io/frame_support/weights/constants/struct.ExtrinsicBaseWeight.html){target=\_blank} is automatically added to the declared weight to account for the costs of simply including an empty extrinsic into a block.

### Weights and Database Read/Write Operations

To make weight annotations independent of the deployed database backend, they are defined as a constant and then used in the annotations when expressing database accesses performed by the dispatchable:

```rust
#[pallet::weight(T::DbWeight::get().reads_writes(1, 2) + 20_000)]
fn my_dispatchable() {
    // ...
}
```

This dispatchable performs one database read and two database writes, in addition to other operations that add the additional 20,000 units of weight. A database access generally occurs every time a value declared inside the [`#[pallet::storage]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.storage.html){target=\_blank} block is accessed. However, only unique accesses are counted, because after a value is accessed, it is cached, and reaccessing it does not result in a database operation. That is:

- Multiple reads of the same value count as one read
- Multiple writes to the same value count as one write
- Multiple reads of the same value, followed by a write to that value, count as one read and one write
- A write followed by a read only counts as one write

### Dispatch Classes

Dispatches are broken into three classes:

- Normal
- Operational
- Mandatory

If a dispatch is not defined as `Operational` or `Mandatory` in the weight annotation, the dispatch is identified as `Normal` by default. You can specify that the dispatchable uses another class like this:

```rust
#[pallet::weight((100_000, DispatchClass::Operational))]
fn my_dispatchable() {
    // ...
}
```

This tuple notation also allows you to specify a final argument determining whether the user is charged based on the annotated weight. If you don't specify otherwise, `Pays::Yes` is assumed:

```rust
#[pallet::weight((100_000, DispatchClass::Normal, Pays::No))]
fn my_dispatchable() {
    // ...
}
```
#### Normal Dispatches

Dispatches in this class represent normal user-triggered transactions. These types of dispatches only consume a portion of a block's total weight limit. For information about the maximum portion of a block that can be consumed for normal dispatches, see [`AvailableBlockRatio`](https://paritytech.github.io/polkadot-sdk/master/frame_system/limits/struct.BlockLength.html){target=\_blank}. Normal dispatches are sent to the transaction pool.

#### Operational Dispatches

Unlike normal dispatches, which represent the usage of network capabilities, operational dispatches are those that provide network capabilities. Operational dispatches can consume the entire weight limit of a block. They are not bound by the [`AvailableBlockRatio`](https://paritytech.github.io/polkadot-sdk/master/frame_system/limits/struct.BlockLength.html){target=\_blank}. Dispatches in this class are given maximum priority and are exempt from paying the [`length_fee`](https://docs.rs/pallet-transaction-payment/latest/pallet_transaction_payment/){target=\_blank}.

#### Mandatory Dispatches

Mandatory dispatches are included in a block even if they cause the block to surpass its weight limit. You can only use the mandatory dispatch class for inherent transactions that the block author submits. This dispatch class is intended to represent functions in the block validation process. Because these dispatches are always included in a block regardless of the function weight, the validation process must prevent malicious nodes from abusing the function to craft valid but impossibly heavy blocks. You can typically accomplish this by ensuring that:

- The operation performed is always light
- The operation can only be included in a block once

To make it more difficult for malicious nodes to abuse mandatory dispatches, they cannot be included in blocks that return errors. This dispatch class serves the assumption that it is better to allow an overweight block to be created than not to allow any block to be created at all.

### Dynamic Weights

In addition to purely fixed weights and constants, the weight calculation can consider the input arguments of a dispatchable. The weight should be trivially computable from the input arguments with some basic arithmetic:

```rust
use frame_support::{
    dispatch::{DispatchClass::Normal, Pays::Yes},
    weights::Weight,
};

#[pallet::weight(FunctionOf(
    |args: (&Vec<User>,)| args.0.len().saturating_mul(10_000),
))]
fn handle_users(origin, calls: Vec<User>) {
    // Do something per user
}
```

## Post Dispatch Weight Correction

Depending on the execution logic, a dispatchable function might consume less weight than was prescribed pre-dispatch. To correct the weight, the function declares a different return type and returns its actual weight:

```rust
#[pallet::weight(10_000 + 500_000_000)]
fn expensive_or_cheap(input: u64) -> DispatchResultWithPostInfo {
    let was_heavy = do_calculation(input);

    if was_heavy {
        // None means "no correction" from the weight annotation.
        Ok(None.into())
    } else {
        // Return the actual weight consumed.
        Ok(Some(10_000).into())
    }
}
```

## Custom Fees

You can also define custom fee systems through custom weight functions or inclusion fee functions.

### Custom Weights

Instead of using the default weight annotations, you can create a custom weight calculation type using the weights module.
The custom weight calculation type must implement the following traits:

- [`WeighData`](https://crates.parity.io/frame_support/weights/trait.WeighData.html){target=\_blank} to determine the weight of the dispatch
- [`ClassifyDispatch`](https://crates.parity.io/frame_support/weights/trait.ClassifyDispatch.html){target=\_blank} to determine the class of the dispatch
- [`PaysFee`](https://crates.parity.io/frame_support/weights/trait.PaysFee.html){target=\_blank} to determine whether the sender of the dispatch pays fees

The Polkadot SDK then bundles the output information of the three traits into the [`DispatchInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/dispatch/struct.DispatchInfo.html){target=\_blank} struct and provides it by implementing the [`GetDispatchInfo`](https://docs.rs/frame-support/latest/frame_support/dispatch/trait.GetDispatchInfo.html){target=\_blank} for all `Call` variants and opaque extrinsic types. This is used internally by the System and Executive modules.

`ClassifyDispatch`, `WeighData`, and `PaysFee` are generic over `T`, which gets resolved into the tuple of all dispatch arguments except for the origin. The following example illustrates a struct that calculates the weight as `m * len(args)`, where `m` is a given multiplier and `args` is the concatenated tuple of all dispatch arguments. In this example, the dispatch class is `Operational` if the transaction has more than 100 bytes of length in arguments and will pay fees if the encoded length exceeds 10 bytes.

```rust
struct LenWeight(u32);

impl<T: Encode> WeighData<T> for LenWeight {
    fn weigh_data(&self, target: T) -> Weight {
        let multiplier = self.0;
        let encoded_len = target.encode().len() as u32;
        multiplier * encoded_len
    }
}

impl<T: Encode> ClassifyDispatch<T> for LenWeight {
    fn classify_dispatch(&self, target: T) -> DispatchClass {
        let encoded_len = target.encode().len() as u32;
        if encoded_len > 100 {
            DispatchClass::Operational
        } else {
            DispatchClass::Normal
        }
    }
}

impl<T: Encode> PaysFee<T> for LenWeight {
    fn pays_fee(&self, target: T) -> Pays {
        let encoded_len = target.encode().len() as u32;
        if encoded_len > 10 {
            Pays::Yes
        } else {
            Pays::No
        }
    }
}
```

A weight calculator function can also be coerced to the final type of the argument instead of defining it as a vague type that can be encoded. The code would roughly look like this:

```rust
struct CustomWeight;

impl WeighData<(&u32, &u64)> for CustomWeight {
    fn weigh_data(&self, target: (&u32, &u64)) -> Weight {
        // ...
    }
}

// Given a dispatch:
#[pallet::call]
impl<T: Config<I>, I: 'static> Pallet<T, I> {
    #[pallet::weight(CustomWeight)]
    fn foo(a: u32, b: u64) {
        // ...
    }
}
```

In this example, `CustomWeight` can only be used in conjunction with a dispatch with a particular signature `(u32, u64)`, as opposed to `LenWeight`, which can be used with anything because there aren't any assumptions about `<T>`.

#### Custom Inclusion Fee

The following example illustrates how to customize your inclusion fee. You must configure the appropriate associated types in the respective module.

```rust
// Assume this is the balance type
type Balance = u64;

// Assume we want all the weights to have a `100 + 2 * w` conversion to fees
struct CustomWeightToFee;
impl WeightToFee for CustomWeightToFee {
    fn convert(w: Weight) -> Balance {
        let a = Balance::from(100);
        let b = Balance::from(2);
        let w = Balance::from(w);
        a + b * w
    }
}

parameter_types! {
    pub const ExtrinsicBaseWeight: Weight = 10_000_000;
}

impl frame_system::Config for Runtime {
    type ExtrinsicBaseWeight = ExtrinsicBaseWeight;
}

parameter_types! {
    pub const TransactionByteFee: Balance = 10;
}

impl transaction_payment::Config for Runtime {
    type TransactionByteFee = TransactionByteFee;
    type WeightToFee = CustomWeightToFee;
    type FeeMultiplierUpdate = TargetedFeeAdjustment<TargetBlockFullness>;
}

struct TargetedFeeAdjustment<T>(sp_std::marker::PhantomData<T>);
impl<T: Get<Fixed128>> WeightToFee for TargetedFeeAdjustment<T> {
    fn convert(multiplier: Fixed128) -> Fixed128 {
        // Don't change anything. Put any fee update info here.
        multiplier
    }
}
```
## Additional Resources

You now know how the weight system works, how it affects transaction fee computation, and how to specify weights for your dispatchable calls. The next step is determining the correct weight for your dispatchable operations. You can use Substrate benchmarking functions and frame-benchmarking calls to test your functions with different parameters and empirically determine the proper weight in their worst-case scenarios.

- [Benchmark](/develop/parachains/testing/benchmarking/)
- [`SignedExtension`](https://paritytech.github.io/polkadot-sdk/master/sp_runtime/traits/trait.SignedExtension.html){target=\_blank}
- [Custom weights for the Example pallet](https://github.com/paritytech/polkadot-sdk/blob/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame/examples/basic/src/weights.rs){target=\_blank}
- [Web3 Foundation Research](https://research.web3.foundation/Polkadot/overview/token-economics#relay-chain-transaction-fees-and-per-block-transaction-limits){target=\_blank}

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/blocks-transactions-fees/
--- BEGIN CONTENT ---
---
title: Blocks, Transactions, and Fees
description: Dive into the structure, processing, and lifecycle of blocks and transactions in Polkadot, and learn how fees are calculated and applied.
template: index-page.html
---

# Blocks, Transactions, and Fees

Discover the inner workings of Polkadot’s blocks and transactions, including their structure, processing, and lifecycle within the network. Learn how blocks are authored, validated, and finalized, ensuring seamless operation and consensus across the ecosystem. Dive into the various types of transactions—signed, unsigned, and inherent—and understand how they are constructed, submitted, and validated.

Uncover how Polkadot’s fee system balances resource usage and economic incentives. Explore the role of transaction weights, runtime specifics, and the precise formula used to calculate fees. These mechanisms ensure fair resource allocation while maintaining the network’s efficiency and scalability.

## In This Section

:::INSERT_IN_THIS_SECTION:::

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/blocks-transactions-fees/transactions/
--- BEGIN CONTENT ---
---
title: Transactions
description: Learn how to construct, submit, and validate transactions in the Polkadot SDK, covering signed, unsigned, and inherent types of transactions.
categories: Basics, Polkadot Protocol
---

# Transactions

## Introduction

Transactions are essential components of blockchain networks, enabling state changes and the execution of key operations. In the Polkadot SDK, transactions, often called extrinsics, come in multiple forms, including signed, unsigned, and inherent transactions. This guide walks you through the different transaction types and how they're formatted, validated, and processed within the Polkadot ecosystem.
You'll also learn how to customize transaction formats and construct transactions for FRAME-based runtimes, ensuring a complete understanding of how transactions are built and executed in Polkadot SDK-based chains.

## What Is a Transaction?

In the Polkadot SDK, transactions represent operations that modify the chain's state, bundled into blocks for execution. The term extrinsic is often used to refer to any data that originates outside the runtime and is included in the chain. While other blockchain systems typically refer to these operations as "transactions," the Polkadot SDK adopts the broader term "extrinsic" to capture the wide variety of data types that can be added to a block.

There are three primary types of transactions (extrinsics) in the Polkadot SDK:

- **Signed transactions** - signed by the submitting account, often carrying transaction fees
- **Unsigned transactions** - submitted without a signature, often requiring custom validation logic
- **Inherent transactions** - typically inserted directly into blocks by block authoring nodes, without gossiping between peers

Each type serves a distinct purpose, and understanding when and how to use each is key to efficiently working with the Polkadot SDK.

### Signed Transactions

Signed transactions require an account's signature and typically involve submitting a request to execute a runtime call. The signature serves as a form of cryptographic proof that the sender has authorized the action, using their private key. These transactions often involve a transaction fee to cover the cost of execution and incentivize block producers.

Signed transactions are the most common type of transaction and are integral to user-driven actions, such as token transfers. For instance, when you transfer tokens from one account to another, the sending account must sign the transaction to authorize the operation. For example, the [`pallet_balances::Call::transfer_allow_death`](https://paritytech.github.io/polkadot-sdk/master/pallet_balances/pallet/struct.Pallet.html#method.transfer_allow_death){target=\_blank} extrinsic in the Balances pallet allows you to transfer tokens. Since your account initiates this transaction, your account key is used to sign it. You'll also be responsible for paying the associated transaction fee, with the option to include an additional tip to incentivize faster inclusion in the block.

### Unsigned Transactions

Unsigned transactions do not require a signature or account-specific data from the sender. Unlike signed transactions, they do not come with any form of economic deterrent, such as fees, which makes them susceptible to spam or replay attacks. Custom validation logic must be implemented to mitigate these risks and ensure these transactions are secure.

Unsigned transactions typically involve scenarios where including a fee or signature is unnecessary or counterproductive. However, due to the absence of fees, they require careful validation to protect the network. For example, the [`pallet_im_online::Call::heartbeat`](https://paritytech.github.io/polkadot-sdk/master/pallet_im_online/pallet/struct.Pallet.html#method.heartbeat){target=\_blank} extrinsic allows validators to send a heartbeat signal, indicating they are active. Since only validators can make this call, the logic embedded in the transaction ensures that the sender is a validator, making the need for a signature or fee redundant.
Unsigned transactions are more resource-intensive than signed ones because custom validation is required, but they play a crucial role in certain operational scenarios, especially when regular user accounts aren't involved. ### Inherent Transactions Inherent transactions are a specialized type of unsigned transaction that is used primarily for block authoring. Unlike signed or other unsigned transactions, inherent transactions are added directly by block producers and are not broadcasted to the network or stored in the transaction queue. They don't require signatures or the usual validation steps and are generally used to insert system-critical data directly into blocks. A key example of an inherent transaction is inserting a timestamp into each block. The [`pallet_timestamp::Call::now`](https://paritytech.github.io/polkadot-sdk/master/pallet_timestamp/pallet/struct.Pallet.html#method.now-1){target=\_blank} extrinsic allows block authors to include the current time in the block they are producing. Since the block producer adds this information, there is no need for transaction validation, like signature verification. The validation in this case is done indirectly by the validators, who check whether the timestamp is within an acceptable range before finalizing the block. Another example is the [`paras_inherent::Call::enter`](https://paritytech.github.io/polkadot-sdk/master/polkadot_runtime_parachains/paras_inherent/pallet/struct.Pallet.html#method.enter){target=\_blank} extrinsic, which enables parachain collator nodes to send validation data to the relay chain. This inherent transaction ensures that the necessary parachain data is included in each block without the overhead of gossiped transactions. Inherent transactions serve a critical role in block authoring by allowing important operational data to be added directly to the chain without needing the validation processes required for standard transactions. ## Transaction Formats Understanding the structure of signed and unsigned transactions is crucial for developers building on Polkadot SDK-based chains. Whether you're optimizing transaction processing, customizing formats, or interacting with the transaction pool, knowing the format of extrinsics, Polkadot's term for transactions, is essential. ### Types of Transaction Formats In Polkadot SDK-based chains, extrinsics can fall into three main categories: - **Unchecked extrinsics** - typically used for signed transactions that require validation. They contain a signature and additional data, such as a nonce and information for fee calculation. Unchecked extrinsics are named as such because they require validation checks before being accepted into the transaction pool - **Checked extrinsics** - typically used for inherent extrinsics (unsigned transactions); these don't require signature verification. Instead, they carry information such as where the extrinsic originates and any additional data required for the block authoring process - **Opaque extrinsics** - used when the format of an extrinsic is not yet fully committed or finalized. 
They are still decodable, but their structure can be flexible depending on the context

### Signed Transaction Data Structure

A signed transaction typically includes the following components:

- **Signature** - verifies the authenticity of the transaction sender
- **Call** - the actual function or method call the transaction is requesting (for example, transferring funds)
- **Nonce** - tracks the number of prior transactions sent from the account, helping to prevent replay attacks
- **Tip** - an optional incentive to prioritize the transaction in block inclusion
- **Additional data** - includes details such as spec version, block hash, and genesis hash to ensure the transaction is valid within the correct runtime and chain context

Here's a simplified breakdown of how signed transactions are typically constructed in a Polkadot SDK runtime:

```text
<signing account ID> + <signature> + <additional data>
```

Each part of the signed transaction has a purpose, ensuring the transaction's authenticity and context within the blockchain.

### Signed Extensions

Polkadot SDK also provides the concept of [signed extensions](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/signed_extensions/index.html){target=\_blank}, which allow developers to extend extrinsics with additional data or validation logic before they are included in a block. The [`SignedExtension`](https://paritytech.github.io/try-runtime-cli/sp_runtime/traits/trait.SignedExtension.html){target=\_blank} set helps enforce custom rules or protections, such as ensuring the transaction's validity or calculating priority. The transaction queue regularly calls signed extensions to verify a transaction's validity before placing it in the ready queue. This safeguard ensures transactions won't fail in a block. Signed extensions are commonly used to enforce validation logic and protect the transaction pool from spam and replay attacks.

In FRAME, a signed extension can hold any of the following types by default:

- [**`AccountId`**](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_frame/runtime/types_common/type.AccountId.html){target=\_blank} - to encode the sender's identity
- [**`Call`**](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_frame/traits/trait.SignedExtension.html#associatedtype.Call){target=\_blank} - to encode the pallet call to be dispatched. This data is used to calculate transaction fees
- [**`AdditionalSigned`**](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_frame/traits/trait.SignedExtension.html#associatedtype.AdditionalSigned){target=\_blank} - to handle any additional data to go into the signed payload, allowing you to attach custom logic prior to dispatching a transaction
- [**`Pre`**](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_frame/traits/trait.SignedExtension.html#associatedtype.Pre){target=\_blank} - to encode the information that can be passed from before a call is dispatched to after it gets dispatched

Signed extensions can enforce checks like:

- [**`CheckSpecVersion`**](https://paritytech.github.io/polkadot-sdk/master/src/frame_system/extensions/check_spec_version.rs.html){target=\_blank} - ensures the transaction is compatible with the runtime's current version
- [**`CheckWeight`**](https://paritytech.github.io/polkadot-sdk/master/frame_system/struct.CheckWeight.html){target=\_blank} - calculates the weight (or computational cost) of the transaction, ensuring the block doesn't exceed the maximum allowed weight

These extensions are critical in the transaction lifecycle, ensuring that only valid and prioritized transactions are processed.

## Transaction Construction

Building transactions in the Polkadot SDK involves constructing a payload that can be verified, signed, and submitted for inclusion in a block. Each runtime in the Polkadot SDK has its own rules for validating and executing transactions, but there are common patterns for constructing a signed transaction.

### Construct a Signed Transaction

A signed transaction in the Polkadot SDK includes various pieces of data to ensure security, prevent replay attacks, and prioritize processing. Here's an overview of how to construct one:

1. **Construct the unsigned payload** - gather the necessary information for the call, including:
    - **Pallet index** - identifies the pallet where the runtime function resides
    - **Function index** - specifies the particular function to call in the pallet
    - **Parameters** - any additional arguments required by the function call
2. **Create a signing payload** - once the unsigned payload is ready, additional data must be included:
    - **Transaction nonce** - unique identifier to prevent replay attacks
    - **Era information** - defines how long the transaction is valid before it's dropped from the pool
    - **Block hash** - ensures the transaction doesn't execute on the wrong chain or fork
3. **Sign the payload** - using the sender's private key, sign the payload to ensure that the transaction can only be executed by the account holder
4. **Serialize the signed payload** - once signed, the transaction must be serialized into a binary format, ensuring the data is compact and easy to transmit over the network
5. **Submit the serialized transaction** - finally, submit the serialized transaction to the network, where it will enter the transaction pool and wait for processing by an authoring node
The following is an example of how a signed transaction might look:

```rust
node_runtime::UncheckedExtrinsic::new_signed(
    function.clone(),                                       // some call
    sp_runtime::AccountId32::from(sender.public()).into(),  // some sending account
    node_runtime::Signature::Sr25519(signature.clone()),    // the account's signature
    extra.clone(),                                          // the signed extensions
)
```

### Transaction Encoding

Before a transaction is sent to the network, it is serialized and encoded using a structured encoding process that ensures consistency and prevents tampering:

- `[1]` - compact encoded length in bytes of the entire transaction
- `[2]` - a `u8` containing 1 byte to indicate whether the transaction is signed or unsigned (1 bit) and the encoded transaction version ID (7 bits)
- `[3]` - if signed, this field contains an account ID, an SR25519 signature, and some extra data
- `[4]` - encoded call data, including pallet and function indices and any required arguments

This encoded format ensures consistency and efficiency in processing transactions across the network. By adhering to this format, applications can construct valid transactions and pass them to the network for execution. To learn more about how compact encoding works using SCALE, see the [SCALE Codec](https://github.com/paritytech/parity-scale-codec){target=\_blank} README on GitHub.
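As a small hands-on sketch of SCALE itself, the following uses the `parity-scale-codec` crate (with its `derive` feature enabled) to round-trip a made-up call-like struct; the struct and field names are illustrative only:

```rust
use parity_scale_codec::{Compact, Decode, Encode};

#[derive(Encode, Decode, PartialEq)]
struct ExampleCall {
    pallet_index: u8,      // which pallet the call targets
    call_index: u8,        // which function within the pallet
    amount: Compact<u128>, // a compact-encoded argument
}

fn main() {
    let call = ExampleCall { pallet_index: 5, call_index: 0, amount: Compact(1_000u128) };

    // Serialize to SCALE bytes, then decode them back.
    let bytes = call.encode();
    let decoded = ExampleCall::decode(&mut &bytes[..]).expect("round-trips");
    assert!(call == decoded);
    println!("encoded length: {} bytes", bytes.len());
}
```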
When a transaction is sent to a node that can produce blocks, it undergoes a lifecycle that involves several stages, including validation and execution. Non-authoring nodes gossip the transaction across the network until an authoring node receives it. The following diagram illustrates the lifecycle of a transaction that's submitted to a network and processed by an authoring node. ![Transaction lifecycle diagram](/images/polkadot-protocol/parachain-basics/blocks-transactions-fees/transactions/transaction-lifecycle-1.webp) ### Validate and Queue Once a transaction reaches an authoring node, it undergoes an initial validation process to ensure it meets specific conditions defined in the runtime. This validation includes checks for: - **Correct nonce** - ensures the transaction is sequentially valid for the account - **Sufficient funds** - confirms the account can cover any associated transaction fees - **Signature validity** - verifies that the sender's signature matches the transaction data After these checks, valid transactions are placed in the transaction pool, where they are queued for inclusion in a block. The transaction pool regularly re-validates queued transactions to ensure they remain valid before being processed. To reach consensus, two-thirds of the nodes must agree on the order of the transactions executed and the resulting state change. Transactions are validated and queued on the local node in a transaction pool to prepare for consensus. #### Transaction Pool The transaction pool is responsible for managing valid transactions. It ensures that only transactions that pass initial validity checks are queued. Transactions that fail validation, expire, or become invalid for other reasons are removed from the pool. The transaction pool organizes transactions into two queues: - **Ready queue** - transactions that are valid and ready to be included in a block - **Future queue** - transactions that are not yet valid but could be in the future, such as transactions with a nonce too high for the current state Details on how the transaction pool validates transactions, including fee and signature handling, can be found in the [`validate_transaction`](https://paritytech.github.io/polkadot-sdk/master/sp_transaction_pool/runtime_api/trait.TaggedTransactionQueue.html#method.validate_transaction){target=\_blank} method. #### Invalid Transactions If a transaction is invalid, for example, due to an invalid signature or insufficient funds, it is rejected and won't be added to the block. Invalid transactions might be rejected for reasons such as: - The transaction has already been included in a block - The transaction's signature does not match the sender - The transaction is too large to fit in the current block ### Transaction Ordering and Priority When a node is selected as the next block author, it prioritizes transactions based on weight, length, and tip amount. The goal is to fill the block with high-priority transactions without exceeding its maximum size or computational limits. Transactions are ordered as follows: - **Inherents first** - inherent transactions, such as block timestamp updates, are always placed first - **Nonce-based ordering** - transactions from the same account are ordered by their nonce - **Fee-based ordering** - among transactions with the same nonce or priority level, those with higher fees are prioritized ### Transaction Execution Once a block author selects transactions from the pool, the transactions are executed in priority order. 
As each transaction is processed, the state changes are written directly to the chain's storage. It's important to note that these changes are not cached, meaning a failed transaction won't revert earlier state changes, which could leave the block in an inconsistent state. Events are also written to storage, so runtime logic should not emit an event before performing the associated actions; if the transaction fails after the event was emitted, the event will not be reverted. ## Transaction Mortality Transactions in the network can be configured as either mortal (with expiration) or immortal (without expiration). Every transaction payload contains a block checkpoint (reference block number and hash) and an era/validity period that determines how many blocks after the checkpoint the transaction remains valid. When a transaction is submitted, the network validates it against these parameters. If the transaction is not included in a block within the specified validity window, it is automatically removed from the transaction queue. - **Mortal transactions** - have a finite lifespan and will expire after a specified number of blocks. For example, a transaction with a block checkpoint of 1000 and a validity period of 64 blocks will be valid from blocks 1000 to 1064. - **Immortal transactions** - never expire and remain valid indefinitely. To create an immortal transaction, set the block checkpoint to 0 (genesis block), use the genesis hash as a reference, and set the validity period to 0. However, immortal transactions pose significant security risks through replay attacks. If an account is reaped (balance drops to zero, account removed) and later re-funded, malicious actors can replay old immortal transactions. The blockchain maintains only a limited number of prior block hashes, defined by the `BlockHashCount` parameter, for reference validation. If your validity period exceeds `BlockHashCount`, the effective validity period becomes the minimum of your specified period and the block hash count. ## Unique Identifiers for Extrinsics Transaction hashes are **not unique identifiers** in Polkadot SDK-based chains. Key differences from traditional blockchains: - Transaction hashes serve only as fingerprints of transaction information - Multiple valid transactions can share the same hash - Hash uniqueness assumptions lead to serious issues For example, when an account is reaped (removed due to insufficient balance) and later recreated, it resets to nonce 0, allowing identical transactions to be valid at different points: | Block | Extrinsic Index | Hash | Origin | Nonce | Call | Result | |-------|----------------|------|-----------|-------|---------------------|-------------------------------| | 100 | 0 | 0x01 | Account A | 0 | Transfer 5 DOT to B | Account A reaped | | 150 | 5 | 0x02 | Account B | 4 | Transfer 7 DOT to A | Account A created (nonce = 0) | | 200 | 2 | 0x01 | Account A | 0 | Transfer 5 DOT to B | Successful transaction | Notice that blocks 100 and 200 contain transactions with identical hashes (0x01) but are completely different, valid operations occurring at different times. Additional complexity comes from Polkadot SDK's origin abstraction. Origins can represent collectives, governance bodies, or other non-account entities that don't maintain nonces like regular accounts and might dispatch identical calls multiple times with the same hash values. Each execution occurs in different chain states with different results.
The correct way to uniquely identify an extrinsic on a Polkadot SDK-based chain is to use the block ID (height or hash) and the extrinsic index. Since the Polkadot SDK defines blocks as headers plus ordered arrays of extrinsics, the index position within a canonical block provides guaranteed uniqueness. ## Additional Resources For a video overview of the lifecycle of transactions and the types of transactions that exist, see the [Transaction lifecycle](https://www.youtube.com/watch?v=3pfM0GOp02c){target=\_blank} seminar from Parity Technologies. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/chain-data/ --- BEGIN CONTENT --- --- title: Chain Data description: Learn how to expose and utilize chain data for blockchain applications. Discover runtime metadata, RPC APIs, and tools for efficient development. categories: Basics, Polkadot Protocol --- # Chain Data ## Introduction Understanding and leveraging on-chain data is a fundamental aspect of blockchain development. Whether you're building frontend applications or backend systems, accessing and decoding runtime metadata is vital to interacting with the blockchain. This guide introduces you to the tools and processes for generating and retrieving metadata, explains its role in application development, and outlines the additional APIs available for interacting with a Polkadot node. By mastering these components, you can ensure seamless communication between your applications and the blockchain. ## Application Development You might not be directly involved in building frontend applications as a blockchain developer. However, most applications that run on a blockchain require some form of frontend or user-facing client to enable users or other programs to access and modify the data that the blockchain stores. For example, you might develop a browser-based, mobile, or desktop application that allows users to submit transactions, post articles, view their assets, or track previous activity. The backend for that application is configured in the runtime logic for your blockchain, but the frontend client makes the runtime features accessible to your users. For your custom chain to be useful to others, you'll need to provide a client application that allows users to view, interact with, or update information that the blockchain keeps track of. In this article, you'll learn how to expose information about your runtime so that client applications can use it, see examples of the information exposed, and explore tools and libraries that use it. ## Understand Metadata Polkadot SDK-based blockchain networks are designed to expose their runtime information, allowing developers to learn granular details regarding pallets, RPC calls, and runtime APIs. The metadata also exposes the related documentation. The chain's metadata is [SCALE-encoded](/polkadot-protocol/basics/data-encoding/){target=\_blank}, allowing browser-based, mobile, or desktop applications to seamlessly support the chain's runtime upgrades. It is also possible to develop applications compatible with multiple Polkadot SDK-based chains simultaneously. ## Expose Runtime Information as Metadata To interact with a node or the state of the blockchain, you need to know how to connect to the chain and access the exposed runtime features. This interaction involves a Remote Procedure Call (RPC) through a node endpoint address, commonly through a secure WebSocket connection.
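For instance, a client library can open that connection and pull the runtime metadata in a few lines. The following is a minimal sketch using the `subxt` crate (covered later on this page); the exact API surface varies between `subxt` versions, so treat the calls below as illustrative rather than definitive:

```rust
use subxt::{OnlineClient, PolkadotConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open a secure WebSocket connection to a public Polkadot RPC node.
    let api = OnlineClient::<PolkadotConfig>::from_url("wss://rpc.polkadot.io").await?;
    // The client downloads the SCALE-encoded runtime metadata during setup.
    let metadata = api.metadata();
    println!("runtime exposes {} pallets", metadata.pallets().count());
    Ok(())
}
```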
An application developer typically needs to know the contents of the runtime logic, including the following details: - Version of the runtime the application is connecting to - Supported APIs - Implemented pallets - Defined functions and corresponding type signatures - Defined custom types - Exposed parameters users can set As the Polkadot SDK is modular and provides a composable framework for building blockchains, there are limitless opportunities to customize the schema of properties. Each runtime can be configured with its own properties, including function calls and types, which can be changed over time with runtime upgrades. The Polkadot SDK enables you to generate the runtime metadata schema to capture information unique to a runtime. The metadata for a runtime describes the pallets in use and types defined for a specific runtime version. The metadata includes information about each pallet's storage items, functions, events, errors, and constants. The metadata also provides type definitions for any custom types included in the runtime. Metadata provides a complete inventory of a chain's runtime. It is key to enabling client applications to interact with the node, parse responses, and correctly format message payloads sent back to that chain. ## Generate Metadata To efficiently use the blockchain's networking resources and minimize the data transmitted over the network, the metadata schema is encoded using the [Parity SCALE Codec](https://github.com/paritytech/parity-scale-codec?tab=readme-ov-file#parity-scale-codec){target=\_blank}. This encoding is done automatically through the [`scale-info`](https://docs.rs/scale-info/latest/scale_info/){target=\_blank} crate. At a high level, generating the metadata involves the following steps: 1. The pallets in the runtime logic expose callable functions, types, parameters, and documentation that need to be encoded in the metadata 2. The `scale-info` crate collects type information for the pallets in the runtime and builds a registry of the pallets that exist in a particular runtime, along with the relevant types for each pallet in the registry. The type information is detailed enough to enable encoding and decoding for every type 3. The [`frame-metadata`](https://github.com/paritytech/frame-metadata){target=\_blank} crate describes the structure of the runtime based on the registry provided by the `scale-info` crate 4. Nodes provide the RPC method `state_getMetadata` to return a complete description of all the types in the current runtime as a hex-encoded vector of SCALE-encoded bytes ## Retrieve Runtime Metadata The type information provided by the metadata enables applications to communicate with nodes using different runtime versions and across chains that expose different calls, events, types, and storage items. The metadata also allows libraries to generate a substantial portion of the code needed to communicate with a given node, enabling libraries like [`subxt`](https://github.com/paritytech/subxt){target=\_blank} to generate frontend interfaces that are specific to a target chain. ### Use Polkadot.js Visit the [Polkadot.js Portal](https://polkadot.js.org/apps/#/rpc){target=\_blank} and select the **Developer** dropdown in the top banner. Select **RPC Calls** to make the call to request metadata. Follow these steps to make the RPC call: 1. Select **state** as the endpoint to call 2. Select **`getMetadata(at)`** as the method to call 3.
Click **Submit RPC call** to submit the call and return the metadata in JSON format ### Use Curl You can fetch the metadata for the network by calling the node's RPC endpoint. This request returns the metadata in bytes rather than human-readable JSON: ```sh curl -H "Content-Type: application/json" \ -d '{"id":1, "jsonrpc":"2.0", "method": "state_getMetadata"}' \ https://rpc.polkadot.io ``` ### Use Subxt [`subxt`](https://github.com/paritytech/subxt){target=\_blank} may also be used to fetch the metadata of any chain in a human-readable JSON format: ```sh subxt metadata --url wss://rpc.polkadot.io --format json > spec.json ``` Another option is to use the [`subxt` explorer web UI](https://paritytech.github.io/subxt-explorer/#/){target=\_blank}. ## Client Applications and Metadata The metadata exposes the expected way to decode each type, meaning applications can send, retrieve, and process application information without manual encoding and decoding. To use the metadata, client applications must use the [SCALE codec library](https://github.com/paritytech/parity-scale-codec?tab=readme-ov-file#parity-scale-codec){target=\_blank} to encode and decode RPC payloads. Client applications use the metadata to interact with the node, parse responses, and format message payloads sent to the node. ## Metadata Format Although the SCALE-encoded bytes can be decoded using the `frame-metadata` and [`parity-scale-codec`](https://github.com/paritytech/parity-scale-codec){target=\_blank} libraries, there are other tools, such as `subxt` and the Polkadot-JS API, that can convert the raw data to human-readable JSON format. The types and type definitions included in the metadata returned by the `state_getMetadata` RPC call depend on the runtime's metadata version. In general, the metadata includes the following information: - A constant identifying the file as containing metadata - The version of the metadata format used in the runtime - Type definitions for all types used in the runtime and generated by the `scale-info` crate - Pallet information for the pallets included in the runtime in the order that they are defined in the `construct_runtime` macro !!!tip Depending on the frontend library used (such as the [Polkadot API](https://papi.how/){target=\_blank}), the metadata may be formatted differently from the raw format shown. The following example illustrates a condensed and annotated section of metadata decoded and converted to JSON: ```json [ 1635018093, { "V14": { "types": { "types": [{}] }, "pallets": [{}], "extrinsic": { "ty": 126, "version": 4, "signed_extensions": [{}] }, "ty": 141 } } ] ``` The constant `1635018093` is a magic number that identifies the file as a metadata file. The rest of the metadata is divided into the `types`, `pallets`, and `extrinsic` sections: - The `types` section contains an index of the types and information about each type's type signature - The `pallets` section contains information about each pallet in the runtime - The `extrinsic` section describes the type identifier and transaction format version that the runtime uses Different extrinsic versions can have varying formats, especially when considering [signed transactions](/polkadot-protocol/parachain-basics/blocks-transactions-fees/transactions/#signed-transactions){target=\_blank}.
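As a quick aside, the magic number shown above is not arbitrary: `1635018093` is the ASCII string `meta` read as a little-endian `u32`. A client can check this prefix before decoding the rest of the blob, as the following minimal sketch shows (decoding libraries such as `frame-metadata` handle this check for you):

```rust
fn main() {
    // "meta" interpreted as a little-endian u32 yields the magic number
    // that prefixes every SCALE-encoded metadata blob.
    let magic = u32::from_le_bytes(*b"meta");
    assert_eq!(magic, 1635018093);
    println!("{magic:#x}"); // prints 0x6174656d
}
```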
### Pallets The following is a condensed and annotated example of metadata for a single element in the `pallets` array (the [`sudo`](https://paritytech.github.io/polkadot-sdk/master/pallet_sudo/index.html){target=\_blank} pallet): ```json { "name": "Sudo", "storage": { "prefix": "Sudo", "entries": [ { "name": "Key", "modifier": "Optional", "ty": { "Plain": 0 }, "default": [0], "docs": ["The `AccountId` of the sudo key."] } ] }, "calls": { "ty": 117 }, "event": { "ty": 42 }, "constants": [], "error": { "ty": 124 }, "index": 8 } ``` Each element in the `pallets` array contains the name of the pallet it represents and information about its storage, calls, events, and errors. You can look up details about the definition of the calls, events, and errors by viewing the type index identifier. The type index identifier is the `u32` integer used to access the type information for that item. For example, the type index identifier for calls in the Sudo pallet is 117. If you view information for that type identifier in the `types` section of the metadata, it provides information about the available calls, including the documentation for each call. For example, the following is a condensed excerpt of the calls for the Sudo pallet: ```json { "id": 117, "type": { "path": ["pallet_sudo", "pallet", "Call"], "params": [ { "name": "T", "type": null } ], "def": { "variant": { "variants": [ { "name": "sudo", "fields": [ { "name": "call", "type": 114, "typeName": "Box<<T as Config>::RuntimeCall>" } ], "index": 0, "docs": [ "Authenticates sudo key, dispatches a function call with `Root` origin" ] }, { "name": "sudo_unchecked_weight", "fields": [ { "name": "call", "type": 114, "typeName": "Box<<T as Config>::RuntimeCall>" }, { "name": "weight", "type": 8, "typeName": "Weight" } ], "index": 1, "docs": [ "Authenticates sudo key, dispatches a function call with `Root` origin" ] }, { "name": "set_key", "fields": [ { "name": "new", "type": 103, "typeName": "AccountIdLookupOf<T>" } ], "index": 2, "docs": [ "Authenticates current sudo key, sets the given AccountId (`new`) as the new sudo" ] }, { "name": "sudo_as", "fields": [ { "name": "who", "type": 103, "typeName": "AccountIdLookupOf<T>" }, { "name": "call", "type": 114, "typeName": "Box<<T as Config>::RuntimeCall>" } ], "index": 3, "docs": [ "Authenticates sudo key, dispatches a function call with `Signed` origin from a given account" ] } ] } } } } ``` For each field, you can access type information and metadata for the following: - **Storage metadata** - provides the information required to enable applications to get information for specific storage items - **Call metadata** - includes information about the runtime calls defined by the `#[pallet]` macro, including call names, arguments, and documentation - **Event metadata** - provides the metadata generated by the `#[pallet::event]` macro, including the name, arguments, and documentation for each pallet event - **Constants metadata** - provides metadata generated by the `#[pallet::constant]` macro, including the name, type, and hex-encoded value of the constant - **Error metadata** - provides metadata generated by the `#[pallet::error]` macro, including the name and documentation for each pallet error !!!tip Type identifiers change from time to time, so you should avoid relying on specific type identifiers in your applications. ### Extrinsic The runtime generates extrinsic metadata and provides useful information about transaction format. When decoded, the metadata contains the transaction version and the list of signed extensions.
For example: ```json { "extrinsic": { "ty": 126, "version": 4, "signed_extensions": [ { "identifier": "CheckNonZeroSender", "ty": 132, "additional_signed": 41 }, { "identifier": "CheckSpecVersion", "ty": 133, "additional_signed": 4 }, { "identifier": "CheckTxVersion", "ty": 134, "additional_signed": 4 }, { "identifier": "CheckGenesis", "ty": 135, "additional_signed": 11 }, { "identifier": "CheckMortality", "ty": 136, "additional_signed": 11 }, { "identifier": "CheckNonce", "ty": 138, "additional_signed": 41 }, { "identifier": "CheckWeight", "ty": 139, "additional_signed": 41 }, { "identifier": "ChargeTransactionPayment", "ty": 140, "additional_signed": 41 } ] }, "ty": 141 } ``` The type system is [composite](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_runtime_types/index.html){target=\_blank}, meaning each type identifier contains a reference to a specific type or to another type identifier that provides information about the associated primitive types. For example, you can encode the `BitVec` type, but to decode it properly, you must know the types used for the `Order` and `Store` types. To find type information for `Order` and `Store`, you can use the path in the decoded JSON to locate their type identifiers. ## Included RPC APIs A standard node comes with the following APIs for interacting with it: - [**`AuthorApiServer`**](https://paritytech.github.io/polkadot-sdk/master/sc_rpc/author/trait.AuthorApiServer.html){target=\_blank} - make calls into a full node, including authoring extrinsics and verifying session keys - [**`ChainApiServer`**](https://paritytech.github.io/polkadot-sdk/master/sc_rpc/chain/trait.ChainApiServer.html){target=\_blank} - retrieve block header and finality information - [**`OffchainApiServer`**](https://paritytech.github.io/polkadot-sdk/master/sc_rpc/offchain/trait.OffchainApiServer.html){target=\_blank} - make RPC calls for off-chain workers - [**`StateApiServer`**](https://paritytech.github.io/polkadot-sdk/master/sc_rpc/state/trait.StateApiServer.html){target=\_blank} - query information about on-chain state such as runtime version, storage items, and proofs - [**`SystemApiServer`**](https://paritytech.github.io/polkadot-sdk/master/sc_rpc/system/trait.SystemApiServer.html){target=\_blank} - retrieve information about network state, such as connected peers and node roles ## Additional Resources The following tools can help you locate and decode metadata: - [Subxt Explorer](https://paritytech.github.io/subxt-explorer/#/){target=\_blank} - [Metadata Portal 🌗](https://github.com/paritytech/metadata-portal){target=\_blank} - [De[code] Sub[strate]](https://github.com/paritytech/desub){target=\_blank} --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/cryptography/ --- BEGIN CONTENT --- --- title: Cryptography description: A concise guide to cryptography in blockchain, covering hash functions, encryption types, digital signatures, and elliptic curve applications. categories: Basics, Polkadot Protocol --- # Cryptography ## Introduction Cryptography forms the backbone of blockchain technology, providing the mathematical verifiability crucial for consensus systems, data integrity, and user security. While a deep understanding of the underlying mathematical processes isn't necessary for most blockchain developers, grasping the fundamental applications of cryptography is essential.
This page provides a comprehensive overview of the cryptographic implementations used across Polkadot SDK-based chains and the broader blockchain ecosystem. ## Hash Functions Hash functions are fundamental to blockchain technology, creating a unique digital fingerprint for any piece of data, including simple text, images, or any other form of file. They map input data of any size to a fixed-size output (typically 32 bytes) using complex mathematical operations. Hashing is used to verify data integrity, create digital signatures, and provide a secure way to store passwords. This form of mapping is related to the ["pigeonhole principle"](https://en.wikipedia.org/wiki/Pigeonhole_principle){target=\_blank}, and it is primarily implemented to efficiently and verifiably identify data from large sets. ### Key Properties of Hash Functions - **Deterministic** - the same input always produces the same output - **Quick computation** - it's easy to calculate the hash value for any given input - **Pre-image resistance** - it's infeasible to generate the input data from its hash - **Small changes in input yield large changes in output** - known as the ["avalanche effect"](https://en.wikipedia.org/wiki/Avalanche_effect){target=\_blank} - **Collision resistance** - the probability of finding two different inputs that produce the same hash is extremely low ### Blake2 The Polkadot SDK utilizes Blake2, a state-of-the-art hashing method that offers: - Equal or greater security compared to [SHA-2](https://en.wikipedia.org/wiki/SHA-2){target=\_blank} - Significantly faster performance than other algorithms These properties make Blake2 ideal for blockchain systems, reducing sync times for new nodes and lowering the resources required for validation. For detailed technical specifications about Blake2, see the [official Blake2 paper](https://www.blake2.net/blake2.pdf){target=\_blank}. ## Types of Cryptography There are two different ways that cryptographic algorithms are implemented: symmetric cryptography and asymmetric cryptography. ### Symmetric Cryptography Symmetric encryption is a branch of cryptography that isn't based on one-way functions, unlike asymmetric cryptography. It uses the same cryptographic key to encrypt plain text and decrypt the resulting ciphertext. Symmetric cryptography is a type of encryption that has been used throughout history, in ciphers such as the Enigma cipher and the Caesar cipher. It is still widely used today and can be found in Web2 and Web3 applications alike. There is only a single key, and a recipient must also have access to it to access the contained information. #### Advantages {: #symmetric-advantages } - Fast and efficient for large amounts of data - Requires less computational power #### Disadvantages {: #symmetric-disadvantages } - Key distribution can be challenging - Scalability issues in systems with many users ### Asymmetric Cryptography Asymmetric encryption is a type of cryptography that uses two different keys, known as a keypair: a public key, used to encrypt plain text, and a private counterpart, used to decrypt the ciphertext. The public key encrypts a fixed-length message that can only be decrypted with the recipient's private key and, sometimes, a set password. The public key can be used to cryptographically verify that the corresponding private key was used to create a piece of data without compromising the private key, such as with digital signatures. This has obvious implications for identity, ownership, and properties and is used in many different protocols across Web2 and Web3.
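In the Polkadot SDK, this sign-and-verify pattern is exposed directly by the `sp_core` crate. The following is a minimal sketch assuming `sp_core` with its default (std) features; `sr25519` is just one of the supported schemes:

```rust
use sp_core::{sr25519, Pair};

fn main() {
    // Generate a keypair; the private half never leaves the signer.
    let (pair, _seed) = sr25519::Pair::generate();
    let message = b"some data to authenticate";
    // Sign with the private key...
    let signature = pair.sign(message);
    // ...then anyone holding only the public key can verify the signature.
    assert!(sr25519::Pair::verify(&signature, message, &pair.public()));
}
```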
#### Advantages {: #asymmetric-advantages } - Solves the key distribution problem - Enables digital signatures and secure key exchange #### Disadvantages {: #asymmetric-disadvantages } - Slower than symmetric encryption - Requires more computational resources ### Trade-offs and Compromises Symmetric cryptography is faster and requires fewer bits in the key to achieve the same level of security that asymmetric cryptography provides. However, it requires a shared secret before communication can occur, which poses issues to its integrity and a potential compromise point. On the other hand, asymmetric cryptography doesn't require the secret to be shared ahead of time, allowing for far better end-user security. Hybrid symmetric and asymmetric cryptography is often used to overcome the engineering issues of asymmetric cryptography, as it is slower and requires more bits in the key to achieve the same level of security. It encrypts a key and then uses the comparatively lightweight symmetric cipher to do the "heavy lifting" with the message. ## Digital Signatures Digital signatures are a way of verifying the authenticity of a document or message using asymmetric keypairs. They are used to ensure that a sender or signer's document or message hasn't been tampered with in transit, and for recipients to verify that the data is accurate and from the expected sender. Creating a digital signature requires only a basic understanding of the underlying mathematics and cryptography. For a conceptual example, consider signing a check: it is expected that the check cannot be cashed multiple times. This isn't a feature of the signature system but rather the check serialization system. The bank will check that the serial number on the check hasn't already been used. Digital signatures essentially combine these two concepts, allowing the signature to provide the serialization via a unique cryptographic fingerprint that cannot be reproduced. Unlike pen-and-paper signatures, knowledge of a digital signature cannot be used to create other signatures. Digital signatures are often used in bureaucratic processes, as they are more secure than simply scanning in a signature and pasting it onto a document. The Polkadot SDK provides multiple cryptographic schemes and is generic so that it can support anything that implements the [`Pair` trait](https://paritytech.github.io/polkadot-sdk/master/sp_core/crypto/trait.Pair.html){target=\_blank}. ### Example of Creating a Digital Signature The process of creating and verifying a digital signature involves several steps: 1. The sender creates a hash of the message 2. The hash is encrypted using the sender's private key, creating the signature 3. The message and signature are sent to the recipient 4. The recipient decrypts the signature using the sender's public key 5. The recipient hashes the received message and compares it to the decrypted hash If the hashes match, the signature is valid, confirming the message's integrity and the sender's identity. ## Elliptic Curve Blockchain technology requires the ability to have multiple keys creating a signature for block proposal and validation. To this end, the Elliptic Curve Digital Signature Algorithm (ECDSA) and Schnorr signatures are two of the most commonly used methods. While ECDSA is a far simpler implementation, Schnorr signatures are more efficient when it comes to multi-signatures.
Schnorr signatures bring some noticeable features over the ECDSA/EdDSA schemes: - They are better for hierarchical deterministic key derivations - They allow for native multi-signature through [signature aggregation](https://bitcoincore.org/en/2017/03/23/schnorr-signature-aggregation/){target=\_blank} - They are generally more resistant to misuse One trade-off of using Schnorr signatures over ECDSA is that, while both require 64 bytes, only ECDSA signatures allow the public key to be recovered from the signature itself. ### Various Implementations - [ECDSA](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm){target=\_blank} - Polkadot SDK provides an ECDSA signature scheme using the [secp256k1](https://en.bitcoin.it/wiki/Secp256k1){target=\_blank} curve. This is the same cryptographic algorithm used to secure [Bitcoin](https://en.wikipedia.org/wiki/Bitcoin){target=\_blank} and [Ethereum](https://en.wikipedia.org/wiki/Ethereum){target=\_blank} - [Ed25519](https://en.wikipedia.org/wiki/EdDSA#Ed25519){target=\_blank} - is an EdDSA signature scheme using [Curve25519](https://en.wikipedia.org/wiki/Curve25519){target=\_blank}. It is carefully engineered at several levels of design and implementation to achieve very high speeds without compromising security - [SR25519](https://research.web3.foundation/Polkadot/security/keys/accounts-more){target=\_blank} - is based on the same underlying curve as Ed25519. However, it uses Schnorr signatures instead of the EdDSA scheme --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/data-encoding/ --- BEGIN CONTENT --- --- title: Data Encoding description: SCALE codec enables fast, efficient data encoding, ideal for resource-constrained environments like Wasm, supporting custom types and compact encoding. categories: Basics, Polkadot Protocol --- # Data Encoding ## Introduction The Polkadot SDK uses a lightweight and efficient encoding/decoding mechanism to optimize data transmission across the network. This mechanism, known as the _SCALE_ codec, is used for serializing and deserializing data. The SCALE codec enables communication between the runtime and the outer node. This mechanism is designed for high-performance, copy-free data encoding and decoding in resource-constrained environments like the Polkadot SDK [Wasm runtime](/develop/parachains/deployment/build-deterministic-runtime/#introduction){target=\_blank}. It is not self-describing, meaning the decoding context must fully know the encoded data types. Parity's libraries utilize the [`parity-scale-codec`](https://github.com/paritytech/parity-scale-codec){target=\_blank} crate (a Rust implementation of the SCALE codec) to handle encoding and decoding for interactions between RPCs and the runtime. The `codec` mechanism is ideal for Polkadot SDK-based chains because: - It is lightweight compared to generic serialization frameworks like [`serde`](https://serde.rs/){target=\_blank}, which add unnecessary bulk to binaries - It doesn’t rely on Rust’s `libstd`, making it compatible with `no_std` environments like the Wasm runtime - It integrates seamlessly with Rust, allowing easy derivation of encoding and decoding logic for new types using `#[derive(Encode, Decode)]` Defining a custom encoding scheme in Polkadot SDK-based chains, rather than using an existing Rust codec library, is crucial for enabling cross-platform and multi-language support.
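To make the compactness claim concrete, the following minimal sketch uses the `parity-scale-codec` crate to compare the fixed-width and compact encodings of the same small integer (the byte values match the table later on this page):

```rust
use parity_scale_codec::{Compact, Encode};

fn main() {
    // Fixed-width encoding: a u32 always occupies four little-endian bytes.
    assert_eq!(42u32.encode(), vec![42, 0, 0, 0]);
    // Compact encoding: values 0..=63 fit in a single byte (value << 2).
    assert_eq!(Compact(42u32).encode(), vec![0xa8]);
}
```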
## SCALE Codec The codec is implemented using the following traits: - [`Encode`](#encode) - [`Decode`](#decode) - [`CompactAs`](#compactas) - [`HasCompact`](#hascompact) - [`EncodeLike`](#encodelike) ### Encode The [`Encode`](https://docs.rs/parity-scale-codec/latest/parity_scale_codec/trait.Encode.html){target=\_blank} trait handles data encoding into SCALE format and includes the following key functions: - **`size_hint(&self) -> usize`** - estimates the number of bytes required for encoding to prevent multiple memory allocations. This should be inexpensive and avoid complex operations. Optional if the size isn’t known - **`encode_to<T: Output + ?Sized>(&self, dest: &mut T)`** - encodes the data, appending it to a destination buffer - **`encode(&self) -> Vec<u8>`** - encodes the data and returns it as a byte vector - **`using_encoded<R, F: FnOnce(&[u8]) -> R>(&self, f: F) -> R`** - encodes the data and passes it to a closure, returning the result - **`encoded_size(&self) -> usize`** - calculates the encoded size. Should be used when the encoded data isn’t required !!!tip For best performance, value types should override `using_encoded`, and allocating types should override `encode_to`. It's recommended to implement `size_hint` for all types where possible. ### Decode The [`Decode`](https://docs.rs/parity-scale-codec/latest/parity_scale_codec/trait.Decode.html){target=\_blank} trait handles decoding SCALE-encoded data back into the appropriate types: - **`decode<I: Input>(value: &mut I) -> Result<Self, Error>`** - decodes data from the SCALE format, returning an error if decoding fails ### CompactAs The [`CompactAs`](https://docs.rs/parity-scale-codec/latest/parity_scale_codec/trait.CompactAs.html){target=\_blank} trait wraps custom types for compact encoding: - **`encode_as(&self) -> &Self::As`** - encodes the type as a compact type - **`decode_from(_: Self::As) -> Result<Self, Error>`** - decodes from a compact encoded type ### HasCompact The [`HasCompact`](https://docs.rs/parity-scale-codec/latest/parity_scale_codec/trait.HasCompact.html){target=\_blank} trait indicates a type supports compact encoding. ### EncodeLike The [`EncodeLike`](https://docs.rs/parity-scale-codec/latest/parity_scale_codec/trait.EncodeLike.html){target=\_blank} trait is used to ensure multiple types that encode similarly are accepted by the same function. When using `derive`, it is automatically implemented. ### Data Types The table below outlines how the Rust implementation of the Parity SCALE codec encodes different data types. | Type | Description | Example SCALE Decoded Value | SCALE Encoded Value | |------|-------------|-----------------------------|---------------------| | Boolean | Boolean values are encoded using the least significant bit of a single byte. | `false` / `true` | `0x00` / `0x01` | | Compact/general integers | A "compact" or general integer encoding is sufficient for encoding large integers (up to 2^536) and is more efficient at encoding most values than the fixed-width version.
| `unsigned integer 0` / `unsigned integer 1` / `unsigned integer 42` / `unsigned integer 69` / `unsigned integer 65535` / `BigInt(100000000000000)` | `0x00` / `0x04` / `0xa8` / `0x1501` / `0xfeff0300` / `0x0b00407a10f35a` | | Enumerations (tagged-unions) | A fixed number of variants, each mutually exclusive and potentially implying a further value or series of values. The first byte encodes the index of the variant that the value is; any further bytes are used to encode any data that the variant implies. Thus, no more than 256 variants are supported. | `Int(42)` and `Bool(true)` where `enum IntOrBool { Int(u8), Bool(bool) }` | `0x002a` and `0x0101` | | Fixed-width integers | Basic integers are encoded using a fixed-width little-endian (LE) format. | `signed 8-bit integer 69` / `unsigned 16-bit integer 42` / `unsigned 32-bit integer 16777215` | `0x45` / `0x2a00` / `0xffffff00` | | Options | One or zero values of a particular type. | `Some` / `None` | `0x01` followed by the encoded value / `0x00` | | Results | Results are commonly used enumerations which indicate whether certain operations were successful or unsuccessful. | `Ok(42)` / `Err(false)` | `0x002a` / `0x0100` | | Strings | Strings are vectors of bytes (`Vec<u8>`) containing a valid UTF8 sequence. | | | | Structs | For structures, the values are named, but that is irrelevant for the encoding (names are ignored - only order matters). | `SortedVecAsc::from([3, 5, 2, 8])` | `[3, 2, 5, 8]` | | Tuples | A fixed-size series of values, each with a possibly different but predetermined and fixed type. This is simply the concatenation of each encoded value. | Tuple of compact unsigned integer and boolean: `(3, false)` | `0x0c00` | | Vectors (lists, series, sets) | A collection of same-typed values is encoded, prefixed with a compact encoding of the number of items, followed by each item's encoding concatenated in turn. | Vector of unsigned `16`-bit integers: `[4, 8, 15, 16, 23, 42]` | `0x18040008000f00100017002a00` | ## Encode and Decode Rust Trait Implementations Here's how the `Encode` and `Decode` traits are implemented: ```rust use parity_scale_codec::{Encode, Decode}; #[derive(Debug, PartialEq, Encode, Decode)] enum EnumType { #[codec(index = 15)] A, B(u32, u64), C { a: u32, b: u64, }, } fn main() { let a = EnumType::A; let b = EnumType::B(1, 2); let c = EnumType::C { a: 1, b: 2 }; a.using_encoded(|ref slice| { assert_eq!(slice, &b"\x0f"); }); b.using_encoded(|ref slice| { assert_eq!(slice, &b"\x01\x01\0\0\0\x02\0\0\0\0\0\0\0"); }); c.using_encoded(|ref slice| { assert_eq!(slice, &b"\x02\x01\0\0\0\x02\0\0\0\0\0\0\0"); }); let mut da: &[u8] = b"\x0f"; assert_eq!(EnumType::decode(&mut da).ok(), Some(a)); let mut db: &[u8] = b"\x01\x01\0\0\0\x02\0\0\0\0\0\0\0"; assert_eq!(EnumType::decode(&mut db).ok(), Some(b)); let mut dc: &[u8] = b"\x02\x01\0\0\0\x02\0\0\0\0\0\0\0"; assert_eq!(EnumType::decode(&mut dc).ok(), Some(c)); let mut dz: &[u8] = &[0]; assert_eq!(EnumType::decode(&mut dz).ok(), None); } ``` ## SCALE Codec Libraries Several SCALE codec implementations are available in various languages.
Here's a list of them: - **AssemblyScript** - [`LimeChain/as-scale-codec`](https://github.com/LimeChain/as-scale-codec){target=\_blank} - **C** - [`MatthewDarnell/cScale`](https://github.com/MatthewDarnell/cScale){target=\_blank} - **C++** - [`qdrvm/scale-codec-cpp`](https://github.com/qdrvm/scale-codec-cpp){target=\_blank} - **JavaScript** - [`polkadot-js/api`](https://github.com/polkadot-js/api){target=\_blank} - **Dart** - [`leonardocustodio/polkadart`](https://github.com/leonardocustodio/polkadart){target=\_blank} - **Haskell** - [`airalab/hs-web3`](https://github.com/airalab/hs-web3/tree/master/packages/scale){target=\_blank} - **Golang** - [`itering/scale.go`](https://github.com/itering/scale.go){target=\_blank} - **Java** - [`splix/polkaj`](https://github.com/splix/polkaj){target=\_blank} - **Python** - [`polkascan/py-scale-codec`](https://github.com/polkascan/py-scale-codec){target=\_blank} - **Ruby** - [`wuminzhe/scale_rb`](https://github.com/wuminzhe/scale_rb){target=\_blank} - **TypeScript** - [`parity-scale-codec-ts`](https://github.com/tjjfvi/subshape){target=\_blank}, [`scale-ts`](https://github.com/unstoppablejs/unstoppablejs/tree/main/packages/scale-ts#scale-ts){target=\_blank}, [`soramitsu/scale-codec-js-library`](https://github.com/soramitsu/scale-codec-js-library){target=\_blank}, [`subsquid/scale-codec`](https://github.com/subsquid/squid-sdk/tree/master/substrate/scale-codec){target=\_blank} --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/ --- BEGIN CONTENT --- --- title: Parachain Basics description: Discover Polkadot’s technical foundations, from blockchain basics and cryptography to network features like interoperability and randomness. template: index-page.html --- # Parachain Basics This section equips developers with the essential knowledge to create, deploy, and enhance applications and blockchains within the Polkadot ecosystem. Gain a comprehensive understanding of Polkadot’s foundational components, including accounts, balances, and transactions, as well as advanced topics like data encoding and cryptographic methods. Mastering these concepts is vital for building robust and secure applications on Polkadot. By exploring these core topics, developers can leverage Polkadot's unique architecture to build scalable and interoperable solutions. From understanding how Polkadot's networks operate to implementing efficient fee mechanisms and utilizing tools like SCALE encoding, this section provides the building blocks for innovation. Whether you're optimizing blockchain performance or designing cross-chain functionality, these insights will help you navigate Polkadot’s ecosystem with confidence. ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/interoperability/ --- BEGIN CONTENT --- --- title: Interoperability description: Explore the importance of interoperability in the Polkadot ecosystem, covering XCM, bridges, and cross-chain communication. categories: Basics, Polkadot Protocol --- # Interoperability ## Introduction Interoperability lies at the heart of the Polkadot ecosystem, enabling communication and collaboration across a diverse range of blockchains. By bridging the gaps between parachains, relay chains, and even external networks, Polkadot unlocks the potential for truly decentralized applications, efficient resource sharing, and scalable solutions.
Polkadot’s design ensures that blockchains can transcend their individual limitations by working together as part of a unified system. This cooperative architecture is what sets Polkadot apart in the blockchain landscape. ## Why Interoperability Matters The blockchain ecosystem is inherently fragmented. Different blockchains excel in specialized domains such as finance, gaming, or supply chain management, but these chains function in isolation without interoperability. This lack of connectivity stifles the broader utility of blockchain technology. Interoperability solves this problem by enabling blockchains to: - **Collaborate across networks** - chains can interact to share assets, functionality, and data, creating synergies that amplify their individual strengths - **Achieve greater scalability** - specialized chains can offload tasks to others, optimizing performance and resource utilization - **Expand use-case potential** - cross-chain applications can leverage features from multiple blockchains, unlocking novel user experiences and solutions In the Polkadot ecosystem, interoperability transforms a collection of isolated chains into a cohesive, efficient network, pushing the boundaries of what blockchains can achieve together. ## Key Mechanisms for Interoperability At the core of Polkadot's cross-chain collaboration are foundational technologies designed to break down barriers between networks. These mechanisms empower blockchains to communicate, share resources, and operate as a cohesive ecosystem. ### Cross-Consensus Messaging (XCM): The Backbone of Communication Polkadot's Cross-Consensus Messaging (XCM) is the standard framework for interaction between parachains, relay chains, and, eventually, external blockchains. XCM provides a trustless, secure messaging format for exchanging assets, sharing data, and executing cross-chain operations. Through XCM, decentralized applications can: - Transfer tokens and other assets across chains - Coordinate complex workflows that span multiple blockchains - Enable seamless user experiences where underlying blockchain differences are invisible XCM exemplifies Polkadot’s commitment to creating a robust and interoperable ecosystem. For further information about XCM, check the [Introduction to XCM](/develop/interoperability/intro-to-xcm/){target=\_blank} article. ### Bridges: Connecting External Networks While XCM enables interoperability within the Polkadot ecosystem, bridges extend this functionality to external blockchains such as Ethereum and Bitcoin. By connecting these networks, bridges allow Polkadot-based chains to access external liquidity, additional functionalities, and broader user bases. With bridges, developers and users gain the ability to: - Integrate external assets into Polkadot-based applications - Combine the strengths of Polkadot’s scalability with the liquidity of other networks - Facilitate true multi-chain applications that transcend ecosystem boundaries For more information about bridges in the Polkadot ecosystem, see the [Bridge Hub](/polkadot-protocol/architecture/system-chains/bridge-hub/){target=\_blank} guide. ## The Polkadot Advantage Polkadot was purpose-built for interoperability. Unlike networks that add interoperability as an afterthought, Polkadot integrates it as a fundamental design principle.
This approach offers several distinct advantages: - **Developer empowerment** - Polkadot’s interoperability tools allow developers to build applications that leverage multiple chains’ capabilities without added complexity - **Enhanced ecosystem collaboration** - chains in Polkadot can focus on their unique strengths while contributing to the ecosystem’s overall growth - **Future-proofing blockchain** - by enabling seamless communication, Polkadot ensures its ecosystem can adapt to evolving demands and technologies ## Looking Ahead Polkadot’s vision of interoperability extends beyond technical functionality, representing a shift towards a more collaborative blockchain landscape. By enabling chains to work together, Polkadot fosters innovation, efficiency, and accessibility, paving the way for a decentralized future where blockchains are not isolated competitors but interconnected collaborators. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/networks/ --- BEGIN CONTENT --- --- title: Networks description: Explore Polkadot's testing and production networks, including Westend, Kusama, and Paseo, for efficient development, deployment, and testing. categories: Basics, Polkadot Protocol, Networks --- # Networks ## Introduction The Polkadot ecosystem is built on a robust set of networks designed to enable secure and scalable development. Whether you are testing new features or deploying to live production, Polkadot offers several layers of networks tailored for each stage of the development process. From local environments to experimental networks like Kusama and community-run TestNets such as Paseo, developers can thoroughly test, iterate, and validate their applications. This guide will introduce you to Polkadot's various networks and explain how they fit into the development workflow. ## Network Overview Polkadot's development process is structured to ensure new features and upgrades are rigorously tested before being deployed on live production networks. The progression follows a well-defined path, starting from local environments and advancing through TestNets, ultimately reaching the Polkadot MainNet. The diagram below outlines the typical progression of the Polkadot development cycle: ``` mermaid flowchart LR id1[Local] --> id2[Westend] --> id4[Kusama] --> id5[Polkadot] id1[Local] --> id3[Paseo] --> id5[Polkadot] ``` This flow ensures developers can thoroughly test and iterate without risking real tokens or affecting production networks. Testing tools like [Chopsticks](#chopsticks) and various TestNets make it easier to experiment safely before releasing to production. A typical journey through the Polkadot core protocol development process might look like this: 1. **Local development node** - development starts in a local environment, where developers can create, test, and iterate on upgrades or new features using a local development node. This stage allows rapid experimentation in an isolated setup without any external dependencies 2. **Westend** - after testing locally, upgrades are deployed to [Westend](#westend), Polkadot's primary TestNet. Westend simulates real-world conditions without using real tokens, making it the ideal place for rigorous feature testing before moving on to production networks 3. **Kusama** - once features have passed extensive testing on Westend, they move to Kusama, Polkadot's experimental and fast-moving "canary" network.
Kusama operates as a high-fidelity testing ground with actual economic incentives, giving developers insights into how their features will perform in a real-world environment 4. **Polkadot** - after passing tests on Westend and Kusama, features are considered ready for deployment to Polkadot, the live production network In addition, parachain developers can leverage local TestNets like [Zombienet](#zombienet) and deploy upgrades on parachain TestNets: 5. **Paseo** - for parachain and dApp developers, Paseo serves as a community-run TestNet that mirrors Polkadot's runtime. Like Westend for core protocol development, Paseo provides a testing ground for parachain development without affecting live networks !!!note The Rococo TestNet deprecation date was October 14, 2024. Teams should use Westend for Polkadot protocol and feature testing and Paseo for chain development-related testing. ## Polkadot Development Networks Development and testing are crucial to building robust dApps and parachains and performing network upgrades within the Polkadot ecosystem. To achieve this, developers can leverage various networks and tools that provide a risk-free environment for experimentation and validation before deploying features to live networks. These networks help avoid the costs and risks associated with real tokens, enabling testing for functionalities like governance, cross-chain messaging, and runtime upgrades. ## Kusama Network Kusama is the experimental version of Polkadot, designed for developers who want to move quickly and test their applications in a real-world environment with economic incentives. Kusama serves as a production-grade testing ground where developers can deploy features and upgrades with the pressure of game theory and economics in mind. It mirrors Polkadot but operates as a more flexible space for innovation. The native token for Kusama is KSM. For more information about KSM, visit the [Native Assets](https://wiki.polkadot.network/learn/learn-dot/#kusama){target=\_blank} page. ## Test Networks The following test networks provide controlled environments for testing upgrades and new features. TestNet tokens are available from the [Polkadot faucet](https://faucet.polkadot.io/){target=\_blank}. ### Westend Westend is Polkadot's primary permanent TestNet. Unlike temporary test networks, Westend is not reset to the genesis block, making it an ongoing environment for testing Polkadot core features. Managed by Parity Technologies, Westend ensures that developers can test features in a real-world simulation without using actual tokens. The native token for Westend is WND. More details about WND can be found on the [Native Assets](https://wiki.polkadot.network/learn/learn-dot/#getting-tokens-on-the-westend-testnet){target=\_blank} page. ### Paseo [Paseo](https://github.com/paseo-network){target=\_blank} is a community-managed TestNet designed for parachain and dApp developers. It mirrors Polkadot's runtime and is maintained by Polkadot community members. Paseo provides a dedicated space for parachain developers to test their applications in a Polkadot-like environment without the risks associated with live networks. The native token for Paseo is PAS. Additional information on PAS is available on the [Native Assets](https://wiki.polkadot.network/learn/learn-dot/#getting-tokens-on-the-paseo-testnet){target=\_blank} page. ## Local Test Networks Local test networks are an essential part of the development cycle for blockchain developers using the Polkadot SDK.
They allow for fast, iterative testing in controlled, private environments without connecting to public TestNets. Developers can quickly spin up local instances to experiment, debug, and validate their code before deploying to larger TestNets like Westend or Paseo. Two key tools for local network testing are Zombienet and Chopsticks. ### Zombienet [Zombienet](https://github.com/paritytech/zombienet){target=\_blank} is a flexible testing framework for Polkadot SDK-based blockchains. It enables developers to create and manage ephemeral, short-lived networks. This feature makes Zombienet particularly useful for quick iterations, as it allows you to run multiple local networks concurrently, mimicking different runtime conditions. Whether you're developing a parachain or testing your custom blockchain logic, Zombienet gives you the tools to automate local testing. Key features of Zombienet include: - Creating dynamic, local networks with different configurations - Running parachains and relay chains in a simulated environment - Efficient testing of network components like cross-chain messaging and governance Zombienet is ideal for developers looking to test quickly and thoroughly before moving to more resource-intensive public TestNets. ### Chopsticks [Chopsticks](https://github.com/AcalaNetwork/chopsticks){target=\_blank} is a tool designed to create forks of Polkadot SDK-based blockchains, allowing developers to interact with network forks as part of their testing process. This capability makes Chopsticks a powerful option for testing upgrades, runtime changes, or cross-chain applications in a forked network environment. Key features of Chopsticks include: - Forking live Polkadot SDK-based blockchains for isolated testing - Simulating cross-chain messages in a private, controlled setup - Debugging network behavior by interacting with the fork in real-time Chopsticks provides a controlled environment for developers to safely explore the effects of runtime changes. It ensures that network behavior is tested and verified before upgrades are deployed to live networks. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/node-and-runtime/ --- BEGIN CONTENT --- --- title: Node and Runtime description: Learn how Polkadot SDK-based nodes function, how the client and runtime are separated, and how they communicate using SCALE-encoded data. categories: Basics, Polkadot Protocol --- # Node and Runtime ## Introduction Every blockchain platform relies on a decentralized network of computers, called nodes, that communicate with each other about transactions and blocks. In this context, a node refers to the software running on the connected devices rather than the physical or virtual machines in the network. Polkadot SDK-based nodes consist of two main components, each with distinct responsibilities: the client (also called node) and the runtime. If the system were a monolithic protocol, any modification would require updating the entire system. Instead, Polkadot achieves true upgradeability by defining an immutable meta-protocol (the client) and a protocol (the runtime) that can be upgraded independently. This separation gives the [Polkadot Relay Chain](/polkadot-protocol/architecture/polkadot-chain){target=\_blank} and all connected [parachains](/polkadot-protocol/architecture/parachains){target=\_blank} an evolutionary advantage over other blockchain platforms. 
## Architectural Principles The Polkadot SDK-based blockchain architecture is fundamentally built on two distinct yet interconnected components: - **Client (Meta-protocol)** - Handles the foundational infrastructure of the blockchain - Manages runtime execution, networking, consensus, and other off-chain components - Provides an immutable base layer that ensures network stability - Upgradable only through hard forks - **Runtime (Protocol)** - Defines the blockchain's state transition logic - Determines the specific rules and behaviors of the blockchain - Compiled to WebAssembly (Wasm) for platform-independent execution - Capable of being upgraded without network-wide forking ### Advantages of this Architecture - **Forkless upgrades** - runtime can be updated without disrupting the entire network - **Modularity** - clear separation allows independent development of client and runtime - **Flexibility** - enables rapid iteration and evolution of blockchain logic - **Performance** - WebAssembly compilation provides efficient, cross-platform execution ## Node (Client) The node, also known as the client, is the core component responsible for executing the Wasm runtime and orchestrating various essential blockchain components. It ensures the correct execution of the state transition function and manages multiple critical subsystems, including: - **Wasm execution** - runs the blockchain runtime, which defines the state transition rules - **Database management** - stores blockchain data - **Networking** - facilitates peer-to-peer communication, block propagation, and transaction gossiping - **Transaction pool (Mempool)** - manages pending transactions before they are included in a block - **Consensus mechanism** - ensures agreement on the blockchain state across nodes - **RPC services** - provides external interfaces for applications and users to interact with the node ## Runtime The runtime is more than just a set of rules. It's the fundamental logic engine that defines a blockchain's entire behavior. In Polkadot SDK-based blockchains, the runtime represents a complete, self-contained description of the blockchain's state transition function. ### Characteristics The runtime is distinguished by three key characteristics: - **Business logic** - defines the complete application-specific blockchain behavior - **WebAssembly compilation** - ensures platform-independent, secure execution - **On-chain storage** - stored within the blockchain's state, allowing dynamic updates ### Key Functions The runtime performs several critical functions, such as: - Defining state transition rules - Implementing blockchain-specific logic - Managing account interactions - Controlling transaction processing - Defining governance mechanisms - Handling custom pallets and modules ## Communication Between Node and Runtime The client and runtime communicate exclusively through [SCALE-encoded](/polkadot-protocol/parachain-basics/data-encoding){target=\_blank} data. This ensures efficient and compact data exchange between the two components. ### Runtime APIs The Runtime API consists of well-defined functions and constants that the client assumes are implemented in the runtime Wasm blob. These APIs enable the client to interact with the runtime to execute blockchain operations and retrieve information.
The client invokes these APIs to: - Build, execute, and finalize blocks - Access metadata - Access consensus-related information - Handle transaction execution ### Host Functions During execution, the runtime can access certain external client functionalities via host functions. These functions, exposed by the client, allow the runtime to perform operations outside the WebAssembly domain. Host functions enable the runtime to: - Perform cryptographic operations - Access the current blockchain state - Handle storage modifications - Allocate memory --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/parachain-basics/randomness/ --- BEGIN CONTENT --- --- title: Randomness description: Explore the importance of randomness in PoS blockchains, focusing on Polkadot’s VRF-based approach to ensure fairness and security in validator selection. categories: Basics, Polkadot Protocol --- # Randomness ## Introduction Randomness is crucial in Proof of Stake (PoS) blockchains to ensure a fair and unpredictable distribution of validator duties. However, computers are inherently deterministic, meaning the same input always produces the same output. What we typically refer to as "random" numbers on a computer are actually pseudo-random. These numbers rely on an initial "seed," which can come from external sources like [atmospheric noise](https://www.random.org/randomness/){target=\_blank}, [heart rates](https://mdpi.altmetric.com/details/47574324){target=\_blank}, or even [lava lamps](https://en.wikipedia.org/wiki/Lavarand){target=\_blank}. While this may seem random, given the same "seed," the same sequence of numbers will always be generated. In a global blockchain network, relying on real-world entropy for randomness isn’t feasible because these inputs vary by time and location. If nodes use different inputs, blockchains can fork. Hence, real-world randomness isn't suitable for use as a seed in blockchain systems. Currently, two primary methods for generating randomness in blockchains are used: [`RANDAO`](#randao) and [`VRF`](#vrf) (Verifiable Random Function). Polkadot adopts the `VRF` approach for its randomness. ## VRF A Verifiable Random Function (VRF) is a cryptographic function that generates a random number along with a proof that the submitter genuinely produced it. This proof allows anyone to verify the validity of the random number. Polkadot's VRF is similar to the one used in [**Ouroboros Praos**](https://eprint.iacr.org/2017/573.pdf){target=\_blank}, which secures randomness for block production in systems like [BABE](/polkadot-protocol/architecture/polkadot-chain/pos-consensus/#block-production-babe){target=\_blank} (Polkadot’s block production mechanism). The key difference is that Polkadot's VRF doesn’t rely on a central clock—avoiding the issue of whose clock to trust. Instead, it uses its own past results and slot numbers to simulate time and determine future outcomes. ### How VRF Works Slots on Polkadot are discrete units of time, each lasting six seconds, and can potentially hold a block. Multiple slots form an epoch, with 2400 slots making up one four-hour epoch. In each slot, validators execute a "die roll" using a VRF. The VRF uses three inputs: 1. A secret key, unique to each validator, used for the die roll 2. An epoch randomness value, derived from the hash of VRF outputs from blocks two epochs ago (N-2), so past randomness influences the current epoch (N) 3. The current slot number This process helps maintain fair randomness across the network.
Here is a graphical representation: ![](/images/polkadot-protocol/parachain-basics/blocks-transactions-fees/randomness/slots-epochs.webp) The VRF produces two outputs: a result (the random number) and a proof (verifying that the number was generated correctly). So, the VRF can be expressed as: `(RESULT, PROOF) = VRF(SECRET, EPOCH_RANDOMNESS_VALUE, CURRENT_SLOT_NUMBER)` Put simply, performing a "VRF roll" generates a random number along with proof that the number was genuinely produced and not arbitrarily chosen. After executing the VRF, the `RESULT` is compared to a protocol-defined `THRESHOLD`. If the `RESULT` is below the `THRESHOLD`, the validator becomes a valid candidate to propose a block for that slot and attempts to create one, submitting it along with the `PROOF` and `RESULT`. Otherwise, the validator skips the slot. As a result, there may be multiple validators eligible to propose a block for a slot. In this case, the block accepted by other nodes will prevail, provided it is on the chain with the latest finalized block as determined by the GRANDPA finality gadget. Because validators roll independently, it is also possible that no block candidates appear in a slot if every roll is above the threshold. In that case, the AURA consensus takes over: AURA is a fallback mechanism that randomly selects a validator to produce a block, running in parallel with BABE and stepping in only when no block producers exist for a slot. Otherwise, it remains inactive. To learn how this is resolved and how Polkadot block times remain nearly constant, see the [PoS Consensus](/polkadot-protocol/architecture/polkadot-chain/pos-consensus/){target=\_blank} page of this documentation. ## RANDAO An alternative on-chain randomness method is Ethereum's RANDAO, where validators perform thousands of hashes on a seed and publish the final hash during a round. The collective input from all validators forms the random number, and as long as one honest validator participates, the randomness is secure. To enhance security, RANDAO can optionally be combined with a Verifiable Delay Function (VDF), ensuring that randomness can't be predicted or manipulated during computation. For more information about RANDAO, see the [Randomness - RANDAO](https://eth2book.info/capella/part2/building_blocks/randomness/){target=\_blank} section of the Upgrading Ethereum documentation. ## VDFs Verifiable Delay Functions (VDFs) are time-bound computations that, even on parallel computers, take a set amount of time to complete. They produce a unique result that can be quickly verified publicly. When combined with RANDAO, feeding RANDAO's output into a VDF introduces a delay that nullifies an attacker's chance to influence the randomness. However, VDFs likely require specialized ASIC devices that run separately from standard nodes. !!!warning Although only one honest VDF device is needed to secure the system, and such devices would be open source and relatively inexpensive, running them involves significant costs without direct incentives, adding friction for blockchain users.
## Additional Resources For more information about the reasoning behind these design choices, along with proofs, see Polkadot's research on blockchain randomness and sortition in the [Block production](https://research.web3.foundation/Polkadot/protocols/block-production){target=\_blank} entry of the Web3 Foundation research documentation. For a discussion with Web3 Foundation researchers about when and under what conditions Polkadot's randomness can be utilized, see the [Discussion on Randomness used in Polkadot](https://github.com/use-ink/ink/issues/57){target=\_blank} issue on GitHub. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/smart-contract-basics/accounts/ --- BEGIN CONTENT --- --- title: Accounts in Asset Hub Smart Contracts description: Bridges Ethereum's 20-byte addresses with Polkadot's 32-byte accounts, enabling seamless interaction while maintaining compatibility with Ethereum tooling. categories: Basics, Polkadot Protocol --- # Accounts on Asset Hub Smart Contracts !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Asset Hub natively utilizes Polkadot's 32-byte account system while providing interoperability with Ethereum's 20-byte addresses through an automatic conversion system. When interacting with smart contracts: - Ethereum-compatible wallets (like MetaMask) can use their familiar 20-byte addresses. - Polkadot accounts continue using their native 32-byte format. - The Asset Hub chain automatically handles conversion between the two formats behind the scenes: - 20-byte Ethereum addresses are padded with `0xEE` bytes to create valid 32-byte Polkadot accounts. - 32-byte Polkadot accounts can optionally register a mapping to a 20-byte address for Ethereum compatibility. This dual-format approach enables Asset Hub to maintain compatibility with Ethereum tooling while fully integrating with the Polkadot ecosystem. ## Address Types and Mappings The platform handles two distinct address formats: - [Ethereum-style addresses (20 bytes)](https://ethereum.org/en/developers/docs/accounts/#account-creation){target=\_blank} - [Polkadot native account IDs (32 bytes)](https://wiki.polkadot.network/docs/build-protocol-info#addresses){target=\_blank} ### Ethereum to Polkadot Mapping The [`AccountId32Mapper`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/struct.AccountId32Mapper.html){target=\_blank} implementation in [`pallet_revive`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/index.html){target=\_blank} handles the core address conversion logic. For converting a 20-byte Ethereum address to a 32-byte Polkadot address, the pallet uses a simple concatenation approach: - [**Core mechanism**](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/trait.AddressMapper.html#tymethod.to_fallback_account_id){target=\_blank}: takes a 20-byte Ethereum address and extends it to 32 bytes by adding twelve `0xEE` bytes at the end. The key benefits of this approach are: - Fully reversible, allowing a smooth transition back to the Ethereum format. - Provides clear identification of Ethereum-controlled accounts through the `0xEE` suffix pattern. - Maintains cryptographic security, since reproducing the `0xEE` suffix pattern requires on the order of `2^96` work. ### Polkadot to Ethereum Mapping The conversion from 32-byte Polkadot accounts to 20-byte Ethereum addresses is more complex than the reverse direction due to the lossy nature of the conversion.
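To make the two conversion directions concrete, the following is a minimal, hypothetical Solidity sketch that mirrors the mapping rules described on this page. The actual logic lives in the Rust implementation of `pallet_revive`; the library and function names below are illustrative only.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Illustrative restatement of pallet_revive's address mapping rules.
library AddressMappingSketch {
    // 20-byte Ethereum address -> 32-byte fallback account:
    // the address followed by twelve 0xEE bytes.
    function toFallbackAccountId(address who) internal pure returns (bytes32) {
        return bytes32(bytes20(who)) | bytes32(uint256(0xEEEEEEEEEEEEEEEEEEEEEEEE));
    }

    // 32-byte account -> 20-byte address: Ethereum-derived accounts
    // (0xEE suffix) are truncated back; native accounts are hashed.
    function toEthAddress(bytes32 accountId) internal pure returns (address) {
        if (isEthDerived(accountId)) {
            // Strip the twelve-byte 0xEE suffix.
            return address(bytes20(accountId));
        }
        // Native account: last 20 bytes of the Keccak-256 hash.
        return address(uint160(uint256(keccak256(abi.encodePacked(accountId)))));
    }

    function isEthDerived(bytes32 accountId) internal pure returns (bool) {
        // The low 12 bytes must all be 0xEE.
        return uint96(uint256(accountId)) == uint96(0xEEEEEEEEEEEEEEEEEEEEEEEE);
    }
}
```

Note that `toEthAddress` is lossy for native accounts, which is exactly why the optional stateful mapping described below exists.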
The [`AccountId32Mapper`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/struct.AccountId32Mapper.html){target=\_blank} handles this through two distinct approaches: - **For Ethereum-derived accounts**: The system uses the [`is_eth_derived`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/fn.is_eth_derived.html){target=\_blank} function to detect accounts that were originally Ethereum addresses (identified by the `0xEE` suffix pattern). For these accounts, the conversion strips the last 12 bytes to recover the original 20-byte Ethereum address. - **For native Polkadot accounts**: Since these accounts utilize the whole 32-byte space and weren't derived from Ethereum addresses, direct truncation would result in lost information. Instead, the system: 1. Hashes the entire 32-byte account using Keccak-256. 2. Takes the last 20 bytes of the hash to create the Ethereum address. 3. This ensures a deterministic mapping while avoiding simple truncation. The conversion process is implemented through the [`to_address`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/trait.AddressMapper.html#tymethod.to_address){target=\_blank} function, which automatically detects the account type and applies the appropriate conversion method. **Stateful Mapping for Reversibility**: Since the conversion from 32-byte to 20-byte addresses is inherently lossy, the system provides an optional stateful mapping through the [`OriginalAccount`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/pallet/storage_types/struct.OriginalAccount.html){target=\_blank} storage. When a Polkadot account registers a mapping (via the [`map`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/trait.AddressMapper.html#tymethod.map){target=\_blank} function), the system stores the original 32-byte account ID, enabling the [`to_account_id`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/trait.AddressMapper.html#tymethod.to_account_id){target=\_blank} function to recover the exact original account rather than falling back to a default conversion. ## Account Registration The registration process is implemented through the [`map`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/trait.AddressMapper.html#tymethod.map){target=\_blank} function. This process involves: - Checking if the account is already mapped. - Calculating and collecting required deposits based on data size. - Storing the address suffix for future reference. - Managing the currency holds for security. ## Fallback Accounts The fallback mechanism is integrated into the [`to_account_id`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/trait.AddressMapper.html#tymethod.to_account_id){target=\_blank} function. It provides a safety net for address conversion by: - First, attempting to retrieve stored mapping data. - Falling back to the default conversion method if no mapping exists. - Maintaining consistency in address representation. ## Contract Address Generation The system supports two methods for generating contract addresses: - [**CREATE1 method**](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/fn.create1.html){target=\_blank}: - Uses the deployer address and nonce. - Generates deterministic addresses for standard contract deployment. - [**CREATE2 method**](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/fn.create2.html){target=\_blank}: - Uses the deployer address, initialization code, input data, and salt.
- Enables predictable address generation for advanced use cases. ## Security Considerations The address mapping system maintains security through several design choices evident in the implementation: - The stateless mapping requires no privileged operations, as shown in the [`to_fallback_account_id`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/trait.AddressMapper.html#tymethod.to_fallback_account_id){target=\_blank} implementation. - The stateful mapping requires a deposit managed through the [`Currency`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/pallet/trait.Config.html#associatedtype.Currency){target=\_blank} trait. - Mapping operations are protected against common errors through explicit checks. - The system prevents double-mapping through the [`ensure!(!Self::is_mapped(account_id))`](https://github.com/paritytech/polkadot-sdk/blob/stable2412/substrate/frame/revive/src/address.rs#L125){target=\_blank} check. All source code references are from the [`address.rs`](https://github.com/paritytech/polkadot-sdk/blob/stable2412/substrate/frame/revive/src/address.rs){target=\_blank} file in the Revive pallet of the Polkadot SDK repository. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/smart-contract-basics/blocks-transactions-fees/ --- BEGIN CONTENT --- --- title: Blocks, Transactions and Fees for Asset Hub Smart Contracts description: Explore how Asset Hub smart contracts handle blocks, transactions, and fees with EVM compatibility, supporting various Ethereum transaction types. categories: Basics, Polkadot Protocol --- # Blocks, Transactions, and Fees !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Asset Hub smart contracts operate within the Polkadot ecosystem using the [`pallet_revive`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/){target=\_blank} implementation, which provides EVM compatibility. While many aspects of blocks and transactions are inherited from the underlying parachain architecture, there are specific considerations and mechanisms unique to smart contract operations on Asset Hub. ## Smart Contract Blocks Smart contract blocks in Asset Hub follow the same fundamental structure as parachain blocks, inheriting all standard parachain block components. The `pallet_revive` implementation maintains this consistency while adding necessary [EVM-specific features](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/evm){target=\_blank}. For detailed implementation specifics, the [`Block`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/evm/struct.Block.html){target=\_blank} struct in `pallet_revive` demonstrates how parachain and smart contract block implementations align. ## Smart Contract Transactions Asset Hub implements a sophisticated transaction system that supports various transaction types and formats, encompassing both traditional parachain operations and EVM-specific interactions. ### EVM Transaction Types The system provides a fundamental [`eth_transact`](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/pallet/dispatchables/fn.eth_transact.html){target=\_blank} interface for processing raw EVM transactions dispatched through [Ethereum JSON-RPC APIs](/develop/smart-contracts/json-rpc-apis/){target=\_blank}. 
This interface acts as a wrapper for Ethereum transactions, requiring an encoded signed transaction payload, though it cannot be dispatched directly. Building upon this foundation, the system supports multiple transaction formats to accommodate different use cases and optimization needs: - [**Legacy transactions**](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/evm/struct.TransactionLegacyUnsigned.html){target=\_blank} - the original Ethereum transaction format, providing basic transfer and contract interaction capabilities. These transactions use a simple pricing mechanism and are supported for backward compatibility - [**EIP-1559 transactions**](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/evm/struct.Transaction1559Unsigned.html){target=\_blank} - an improved transaction format that introduces a more predictable fee mechanism with base fee and priority fee components. This format helps optimize gas fee estimation and network congestion management - [**EIP-2930 transactions**](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/evm/struct.Transaction2930Unsigned.html){target=\_blank} - introduces access lists to optimize gas costs for contract interactions by pre-declaring accessed addresses and storage slots - [**EIP-4844 transactions**](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/evm/struct.Transaction4844Unsigned.html){target=\_blank} - implements blob-carrying transactions, designed to optimize Layer 2 scaling solutions by providing dedicated space for rollup data Each transaction type can exist in both signed and unsigned states, with appropriate validation and processing mechanisms for each. ## Fees and Gas Asset Hub combines parachain transaction fees with EVM gas mechanics in a resource management system that provides both Ethereum compatibility and enhanced features. ### Gas Model Overview Gas serves as the fundamental unit for measuring computational costs, with each network operation consuming a specified amount. This implementation maintains compatibility with Ethereum's approach while adding parachain-specific optimizations. - **Dynamic gas scaling** - Asset Hub implements a dynamic pricing mechanism that reflects actual execution performance. This results in: - More efficient pricing for computational instructions relative to I/O operations - Better correlation between gas costs and actual resource consumption - Need for developers to implement flexible gas calculation rather than hardcoding values - **Multi-dimensional resource metering** - Asset Hub extends beyond the traditional single-metric gas model to track three distinct resources: - `ref_time` (computation time) - Functions as the traditional gas equivalent - Measures actual computational resource usage - Primary metric for basic operation costs - `proof_size` (verification overhead) - Tracks state proof size required for validator verification - Helps manage consensus-related resource consumption - Important for cross-chain operations - `storage_deposit` (state management) - Manages blockchain state growth - Implements a deposit-based system for long-term storage - Refundable when storage is freed These resources can be limited at both transaction and contract levels, similar to Ethereum's gas limits.
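Because of dynamic gas scaling, hardcoded gas values ported from Ethereum are a common pitfall. Below is a minimal, hypothetical Solidity sketch of the difference; the contract and function names are illustrative and not part of any Asset Hub API:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract PayoutSketch {
    // Anti-pattern: a stipend tuned to Ethereum's fixed gas schedule may be
    // mispriced under benchmark-based, dynamic gas scaling.
    function sendWithFixedStipend(address payable to) external payable {
        (bool ok, ) = to.call{value: msg.value, gas: 2300}("");
        require(ok, "transfer failed");
    }

    // Preferred: avoid hardcoding a gas figure; forward gas and rely on
    // explicit state guards rather than assuming a specific gas cost.
    function sendFlexible(address payable to) external payable {
        (bool ok, ) = to.call{value: msg.value}("");
        require(ok, "transfer failed");
    }
}
```

The same caution applies to gas constants embedded in libraries, deployment scripts, and off-chain tooling.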
For more information, check the [Gas Model](/polkadot-protocol/smart-contract-basics/evm-vs-polkavm#gas-model){target=\_blank} section in the [EVM vs PolkaVM](/polkadot-protocol/smart-contract-basics/evm-vs-polkavm/){target=\_blank} article. ### Fee Components - **Base fees** - Storage deposit for contract deployment - Minimum transaction fee for network access - Network maintenance costs - **Execution fees** - Computed based on gas consumption - Converted to native currency using network-defined rates - Reflects actual computational resource usage - **Storage fees** - Deposit for long-term storage usage - Refundable when storage is freed - Helps prevent state bloat ### Gas Calculation and Conversion The system maintains precise conversion mechanisms between: - Substrate weights and EVM gas units - Native currency and gas costs - Different resource metrics within the multi-dimensional model This ensures accurate fee calculation while maintaining compatibility with existing Ethereum tools and workflows. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/smart-contract-basics/evm-vs-polkavm/ --- BEGIN CONTENT --- --- title: EVM vs PolkaVM description: Compares EVM and PolkaVM, highlighting key architectural differences, gas models, memory management, and account handling while ensuring Solidity compatibility. categories: Basics, Polkadot Protocol --- # EVM vs PolkaVM !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction While [PolkaVM](/polkadot-protocol/smart-contract-basics/polkavm-design/){target=\_blank} strives for maximum Ethereum compatibility, several fundamental design decisions create necessary divergences from the [EVM](https://ethereum.org/en/developers/docs/evm/){target=\_blank}. These differences represent trade-offs that enhance performance and resource management while maintaining accessibility for Solidity developers. ## Core Virtual Machine Architecture The most significant departure from Ethereum comes from PolkaVM's foundation itself. Rather than implementing the EVM, PolkaVM utilizes a RISC-V instruction set. For most Solidity developers, this architectural change remains transparent thanks to the [Revive compiler's](https://github.com/paritytech/revive){target=\_blank} complete Solidity support, including inline assembler functionality. ```mermaid graph TD subgraph "Ethereum Path" EthCompile["Standard Solidity Compiler"] --> EVM_Bytecode["EVM Bytecode"] EVM_Bytecode --> EVM["Stack-based EVM"] EVM --> EthExecution["Contract Execution"] end subgraph "PolkaVM Path" ReviveCompile["Revive Compiler"] --> RISCV_Bytecode["RISC-V Format Bytecode"] RISCV_Bytecode --> PolkaVM["RISC-V Based PolkaVM"] PolkaVM --> PolkaExecution["Contract Execution"] end EthExecution -.-> DifferencesNote["Key Differences: - Instruction Set Architecture - Bytecode Format - Runtime Behavior"] PolkaExecution -.-> DifferencesNote ``` However, this architectural difference becomes relevant in specific scenarios. Tools that attempt to download and inspect contract bytecode will fail, as they expect EVM bytecode rather than PolkaVM's RISC-V format. Most applications typically pass bytecode as an opaque blob, making this a non-issue for standard use cases. This primarily affects contracts using [`EXTCODECOPY`](https://www.evm.codes/?fork=cancun#3c){target=\_blank} to manipulate code at runtime. 
A contract encounters problems specifically when it uses `EXTCODECOPY` to copy contract code into memory and then attempts to mutate it. This pattern is not possible in standard Solidity and requires dropping down to YUL assembly. An example would be a factory contract written in assembly that constructs and instantiates new contracts by generating code at runtime. Such contracts are rare in practice. PolkaVM offers an elegant alternative through its [on-chain constructors](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/pallet/struct.Pallet.html#method.bare_instantiate){target=\_blank}, enabling contract instantiation without runtime code modification, making this pattern unnecessary. This architectural difference also impacts how contract deployment works more broadly, as discussed in the [Contract Deployment](#contract-deployment) section. ### High-Level Architecture Comparison | Feature | Ethereum Virtual Machine (EVM) | PolkaVM | | :---------------------------: | :----------------------------------------------------------------------------------: | :----------------------------------------------------: | | **Instruction Set** | Stack-based architecture | RISC-V instruction set | | **Bytecode Format** | EVM bytecode | RISC-V format | | **Contract Size Limit** | 24KB code size limit | Contract-specific memory limits | | **Compiler** | Solidity Compiler | Revive Compiler | | **Inline Assembly** | Supported | Supported with the compatibility layer | | **Code Introspection** | Supported via [`EXTCODECOPY`](https://www.evm.codes/?fork=cancun#3c){target=\_blank} | Limited support, alternative via on-chain constructors | | **Resource Metering** | Single gas metric | Multi-dimensional | | **Runtime Code Modification** | Supported | Limited, with alternatives | | **Contract Instantiation** | Standard deployment | On-chain constructors for flexible instantiation | ## Gas Model Ethereum's resource model relies on a single metric: [gas](https://ethereum.org/en/developers/docs/gas/#what-is-gas){target=\_blank}, which serves as the universal unit for measuring computational costs. Each operation on the network consumes a specific amount of gas. Most platforms aiming for Ethereum compatibility typically adopt identical gas values to ensure seamless integration. The significant changes to Ethereum's gas model will be outlined in the following sections. ### Dynamic Gas Value Scaling Instead of adhering to Ethereum's fixed gas values, PolkaVM implements benchmark-based pricing that better reflects its improved execution performance. This makes instructions cheaper relative to I/O-bound operations but requires developers to avoid hardcoding gas values, particularly in cross-contract calls. ### Multi-Dimensional Resource Metering Moving beyond Ethereum's single gas metric, PolkaVM meters three distinct resources: - **`ref_time`** - Equivalent to traditional gas, measuring computation time. - **`proof_size`** - Tracks state proof size for validator verification. - **`storage_deposit`** - Manages state bloat through a deposit system. All three resources can be limited at the transaction level, just like gas on Ethereum. The [Ethereum RPC proxy](https://github.com/paritytech/polkadot-sdk/tree/master/substrate/frame/revive/rpc){target=\_blank} maps all three dimensions into the single gas dimension, ensuring everything behaves as expected for users. These resources can also be limited when making cross-contract calls, which is essential for security when interacting with untrusted contracts. 
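Since gas stipends do not bound what an untrusted callee can do, explicit reentrancy protection in contract code is essential when calling into untrusted contracts. A minimal, hypothetical Solidity sketch of the standard lock pattern (names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract VaultSketch {
    mapping(address => uint256) public balances;
    bool private locked;

    // Classic reentrancy lock: reject nested calls into guarded functions.
    modifier nonReentrant() {
        require(!locked, "reentrant call");
        locked = true;
        _;
        locked = false;
    }

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external nonReentrant {
        require(balances[msg.sender] >= amount, "insufficient balance");
        balances[msg.sender] -= amount; // update state before the external call
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "send failed");
    }
}
```

Explicit limits on the call's resources are a complementary defense.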
However, Solidity only allows specifying `gas_limit` for cross-contract calls. The `gas_limit` is most similar to Polkadot's `ref_time_limit`, but the Revive compiler does not impose any `gas_limit` on cross-contract calls, for two key reasons: - **Semantic differences** - `gas_limit` and `ref_time_limit` are not semantically identical; blindly passing EVM gas as `ref_time_limit` can lead to unexpected behavior. - **Incomplete protection** - The other two resources (`proof_size` and `storage_deposit`) would remain uncapped anyway, making a `ref_time` cap alone insufficient to prevent malicious callees from performing DoS attacks. When resources are "uncapped" in cross-contract calls, they remain constrained by transaction-specified limits, preventing abuse of the transaction signer. !!! note In the future, the runtime will provide a special precompile allowing cross-contract calls with limits specified for all weight dimensions. All gas-related opcodes like [`GAS`](https://www.evm.codes/?fork=cancun#5a){target=\_blank} or [`GAS_LIMIT`](https://www.evm.codes/?fork=cancun#45){target=\_blank} return only the `ref_time` value, as it's the closest match to traditional gas. Extended APIs will be provided through precompiles to make full use of all resources, including cross-contract calls with all three resources specified. ## Memory Management The EVM and the PolkaVM take fundamentally different approaches to memory constraints: | Feature | Ethereum Virtual Machine (EVM) | PolkaVM | | :----------------------: | :---------------------------------------: | :--------------------------------------------: | | **Memory Constraints** | Indirect control via gas costs | Hard memory limits per contract | | **Cost Model** | Increasing gas curve with allocation size | Fixed costs separated from execution gas | | **Memory Limits** | Soft limits through prohibitive gas costs | Hard fixed limits per contract | | **Pricing Efficiency** | Potential overcharging for memory | More efficient through separation of concerns | | **Contract Nesting** | Limited by available gas | Limited by constant memory per contract | | **Memory Metering** | Dynamic based on total allocation | Static limits per contract instance | | **Future Improvements** | Incremental gas cost updates | Potential dynamic metering for deeper nesting | | **Cross-Contract Calls** | Handled through gas forwarding | Requires careful boundary limit implementation | The architecture establishes a constant memory limit per contract, which is the basis for calculating the maximum contract nesting depth. This calculation assumes worst-case memory usage for each nested contract, resulting in a straightforward but conservative limit that operates independently of actual memory consumption. Future iterations may introduce dynamic memory metering, allowing deeper nesting depths for contracts with smaller memory footprints. However, such an enhancement would require careful implementation of cross-contract boundary limits before API stabilization, as it would introduce an additional resource metric to the system. ### Current Memory Limits The following table depicts memory-related limits at the time of writing: | Limit | Maximum | | :----------------------------------------: | :-------------: | | Call stack depth | 5 | | Event topics | 4 | | Event data payload size (including topics) | 416 bytes | | Storage value size | 416 bytes | | Transient storage variables | 128 uint values | | Immutable variables | 16 uint values | | Contract code blob size | ~100 kilobytes |
!!! note Limits might be increased in the future. To guarantee that existing contracts work as expected, limits will never be decreased. ## Account Management - Existential Deposit Ethereum and Polkadot handle account persistence differently, affecting state management and contract interactions: ### Account Management Comparison | Feature | Ethereum Approach | PolkaVM/Polkadot Approach | | :-----------------------: | :---------------------------------------------------: | :----------------------------------------------------: | | **Account Persistence** | Accounts persist indefinitely, even with zero balance | Requires existential deposit (ED) to maintain account | | **Minimum Balance** | None | ED required | | **Account Deletion** | Accounts remain in state | Accounts below ED are automatically deleted | | **Contract Accounts** | Exist indefinitely | Must maintain ED | | **Balance Reporting** | Reports full balance | Reports ED-adjusted balance via Ethereum RPC | | **New Account Transfers** | Standard transfer | Includes ED automatically with extra fee cost | | **Contract-to-Contract** | Direct transfers | ED drawn from transaction signer, not sending contract | | **State Management** | Potential bloat from zero-balance accounts | Optimized with auto-deletion of dust accounts | This difference introduces potential compatibility challenges for Ethereum-based contracts and tools, particularly wallets. To mitigate this, PolkaVM implements several transparent adjustments: - Balance queries via Ethereum RPC automatically deduct the ED, ensuring reported balances match spendable amounts. - Account balance checks through EVM opcodes reflect the ED-adjusted balance. - Transfers to new accounts automatically include the ED (`x + ED`), with the extra cost incorporated into transaction fees. - Contract-to-contract transfers handle ED requirements by: - Drawing ED from the transaction signer instead of the sending contract. - Keeping transfer amounts transparent for contract logic. - Treating ED like other storage deposit costs. This approach ensures that Ethereum contracts work without modifications while maintaining Polkadot's optimized state management. ## Contract Deployment For most users deploying contracts (like ERC-20 tokens), contract deployment works seamlessly without requiring special steps. However, when using advanced patterns like factory contracts that dynamically create other contracts at runtime, you'll need to understand PolkaVM's unique deployment model. In PolkaVM, contract deployment follows a fundamentally different model from the EVM. The EVM allows contracts to be deployed in a single transaction, with the contract code bundled into the deployment transaction. PolkaVM, in contrast, separates code upload from contract instantiation. - **Code must be pre-uploaded** - Unlike the EVM, where contract code is bundled within the deploying contract or transaction, PolkaVM requires all contract bytecode to be uploaded to the chain before instantiation. - **Factory pattern limitations** - The common EVM pattern, where contracts dynamically create other contracts, will fail with a `CodeNotFound` error unless the dependent contract code was previously uploaded. - **Separate upload and instantiation** - This creates a two-step process where developers must first upload all contract code, then instantiate relationships between contracts. This architecture impacts several common EVM patterns and requires developers to adapt their deployment strategies accordingly.
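To illustrate, consider a minimal, hypothetical factory (contract names are illustrative). The Solidity source is unchanged from its Ethereum equivalent, but on PolkaVM the `new` expression resolves the child contract by code hash, so instantiation fails with `CodeNotFound` unless the child's code has already been uploaded to the chain:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract Child {
    address public owner;

    constructor(address initialOwner) {
        owner = initialOwner;
    }
}

contract Factory {
    event ChildCreated(address child);

    function createChild() external returns (address) {
        // On PolkaVM this instantiates Child by its code hash: Child's
        // bytecode must be uploaded to the chain beforehand, or this
        // call fails with CodeNotFound.
        Child child = new Child(msg.sender);
        emit ChildCreated(address(child));
        return address(child);
    }
}
```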
_Factory contracts must be modified to work with pre-uploaded code rather than embedding bytecode_, and runtime code generation is not supported due to PolkaVM's RISC-V bytecode format. The specific behavior of contract creation opcodes is detailed in the [YUL IR Translation](#yul-function-translation-differences) section. When migrating EVM projects to PolkaVM, developers should identify all contracts that will be instantiated at runtime and ensure they are pre-uploaded to the chain before any instantiation attempts. ## Solidity and YUL IR Translation Incompatibilities While PolkaVM maintains high-level compatibility with Solidity, several low-level differences exist in the translation of YUL IR and specific Solidity constructs. These differences are particularly relevant for developers working with assembly code or utilizing advanced contract patterns. ### Contract Code Structure PolkaVM's contract runtime does not differentiate between runtime code and deploy (constructor) code. Instead, both are emitted into a single PolkaVM contract code blob that lives on-chain. Therefore, in EVM terminology, the deploy code equals the runtime code. For most standard Solidity contracts, this is transparent. However, if you are analyzing raw bytecode or building tools that expect separate deploy and runtime sections, you'll need to adjust for this unified structure. In the constructor code, the `codesize` instruction returns the call data size instead of the actual code blob size, which differs from standard EVM behavior. If your constructor logic uses `codesize` to inspect the deployed contract's size (e.g., for self-validation or specific deployment patterns), it will return an incorrect value on PolkaVM; re-evaluate such logic or use alternative methods to achieve your goal. ### Solidity-Specific Differences Several Solidity constructs behave differently under PolkaVM: - **`address.creationCode`** - Returns the Keccak-256 hash of the bytecode instead of the actual creation code, reflecting PolkaVM's hash-based code referencing system. - If your contract relies on `address.creationCode` to verify or interact with the full raw bytecode of a newly deployed contract, this will not work as expected. You will receive a hash, not the code itself. This typically affects highly specialized factory contracts or introspection tools. ### YUL Function Translation Differences The following YUL functions exhibit notable behavioral differences in PolkaVM: - **Memory Operations:** - **`mload`, `mstore`, `msize`, `mcopy`** - PolkaVM preserves memory layout but implements several constraints: - EVM linear heap memory is emulated using a fixed 64KB buffer, limiting maximum contract memory usage. - Accessing memory offsets larger than the buffer size traps the contract with an `OutOfBound` error. - Compiler optimizations may eliminate unused memory operations, potentially causing `msize` to differ from EVM behavior. For Solidity developers, the compiler generally handles memory efficiently within this 64KB limit. However, if you are writing low-level YUL assembly and perform direct memory manipulations, you must respect the 64KB buffer limit. Attempting to access memory outside this range will cause your transaction to revert. Be aware that `msize` might not always reflect the exact EVM behavior if compiler optimizations occur.
- **Call Data Operations:** - **`calldataload`, `calldatacopy`** - In constructor code, the offset parameter is ignored and these functions always return `0`, diverging from EVM behavior where call data represents constructor arguments. - If your constructor logic in YUL assembly attempts to read constructor arguments using `calldataload` or `calldatacopy` with specific offsets, this will not yield the expected constructor arguments. Instead, these functions will return zeroed values. Standard Solidity constructors are handled correctly by the compiler, but manual YUL assembly for constructor argument parsing will need adjustment. - **Code Operations:** - **`codecopy`** - Only supported within constructor code, reflecting PolkaVM's different approach to code handling and the unified code blob structure. - If your contracts use `codecopy` (e.g., for self-modifying code or inspecting another contract's runtime bytecode) outside of the constructor, this is not supported and will likely result in a compile-time error or runtime trap. This implies that patterns like dynamically generating or modifying contract code at runtime are not directly feasible with `codecopy` on PolkaVM. - **Control Flow:** - **`invalid`** - Traps the contract execution but does not consume remaining gas, unlike the EVM, where it consumes all available gas. - While `invalid` still reverts the transaction, the difference in gas consumption could subtly affect very specific error handling or gas accounting patterns that rely on `invalid` to consume all remaining gas. For most error scenarios, `revert()` is the standard and recommended practice. - **Cross-Contract Calls:** - **`call`, `delegatecall`, `staticcall`** - These functions ignore supplied gas limits and forward all remaining resources due to PolkaVM's multi-dimensional resource model. This creates important security implications: - Contract authors must implement reentrancy protection since gas stipends don't provide protection. - The compiler detects `address payable.{send,transfer}` patterns and disables call reentrancy as a protective heuristic. - Using `address payable.{send,transfer}` is already deprecated; PolkaVM will provide dedicated precompiles for safe balance transfers. The traditional EVM pattern of limiting gas in cross-contract calls (especially with the 2300 gas stipend for send/transfer) does not provide reentrancy protection on PolkaVM. Developers must explicitly implement reentrancy guards (e.g., a reentrancy lock, or mutex) in their Solidity code when making external calls to untrusted contracts. Relying on gas limits alone for reentrancy prevention is unsafe and will lead to vulnerabilities on PolkaVM. !!! warning The 2300 gas stipend provided by solc for `address payable.{send, transfer}` calls offers no reentrancy protection in PolkaVM. While the compiler attempts to detect and mitigate this pattern, developers should avoid these deprecated functions. - **Contract Creation:** - **`create`, `create2`** - Contract instantiation works fundamentally differently in PolkaVM. Instead of supplying deploy code concatenated with constructor arguments, the runtime expects: 1. A buffer containing the code hash to deploy. 2. The constructor arguments buffer. PolkaVM translates `dataoffset` and `datasize` instructions to handle contract hashes instead of contract code, enabling seamless use of the `new` keyword in Solidity. However, this translation may fail for contracts creating other contracts within `assembly` blocks.
If you use the Solidity `new` keyword to deploy contracts, the Revive compiler handles this transparently. However, if you are creating contracts manually in YUL assembly using `create` or `create2` opcodes, you must provide the code hash of the contract to be deployed, not its raw bytecode. Attempting to pass raw bytecode will fail. This fundamentally changes how manual contract creation is performed in assembly. !!! warning Avoid using `create` family opcodes for manual deployment crafting in `assembly` blocks. This pattern is discouraged due to translation complexity and offers no gas savings benefits in PolkaVM. - **Data Operations:** - **`dataoffset`** - Returns the contract hash instead of code offset, aligning with PolkaVM's hash-based code referencing. - **`datasize`** - Returns the constant contract hash size (32 bytes) rather than variable code size. These changes are primarily relevant for low-level YUL assembly developers who are trying to inspect or manipulate contract code directly. `dataoffset` will provide a hash, not a memory offset to the code, and `datasize` will always be 32 bytes (the size of a hash). This reinforces that direct manipulation of contract bytecode at runtime, as might be done in some EVM patterns, is not supported. - **Resource Queries:** - **`gas`, `gaslimit`** - Return only the `ref_time` component of PolkaVM's multi-dimensional weight system, providing the closest analog to traditional gas measurements. - While `gas` and `gaslimit` still provide a useful metric, consider that they represent `ref_time` (computation time) only. If your contract logic depends on precise knowledge of other resource costs (like `proof_size` or `storage_deposit`), you won't get that information from these opcodes. You'll need to use future precompiles for full multi-dimensional resource queries. - **Blockchain State:** - **`prevrandao`, `difficulty`** - Both translate to a constant value of `2500000000000000`, as PolkaVM doesn't implement Ethereum's difficulty adjustment or randomness mechanisms. - If your Solidity contract relies on `block.difficulty` (or its equivalent YUL opcode `difficulty`) for randomness generation or any logic tied to Ethereum's proof-of-work difficulty, this will not provide true randomness on PolkaVM. The value will always be constant. Developers needing on-chain randomness should utilize Polkadot's native randomness sources or dedicated VRF (Verifiable Random Function) solutions if available. ### Unsupported Operations Several EVM operations are not supported in PolkaVM and produce compile-time errors: - **`pc`, `extcodecopy`** - These operations are EVM-specific and have no equivalent functionality in PolkaVM's RISC-V architecture. - Any Solidity contracts that utilize inline assembly to interact with `pc` (program counter) or `extcodecopy` will fail to compile or behave unexpectedly. This means patterns involving introspection of the current execution location or copying external contract bytecode at runtime are not supported. - **`blobhash`, `blobbasefee`** - Related to Ethereum's rollup model and blob data handling, these operations are unnecessary given Polkadot's superior rollup architecture. - If you are porting contracts designed for Ethereum's EIP-4844 (proto-danksharding) and rely on these blob-related opcodes, they will not be available on PolkaVM. - **`extcodecopy`, `selfdestruct`** - These deprecated operations are not supported and generate compile-time errors. 
- The `selfdestruct` opcode, which allowed contracts to remove themselves from the blockchain, is not supported. Contracts cannot self-destruct on PolkaVM. This affects contract upgradeability patterns that rely on self-destruction and redeployment. Similarly, `extcodecopy` is unsupported, impacting contracts that intend to inspect or copy the bytecode of other deployed contracts. ### Compilation Pipeline Considerations PolkaVM processes YUL IR exclusively, meaning all contracts exhibit behavior consistent with Solidity's `via-ir` compilation mode. Developers familiar with the legacy compilation pipeline should expect [IR-based codegen behavior](https://docs.soliditylang.org/en/latest/ir-breaking-changes.html){target=\_blank} when working with PolkaVM contracts. If you've previously worked with older Solidity compilers that did not use the `via-ir` pipeline by default, you might observe subtle differences in compiled bytecode size or gas usage. It's recommended to familiarize yourself with Solidity's IR-based codegen behavior, as this is the standard for PolkaVM. ### Memory Pointer Limitations YUL functions accepting memory buffer offset pointers or size arguments are limited by PolkaVM's 32-bit pointer size. Supplying values above `2^32-1` will trap the contract immediately. The Solidity compiler typically generates valid memory references, making this primarily a concern for low-level assembly code. For standard Solidity development, this limitation is unlikely to be hit, as the compiler handles memory addresses correctly within typical contract sizes. However, if you write YUL assembly that manipulates memory addresses manually and extensively, ensure that your memory offsets and sizes do not exceed PolkaVM's **fixed 64KB memory limit per contract**. While the YUL functions accept 32-bit pointers (up to `2^32-1`), attempting to access memory beyond the allocated 64KB buffer will trap the contract immediately. These incompatibilities reflect the fundamental architectural differences between the EVM and PolkaVM while maintaining high-level Solidity compatibility. Most developers using standard Solidity patterns will encounter no issues, but those working with assembly code or advanced contract patterns should carefully review these differences during migration. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/smart-contract-basics/ --- BEGIN CONTENT --- --- title: Smart Contract Basics description: Learn the fundamental concepts of smart contracts on Polkadot, including PolkaVM, account management, networks, and transaction mechanics. template: index-page.html --- # Smart Contract Basics !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. Gain a deep understanding of smart contracts on Polkadot, from execution environments to transaction mechanics. This section covers the essential components of the ecosystem.
## Key Topics Explore foundational concepts that shape smart contract functionality on Polkadot: - **PolkaVM design** – insights into PolkaVM’s architecture, Ethereum compatibility, and optimized execution - **EVM vs PolkaVM** – a comparison of Ethereum's EVM and PolkaVM, highlighting key differences in design, gas models, and memory management - **Accounts** – how accounts function within Polkadot’s ecosystem, including existential deposits and contract account handling - **Networks** – an overview of smart contract-enabled networks within the Polkadot ecosystem - **Blocks, transactions, and fees** – understanding transaction lifecycle, execution fees, and resource management ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/smart-contract-basics/networks/ --- BEGIN CONTENT --- --- title: Networks for Polkadot Hub Smart Contracts description: Explore the available networks for smart contract development on Polkadot Hub, including Westend Hub, Kusama Hub, and Polkadot Hub. categories: Basics, Polkadot Protocol --- # Networks !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Polkadot Hub provides smart contract functionality across multiple networks to facilitate smart contract development in the Polkadot ecosystem. Whether you're testing new contracts or deploying to production, Polkadot Hub offers several network environments tailored for each stage of development. Developers can thoroughly test, iterate, and validate their smart contracts from local testing environments to production networks like Polkadot Hub. This guide will introduce you to the current and upcoming networks available for smart contract development and explain how they fit into the development workflow. ## Network Overview Smart contract development on Polkadot Hub follows a structured process to ensure rigorous testing of new contracts and upgrades before deployment on production networks. Development progresses through a well-defined path, beginning with local environments, advancing through TestNets, and ultimately reaching MainNets. The diagram below illustrates this progression: ``` mermaid flowchart LR id1[Local Polkadot Hub] --> id2[TestNet Polkadot Hub] --> id4[MainNet Polkadot Hub] ``` This progression ensures developers can thoroughly test and iterate their smart contracts without risking real tokens or affecting production networks. A typical development journey consists of three main stages: 1. **Local Development** - Developers start in a local environment to create, test, and iterate on smart contracts - Provides rapid experimentation in an isolated setup without external dependencies 2. **TestNet Development** - Contracts move to TestNets like Westend Hub and Passet Hub - Enables testing in simulated real-world conditions without using real tokens 3. **Production Deployment** - Final deployment to MainNets like Kusama Hub and Polkadot Hub - Represents the live environment where contracts interact with real economic value ## Local Development The local development environment is crucial for smart contract development on Polkadot Hub. It provides developers a controlled space for rapid testing and iteration before moving to public networks. 
The local setup consists of several key components: - [**Kitchensink node**](https://paritytech.github.io/polkadot-sdk/master/kitchensink_runtime/index.html){target=\_blank} - a local node that can be run for development and testing. It includes logging capabilities for debugging contract execution and provides a pre-configured development environment with pre-funded accounts for testing purposes - [**Ethereum RPC proxy**](https://paritytech.github.io/polkadot-sdk/master/pallet_revive_eth_rpc/index.html){target=\_blank} - bridges Ethereum-compatible tools with the Polkadot SDK-based network, translating Ethereum RPC calls into Substrate format. It enables seamless integration with popular development tools like MetaMask and Remix IDE ## Test Networks The following test networks provide controlled, stable environments for testing smart contracts without using real tokens. TestNet tokens are available from the [Polkadot faucet](https://faucet.polkadot.io/){target=\_blank}. ``` mermaid flowchart TB id1[Polkadot Hub TestNets] --> id2[Passet Hub] id1[Polkadot Hub TestNets] --> id3[Westend Hub] ``` ### Passet Hub The Passet Hub will be a community-managed TestNet designed specifically for smart contract development. It will mirror Asset Hub's runtime and provide developers with an additional environment for testing their contracts before deployment to production networks. ### Westend Hub Westend Hub is the TestNet for cutting-edge smart contract development. It maintains the same capabilities as the production Polkadot Hub while also incorporating the latest features from core developers. ## Production Networks The MainNet environments represent the final destination for thoroughly tested and validated smart contracts, where they operate with real economic value and serve actual users. ``` mermaid flowchart TB id1[Polkadot Hub MainNets] --> id2[Polkadot Hub] id1[Polkadot Hub MainNets] --> id3[Kusama Hub] ``` ### Polkadot Hub Polkadot Hub is the primary production network for deploying smart contracts in the Polkadot ecosystem. It provides a secure and stable environment for running smart contracts with real economic value. The network supports PolkaVM-compatible contracts written in Solidity or Rust, maintaining compatibility with Ethereum-based development tools. ### Kusama Hub Kusama Hub is the canary version of Polkadot Hub. It is designed for developers who want to move quickly and test their smart contracts in a real-world environment with economic incentives. It provides a more flexible space for innovation while maintaining the same core functionality as Polkadot Hub. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/smart-contract-basics/overview/ --- BEGIN CONTENT --- --- title: Smart Contracts Basics Overview description: Learn how developers can build smart contracts on Polkadot by leveraging either Wasm/ink! or EVM contracts across many parachains. categories: Basics, Polkadot Protocol --- # An Overview of the Smart Contract Landscape on Polkadot !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Polkadot is designed to support an ecosystem of parachains, rather than hosting smart contracts directly.
Developers aiming to build smart contract applications on Polkadot rely on parachains within the ecosystem that provide smart contract functionality. This guide outlines the primary approaches to developing smart contracts in the Polkadot ecosystem: - **PolkaVM-compatible contracts** - which support Solidity and any language that compiles down to RISC-V while maintaining compatibility with Ethereum-based tools - **EVM-compatible contracts** - which support languages like [Solidity](https://soliditylang.org/){target=\_blank} and [Vyper](https://vyperlang.org/){target=\_blank}, offering compatibility with popular Ethereum tools and wallets - **Wasm-based smart contracts** - using [ink!](https://use.ink/){target=\_blank}, a Rust-based embedded domain-specific language (eDSL), enabling developers to leverage Rust’s safety and tooling You'll explore the key differences between these development paths, along with considerations for parachain developers integrating smart contract functionality. !!!note "Parachain Developer?" If you are a parachain developer looking to add smart contract functionality to your chain, please refer to the [Add Smart Contract Functionality](/develop/parachains/customize-parachain/add-smart-contract-functionality/){target=\_blank} page, which covers both Wasm and EVM-based contract implementations. ## Smart Contracts Versus Parachains A smart contract is a program whose logic executes in isolation on the chain where it is deployed. All the executed logic is bound to the same state transition rules determined by the underlying virtual machine (VM). Consequently, smart contracts are simpler to develop, and programs can easily interact with each other through similar interfaces. ``` mermaid flowchart LR subgraph A[Chain State] direction LR B["Program Logic and Storage
(Smart Contract)"] C["Tx Relevant Storage"] end A --> D[[Virtual Machine]] E[Transaction] --> D D --> F[(New State)] D --> G[Execution Logs] style A fill:#ffffff,stroke:#000000,stroke-width:1px ``` In addition, because smart contracts are programs that execute on top of existing chains, teams don't have to think about the underlying consensus they are built on. These strengths do come with certain limitations. Some smart contracts environments, like EVM, tend to be immutable by default. Developers have developed different [proxy strategies](https://blog.openzeppelin.com/proxy-patterns){target=\_blank} to be able to upgrade smart contracts over time. The typical pattern relies on a proxy contract which holds the program storage forwarding a call to an implementation contract where the execution logic resides. Smart contract upgrades require changing the implementation contract while retaining the same storage structure, necessitating careful planning. Another downside is that smart contracts often follow a gas metering model, where program execution is associated with a given unit and a marketplace is set up to pay for such an execution unit. This fee system is often very rigid, and some complex flows, like account abstraction, have been developed to circumvent this problem. In contrast, parachains can create their own custom logics (known as pallets or modules), and combine them as the state transition function (STF or runtime) thanks to the modularity provided by the [Polkadot-SDK](https://github.com/paritytech/polkadot-sdk/){target=\_blank}. The different pallets within the parachain runtime can give developers a lot of flexibility when building applications on top of it. ``` mermaid flowchart LR A[(Chain State)] --> B[["STF
[Pallet 1]
[Pallet 2]
...
[Pallet N]"]] C[Transaction
Targeting Pallet 2] --> B B --> E[(New State)] B --> F[Execution Logs] ``` Parachains inherently offer features such as logic upgradeability, flexible transaction fee mechanisms, and chain abstraction logic. Moreover, by building on Polkadot, parachains benefit from robust consensus guarantees with little engineering overhead. To read more about the differences between smart contracts and parachain runtimes, see the [Runtime vs. Smart Contracts](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/runtime_vs_smart_contract/index.html){target=\_blank} section of the Polkadot SDK Rust docs. For a more in-depth discussion about choosing between runtime development and smart contract development, see the Stack Overflow post on [building a Polkadot SDK runtime versus a smart contract](https://stackoverflow.com/a/56041305){target=\_blank}. ## Building a Smart Contract The Polkadot SDK supports multiple smart contract execution environments: - **PolkaVM** - a cutting-edge virtual machine tailored to optimize smart contract execution on Polkadot. Unlike traditional EVMs, PolkaVM is built with a [RISC-V-based register architecture](https://en.wikipedia.org/wiki/RISC-V){target=\_blank} for increased performance and scalability - **EVM** - through [Frontier](https://github.com/polkadot-evm/frontier){target=\_blank}. It consists of a full Ethereum JSON RPC compatible client, an Ethereum emulation layer, and a [Rust-based EVM](https://github.com/rust-ethereum/evm){target=\_blank}. This is used by chains like [Acala](https://acala.network/){target=\_blank}, [Astar](https://astar.network/){target=\_blank}, [Moonbeam](https://moonbeam.network){target=\_blank} and more - **Wasm** - [ink!](https://use.ink/){target=\_blank} is a domain-specific language (DSL) for Rust smart contract development that uses the [Contracts pallet](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/contracts/){target=\_blank} with [`cargo-contract`](https://github.com/use-ink/cargo-contract){target=\_blank} serving as the compiler to WebAssembly. Wasm contracts can be used by chains like [Astar](https://astar.network/){target=\_blank} ### PolkaVM Contracts As a component of the Asset Hub parachain, PolkaVM enables the deployment of Solidity-based smart contracts directly on Asset Hub. Learn more about how this cutting-edge virtual machine facilitates using familiar Ethereum-compatible contracts and tools with Asset Hub by visiting the [Native Smart Contracts](/develop/smart-contracts/overview#native-smart-contracts){target=\_blank} guide. ### EVM Contracts The [Frontier](https://github.com/polkadot-evm/frontier){target=\_blank} project provides a set of modules that enables a Polkadot SDK-based chain to run an Ethereum emulation layer that allows the execution of EVM smart contracts natively with the same API/RPC interface. Existing [Ethereum addresses (ECDSA)](https://ethereum.org/en/glossary/#address){target=\_blank} can also be mapped directly to and from the Polkadot SDK's SS58 address scheme. You can also modify the Polkadot SDK to use the ECDSA signature scheme directly to avoid any mapping.
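To make the mapping concrete, the sketch below derives a Substrate account from an Ethereum address using one common default, pallet_evm's `HashedAddressMapping`, which hashes the bytes `b"evm:"` followed by the 20-byte address with blake2-256. The example address, the `@noble/hashes` and `@polkadot/util-crypto` dependencies, and the generic SS58 prefix 42 are all illustrative assumptions; individual chains may configure a different mapping entirely:

```typescript
import { blake2b } from '@noble/hashes/blake2b';
import { encodeAddress } from '@polkadot/util-crypto';

// Hypothetical Ethereum address, used purely for illustration
const ethAddress = '0xd43593c715fdd31c61141abd04a99fd6822c8558';

// HashedAddressMapping-style derivation: blake2_256(b"evm:" ++ address)
const data = new Uint8Array(24);
data.set(new TextEncoder().encode('evm:'), 0);
data.set(Buffer.from(ethAddress.slice(2), 'hex'), 4);
const accountId32 = blake2b(data, { dkLen: 32 });

// SS58-encode the result (prefix 42 is the generic Substrate prefix)
console.log('Mapped SS58 address:', encodeAddress(accountId32, 42));
```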
At a high level, [Frontier](https://github.com/polkadot-evm/frontier){target=\_blank} is composed of three main components: - [**Ethereum Client**](https://github.com/polkadot-evm/frontier/tree/master/client){target=\_blank} - an Ethereum JSON RPC-compliant client that allows any request coming from an Ethereum tool, such as [Remix](https://remix.ethereum.org/){target=\_blank}, [Hardhat](https://hardhat.org/){target=\_blank} or [Foundry](https://getfoundry.sh/){target=\_blank}, to be accepted by the network - [**Pallet Ethereum**](https://docs.rs/pallet-ethereum/latest/pallet_ethereum/){target=\_blank} - a block emulation and Ethereum transaction validation layer that works jointly with the Ethereum client to ensure compatibility with Ethereum tools - [**Pallet EVM**](https://docs.rs/pallet-evm/latest/pallet_evm/){target=\_blank} - the access layer to the [Rust-based EVM](https://github.com/rust-ethereum/evm){target=\_blank}, enabling the execution of EVM smart contract logic natively The following diagram illustrates a high-level overview of the path an EVM transaction follows when using this configuration: ``` mermaid flowchart TD A[Users and Devs] -->|Send Tx| B[Frontier RPC Ext] subgraph C[Pallet Ethereum] D[Validate Tx] E[Send
Valid Tx] end B -->|Interact with| C D --> E subgraph F[Pallet EVM] G[Rust EVM] end I[(Current EVM
Emulated State)] H[Smart Contract
Solidity, Vyper...] <-->|Compiled to EVM
Bytecode| I C --> F I --> F F --> J[(New Ethereum
Emulated State)] F --> K[Execution Logs] style C fill:#ffffff,stroke:#000000,stroke-width:1px style F fill:#ffffff,stroke:#000000,stroke-width:1px ``` Although this may seem complex, users and developers are shielded from that complexity, and tools can easily interact with the parachain as they would with any other Ethereum-compatible environment. The Rust EVM is capable of executing regular [EVM bytecode](https://www.ethervm.io/){target=\_blank}. Consequently, any language that compiles to EVM bytecode can be used to create programs that the parachain can execute. ### Wasm Contracts The [`pallet_contracts`](https://docs.rs/pallet-contracts/latest/pallet_contracts/index.html#contracts-pallet){target=\_blank} module provides the execution environment for Wasm-based smart contracts. As a result, any smart contract language that compiles to Wasm can be executed in a parachain that enables this module. At the time of writing, there are two main languages that can be used for Wasm programs: - [**ink!**](https://use.ink/){target=\_blank} - a Rust-based language that compiles to Wasm and serves as the dedicated domain-specific language. It allows developers to inherit Rust's safety guarantees and use standard Rust tooling - **Solidity** - can be compiled to Wasm via the [Solang](https://github.com/hyperledger-solang/solang/){target=\_blank} compiler. Consequently, developers can write Solidity 0.8 smart contracts that can be executed as Wasm programs in parachains The following diagram illustrates a high-level overview of the path a transaction follows when using [`pallet_contracts`](https://docs.rs/pallet-contracts/latest/pallet_contracts/index.html#contracts-pallet){target=\_blank}: ``` mermaid flowchart TD subgraph A[Wasm Bytecode API] C[Pallet Contracts] end B[Users and Devs] -- Interact with ---> A D[(Current State)] E[Smart Contract
ink!, Solidity...] <-->|Compiled to Wasm
Bytecode| D D --> A A --> F[(New State)] A --> G[Execution Logs] style A fill:#ffffff,stroke:#000000,stroke-width:1px ``` --- END CONTENT --- Doc-Content: https://docs.polkadot.com/polkadot-protocol/smart-contract-basics/polkavm-design/ --- BEGIN CONTENT --- --- title: PolkaVM Design description: Discover PolkaVM, a high-performance smart contract VM for Polkadot, enabling Ethereum compatibility via pallet_revive, Solidity support & optimized execution. categories: Basics, Polkadot Protocol --- # PolkaVM Design !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction The Asset Hub smart contracts solution includes multiple components to ensure Ethereum compatibility and high performance. Its architecture allows for integration with current Ethereum tools, while its innovative virtual machine design enhances performance characteristics. ## PolkaVM [**PolkaVM**](https://github.com/paritytech/polkavm){target=\_blank} is a custom virtual machine optimized for performance with [RISC-V-based](https://en.wikipedia.org/wiki/RISC-V){target=\_blank} architecture, supporting Solidity and additional high-performance languages. It serves as the core execution environment, integrated directly within the runtime. It features: - An efficient interpreter for immediate code execution - A planned JIT compiler for optimized performance - Dual-mode execution capability, allowing selection of the most appropriate backend for specific workloads - Optimized performance for short-running contract calls through the interpreter The interpreter remains particularly beneficial for contracts with minimal code execution, as it eliminates JIT compilation overhead and enables immediate code execution through lazy interpretation. ## Architecture The smart contract solution consists of the following key components that work together to enable Ethereum compatibility on Polkadot-based chains: ### Pallet Revive [**`pallet_revive`**](https://paritytech.github.io/polkadot-sdk/master/pallet_revive/index.html){target=\_blank} is a runtime module that executes smart contracts by adding extrinsics, runtime APIs, and logic to convert Ethereum-style transactions into formats compatible with Polkadot SDK-based blockchains. It processes Ethereum-style transactions through the following workflow: ```mermaid sequenceDiagram participant User as User/dApp participant Proxy as Ethereum JSON RPC Proxy participant Chain as Blockchain Node participant Pallet as pallet_revive User->>Proxy: Submit Ethereum Transaction Proxy->>Chain: Repackage as Polkadot Compatible Transaction Chain->>Pallet: Process Transaction Pallet->>Pallet: Decode Ethereum Transaction Pallet->>Pallet: Execute Contract via PolkaVM Pallet->>Chain: Return Results Chain->>Proxy: Forward Results Proxy->>User: Return Ethereum-compatible Response ``` This proxy-based approach eliminates the need for node binary modifications, maintaining compatibility across different client implementations. Preserving the original Ethereum transaction payload simplifies adapting existing tools, which can continue processing familiar transaction formats. 
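To see this workflow from the user's side, the sketch below submits an ordinary Ethereum-style transfer that the proxy repackages for `pallet_revive`. It uses ethers.js; the endpoint `http://localhost:8545`, the `PRIVATE_KEY` environment variable, and the recipient address are assumptions for a local development setup:

```typescript
import { JsonRpcProvider, Wallet, parseEther } from 'ethers';

// Assumed local Ethereum JSON RPC proxy endpoint and funded dev account key
const provider = new JsonRpcProvider('http://localhost:8545');
const wallet = new Wallet(process.env.PRIVATE_KEY!, provider);

// A plain Ethereum transaction: the proxy translates it for the runtime
const tx = await wallet.sendTransaction({
  to: '0x1111111111111111111111111111111111111111', // hypothetical recipient
  value: parseEther('0.01'),
});
console.log('Submitted:', tx.hash);

const receipt = await tx.wait();
console.log('Included in block:', receipt?.blockNumber);
```

From the tool's perspective, nothing distinguishes this from talking to an Ethereum node; the repackaging shown in the sequence diagram above happens entirely behind the RPC interface.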
### PolkaVM Design Fundamentals PolkaVM introduces two fundamental architectural differences compared to the Ethereum Virtual Machine (EVM): ```mermaid flowchart TB subgraph "EVM Architecture" EVMStack[Stack-Based] EVM256[256-bit Word Size] end subgraph "PolkaVM Architecture" PVMReg[Register-Based] PVM64[64-bit Word Size] end ``` - **Register-based design** - PolkaVM utilizes a RISC-V register-based approach. This design: - Employs a finite set of registers for argument passing instead of an infinite stack - Facilitates efficient translation to underlying hardware architectures - Optimizes register allocation through careful register count selection - Enables simple 1:1 mapping to x86-64 instruction sets - Reduces compilation complexity through strategic register limitation - Improves overall execution performance through hardware-aligned design - **64-bit word size** - PolkaVM operates with a 64-bit word size as follows: - Enables direct hardware-supported arithmetic operations - Maintains compatibility with Solidity's 256-bit operations through YUL translation - Allows integration of performance-critical components written in lower-level languages - Optimizes computation-intensive operations through native word size alignment - Reduces overhead for operations not requiring extended precision - Facilitates efficient integration with modern CPU architectures ## Compilation Process When compiling a Solidity smart contract, the code passes through the following stages: ```mermaid flowchart LR Dev[Developer] --> |Solidity\nSource\nCode| Solc subgraph "Compilation Process" direction LR Solc[solc] --> |YUL\nIR| Revive Revive[Revive Compiler] --> |LLVM\nIR| LLVM LLVM[LLVM\nOptimizer] --> |RISC-V ELF\nShared Object| PVMLinker end PVMLinker[PVM Linker] --> PVM[PVM Blob\nwith Metadata] ``` The compilation process integrates several specialized components: 1. **Solc** - the standard Ethereum Solidity compiler that translates Solidity source code to [YUL IR](https://docs.soliditylang.org/en/latest/yul.html){target=\_blank} 2. **Revive Compiler** - takes YUL IR and transforms it to [LLVM IR](https://llvm.org/){target=\_blank} 3. **LLVM** - a compiler infrastructure that optimizes the code and generates RISC-V ELF objects 4. **PVM linker** - links the RISC-V ELF object into a final PolkaVM blob with metadata --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/dapps/ --- BEGIN CONTENT --- --- title: Decentralized Application Tutorials description: Explore step-by-step tutorials for exploring the world of building decentralized applications using the toolkits that Polkadot provides. template: index-page.html --- # Build Decentralized Applications on Polkadot This section provides hands-on tutorials for building decentralized applications (dApps) using the Polkadot SDK and its developer toolkits. These guides help you leverage Polkadot's infrastructure to build scalable, secure, and interoperable dApps without relying solely on smart contracts. You'll explore a range of topics—from client-side apps and CLI tools to on-chain interaction patterns—all backed by lightweight or full-node tooling. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/dapps/remark-tutorial/ --- BEGIN CONTENT --- --- title: PAPI Account Watcher Tutorial description: Build a CLI app that listens to on-chain events using the Polkadot API and responds to specific messages for a given account. 
categories: Tooling --- # PAPI Account Watcher ## Introduction This tutorial demonstrates how to build a simple command-line interface (CLI) application that monitors a user's account on the relay chain for the [`system.remarkWithEvent`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.remark_with_event){target=\_blank} extrinsic, using the [Polkadot API](/develop/toolkit/api-libraries/papi){target=\_blank}. The `system.remarkWithEvent` extrinsic enables the submission of arbitrary data on-chain. In this tutorial, the data consists of a hash derived from the combination of an account address and the word "email" (`address+email`). This hash is monitored on-chain, and the application listens for remarks addressed to the specified account. The `system.remarkWithEvent` extrinsic emits an event that can be observed using the Polkadot API (PAPI). When the application detects a remark addressed to the specified account, it plays the "You've Got Mail!" sound bite. ## Prerequisites Before starting, ensure the following tools and dependencies are installed: - Node.js (version 18 or higher) - A package manager (npm or yarn) - [Polkadot.js browser extension (wallet)](https://polkadot.js.org/extension/){target=\_blank} - An account with [Westend tokens](https://faucet.polkadot.io/westend){target=\_blank} ## Clone the Repository To follow this tutorial, you can either run the example directly or use a boilerplate/template. This tutorial uses a template that includes all necessary dependencies for working with the Polkadot API and TypeScript. Clone the `polkadot-api-example-cli` project and check out the [`empty-cli`](https://github.com/CrackTheCode016/polkadot-api-example-cli/tree/empty-cli){target=\_blank} branch as follows: ```bash git clone https://github.com/polkadot-developers/dapp-examples/tree/v0.0.2 cd polkadot-api-example-cli git checkout empty-cli ``` After cloning, install the required dependencies by running: ```bash npm install ``` ## Explore the Template (Light Clients) After opening the repository, you will find the following code (excluding imports): ```typescript title="index.ts" async function withLightClient(): Promise<PolkadotClient> { // Start the light client const smoldot = start(); // The Westend Relay Chain const relayChain = await smoldot.addChain({ chainSpec: westEndChainSpec }); return createClient(getSmProvider(relayChain)); } async function main() { // CLI code goes here... } main(); ``` The `withLightClient` function is particularly important. It uses the built-in [light client](/develop/toolkit/parachains/light-clients/){target=\_blank} functionality, powered by [`smoldot`](https://github.com/smol-dot/smoldot){target=\_blank}, to create a light client that synchronizes and interacts with Polkadot directly within the application. ## Create the CLI The CLI functionality is implemented within the `main` function. The CLI includes an option (`-a` / `--account`) to specify the account to monitor for remarks: ```typescript title="index.ts" const program = new Command(); console.log(chalk.white.dim(figlet.textSync('Web3 Mail Watcher'))); program .version('0.0.1') .description( 'Web3 Mail Watcher - A simple CLI tool to watch for remarks on the Polkadot network' ) .option('-a, --account <account>', 'Account to watch') .parse(process.argv); // CLI arguments from commander const options = program.opts(); ``` ## Watch for Remarks The application monitors the Westend network for remarks sent to the specified account.
The following code, placed within the `main` function, implements this functionality: ```typescript title="index.ts" if (options.account) { console.log( chalk.black.bgRed('Watching account:'), chalk.bold.whiteBright(options.account) ); // Create a light client to connect to the Polkadot (Westend) network const lightClient = await withLightClient(); // Get the typed API to interact with the network const dotApi = lightClient.getTypedApi(wnd); // Subscribe to the System.Remarked event and watch for remarks from the account dotApi.event.System.Remarked.watch().subscribe((event) => { const { sender, hash } = event.payload; const calculatedHash = bytesToHex( blake2b(`${options.account}+email`, { dkLen: 32 }) ); if (`0x${calculatedHash}` === hash.asHex()) { sound.play('youve-got-mail-sound.mp3'); console.log(chalk.black.bgRed('You got mail!')); console.log( chalk.black.bgCyan('From:'), chalk.bold.whiteBright(sender.toString()) ); console.log( chalk.black.bgBlue('Hash:'), chalk.bold.whiteBright(hash.asHex()) ); } }); } else { console.error('Account is required'); return; } ``` ## Compile and Run Compile and execute the application using the following command: ```bash npm start -- --account <account-address> ``` For example: ```bash npm start -- --account 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY ``` The output should look like this:
```
npm start -- --account 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY

  (Web3 Mail Watcher ASCII art banner)

📬 Watching account: 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY
⚙️ [smoldot] Smoldot v2.0.34
✅ [smoldot] Chain initialization complete for westend2.
🔗 Name: "Westend"
🧬 Genesis hash: 0xe143…423e
⛓️ Chain specification starting at: 0x10cf…b908 (#23920337)
```
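Before moving on, it can help to reproduce the hash the watcher computes in its subscription handler, so you know exactly what will be compared against the `System.Remarked` event. This standalone sketch reuses the same `@noble/hashes` helpers as the template:

```typescript
import { blake2b } from '@noble/hashes/blake2b';
import { bytesToHex } from '@noble/hashes/utils';

// Same derivation as the watcher: blake2b-256 of "<address>+email"
const account = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY';
const expected = `0x${bytesToHex(blake2b(`${account}+email`, { dkLen: 32 }))}`;
console.log('Expected System.Remarked hash:', expected);
```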
## Test the CLI To test the application, navigate to the [**Extrinsics** page of the PAPI Dev Console](https://dev.papi.how/extrinsics#networkId=westend&endpoint=light-client){target=\_blank}. Select the **System** pallet and the **remark_with_event** call. Ensure the input field follows the convention `address+email`. For example, if monitoring `5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY`, the input should be `5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY+email`: ![](/images/tutorials/dapps/remark-tutorial/papi-console.webp) Submit the extrinsic and sign it using the Polkadot.js browser wallet. The CLI will display the following output and play the "You've Got Mail!" sound:
```
npm start -- --account 5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY

  (Web3 Mail Watcher ASCII art banner)

📬 Watching account: 5Cm8yiG45rqrpyV2zPLrbtr8efksrRuCXcqcB4xj8AejfcTB
📥 You've got mail!
👤 From: 5Cm8yiG45rqrpyV2zPLrbtr8efksrRuCXcqcB4xj8AejfcTB
🔖 Hash: 0xb6999c9082f5b1dede08b387404c9eb4eb2deee4781415dfa7edf08b87472050
```
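You can also trigger the event from code rather than the console. The following minimal sketch assumes the `wnd` descriptors from the template, a public Westend RPC endpoint, and a funded signer of your own (signer setup is covered in the PAPI documentation):

```typescript
import { wnd } from '@polkadot-api/descriptors';
import { Binary, createClient } from 'polkadot-api';
import { getWsProvider } from 'polkadot-api/ws-provider/web';

const client = createClient(getWsProvider('wss://westend-rpc.polkadot.io'));
const wndApi = client.getTypedApi(wnd);

const watchedAccount = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY';

// System.remark_with_event emits System.Remarked with the blake2-256 hash
// of the remark bytes, which is exactly what the watcher CLI listens for
const tx = wndApi.tx.System.remark_with_event({
  remark: Binary.fromText(`${watchedAccount}+email`),
});

// Sign and submit with your own funded signer:
// tx.signSubmitAndWatch(signer).subscribe((ev) => console.log(ev.type));
```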
## Next Steps This application demonstrates how the Polkadot API can be used to build decentralized applications. While this is not a production-grade application, it introduces several key features for developing with the Polkadot API. To explore more, refer to the [official PAPI documentation](https://papi.how){target=\_blank}. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/ --- BEGIN CONTENT --- --- title: Tutorials description: Explore step-by-step tutorials for building in Polkadot, from parachain deployment and testing to cross-chain asset creation and XCM channel management. template: index-page.html --- # Tutorials Welcome to the Polkadot Tutorials hub! Whether you’re building parachains, integrating system chains, or developing decentralized applications, these step-by-step guides are designed to help you achieve your goals efficiently and effectively. Not sure where to start? Check out the highlighted tutorials below! ## Polkadot Zero to Hero The Zero to Hero series offers step-by-step guidance for development across the Polkadot ecosystem. ### Parachain Developers ## Featured Tutorials ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/interoperability/ --- BEGIN CONTENT --- --- title: Interoperability Tutorials description: Explore tutorials on interoperability for Polkadot SDK-based blockchains, covering cross-chain communication and integration techniques. template: index-page.html --- # Cross-Chain Interoperability Tutorials This section introduces you to the core interoperability solutions within the Polkadot ecosystem through practical, hands-on tutorials. These resources are designed to help you master cross-chain communication techniques, from setting up messaging channels between parachains to leveraging the advanced features of Polkadot's [XCM protocol](/develop/interoperability/intro-to-xcm/){target=\_blank}. By following these guides, you’ll gain the skills needed to implement seamless integration and interaction across diverse blockchains, unlocking the full potential of Polkadot's interconnected network. ## XCM (Cross-Consensus Messaging) XCM provides a secure and trustless framework that facilitates communication between parachains, relay chains, and external blockchains, enabling asset transfers, data sharing, and complex cross-chain workflows. ### For Parachain Integrators Learn to establish and use cross-chain communication channels: - **[Opening HRMP Channels Between Parachains](/tutorials/interoperability/xcm-channels/para-to-para/)** - set up uni- and bidirectional messaging channels between parachains - **[Opening HRMP Channels with System Parachains](/tutorials/interoperability/xcm-channels/para-to-system/)** - establish communication channels with system parachains using optimized XCM messages ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/interoperability/xcm-channels/ --- BEGIN CONTENT --- --- title: Tutorials for Managing XCM Channels description: Learn step-by-step how to establish unidirectional and bidirectional HRMP channels between parachains and system parachains using XCM. template: index-page.html --- # Tutorials for Managing XCM Channels Establishing [XCM channels](/develop/interoperability/xcm-channels/) is essential to unlocking Polkadot's native interoperability.
Before bridging assets or sending cross-chain contract calls, the necessary XCM channels must be established. These tutorials guide you through the process of setting up [Horizontal Relay-routed Message Passing (HRMP)](/develop/interoperability/xcm-channels/#establishing-hrmp-channels) channels for cross-chain messaging. Learn how to configure unidirectional channels [between parachains](/tutorials/interoperability/xcm-channels/para-to-para/) and the simplified single-message process for bidirectional channels with [system parachains like Asset Hub](/tutorials/interoperability/xcm-channels/para-to-system/). ## Understand the Process of Opening Channels Each parachain starts with two default unidirectional XCM channels: an upward channel for sending messages to the relay chain, and a downward channel for receiving messages. These channels are implicitly available. To enable communication between parachains, explicit HRMP channels must be established by registering them on the relay chain. This process requires a deposit to cover the costs associated with storing message queues on the relay chain. The deposit amount depends on the specific relay chain’s parameters. ## In This Section :::INSERT_IN_THIS_SECTION::: ## Additional Resources --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/interoperability/xcm-channels/para-to-para/ --- BEGIN CONTENT --- --- title: Opening HRMP Channels Between Parachains description: Learn how to open HRMP channels between parachains on Polkadot. Discover the step-by-step process for establishing uni- and bidirectional communication. tutorial_badge: Advanced categories: Parachains --- # Opening HRMP Channels Between Parachains ## Introduction For establishing communication channels between parachains on the Polkadot network using the Horizontal Relay-routed Message Passing (HRMP) protocol, the following steps are required: 1. **Channel request** - the parachain that wants to open an HRMP channel must make a request to the parachain it wishes to have an open channel with 2. **Channel acceptance** - the other parachain must then accept this request to complete the channel establishment This process results in a unidirectional HRMP channel, where messages can flow in only one direction between the two parachains. An additional HRMP channel must be established in the opposite direction to enable bidirectional communication. This requires repeating the request and acceptance process but with the parachains reversing their roles. Once both unidirectional channels are established, the parachains can send messages back and forth freely through the bidirectional HRMP communication channel. ## Prerequisites Before proceeding, ensure you meet the following requirements: - Blockchain network with a relay chain and at least two connected parachains - Wallet with sufficient funds to execute transactions on the participant chains ## Procedure to Initiate an HRMP Channel This example will demonstrate how to open a channel between parachain 2500 and parachain 2600, using Rococo Local as the relay chain. ### Fund Sender Sovereign Account The [sovereign account](https://github.com/polkadot-fellows/xcm-format/blob/10726875bd3016c5e528c85ed6e82415e4b847d7/README.md?plain=1#L50){target=_blank} for parachain 2500 on the relay chain must be funded so it can take care of any XCM transact fees. 
Use the [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank} UI to connect to the relay chain and transfer funds from your account to the parachain 2500 sovereign account. ![](/images/tutorials/interoperability/xcm-channels/hrmp-channels-2.webp) ??? note "Calculating Parachain Sovereign Account" To generate the sovereign account address for a parachain, you'll need to follow these steps: 1. Determine if the parachain is an "up/down" chain (parent or child) or a "sibling" chain: - Up/down chains use the prefix `0x70617261` (which decodes to `b"para"`) - Sibling chains use the prefix `0x7369626c` (which decodes to `b"sibl"`) 2. Calculate the SCALE-encoded `u32` value of the parachain ID: - Parachain 2500 would be encoded as `c4090000` 3. Combine the prefix and parachain ID encoding to form the full sovereign account address: The sovereign account of parachain 2500 on the relay chain is `0x70617261c4090000000000000000000000000000000000000000000000000000`, and the SS58 format of this address is `5Ec4AhPSY2GEE4VoHUVheqv5wwq2C1HMKa7c9fVJ1WKivX1Y` To perform this conversion, you can also use the **"Para ID" to Address** section in [Substrate Utilities](https://www.shawntabrizi.com/substrate-js-utilities/){target=_blank}. ### Create Channel Opening Extrinsic 1. In Polkadot.js Apps, connect to the relay chain, navigate to the **Developer** dropdown and select the **Extrinsics** option ![](/images/tutorials/interoperability/xcm-channels/para-to-para/hrmp-para-to-para-1.webp) 2. Construct an `hrmpInitOpenChannel` extrinsic call 1. Select the **`hrmp`** pallet 2. Choose the **`hrmpInitOpenChannel`** extrinsic 3. Fill in the parameters: - **`recipient`** - parachain ID of the target chain (in this case, 2600) - **`proposedMaxCapacity`** - max number of messages that can be pending in the channel at once - **`proposedMaxMessageSize`** - max message size that could be put into the channel 4. Copy the encoded call data ![](/images/tutorials/interoperability/xcm-channels/para-to-para/hrmp-para-to-para-2.webp) The encoded call data for opening a channel with parachain 2600 is `0x3c00280a00000800000000001000`. ### Craft and Submit the XCM Message from the Sender To initiate the HRMP channel opening process, you need to create an XCM message that includes the encoded `hrmpInitOpenChannel` call data from the previous step. This message will be sent from your parachain to the relay chain. This example uses the `sudo` pallet to dispatch the extrinsic. Verify the XCM configuration of the parachain you're working with and ensure you're using an origin with the necessary privileges to execute the `polkadotXcm.send` extrinsic. The XCM message should contain the following instructions: - **`WithdrawAsset`** - withdraws assets from the origin's ownership and places them in the Holding Register - **`BuyExecution`** - pays for the execution of the current message using the assets in the Holding Register - **`Transact`** - executes the encoded transaction call - **`RefundSurplus`** - increases the Refunded Weight Register to the value of the Surplus Weight Register, attempting to reclaim any excess fees paid via BuyExecution - **`DepositAsset`** - subtracts assets from the Holding Register and deposits equivalent on-chain assets under the specified beneficiary's ownership !!!note For more detailed information about XCM's functionality, complexities, and instruction set, refer to the [xcm-format](https://github.com/polkadot-fellows/xcm-format){target=_blank} documentation.
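Since every later step depends on the sovereign account being correct, you can also double-check the derivation from the note above with a short script. This is a minimal sketch that assumes `@polkadot/util-crypto` is available and that the generic SS58 prefix 42 applies:

```typescript
import { encodeAddress } from '@polkadot/util-crypto';

// "para" prefix marks a parent/child (up/down) sovereign account
const prefix = new TextEncoder().encode('para'); // 0x70617261
const paraId = 2500;

// SCALE-encoded u32 parachain ID: little-endian bytes (0xc4090000)
const idBytes = new Uint8Array(4);
new DataView(idBytes.buffer).setUint32(0, paraId, true);

// Prefix + para ID, zero-padded to a 32-byte account ID
const accountId = new Uint8Array(32);
accountId.set(prefix, 0);
accountId.set(idBytes, prefix.length);

// Prints 5Ec4AhPSY2GEE4VoHUVheqv5wwq2C1HMKa7c9fVJ1WKivX1Y, per the note above
console.log(encodeAddress(accountId, 42));
```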
In essence, this process withdraws funds from the parachain's sovereign account to the XCVM Holding Register, then uses these funds to purchase execution time for the XCM `Transact` instruction, executes `Transact`, refunds any unused execution time and deposits any remaining funds into a specified account. To send the XCM message to the relay chain, connect to parachain 2500 in Polkadot.js Apps. Fill in the required parameters as shown in the image below, ensuring that you: 1. Replace the **`call`** field with your encoded `hrmpInitOpenChannel` call data from the previous step 2. Use the correct beneficiary information 3. Click the **Submit Transaction** button to dispatch the XCM message to the relay chain ![](/images/tutorials/interoperability/xcm-channels/para-to-para/hrmp-para-to-para-3.webp) !!! note The exact process and parameters for submitting this XCM message may vary depending on your specific parachain and relay chain configurations. Always refer to the most current documentation for your particular network setup. After submitting the XCM message to initiate the HRMP channel opening, you should verify that the request was successful. Follow these steps to check the status of your channel request: 1. Using Polkadot.js Apps, connect to the relay chain and navigate to the **Developer** dropdown, then select the **Chain state** option ![](/images/tutorials/interoperability/xcm-channels/hrmp-channels-1.webp) 2. Query the HRMP open channel requests 1. Select **`hrmp`** 2. Choose the **`hrmpOpenChannelRequests`** call 3. Click the **+** button to execute the query 4. Check the status of all pending channel requests ![](/images/tutorials/interoperability/xcm-channels/para-to-para/hrmp-para-to-para-4.webp) If your channel request was successful, you should see an entry for your parachain ID in the list of open channel requests. This confirms that your request has been properly registered on the relay chain and is awaiting acceptance by the target parachain. ## Procedure to Accept an HRMP Channel For the channel to be fully established, the target parachain must accept the channel request by submitting an XCM message to the relay chain. ### Fund Receiver Sovereign Account Before proceeding, ensure that the sovereign account of parachain 2600 on the relay chain is funded. This account will be responsible for covering any XCM transact fees. To fund the account, follow the same process described in the previous section, [Fund Sovereign Account](#fund-sender-sovereign-account). ### Create Channel Accepting Extrinsic 1. In Polkadot.js Apps, connect to the relay chain, navigate to the **Developer** dropdown and select the **Extrinsics** option ![](/images/tutorials/interoperability/xcm-channels/para-to-para/hrmp-para-to-para-1.webp) 2. Construct an `hrmpAcceptOpenChannel` extrinsic call 1. Select the **`hrmp`** pallet 2. Choose the **`hrmpAcceptOpenChannel`** extrinsic 3. Fill in the parameters: - **`sender`** - parachain ID of the requesting chain (in this case, 2500) 4. Copy the encoded call data ![](/images/tutorials/interoperability/xcm-channels/para-to-para/hrmp-para-to-para-5.webp) The encoded call data for accepting a channel with parachain 2500 should be `0x3c01c4090000` ### Craft and Submit the XCM Message from the Receiver To accept the HRMP channel opening, you need to create and submit an XCM message that includes the encoded `hrmpAcceptOpenChannel` call data from the previous step. 
This process is similar to the one described in the previous section, [Craft and Submit the XCM Message](#craft-and-submit-the-xcm-message-from-the-sender), with a few key differences: - Use the encoded call data for `hrmpAcceptOpenChannel` obtained in Step 2 of this section - In the last XCM instruction (DepositAsset), set the beneficiary to parachain 2600's sovereign account to receive any surplus funds To send the XCM message to the relay chain, connect to parachain 2600 in Polkadot.js Apps. Fill in the required parameters as shown in the image below, ensuring that you: 1. Replace the **`call`** field with your encoded `hrmpAcceptOpenChannel` call data from the previous step 2. Use the correct beneficiary information 3. Click the **Submit Transaction** button to dispatch the XCM message to the relay chain ![](/images/tutorials/interoperability/xcm-channels/para-to-para/hrmp-para-to-para-6.webp) After submitting the XCM message to accept the HRMP channel opening, verify that the channel has been set up correctly. 1. Using Polkadot.js Apps, connect to the relay chain and navigate to the **Developer** dropdown, then select the **Chain state** option ![](/images/tutorials/interoperability/xcm-channels/hrmp-channels-1.webp) 2. Query the HRMP channels 1. Select **`hrmp`** 2. Choose the **`hrmpChannels`** call 3. Click the **+** button to execute the query 4. Check the status of the opened channel ![](/images/tutorials/interoperability/xcm-channels/para-to-para/hrmp-para-to-para-7.webp) If the channel has been successfully established, you should see the channel details in the query results. By following these steps, you will have successfully accepted the HRMP channel request and established a unidirectional channel between the two parachains. !!! note Remember that for full bidirectional communication, you'll need to repeat this process in the opposite direction, with parachain 2600 initiating a channel request to parachain 2500. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/interoperability/xcm-channels/para-to-system/ --- BEGIN CONTENT --- --- title: Opening HRMP Channels with System Parachains description: Learn how to open HRMP channels with Polkadot system parachains. Discover the process for establishing bi-directional communication using a single XCM message. tutorial_badge: Advanced categories: Parachains --- # Opening HRMP Channels with System Parachains ## Introduction While establishing Horizontal Relay-routed Message Passing (HRMP) channels between regular parachains involves a two-step request and acceptance procedure, opening channels with system parachains follows a more straightforward approach. System parachains are specialized chains that provide core functionality to the Polkadot network. Examples include Asset Hub for cross-chain asset transfers and Bridge Hub for connecting to external networks. Given their critical role, establishing communication channels with these system parachains has been optimized for efficiency and ease of use. Any parachain can establish a bidirectional channel with a system chain through a single operation, requiring just one XCM message from the parachain to the relay chain. 
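The call behind this single-message flow is `hrmp.establish_channel_with_system`, and its encoded form, which you will construct step by step later in this guide, is compact enough to decompose by hand. The sketch below rebuilds the tutorial's encoded call data; the pallet and call indices are taken from that example and are runtime-specific:

```typescript
// hrmp.establish_channel_with_system(target_system_chain = 1000)
// decomposes into: pallet index, call index, SCALE-encoded u32 argument
const palletIndex = 0x3c; // hrmp pallet on this relay chain (runtime-specific)
const callIndex = 0x0a; // establish_channel_with_system (runtime-specific)

const target = new Uint8Array(4);
new DataView(target.buffer).setUint32(0, 1000, true); // u32, little-endian

const callData = Uint8Array.from([palletIndex, callIndex, ...target]);
console.log('0x' + Buffer.from(callData).toString('hex')); // 0x3c0ae8030000
```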
## Prerequisites To successfully complete this process, you'll need to have the following in place: - Access to a blockchain network consisting of: - A relay chain - A parachain - An Asset Hub system chain - A wallet containing enough funds to cover transaction fees on each of the participating chains ## Procedure to Establish an HRMP Channel This guide demonstrates opening an HRMP channel between parachain 2500 and system chain Asset Hub (parachain 1000) on the Rococo Local relay chain. ### Fund Parachain Sovereign Account The [sovereign account](https://github.com/polkadot-fellows/xcm-format/blob/10726875bd3016c5e528c85ed6e82415e4b847d7/README.md?plain=1#L50){target=_blank} for parachain 2500 on the relay chain must be funded so it can take care of any XCM transact fees. Use the [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank} UI to connect to the relay chain and transfer funds from your account to the parachain 2500 sovereign account. ![](/images/tutorials/interoperability/xcm-channels/hrmp-channels-2.webp) ??? note "Calculating Parachain Sovereign Account" To generate the sovereign account address for a parachain, you'll need to follow these steps: 1. Determine if the parachain is an "up/down" chain (parent or child) or a "sibling" chain: - Up/down chains use the prefix `0x70617261` (which decodes to `b"para"`) - Sibling chains use the prefix `0x7369626c` (which decodes to `b"sibl"`) 2. Calculate the SCALE-encoded `u32` value of the parachain ID: - Parachain 2500 would be encoded as `c4090000` 3. Combine the prefix and parachain ID encoding to form the full sovereign account address: The sovereign account of parachain 2500 on the relay chain is `0x70617261c4090000000000000000000000000000000000000000000000000000`, and the SS58 format of this address is `5Ec4AhPSY2GEE4VoHUVheqv5wwq2C1HMKa7c9fVJ1WKivX1Y` To perform this conversion, you can also use the **"Para ID" to Address** section in [Substrate Utilities](https://www.shawntabrizi.com/substrate-js-utilities/){target=\_blank}. ### Create Establish Channel with System Extrinsic 1. In Polkadot.js Apps, connect to the relay chain, navigate to the **Developer** dropdown and select the **Extrinsics** option ![](/images/tutorials/interoperability/xcm-channels/para-to-para/hrmp-para-to-para-1.webp) 2. Construct an `establish_channel_with_system` extrinsic call 1. Select the **`hrmp`** pallet 2. Choose the **`establish_channel_with_system`** extrinsic 3. Fill in the parameters: - **`target_system_chain`** - parachain ID of the target system chain (in this case, 1000) 4. Copy the encoded call data ![](/images/tutorials/interoperability/xcm-channels/para-to-system/hrmp-para-to-system-1.webp) The encoded call data for establishing a channel with system parachain 1000 should be `0x3c0ae8030000`. ### Craft and Submit the XCM Message Connect to parachain 2500 using Polkadot.js Apps to send the XCM message to the relay chain. Input the necessary parameters as illustrated in the image below. Make sure to: 1. Insert your previously encoded `establish_channel_with_system` call data into the **`call`** field 2. Provide beneficiary details 3. Dispatch the XCM message to the relay chain by clicking the **Submit Transaction** button ![](/images/tutorials/interoperability/xcm-channels/para-to-system/hrmp-para-to-system-2.webp) !!! note The exact process and parameters for submitting this XCM message may vary depending on your specific parachain and relay chain configurations.
Always refer to the most current documentation for your particular network setup. After successfully submitting the XCM message to the relay chain, two HRMP channels should be created, establishing bidirectional communication between parachain 2500 and system chain 1000. To verify this, follow these steps: 1. Using Polkadot.js Apps, connect to the relay chain and navigate to the **Developer** dropdown, then select **Chain state** ![](/images/tutorials/interoperability/xcm-channels/hrmp-channels-1.webp) 2. Query the HRMP channels 1. Select **`hrmp`** from the options 2. Choose the **`hrmpChannels`** call 3. Click the **+** button to execute the query ![](/images/tutorials/interoperability/xcm-channels/para-to-system/hrmp-para-to-system-3.webp) 3. Examine the query results. You should see output similar to the following: ```json [ [ [ { "sender": 1000, "recipient": 2500 } ], { "maxCapacity": 8, "maxTotalSize": 8192, "maxMessageSize": 1048576, "msgCount": 0, "totalSize": 0, "mqcHead": null, "senderDeposit": 0, "recipientDeposit": 0 } ], [ [ { "sender": 2500, "recipient": 1000 } ], { "maxCapacity": 8, "maxTotalSize": 8192, "maxMessageSize": 1048576, "msgCount": 0, "totalSize": 0, "mqcHead": null, "senderDeposit": 0, "recipientDeposit": 0 } ] ] ``` The output confirms the successful establishment of two HRMP channels: - From chain 1000 (system chain) to chain 2500 (parachain) - From chain 2500 (parachain) to chain 1000 (system chain) This bidirectional channel enables direct communication between the system chain and the parachain, allowing for cross-chain message passing. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/ --- BEGIN CONTENT --- --- title: XCM Transfers from Relay Chain to Parachain description: Learn how to perform a reserve-backed asset transfer between a relay chain and a parachain using XCM for cross-chain interoperability. tutorial_badge: Intermediate categories: Parachains --- # From Relay Chain to Parachain ## Introduction [Cross-Consensus Messaging (XCM)](/develop/interoperability/intro-to-xcm/){target=\_blank} facilitates asset transfers both within the same consensus system and between different ones, such as between a relay chain and its parachains. For cross-system transfers, two main methods are available: - [**Asset teleportation**](https://paritytech.github.io/xcm-docs/journey/transfers/teleports.html){target=\_blank} - a simple and efficient method involving only the source and destination chains, ideal for systems with a high level of trust - [**Reserve-backed transfers**](https://paritytech.github.io/xcm-docs/journey/transfers/reserve.html){target=\_blank} - relies on a trusted reserve that holds the real assets and mints derivative tokens to track ownership. This method is suited for systems with lower trust levels In this tutorial, you will learn how to perform a reserve-backed transfer of DOT between a relay chain (Polkadot) and a parachain (Astar). ## Prerequisites When adapting this tutorial for other chains, you must first open HRMP channels before you can send messages between different consensus systems. For detailed guidance, refer to the [XCM Channels](/develop/interoperability/xcm-channels/#xcm-channels){target=\_blank} article. This tutorial uses Chopsticks to fork a relay chain and a parachain connected via HRMP channels.
For more details on this setup, see the [XCM Testing](/tutorials/polkadot-sdk/testing/fork-live-chains/#xcm-testing){target=\_blank} section on the Chopsticks page. ## Setup To simulate XCM operations between different consensus systems, start by forking the network with the following command: ```bash chopsticks xcm -r polkadot -p astar ``` After executing this command, the relay chain and parachain will expose the following WebSocket endpoints:

| Chain | WebSocket Endpoint |
|------------------------|-----------------------|
| Polkadot (relay chain) | `ws://localhost:8001` |
| Astar (parachain) | `ws://localhost:8000` |

You can perform the reserve-backed transfer using either the [Polkadot.js Apps interface](#use-polkadotjs-apps) or the [Polkadot API](#use-papi), depending on your preference. Both methods provide the same functionality to facilitate asset transfers between the relay chain and parachain. ## Use Polkadot.js Apps Open two browser tabs and connect to these endpoints using the [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank} interface: a. Add the custom endpoint for each chain b. Click **Switch** to connect to the respective network ![](/images/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/from-relaychain-to-parachain-01.webp) This reserve-backed transfer method facilitates asset transfers from a local chain to a destination chain by trusting a third party called a reserve to store the real assets. Fees on the destination chain are deducted from the asset specified in the assets vector at the `fee_asset_item` index, covering up to the specified `weight_limit`. The operation fails if the required weight exceeds this limit, potentially putting the transferred assets at risk. The following steps outline how to execute a reserve-backed transfer from the Polkadot relay chain to the Astar parachain. ### From the Relay Chain Perspective 1. Navigate to the Extrinsics page 1. Click on the **Developer** tab from the top navigation bar 2. Select **Extrinsics** from the dropdown ![](/images/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/from-relaychain-to-parachain-02.webp) 2. Select **xcmPallet** ![](/images/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/from-relaychain-to-parachain-03.webp) 3. Select the **limitedReservedAssetTransfer** extrinsic from the dropdown list ![](/images/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/from-relaychain-to-parachain-04.webp) 4. Fill out the required fields: 1. **dest** - specifies the destination context for the assets. Commonly set to `[Parent, Parachain(..)]` for parachain-to-parachain transfers or `[Parachain(..)]` for relay chain-to-parachain transfers. In this case, since the transfer is from a relay chain to a parachain, the destination ([`Location`](https://paritytech.github.io/xcm-docs/fundamentals/multilocation/index.html){target=\_blank}) is the following: ```bash { parents: 0, interior: { X1: [{ Parachain: 2006 }] } } ``` 2. **beneficiary** - defines the recipient of the assets within the destination context, typically represented as an `AccountId32` value. This example uses the following account present in the destination chain: ```bash X2mE9hCGX771c3zzV6tPa8U2cDz4U4zkqUdmBrQn83M3cm7 ``` 3. **assets** - lists the assets to be withdrawn, including those designated for fee payment on the destination chain 4. **feeAssetItem** - indicates the index of the asset within the assets list to be used for paying fees 5. **weightLimit** - specifies the weight limit, if applicable, for the fee payment on the remote chain 6.
Click on the **Submit Transaction** button to send the transaction ![](/images/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/from-relaychain-to-parachain-05.webp) After submitting the transaction, verify that the `xcmPallet.FeesPaid` and `xcmPallet.Sent` events have been emitted: ![](/images/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/from-relaychain-to-parachain-06.webp) ### From the Parachain Perspective After submitting the transaction from the relay chain, confirm its success by checking the parachain's events. Look for the `assets.Issued` event, which verifies that the assets have been issued to the destination as expected: ![](/images/tutorials/interoperability/xcm-transfers/from-relaychain-to-parachain/from-relaychain-to-parachain-07.webp) ## Use PAPI To programmatically execute the reserve-backed asset transfer between the relay chain and the parachain, you can use [Polkadot API (PAPI)](/develop/toolkit/api-libraries/papi/){target=\_blank}. PAPI is a robust toolkit that simplifies interactions with Polkadot-based chains. For this project, you'll first need to set up your environment, install necessary dependencies, and create a script to handle the transfer process. 1. Start by creating a folder for your project: ```bash mkdir reserve-backed-asset-transfer cd reserve-backed-asset-transfer ``` 2. Initialize a Node.js project and install the required dependencies. Execute the following commands: ```bash npm init npm install polkadot-api @polkadot-labs/hdkd @polkadot-labs/hdkd-helpers ``` 3. To enable static, type-safe APIs for interacting with the Polkadot and Astar chains, add their metadata to your project using PAPI: ```bash npx papi add dot -n polkadot npx papi add astar -w wss://rpc.astar.network ``` !!! note - `dot` and `astar` are arbitrary names you assign to the chains, allowing you to access their metadata information - The first command uses the well-known Polkadot chain, while the second connects to the Astar chain using its WebSocket endpoint 4.
Create an `index.js` file and insert the following code to configure the clients and handle the asset transfer: ```js // Import necessary modules from Polkadot API and helpers import { astar, // Astar chain metadata dot, // Polkadot chain metadata XcmVersionedLocation, XcmVersionedAssets, XcmV3Junction, XcmV3Junctions, XcmV3WeightLimit, XcmV3MultiassetFungibility, XcmV3MultiassetAssetId, } from '@polkadot-api/descriptors'; import { createClient } from 'polkadot-api'; import { sr25519CreateDerive } from '@polkadot-labs/hdkd'; import { DEV_PHRASE, entropyToMiniSecret, mnemonicToEntropy, ss58Decode, } from '@polkadot-labs/hdkd-helpers'; import { getPolkadotSigner } from 'polkadot-api/signer'; import { getWsProvider } from 'polkadot-api/ws-provider/web'; import { withPolkadotSdkCompat } from 'polkadot-api/polkadot-sdk-compat'; import { Binary } from 'polkadot-api'; // Create Polkadot client using WebSocket provider for Polkadot chain const polkadotClient = createClient( withPolkadotSdkCompat(getWsProvider('ws://127.0.0.1:8001')) ); const dotApi = polkadotClient.getTypedApi(dot); // Create Astar client using WebSocket provider for Astar chain const astarClient = createClient( withPolkadotSdkCompat(getWsProvider('ws://localhost:8000')) ); const astarApi = astarClient.getTypedApi(astar); // Create keypair for Alice using dev phrase to sign transactions const miniSecret = entropyToMiniSecret(mnemonicToEntropy(DEV_PHRASE)); const derive = sr25519CreateDerive(miniSecret); const aliceKeyPair = derive('//Alice'); const alice = getPolkadotSigner( aliceKeyPair.publicKey, 'Sr25519', aliceKeyPair.sign ); // Define recipient (Dave) address on Astar chain const daveAddress = 'X2mE9hCGX771c3zzV6tPa8U2cDz4U4zkqUdmBrQn83M3cm7'; const davePublicKey = ss58Decode(daveAddress)[0]; const idBenef = Binary.fromBytes(davePublicKey); // Define Polkadot Asset ID on Astar chain (example) const polkadotAssetId = 340282366920938463463374607431768211455n; // Fetch asset balance of recipient (Dave) before transaction let assetMetadata = await astarApi.query.Assets.Account.getValue( polkadotAssetId, daveAddress ); console.log('Asset balance before tx:', assetMetadata?.balance ??
0); // Prepare and submit transaction to transfer assets from Polkadot to Astar const tx = dotApi.tx.XcmPallet.limited_reserve_transfer_assets({ dest: XcmVersionedLocation.V3({ parents: 0, interior: XcmV3Junctions.X1( XcmV3Junction.Parachain(2006) // Destination is the Astar parachain ), }), beneficiary: XcmVersionedLocation.V3({ parents: 0, interior: XcmV3Junctions.X1( XcmV3Junction.AccountId32({ // Beneficiary address on Astar network: undefined, id: idBenef, }) ), }), assets: XcmVersionedAssets.V3([ { id: XcmV3MultiassetAssetId.Concrete({ parents: 0, interior: XcmV3Junctions.Here(), // Asset from the sender's location }), fun: XcmV3MultiassetFungibility.Fungible(120000000000), // Asset amount to transfer }, ]), fee_asset_item: 0, // Asset used to pay transaction fees weight_limit: XcmV3WeightLimit.Unlimited(), // No weight limit on transaction }); // Sign and submit the transaction tx.signSubmitAndWatch(alice).subscribe({ next: async (event) => { if (event.type === 'finalized') { console.log('Transaction completed successfully'); } }, error: console.error, complete() { polkadotClient.destroy(); // Clean up after transaction }, }); // Wait for transaction to complete await new Promise((resolve) => setTimeout(resolve, 20000)); // Fetch asset balance of recipient (Dave) after transaction assetMetadata = await astarApi.query.Assets.Account.getValue( polkadotAssetId, daveAddress ); console.log('Asset balance after tx:', assetMetadata?.balance ?? 0); // Exit the process process.exit(0); ``` !!! note To use this script with real-world blockchains, you'll need to update the WebSocket endpoint to the appropriate one, replace the Alice account with a valid account, and ensure the account has sufficient funds to cover transaction fees. 5. Execute the script: ```bash node index.js ``` 6. Check the terminal output. If the operation is successful, you should see the following message:
```bash
node index.js
Asset balance before tx: 0
Transaction completed successfully
Asset balance after tx: 119999114907n
```
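Note that the balance credited to Dave on Astar (119999114907) is slightly lower than the 120000000000 units sent: the difference (885093 units in this run) is the XCM execution fee deducted on the destination chain as part of the transfer.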
## Additional Resources

You can perform these operations using the Asset Transfer API for an alternative approach. Refer to the [Asset Transfer API](/develop/toolkit/interoperability/asset-transfer-api/){target=\_blank} guide in the documentation for more details.

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/interoperability/xcm-transfers/
--- BEGIN CONTENT ---
---
title: XCM Transfers
description: Explore tutorials on performing transfers between different consensus systems using XCM technology to enable cross-chain interoperability.
template: index-page.html
---

# XCM Transfers

Discover comprehensive tutorials that guide you through performing asset transfers between distinct consensus systems. These tutorials leverage [XCM (Cross-Consensus Messaging)](/develop/interoperability/intro-to-xcm/){target=\_blank} technology, which enables cross-chain communication and asset exchanges across different blockchain networks. Whether you're working within the same ecosystem or bridging multiple systems, XCM ensures secure, efficient, and interoperable solutions.

By mastering XCM-based transfers, you'll unlock new possibilities for building cross-chain applications and expanding blockchain utility. Learn the methods, tools, and best practices for testing XCM-powered transfers, ensuring your systems achieve robust interoperability.

## In This Section

:::INSERT_IN_THIS_SECTION:::

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/onchain-governance/fast-track-gov-proposal/
--- BEGIN CONTENT ---
---
title: Fast Track a Governance Proposal
description: Learn how to fast-track governance proposals in Polkadot's OpenGov using Chopsticks. Simulate, test, and execute proposals confidently.
tutorial_badge: Advanced
categories: Tooling
---

# Fast Track a Governance Proposal

## Introduction

Polkadot's [OpenGov](/polkadot-protocol/onchain-governance/overview){target=\_blank} is a sophisticated governance mechanism designed to allow the network to evolve gracefully over time, guided by its stakeholders. This system features multiple [tracks](https://wiki.polkadot.network/learn/learn-polkadot-opengov-origins/#origins-and-tracks-info){target=\_blank} for different types of proposals, each with its own parameters for approval, support, and confirmation period. While this flexibility is powerful, it also introduces complexity that can lead to failed proposals or unexpected outcomes.

Testing governance proposals before submission is crucial for the ecosystem. This process enhances efficiency by reducing the need for repeated submissions, improves security by identifying potential risks, and allows proposal optimization based on simulated outcomes. It also serves as an educational tool, providing stakeholders with a safe environment to understand the impacts of different voting scenarios.

By leveraging simulation tools like [Chopsticks](/develop/toolkit/parachains/fork-chains/chopsticks){target=\_blank}, developers can:

- Simulate the entire lifecycle of a proposal
- Test the voting outcomes by varying the support and approval levels
- Analyze the effects of a successfully executed proposal on the network's state
- Identify and troubleshoot potential issues or unexpected consequences before submitting the proposals

This tutorial will guide you through using Chopsticks to test OpenGov proposals thoroughly, so that when you submit a proposal to the live network, you can do so with confidence in its effects and viability.
## Prerequisites

Before proceeding, ensure the following prerequisites are met:

- **Chopsticks installation** - if you have not installed Chopsticks yet, refer to the [Install Chopsticks](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/#install-chopsticks){target=\_blank} guide for detailed instructions
- **Familiarity with key concepts** - you should have a basic understanding of the following:
    - [Polkadot.js](/develop/toolkit/api-libraries/polkadot-js-api){target=\_blank}
    - [OpenGov](/polkadot-protocol/onchain-governance/overview){target=\_blank}

## Set Up the Project

Before testing OpenGov proposals, you need to set up your development environment. You'll set up a TypeScript project and install the required dependencies to simulate and evaluate proposals. You'll use Chopsticks to fork the Polkadot network and simulate the proposal lifecycle, while Polkadot.js will be your interface for interacting with the forked network and submitting proposals.

Follow these steps to set up your project:

1. Create a new project directory and navigate into it:

    ```bash
    mkdir opengov-chopsticks && cd opengov-chopsticks
    ```

2. Initialize a new TypeScript project:

    ```bash
    npm init -y \
      && npm install typescript ts-node @types/node --save-dev \
      && npx tsc --init
    ```

3. Install the required dependencies:

    ```bash
    npm install @polkadot/api @acala-network/chopsticks
    ```

4. Create a new TypeScript file for your script:

    ```bash
    touch test-proposal.ts
    ```

    !!!note
        You'll write your code to simulate and test OpenGov proposals in the `test-proposal.ts` file.

5. Open the `tsconfig.json` file and ensure it includes these compiler options:

    ```json
    {
      "compilerOptions": {
        "module": "CommonJS",
        "esModuleInterop": true,
        "target": "esnext",
        "moduleResolution": "node",
        "declaration": true,
        "sourceMap": true,
        "skipLibCheck": true,
        "outDir": "dist",
        "composite": true
      }
    }
    ```

## Submit and Execute a Proposal Using Chopsticks

You should identify the right track and origin for your proposal. For example, select the appropriate treasury track based on the spending limits if you're requesting funds from the treasury. For more detailed information, refer to [Polkadot OpenGov Origins](https://wiki.polkadot.network/learn/learn-polkadot-opengov-origins/){target=\_blank}.

!!!note
    This tutorial will focus on the main steps and core logic within the main function. For clarity and conciseness, the implementation details of individual functions will be available in expandable tabs below each section. You'll find the complete code for reference at the end of the tutorial.

### Spin Up the Polkadot Fork

To set up your Polkadot fork using Chopsticks, open a new terminal window and run the following command:

```bash
npx @acala-network/chopsticks --config=polkadot
```

This command will start a local fork of the Polkadot network accessible at `ws://localhost:8000`. Keep this terminal window open and running throughout your testing process.

Once your forked network is up and running, you can proceed with the following steps.
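If you need more control over the fork than the built-in `polkadot` config provides (for example, pinning a specific block or changing the local port), Chopsticks also accepts a custom YAML configuration file. The following is a minimal sketch only: the file name, endpoint URL, and block number are illustrative placeholders, not values required by this tutorial:

```yaml
# polkadot-fork.yml - illustrative example configuration (placeholder values)
endpoint: wss://polkadot.example.com # RPC node to fork from (placeholder URL)
port: 8000                           # local WebSocket port for the fork
block: 21000000                      # optional: pin the fork to a specific block height
mock-signature-host: true            # accept mock signatures, convenient for testing
db: ./db.sqlite                      # cache fetched state between runs
```

You would then launch the fork with `npx @acala-network/chopsticks --config=polkadot-fork.yml` instead of the built-in `polkadot` config.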
### Set Up Dependencies and Structure

Begin by adding the necessary imports and a basic structure to the `test-proposal.ts` file:

```typescript
import '@polkadot/api-augment/polkadot';
import { FrameSupportPreimagesBounded } from '@polkadot/types/lookup';
import { blake2AsHex } from '@polkadot/util-crypto';
import { ApiPromise, Keyring, WsProvider } from '@polkadot/api';
import { type SubmittableExtrinsic } from '@polkadot/api/types';
import { ISubmittableResult } from '@polkadot/types/types';

const main = async () => {
  // The code will be added here

  process.exit(0);
};
```

This structure provides the foundation for your script. It imports all the necessary dependencies and sets up a main function that will contain the core logic of your proposal submission process.

### Connect to the Forked Chain

Create a `connectToFork` function outside the `main` function to connect your locally forked chain to the Polkadot.js API:

```typescript
/**
 * Establishes a connection to the local forked chain.
 *
 * @returns A promise that resolves to an `ApiPromise` instance connected to the local chain.
 */
async function connectToFork(): Promise<ApiPromise> {
  const wsProvider = new WsProvider('ws://localhost:8000');
  const api = await ApiPromise.create({ provider: wsProvider });
  await api.isReady;
  console.log(`Connected to chain: ${await api.rpc.system.chain()}`);
  return api;
}
```

Inside the `main` function, add the code to establish a connection to your local Polkadot fork:

```typescript hl_lines="2-3"
const main = async () => {
  // Connect to the forked chain
  const api = await connectToFork();
  ...
}
```

### Create and Submit the Proposal

Create a `generateProposal` function that will be responsible for preparing and submitting the on-chain proposal:

```typescript
async function generateProposal(
  api: ApiPromise,
  call: SubmittableExtrinsic<'promise', ISubmittableResult>,
  origin: any
): Promise<number> {
  ...
}
```

Now, you need to implement the following logic:

1. Set up the keyring and use the Alice development account:

    ```typescript
    // Initialize the keyring
    const keyring = new Keyring({ type: 'sr25519' });

    // Set up Alice development account
    const alice = keyring.addFromUri('//Alice');
    ```

    !!!note
        When using Chopsticks, this development account is pre-funded to execute all necessary actions.
2. Retrieve the proposal index:

    ```typescript
    // Get the next available proposal index
    const proposalIndex = (
      await api.query.referenda.referendumCount()
    ).toNumber();
    ```

3. Execute a batch transaction that comprises the following three operations:

    1. **`preimage.notePreimage`** - registers a [preimage](/polkadot-protocol/glossary#preimage){target=\_blank} using the selected call

        !!!note
            The preimage hash is simply the hash of the proposal to be enacted. On-chain proposals do not require the entire image of extrinsics and data (for instance, the Wasm code, in the case of upgrades) to be submitted, only that image's hash. The preimage itself can be submitted and stored on-chain against the hash later, upon the proposal's dispatch.

    2. **`referenda.submit`** - submits the proposal to the referenda system. It uses the preimage hash extracted from the call as part of the proposal submission process. The proposal is submitted with the selected origin

    3. **`referenda.placeDecisionDeposit`** - places the required decision deposit for the referendum. This deposit is required to move the referendum from the preparing phase to the deciding phase

    ```typescript
    // Execute the batch transaction
    await new Promise<void>(async (resolve) => {
      const unsub = await api.tx.utility
        .batch([
          // Register the preimage for your proposal
          api.tx.preimage.notePreimage(call.method.toHex()),
          // Submit your proposal to the referenda system
          api.tx.referenda.submit(
            origin as any,
            {
              Lookup: {
                Hash: call.method.hash.toHex(),
                len: call.method.encodedLength,
              },
            },
            { At: 0 }
          ),
          // Place the required decision deposit
          api.tx.referenda.placeDecisionDeposit(proposalIndex),
        ])
        .signAndSend(alice, (status: any) => {
          if (status.blockNumber) {
            unsub();
            resolve();
          }
        });
    });
    ```

4. Return the proposal index:

    ```typescript
    return proposalIndex;
    ```

If you followed all the steps correctly, the function should look like this:

??? code "`generateProposal` code"

    ```typescript
    /**
     * Generates a proposal by submitting a preimage, creating the proposal, and placing a deposit.
     *
     * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain.
     * @param call - The extrinsic to be executed, encapsulating the specific action to be proposed.
     * @param origin - The origin of the proposal, specifying the source authority (e.g., `{ System: 'Root' }`).
     * @returns A promise that resolves to the proposal ID of the generated proposal.
     */
    async function generateProposal(
      api: ApiPromise,
      call: SubmittableExtrinsic<'promise', ISubmittableResult>,
      origin: any
    ): Promise<number> {
      // Initialize the keyring
      const keyring = new Keyring({ type: 'sr25519' });

      // Set up Alice development account
      const alice = keyring.addFromUri('//Alice');

      // Get the next available proposal index
      const proposalIndex = (
        await api.query.referenda.referendumCount()
      ).toNumber();

      // Execute the batch transaction
      await new Promise<void>(async (resolve) => {
        const unsub = await api.tx.utility
          .batch([
            // Register the preimage for your proposal
            api.tx.preimage.notePreimage(call.method.toHex()),
            // Submit your proposal to the referenda system
            api.tx.referenda.submit(
              origin as any,
              {
                Lookup: {
                  Hash: call.method.hash.toHex(),
                  len: call.method.encodedLength,
                },
              },
              { At: 0 }
            ),
            // Place the required decision deposit
            api.tx.referenda.placeDecisionDeposit(proposalIndex),
          ])
          .signAndSend(alice, (status: any) => {
            if (status.blockNumber) {
              unsub();
              resolve();
            }
          });
      });

      return proposalIndex;
    }
    ```

Then, within the `main` function, define the specific call you want to execute and its corresponding origin, then invoke the `generateProposal` method:

```typescript hl_lines="5-14"
const main = async () => {
  // Connect to the forked chain
  const api = await connectToFork();

  // Select the call to perform
  const call = api.tx.system.setCodeWithoutChecks('0x1234');

  // Select the origin
  const origin = {
    System: 'Root',
  };

  // Submit preimage, submit proposal, and place decision deposit
  const proposalIndex = await generateProposal(api, call, origin);

  ...
}
```

!!!note
    The [`setCodeWithoutChecks`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_code_without_checks){target=\_blank} extrinsic used in this example is for demonstration purposes only. Replace it with the specific extrinsic that matches your governance proposal's intended functionality. Ensure the call matches your target Polkadot SDK-based network's runtime requirements and governance process.
### Force Proposal Execution

After submitting your proposal, you can test its execution by directly manipulating the chain state and scheduler using Chopsticks, bypassing the standard voting and enactment periods.

Create a new function called `forceProposalExecution`:

```typescript
async function forceProposalExecution(api: ApiPromise, proposalIndex: number) {
  ...
}
```

This function will accomplish two primary objectives:

- Modify the chain storage to set the proposal's approvals and support artificially, ensuring its passage
- Override the scheduler to execute the proposal immediately in the subsequent blocks, circumventing standard waiting periods

Implement the functionality through the following steps:

1. Get the referendum information and its hash:

    ```typescript
    // Retrieve the referendum data for the given proposal index
    const referendumData = await api.query.referenda.referendumInfoFor(
      proposalIndex
    );

    // Get the storage key for the referendum data
    const referendumKey =
      api.query.referenda.referendumInfoFor.key(proposalIndex);

    // Check if the referendum data exists
    if (!referendumData.isSome) {
      throw new Error(`Referendum ${proposalIndex} not found`);
    }

    const referendumInfo = referendumData.unwrap();

    // Check if the referendum is in an ongoing state
    if (!referendumInfo.isOngoing) {
      throw new Error(`Referendum ${proposalIndex} is not ongoing`);
    }

    // Get the ongoing referendum data
    const ongoingData = referendumInfo.asOngoing;

    // Convert the ongoing data to JSON
    const ongoingJson = ongoingData.toJSON();

    // Support Lookup, Inline or Legacy proposals
    const callHash = ongoingData.proposal.isLookup
      ? ongoingData.proposal.asLookup.toHex()
      : ongoingData.proposal.isInline
      ? blake2AsHex(ongoingData.proposal.asInline.toHex())
      : ongoingData.proposal.asLegacy.toHex();
    ```

2. Determine the total amount of existing native tokens:

    ```typescript
    // Get the total issuance of the native token
    const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt();
    ```

3. Fetch the current block number:

    ```typescript
    // Get the current block number
    const proposalBlockTarget = (
      await api.rpc.chain.getHeader()
    ).number.toNumber();
    ```

4. Modify the proposal data and overwrite the storage:

    ```typescript
    // Create a new proposal data object with the updated fields
    const fastProposalData = {
      ongoing: {
        ...ongoingJson,
        enactment: { after: 0 },
        deciding: {
          since: proposalBlockTarget - 1,
          confirming: proposalBlockTarget - 1,
        },
        tally: {
          ayes: totalIssuance - 1n,
          nays: 0,
          support: totalIssuance - 1n,
        },
        alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]],
      },
    };

    // Create a new proposal object from the proposal data,
    // falling back to the ranked-collective tally type if needed
    let fastProposal;
    try {
      fastProposal = api.registry.createType(
        `Option<PalletReferendaReferendumInfoConvictionVotingTally>`,
        fastProposalData
      );
    } catch {
      fastProposal = api.registry.createType(
        `Option<PalletReferendaReferendumInfoRankedCollectiveTally>`,
        fastProposalData
      );
    }

    // Update the storage with the new proposal object
    const result = await api.rpc('dev_setStorage', [
      [referendumKey, fastProposal.toHex()],
    ]);
    ```

5. Manipulate the scheduler to execute the proposal in the next blocks:

    ```typescript
    // Fast forward the nudge referendum to the next block to get the referendum scheduled
    await moveScheduledCallTo(api, 1, (call) => {
      if (!call.isInline) {
        return false;
      }
      const callData = api.createType('Call', call.asInline.toHex());
      return (
        callData.method == 'nudgeReferendum' &&
        (callData.args[0] as any).toNumber() == proposalIndex
      );
    });

    // Create a new block
    await api.rpc('dev_newBlock', { count: 1 });

    // Move the scheduled call to the next block
    await moveScheduledCallTo(api, 1, (call) =>
      call.isLookup
        ? call.asLookup.toHex() == callHash
        : call.isInline
        ? blake2AsHex(call.asInline.toHex()) == callHash
        : call.asLegacy.toHex() == callHash
    );

    // Create another new block
    await api.rpc('dev_newBlock', { count: 1 });
    ```

    ???+ child "Utility Function"

        This section utilizes a `moveScheduledCallTo` utility function to move a scheduled call matching specific criteria to a designated future block. Include this function in the same file:

        ??? code "`moveScheduledCallTo`"

            ```typescript
            /**
             * Moves a scheduled call to a specified future block if it matches the given verifier criteria.
             *
             * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain.
             * @param blockCounts - The number of blocks to move the scheduled call forward.
             * @param verifier - A function to verify if a scheduled call matches the desired criteria.
             * @throws An error if no matching scheduled call is found.
             */
            async function moveScheduledCallTo(
              api: ApiPromise,
              blockCounts: number,
              verifier: (call: FrameSupportPreimagesBounded) => boolean
            ) {
              // Get the current block number
              const blockNumber = (await api.rpc.chain.getHeader()).number.toNumber();

              // Retrieve the scheduler's agenda entries
              const agenda = await api.query.scheduler.agenda.entries();

              // Initialize a flag to track if a matching scheduled call is found
              let found = false;

              // Iterate through the scheduler's agenda entries
              for (const agendaEntry of agenda) {
                // Iterate through the scheduled entries in the current agenda entry
                for (const scheduledEntry of agendaEntry[1]) {
                  // Check if the scheduled entry is valid and matches the verifier criteria
                  if (scheduledEntry.isSome && verifier(scheduledEntry.unwrap().call)) {
                    found = true;

                    // Overwrite the agendaEntry item in storage
                    const result = await api.rpc('dev_setStorage', [
                      [agendaEntry[0]], // required to ensure a unique id
                      [
                        await api.query.scheduler.agenda.key(blockNumber + blockCounts),
                        agendaEntry[1].toHex(),
                      ],
                    ]);

                    // Check if the scheduled call has an associated lookup
                    if (scheduledEntry.unwrap().maybeId.isSome) {
                      // Get the lookup ID
                      const id = scheduledEntry.unwrap().maybeId.unwrap().toHex();
                      const lookup = await api.query.scheduler.lookup(id);

                      // Check if the lookup exists
                      if (lookup.isSome) {
                        // Get the lookup key
                        const lookupKey = await api.query.scheduler.lookup.key(id);

                        // Create a new lookup object with the updated block number
                        const fastLookup = api.registry.createType('Option<(u32,u32)>', [
                          blockNumber + blockCounts,
                          0,
                        ]);

                        // Overwrite the lookup entry in storage
                        const result = await api.rpc('dev_setStorage', [
                          [lookupKey, fastLookup.toHex()],
                        ]);
                      }
                    }
                  }
                }
              }

              // Throw an error if no matching scheduled call is found
              if (!found) {
                throw new Error('No scheduled call found');
              }
            }
            ```

After implementing the complete logic, your function will resemble:

??? code "`forceProposalExecution`"

    ```typescript
    /**
     * Forces the execution of a specific proposal by updating its referendum state and ensuring the execution process is triggered.
     *
     * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain.
     * @param proposalIndex - The index of the proposal to be executed.
     * @throws An error if the referendum is not found or not in an ongoing state.
     */
    async function forceProposalExecution(api: ApiPromise, proposalIndex: number) {
      // Retrieve the referendum data for the given proposal index
      const referendumData = await api.query.referenda.referendumInfoFor(
        proposalIndex
      );

      // Get the storage key for the referendum data
      const referendumKey =
        api.query.referenda.referendumInfoFor.key(proposalIndex);

      // Check if the referendum data exists
      if (!referendumData.isSome) {
        throw new Error(`Referendum ${proposalIndex} not found`);
      }

      const referendumInfo = referendumData.unwrap();

      // Check if the referendum is in an ongoing state
      if (!referendumInfo.isOngoing) {
        throw new Error(`Referendum ${proposalIndex} is not ongoing`);
      }

      // Get the ongoing referendum data
      const ongoingData = referendumInfo.asOngoing;

      // Convert the ongoing data to JSON
      const ongoingJson = ongoingData.toJSON();

      // Support Lookup, Inline or Legacy proposals
      const callHash = ongoingData.proposal.isLookup
        ? ongoingData.proposal.asLookup.toHex()
        : ongoingData.proposal.isInline ?
blake2AsHex(ongoingData.proposal.asInline.toHex()) : ongoingData.proposal.asLegacy.toHex(); // Get the total issuance of the native token const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt(); // Get the current block number const proposalBlockTarget = ( await api.rpc.chain.getHeader() ).number.toNumber(); // Create a new proposal data object with the updated fields const fastProposalData = { ongoing: { ...ongoingJson, enactment: { after: 0 }, deciding: { since: proposalBlockTarget - 1, confirming: proposalBlockTarget - 1, }, tally: { ayes: totalIssuance - 1n, nays: 0, support: totalIssuance - 1n, }, alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]], }, }; // Create a new proposal object from the proposal data let fastProposal; try { fastProposal = api.registry.createType( `Option`, fastProposalData ); } catch { fastProposal = api.registry.createType( `Option`, fastProposalData ); } // Update the storage with the new proposal object const result = await api.rpc('dev_setStorage', [ [referendumKey, fastProposal.toHex()], ]); // Fast forward the nudge referendum to the next block to get the refendum to be scheduled await moveScheduledCallTo(api, 1, (call) => { if (!call.isInline) { return false; } const callData = api.createType('Call', call.asInline.toHex()); return ( callData.method == 'nudgeReferendum' && (callData.args[0] as any).toNumber() == proposalIndex ); }); // Create a new block await api.rpc('dev_newBlock', { count: 1 }); // Move the scheduled call to the next block await moveScheduledCallTo(api, 1, (call) => call.isLookup ? call.asLookup.toHex() == callHash : call.isInline ? blake2AsHex(call.asInline.toHex()) == callHash : call.asLegacy.toHex() == callHash ); // Create another new block await api.rpc('dev_newBlock', { count: 1 }); } // --8<-- [end:forceProposalExecution] // --8<-- [start:main] const main = async () => { // Connect to the forked chain const api = await connectToFork(); // Select the call to perform const call = api.tx.system.setCodeWithoutChecks('0x1234'); // Select the origin const origin = { System: 'Root', }; // Submit preimage, submit proposal, and place decision deposit const proposalIndex = await generateProposal(api, call, origin); // Force the proposal to be executed await forceProposalExecution(api, proposalIndex); process.exit(0); }; // --8<-- [end:main] // --8<-- [start:try-catch-block] try { main(); } catch (e) { console.log(e); process.exit(1); } // --8<-- [end:try-catch-block] ``` Invoke `forceProposalExecution` from the `main` function using the `proposalIndex` obtained from the previous `generateProposal` call: ```typescript hl_lines="16-17" // --8<-- [start:imports] import '@polkadot/api-augment/polkadot'; import { FrameSupportPreimagesBounded } from '@polkadot/types/lookup'; import { blake2AsHex } from '@polkadot/util-crypto'; import { ApiPromise, Keyring, WsProvider } from '@polkadot/api'; import { type SubmittableExtrinsic } from '@polkadot/api/types'; import { ISubmittableResult } from '@polkadot/types/types'; // --8<-- [end:imports] // --8<-- [start:connectToFork] /** * Establishes a connection to the local forked chain. * * @returns A promise that resolves to an `ApiPromise` instance connected to the local chain. 
*/ async function connectToFork(): Promise { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; console.log(`Connected to chain: ${await api.rpc.system.chain()}`); return api; } // --8<-- [end:connectToFork] // --8<-- [start:generateProposal] /** * Generates a proposal by submitting a preimage, creating the proposal, and placing a deposit. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param call - The extrinsic to be executed, encapsulating the specific action to be proposed. * @param origin - The origin of the proposal, specifying the source authority (e.g., `{ System: 'Root' }`). * @returns A promise that resolves to the proposal ID of the generated proposal. * */ async function generateProposal( api: ApiPromise, call: SubmittableExtrinsic<'promise', ISubmittableResult>, origin: any ): Promise { // Initialize the keyring const keyring = new Keyring({ type: 'sr25519' }); // Set up Alice development account const alice = keyring.addFromUri('//Alice'); // Get the next available proposal index const proposalIndex = ( await api.query.referenda.referendumCount() ).toNumber(); // Execute the batch transaction await new Promise(async (resolve) => { const unsub = await api.tx.utility .batch([ // Register the preimage for your proposal api.tx.preimage.notePreimage(call.method.toHex()), // Submit your proposal to the referenda system api.tx.referenda.submit( origin as any, { Lookup: { Hash: call.method.hash.toHex(), len: call.method.encodedLength, }, }, { At: 0 } ), // Place the required decision deposit api.tx.referenda.placeDecisionDeposit(proposalIndex), ]) .signAndSend(alice, (status: any) => { if (status.blockNumber) { unsub(); resolve(); } }); }); return proposalIndex; } // --8<-- [end:generateProposal] // --8<-- [start:moveScheduledCallTo] /** * Moves a scheduled call to a specified future block if it matches the given verifier criteria. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param blockCounts - The number of blocks to move the scheduled call forward. * @param verifier - A function to verify if a scheduled call matches the desired criteria. * @throws An error if no matching scheduled call is found. 
*/ async function moveScheduledCallTo( api: ApiPromise, blockCounts: number, verifier: (call: FrameSupportPreimagesBounded) => boolean ) { // Get the current block number const blockNumber = (await api.rpc.chain.getHeader()).number.toNumber(); // Retrieve the scheduler's agenda entries const agenda = await api.query.scheduler.agenda.entries(); // Initialize a flag to track if a matching scheduled call is found let found = false; // Iterate through the scheduler's agenda entries for (const agendaEntry of agenda) { // Iterate through the scheduled entries in the current agenda entry for (const scheduledEntry of agendaEntry[1]) { // Check if the scheduled entry is valid and matches the verifier criteria if (scheduledEntry.isSome && verifier(scheduledEntry.unwrap().call)) { found = true; // Overwrite the agendaEntry item in storage const result = await api.rpc('dev_setStorage', [ [agendaEntry[0]], // require to ensure unique id [ await api.query.scheduler.agenda.key(blockNumber + blockCounts), agendaEntry[1].toHex(), ], ]); // Check if the scheduled call has an associated lookup if (scheduledEntry.unwrap().maybeId.isSome) { // Get the lookup ID const id = scheduledEntry.unwrap().maybeId.unwrap().toHex(); const lookup = await api.query.scheduler.lookup(id); // Check if the lookup exists if (lookup.isSome) { // Get the lookup key const lookupKey = await api.query.scheduler.lookup.key(id); // Create a new lookup object with the updated block number const fastLookup = api.registry.createType('Option<(u32,u32)>', [ blockNumber + blockCounts, 0, ]); // Overwrite the lookup entry in storage const result = await api.rpc('dev_setStorage', [ [lookupKey, fastLookup.toHex()], ]); } } } } } // Throw an error if no matching scheduled call is found if (!found) { throw new Error('No scheduled call found'); } } // --8<-- [end:moveScheduledCallTo] // --8<-- [start:forceProposalExecution] /** * Forces the execution of a specific proposal by updating its referendum state and ensuring the execution process is triggered. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param proposalIndex - The index of the proposal to be executed. * @throws An error if the referendum is not found or not in an ongoing state. */ async function forceProposalExecution(api: ApiPromise, proposalIndex: number) { // Retrieve the referendum data for the given proposal index const referendumData = await api.query.referenda.referendumInfoFor( proposalIndex ); // Get the storage key for the referendum data const referendumKey = api.query.referenda.referendumInfoFor.key(proposalIndex); // Check if the referendum data exists if (!referendumData.isSome) { throw new Error(`Referendum ${proposalIndex} not found`); } const referendumInfo = referendumData.unwrap(); // Check if the referendum is in an ongoing state if (!referendumInfo.isOngoing) { throw new Error(`Referendum ${proposalIndex} is not ongoing`); } // Get the ongoing referendum data const ongoingData = referendumInfo.asOngoing; // Convert the ongoing data to JSON const ongoingJson = ongoingData.toJSON(); // Support Lookup, Inline or Legacy proposals const callHash = ongoingData.proposal.isLookup ? ongoingData.proposal.asLookup.toHex() : ongoingData.proposal.isInline ? 
blake2AsHex(ongoingData.proposal.asInline.toHex()) : ongoingData.proposal.asLegacy.toHex(); // Get the total issuance of the native token const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt(); // Get the current block number const proposalBlockTarget = ( await api.rpc.chain.getHeader() ).number.toNumber(); // Create a new proposal data object with the updated fields const fastProposalData = { ongoing: { ...ongoingJson, enactment: { after: 0 }, deciding: { since: proposalBlockTarget - 1, confirming: proposalBlockTarget - 1, }, tally: { ayes: totalIssuance - 1n, nays: 0, support: totalIssuance - 1n, }, alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]], }, }; // Create a new proposal object from the proposal data let fastProposal; try { fastProposal = api.registry.createType( `Option`, fastProposalData ); } catch { fastProposal = api.registry.createType( `Option`, fastProposalData ); } // Update the storage with the new proposal object const result = await api.rpc('dev_setStorage', [ [referendumKey, fastProposal.toHex()], ]); // Fast forward the nudge referendum to the next block to get the refendum to be scheduled await moveScheduledCallTo(api, 1, (call) => { if (!call.isInline) { return false; } const callData = api.createType('Call', call.asInline.toHex()); return ( callData.method == 'nudgeReferendum' && (callData.args[0] as any).toNumber() == proposalIndex ); }); // Create a new block await api.rpc('dev_newBlock', { count: 1 }); // Move the scheduled call to the next block await moveScheduledCallTo(api, 1, (call) => call.isLookup ? call.asLookup.toHex() == callHash : call.isInline ? blake2AsHex(call.asInline.toHex()) == callHash : call.asLegacy.toHex() == callHash ); // Create another new block await api.rpc('dev_newBlock', { count: 1 }); } // --8<-- [end:forceProposalExecution] // --8<-- [start:main] const main = async () => { // Connect to the forked chain const api = await connectToFork(); // Select the call to perform const call = api.tx.system.setCodeWithoutChecks('0x1234'); // Select the origin const origin = { System: 'Root', }; // Submit preimage, submit proposal, and place decision deposit const proposalIndex = await generateProposal(api, call, origin); // Force the proposal to be executed await forceProposalExecution(api, proposalIndex); process.exit(0); }; // --8<-- [end:main] // --8<-- [start:try-catch-block] try { main(); } catch (e) { console.log(e); process.exit(1); } // --8<-- [end:try-catch-block] ``` ## Execute the Proposal Script To run the proposal execution script, use the following command in your terminal: ```bash npx ts-node test-proposal.ts ``` When executing the script, you should expect the following key actions and outputs: - **Chain forking** - the script connects to a forked version of the Polkadot network, allowing safe manipulation of the chain state without affecting the live network. - **Proposal generation** - a new governance proposal is created using the specified extrinsic (in this example, `setCodeWithoutChecks`) - **State manipulation** - the referendum's storage is modified to simulate immediate approval by adjusting tally and support values to force proposal passing. Scheduled calls are then redirected to ensure immediate execution - **Execution** - the script advances the chain to trigger the scheduled call execution. The specified call (e.g., `setCodeWithoutChecks`) is processed ## Summary In this tutorial, you've learned how to use Chopsticks to test OpenGov proposals on a local fork of the Polkadot network. 
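The script's own logging is deliberately minimal: only `connectToFork` prints to the console, and the process exits with code `0` once the proposal has been executed on the fork. Assuming a Chopsticks fork is listening on `ws://localhost:8000`, a successful run looks roughly like this (the chain name reflects whichever network you forked; `Polkadot` here is illustrative):

```bash
$ npx ts-node test-proposal.ts
Connected to chain: Polkadot
```

To confirm the call actually took effect, inspect the fork's state afterwards, for example by pointing a Polkadot.js Apps instance at `ws://localhost:8000`.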
## Summary

In this tutorial, you've learned how to use Chopsticks to test OpenGov proposals on a local fork of the Polkadot network. You've set up a TypeScript project, connected to a local fork, submitted a proposal, and forced its execution for testing purposes. This process allows you to:

- Safely experiment with different types of proposals
- Test the effects of proposals without affecting the live network
- Rapidly iterate on and debug your governance ideas

Using these techniques, you can develop and refine your proposals before submitting them to the Polkadot network, ensuring they're well-tested and likely to achieve their intended effects.

## Full Code

Here's the complete code for the `test-proposal.ts` file, incorporating all the steps we've covered:

??? code "`test-proposal.ts`"

    ```typescript
    // --8<-- [start:imports]
    import '@polkadot/api-augment/polkadot';
    import { FrameSupportPreimagesBounded } from '@polkadot/types/lookup';
    import { blake2AsHex } from '@polkadot/util-crypto';
    import { ApiPromise, Keyring, WsProvider } from '@polkadot/api';
    import { type SubmittableExtrinsic } from '@polkadot/api/types';
    import { ISubmittableResult } from '@polkadot/types/types';
    // --8<-- [end:imports]

    // --8<-- [start:connectToFork]
    /**
     * Establishes a connection to the local forked chain.
     *
     * @returns A promise that resolves to an `ApiPromise` instance connected to the local chain.
     */
    async function connectToFork(): Promise<ApiPromise> {
      const wsProvider = new WsProvider('ws://localhost:8000');
      const api = await ApiPromise.create({ provider: wsProvider });
      await api.isReady;
      console.log(`Connected to chain: ${await api.rpc.system.chain()}`);
      return api;
    }
    // --8<-- [end:connectToFork]

    // --8<-- [start:generateProposal]
    /**
     * Generates a proposal by submitting a preimage, creating the proposal, and placing a deposit.
     *
     * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain.
     * @param call - The extrinsic to be executed, encapsulating the specific action to be proposed.
     * @param origin - The origin of the proposal, specifying the source authority (e.g., `{ System: 'Root' }`).
     * @returns A promise that resolves to the proposal ID of the generated proposal.
     */
    async function generateProposal(
      api: ApiPromise,
      call: SubmittableExtrinsic<'promise', ISubmittableResult>,
      origin: any
    ): Promise<number> {
      // Initialize the keyring
      const keyring = new Keyring({ type: 'sr25519' });

      // Set up Alice development account
      const alice = keyring.addFromUri('//Alice');

      // Get the next available proposal index
      const proposalIndex = (
        await api.query.referenda.referendumCount()
      ).toNumber();

      // Execute the batch transaction
      await new Promise<void>(async (resolve) => {
        const unsub = await api.tx.utility
          .batch([
            // Register the preimage for your proposal
            api.tx.preimage.notePreimage(call.method.toHex()),
            // Submit your proposal to the referenda system
            api.tx.referenda.submit(
              origin as any,
              {
                Lookup: {
                  Hash: call.method.hash.toHex(),
                  len: call.method.encodedLength,
                },
              },
              { At: 0 }
            ),
            // Place the required decision deposit
            api.tx.referenda.placeDecisionDeposit(proposalIndex),
          ])
          .signAndSend(alice, (status: any) => {
            if (status.blockNumber) {
              unsub();
              resolve();
            }
          });
      });

      return proposalIndex;
    }
    // --8<-- [end:generateProposal]

    // --8<-- [start:moveScheduledCallTo]
    /**
     * Moves a scheduled call to a specified future block if it matches the given verifier criteria.
     *
     * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain.
     * @param blockCounts - The number of blocks to move the scheduled call forward.
     * @param verifier - A function to verify if a scheduled call matches the desired criteria.
     * @throws An error if no matching scheduled call is found.
     */
    async function moveScheduledCallTo(
      api: ApiPromise,
      blockCounts: number,
      verifier: (call: FrameSupportPreimagesBounded) => boolean
    ) {
      // Get the current block number
      const blockNumber = (await api.rpc.chain.getHeader()).number.toNumber();

      // Retrieve the scheduler's agenda entries
      const agenda = await api.query.scheduler.agenda.entries();

      // Initialize a flag to track if a matching scheduled call is found
      let found = false;

      // Iterate through the scheduler's agenda entries
      for (const agendaEntry of agenda) {
        // Iterate through the scheduled entries in the current agenda entry
        for (const scheduledEntry of agendaEntry[1]) {
          // Check if the scheduled entry is valid and matches the verifier criteria
          if (scheduledEntry.isSome && verifier(scheduledEntry.unwrap().call)) {
            found = true;

            // Overwrite the agendaEntry item in storage
            const result = await api.rpc('dev_setStorage', [
              [agendaEntry[0]], // required to ensure a unique id
              [
                await api.query.scheduler.agenda.key(blockNumber + blockCounts),
                agendaEntry[1].toHex(),
              ],
            ]);

            // Check if the scheduled call has an associated lookup
            if (scheduledEntry.unwrap().maybeId.isSome) {
              // Get the lookup ID
              const id = scheduledEntry.unwrap().maybeId.unwrap().toHex();
              const lookup = await api.query.scheduler.lookup(id);

              // Check if the lookup exists
              if (lookup.isSome) {
                // Get the lookup key
                const lookupKey = await api.query.scheduler.lookup.key(id);

                // Create a new lookup object with the updated block number
                const fastLookup = api.registry.createType('Option<(u32,u32)>', [
                  blockNumber + blockCounts,
                  0,
                ]);

                // Overwrite the lookup entry in storage
                const result = await api.rpc('dev_setStorage', [
                  [lookupKey, fastLookup.toHex()],
                ]);
              }
            }
          }
        }
      }

      // Throw an error if no matching scheduled call is found
      if (!found) {
        throw new Error('No scheduled call found');
      }
    }
    // --8<-- [end:moveScheduledCallTo]

    // --8<-- [start:forceProposalExecution]
    /**
     * Forces the execution of a specific proposal by updating its referendum state and ensuring the execution process is triggered.
     *
     * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain.
     * @param proposalIndex - The index of the proposal to be executed.
     * @throws An error if the referendum is not found or not in an ongoing state.
     */
    async function forceProposalExecution(api: ApiPromise, proposalIndex: number) {
      // Retrieve the referendum data for the given proposal index
      const referendumData = await api.query.referenda.referendumInfoFor(
        proposalIndex
      );

      // Get the storage key for the referendum data
      const referendumKey =
        api.query.referenda.referendumInfoFor.key(proposalIndex);

      // Check if the referendum data exists
      if (!referendumData.isSome) {
        throw new Error(`Referendum ${proposalIndex} not found`);
      }

      const referendumInfo = referendumData.unwrap();

      // Check if the referendum is in an ongoing state
      if (!referendumInfo.isOngoing) {
        throw new Error(`Referendum ${proposalIndex} is not ongoing`);
      }

      // Get the ongoing referendum data
      const ongoingData = referendumInfo.asOngoing;

      // Convert the ongoing data to JSON
      const ongoingJson = ongoingData.toJSON();

      // Support Lookup, Inline or Legacy proposals
      const callHash = ongoingData.proposal.isLookup
        ? ongoingData.proposal.asLookup.toHex()
        : ongoingData.proposal.isInline
        ? blake2AsHex(ongoingData.proposal.asInline.toHex())
        : ongoingData.proposal.asLegacy.toHex();

      // Get the total issuance of the native token
      const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt();

      // Get the current block number
      const proposalBlockTarget = (
        await api.rpc.chain.getHeader()
      ).number.toNumber();

      // Create a new proposal data object with the updated fields
      const fastProposalData = {
        ongoing: {
          ...ongoingJson,
          enactment: { after: 0 },
          deciding: {
            since: proposalBlockTarget - 1,
            confirming: proposalBlockTarget - 1,
          },
          tally: {
            ayes: totalIssuance - 1n,
            nays: 0,
            support: totalIssuance - 1n,
          },
          alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]],
        },
      };

      // Create a new proposal object from the proposal data
      let fastProposal;
      try {
        fastProposal = api.registry.createType(
          `Option<PalletReferendaReferendumInfoConvictionVotingTally>`,
          fastProposalData
        );
      } catch {
        fastProposal = api.registry.createType(
          `Option<PalletReferendaReferendumInfoRankedCollectiveTally>`,
          fastProposalData
        );
      }

      // Update the storage with the new proposal object
      const result = await api.rpc('dev_setStorage', [
        [referendumKey, fastProposal.toHex()],
      ]);

      // Fast forward the nudge referendum to the next block so the referendum gets scheduled
      await moveScheduledCallTo(api, 1, (call) => {
        if (!call.isInline) {
          return false;
        }

        const callData = api.createType('Call', call.asInline.toHex());

        return (
          callData.method == 'nudgeReferendum' &&
          (callData.args[0] as any).toNumber() == proposalIndex
        );
      });

      // Create a new block
      await api.rpc('dev_newBlock', { count: 1 });

      // Move the scheduled call to the next block
      await moveScheduledCallTo(api, 1, (call) =>
        call.isLookup
          ? call.asLookup.toHex() == callHash
          : call.isInline
          ? blake2AsHex(call.asInline.toHex()) == callHash
          : call.asLegacy.toHex() == callHash
      );

      // Create another new block
      await api.rpc('dev_newBlock', { count: 1 });
    }
    // --8<-- [end:forceProposalExecution]

    // --8<-- [start:main]
    const main = async () => {
      // Connect to the forked chain
      const api = await connectToFork();

      // Select the call to perform
      const call = api.tx.system.setCodeWithoutChecks('0x1234');

      // Select the origin
      const origin = {
        System: 'Root',
      };

      // Submit preimage, submit proposal, and place decision deposit
      const proposalIndex = await generateProposal(api, call, origin);

      // Force the proposal to be executed
      await forceProposalExecution(api, proposalIndex);

      process.exit(0);
    };
    // --8<-- [end:main]

    // --8<-- [start:try-catch-block]
    try {
      main();
    } catch (e) {
      console.log(e);
      process.exit(1);
    }
    // --8<-- [end:try-catch-block]
    ```
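The harness isn't limited to runtime upgrades: any call the referenda pallet will dispatch from the chosen origin can be swapped into `main`. The sketch below is a hypothetical variation, not part of the tutorial file, that proposes a Root-origin `balances.forceTransfer` and then checks the result on the fork; the accounts and amount are illustrative, and it assumes the fork's configuration endows Alice as in this tutorial's setup:

```typescript
// Hypothetical variation of main(), reusing connectToFork, generateProposal,
// and forceProposalExecution from test-proposal.ts
const mainForceTransfer = async () => {
  const api = await connectToFork();

  // Development accounts (assumes the fork endows Alice, as in this setup)
  const keyring = new Keyring({ type: 'sr25519' });
  const alice = keyring.addFromUri('//Alice');
  const bob = keyring.addFromUri('//Bob');

  // Root-origin call: forcibly move 1 DOT (10 decimals) from Alice to Bob
  const call = api.tx.balances.forceTransfer(
    alice.address,
    bob.address,
    10_000_000_000n
  );

  // Same flow as before: propose, place the deposit, then force execution
  const proposalIndex = await generateProposal(api, call, { System: 'Root' });
  await forceProposalExecution(api, proposalIndex);

  // Verify the transfer landed on the fork
  const bobAccount = await api.query.system.account(bob.address);
  console.log(`Bob's free balance: ${bobAccount.data.free.toHuman()}`);

  process.exit(0);
};
```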
* * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param call - The extrinsic to be executed, encapsulating the specific action to be proposed. * @param origin - The origin of the proposal, specifying the source authority (e.g., `{ System: 'Root' }`). * @returns A promise that resolves to the proposal ID of the generated proposal. * */ async function generateProposal( api: ApiPromise, call: SubmittableExtrinsic<'promise', ISubmittableResult>, origin: any ): Promise { // Initialize the keyring const keyring = new Keyring({ type: 'sr25519' }); // Set up Alice development account const alice = keyring.addFromUri('//Alice'); // Get the next available proposal index const proposalIndex = ( await api.query.referenda.referendumCount() ).toNumber(); // Execute the batch transaction await new Promise(async (resolve) => { const unsub = await api.tx.utility .batch([ // Register the preimage for your proposal api.tx.preimage.notePreimage(call.method.toHex()), // Submit your proposal to the referenda system api.tx.referenda.submit( origin as any, { Lookup: { Hash: call.method.hash.toHex(), len: call.method.encodedLength, }, }, { At: 0 } ), // Place the required decision deposit api.tx.referenda.placeDecisionDeposit(proposalIndex), ]) .signAndSend(alice, (status: any) => { if (status.blockNumber) { unsub(); resolve(); } }); }); return proposalIndex; } // --8<-- [end:generateProposal] // --8<-- [start:moveScheduledCallTo] /** * Moves a scheduled call to a specified future block if it matches the given verifier criteria. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param blockCounts - The number of blocks to move the scheduled call forward. * @param verifier - A function to verify if a scheduled call matches the desired criteria. * @throws An error if no matching scheduled call is found. 
*/ async function moveScheduledCallTo( api: ApiPromise, blockCounts: number, verifier: (call: FrameSupportPreimagesBounded) => boolean ) { // Get the current block number const blockNumber = (await api.rpc.chain.getHeader()).number.toNumber(); // Retrieve the scheduler's agenda entries const agenda = await api.query.scheduler.agenda.entries(); // Initialize a flag to track if a matching scheduled call is found let found = false; // Iterate through the scheduler's agenda entries for (const agendaEntry of agenda) { // Iterate through the scheduled entries in the current agenda entry for (const scheduledEntry of agendaEntry[1]) { // Check if the scheduled entry is valid and matches the verifier criteria if (scheduledEntry.isSome && verifier(scheduledEntry.unwrap().call)) { found = true; // Overwrite the agendaEntry item in storage const result = await api.rpc('dev_setStorage', [ [agendaEntry[0]], // require to ensure unique id [ await api.query.scheduler.agenda.key(blockNumber + blockCounts), agendaEntry[1].toHex(), ], ]); // Check if the scheduled call has an associated lookup if (scheduledEntry.unwrap().maybeId.isSome) { // Get the lookup ID const id = scheduledEntry.unwrap().maybeId.unwrap().toHex(); const lookup = await api.query.scheduler.lookup(id); // Check if the lookup exists if (lookup.isSome) { // Get the lookup key const lookupKey = await api.query.scheduler.lookup.key(id); // Create a new lookup object with the updated block number const fastLookup = api.registry.createType('Option<(u32,u32)>', [ blockNumber + blockCounts, 0, ]); // Overwrite the lookup entry in storage const result = await api.rpc('dev_setStorage', [ [lookupKey, fastLookup.toHex()], ]); } } } } } // Throw an error if no matching scheduled call is found if (!found) { throw new Error('No scheduled call found'); } } // --8<-- [end:moveScheduledCallTo] // --8<-- [start:forceProposalExecution] /** * Forces the execution of a specific proposal by updating its referendum state and ensuring the execution process is triggered. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param proposalIndex - The index of the proposal to be executed. * @throws An error if the referendum is not found or not in an ongoing state. */ async function forceProposalExecution(api: ApiPromise, proposalIndex: number) { // Retrieve the referendum data for the given proposal index const referendumData = await api.query.referenda.referendumInfoFor( proposalIndex ); // Get the storage key for the referendum data const referendumKey = api.query.referenda.referendumInfoFor.key(proposalIndex); // Check if the referendum data exists if (!referendumData.isSome) { throw new Error(`Referendum ${proposalIndex} not found`); } const referendumInfo = referendumData.unwrap(); // Check if the referendum is in an ongoing state if (!referendumInfo.isOngoing) { throw new Error(`Referendum ${proposalIndex} is not ongoing`); } // Get the ongoing referendum data const ongoingData = referendumInfo.asOngoing; // Convert the ongoing data to JSON const ongoingJson = ongoingData.toJSON(); // Support Lookup, Inline or Legacy proposals const callHash = ongoingData.proposal.isLookup ? ongoingData.proposal.asLookup.toHex() : ongoingData.proposal.isInline ? 
blake2AsHex(ongoingData.proposal.asInline.toHex()) : ongoingData.proposal.asLegacy.toHex(); // Get the total issuance of the native token const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt(); // Get the current block number const proposalBlockTarget = ( await api.rpc.chain.getHeader() ).number.toNumber(); // Create a new proposal data object with the updated fields const fastProposalData = { ongoing: { ...ongoingJson, enactment: { after: 0 }, deciding: { since: proposalBlockTarget - 1, confirming: proposalBlockTarget - 1, }, tally: { ayes: totalIssuance - 1n, nays: 0, support: totalIssuance - 1n, }, alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]], }, }; // Create a new proposal object from the proposal data let fastProposal; try { fastProposal = api.registry.createType( `Option`, fastProposalData ); } catch { fastProposal = api.registry.createType( `Option`, fastProposalData ); } // Update the storage with the new proposal object const result = await api.rpc('dev_setStorage', [ [referendumKey, fastProposal.toHex()], ]); // Fast forward the nudge referendum to the next block to get the refendum to be scheduled await moveScheduledCallTo(api, 1, (call) => { if (!call.isInline) { return false; } const callData = api.createType('Call', call.asInline.toHex()); return ( callData.method == 'nudgeReferendum' && (callData.args[0] as any).toNumber() == proposalIndex ); }); // Create a new block await api.rpc('dev_newBlock', { count: 1 }); // Move the scheduled call to the next block await moveScheduledCallTo(api, 1, (call) => call.isLookup ? call.asLookup.toHex() == callHash : call.isInline ? blake2AsHex(call.asInline.toHex()) == callHash : call.asLegacy.toHex() == callHash ); // Create another new block await api.rpc('dev_newBlock', { count: 1 }); } // --8<-- [end:forceProposalExecution] // --8<-- [start:main] const main = async () => { // Connect to the forked chain const api = await connectToFork(); // Select the call to perform const call = api.tx.system.setCodeWithoutChecks('0x1234'); // Select the origin const origin = { System: 'Root', }; // Submit preimage, submit proposal, and place decision deposit const proposalIndex = await generateProposal(api, call, origin); // Force the proposal to be executed await forceProposalExecution(api, proposalIndex); process.exit(0); }; // --8<-- [end:main] // --8<-- [start:try-catch-block] try { main(); } catch (e) { console.log(e); process.exit(1); } // --8<-- [end:try-catch-block] // --8<-- [start:imports] import '@polkadot/api-augment/polkadot'; import { FrameSupportPreimagesBounded } from '@polkadot/types/lookup'; import { blake2AsHex } from '@polkadot/util-crypto'; import { ApiPromise, Keyring, WsProvider } from '@polkadot/api'; import { type SubmittableExtrinsic } from '@polkadot/api/types'; import { ISubmittableResult } from '@polkadot/types/types'; // --8<-- [end:imports] // --8<-- [start:connectToFork] /** * Establishes a connection to the local forked chain. * * @returns A promise that resolves to an `ApiPromise` instance connected to the local chain. */ async function connectToFork(): Promise { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; console.log(`Connected to chain: ${await api.rpc.system.chain()}`); return api; } // --8<-- [end:connectToFork] // --8<-- [start:generateProposal] /** * Generates a proposal by submitting a preimage, creating the proposal, and placing a deposit. 
* * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param call - The extrinsic to be executed, encapsulating the specific action to be proposed. * @param origin - The origin of the proposal, specifying the source authority (e.g., `{ System: 'Root' }`). * @returns A promise that resolves to the proposal ID of the generated proposal. * */ async function generateProposal( api: ApiPromise, call: SubmittableExtrinsic<'promise', ISubmittableResult>, origin: any ): Promise { // Initialize the keyring const keyring = new Keyring({ type: 'sr25519' }); // Set up Alice development account const alice = keyring.addFromUri('//Alice'); // Get the next available proposal index const proposalIndex = ( await api.query.referenda.referendumCount() ).toNumber(); // Execute the batch transaction await new Promise(async (resolve) => { const unsub = await api.tx.utility .batch([ // Register the preimage for your proposal api.tx.preimage.notePreimage(call.method.toHex()), // Submit your proposal to the referenda system api.tx.referenda.submit( origin as any, { Lookup: { Hash: call.method.hash.toHex(), len: call.method.encodedLength, }, }, { At: 0 } ), // Place the required decision deposit api.tx.referenda.placeDecisionDeposit(proposalIndex), ]) .signAndSend(alice, (status: any) => { if (status.blockNumber) { unsub(); resolve(); } }); }); return proposalIndex; } // --8<-- [end:generateProposal] // --8<-- [start:moveScheduledCallTo] /** * Moves a scheduled call to a specified future block if it matches the given verifier criteria. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param blockCounts - The number of blocks to move the scheduled call forward. * @param verifier - A function to verify if a scheduled call matches the desired criteria. * @throws An error if no matching scheduled call is found. 
*/ async function moveScheduledCallTo( api: ApiPromise, blockCounts: number, verifier: (call: FrameSupportPreimagesBounded) => boolean ) { // Get the current block number const blockNumber = (await api.rpc.chain.getHeader()).number.toNumber(); // Retrieve the scheduler's agenda entries const agenda = await api.query.scheduler.agenda.entries(); // Initialize a flag to track if a matching scheduled call is found let found = false; // Iterate through the scheduler's agenda entries for (const agendaEntry of agenda) { // Iterate through the scheduled entries in the current agenda entry for (const scheduledEntry of agendaEntry[1]) { // Check if the scheduled entry is valid and matches the verifier criteria if (scheduledEntry.isSome && verifier(scheduledEntry.unwrap().call)) { found = true; // Overwrite the agendaEntry item in storage const result = await api.rpc('dev_setStorage', [ [agendaEntry[0]], // require to ensure unique id [ await api.query.scheduler.agenda.key(blockNumber + blockCounts), agendaEntry[1].toHex(), ], ]); // Check if the scheduled call has an associated lookup if (scheduledEntry.unwrap().maybeId.isSome) { // Get the lookup ID const id = scheduledEntry.unwrap().maybeId.unwrap().toHex(); const lookup = await api.query.scheduler.lookup(id); // Check if the lookup exists if (lookup.isSome) { // Get the lookup key const lookupKey = await api.query.scheduler.lookup.key(id); // Create a new lookup object with the updated block number const fastLookup = api.registry.createType('Option<(u32,u32)>', [ blockNumber + blockCounts, 0, ]); // Overwrite the lookup entry in storage const result = await api.rpc('dev_setStorage', [ [lookupKey, fastLookup.toHex()], ]); } } } } } // Throw an error if no matching scheduled call is found if (!found) { throw new Error('No scheduled call found'); } } // --8<-- [end:moveScheduledCallTo] // --8<-- [start:forceProposalExecution] /** * Forces the execution of a specific proposal by updating its referendum state and ensuring the execution process is triggered. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param proposalIndex - The index of the proposal to be executed. * @throws An error if the referendum is not found or not in an ongoing state. */ async function forceProposalExecution(api: ApiPromise, proposalIndex: number) { // Retrieve the referendum data for the given proposal index const referendumData = await api.query.referenda.referendumInfoFor( proposalIndex ); // Get the storage key for the referendum data const referendumKey = api.query.referenda.referendumInfoFor.key(proposalIndex); // Check if the referendum data exists if (!referendumData.isSome) { throw new Error(`Referendum ${proposalIndex} not found`); } const referendumInfo = referendumData.unwrap(); // Check if the referendum is in an ongoing state if (!referendumInfo.isOngoing) { throw new Error(`Referendum ${proposalIndex} is not ongoing`); } // Get the ongoing referendum data const ongoingData = referendumInfo.asOngoing; // Convert the ongoing data to JSON const ongoingJson = ongoingData.toJSON(); // Support Lookup, Inline or Legacy proposals const callHash = ongoingData.proposal.isLookup ? ongoingData.proposal.asLookup.toHex() : ongoingData.proposal.isInline ? 
blake2AsHex(ongoingData.proposal.asInline.toHex()) : ongoingData.proposal.asLegacy.toHex(); // Get the total issuance of the native token const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt(); // Get the current block number const proposalBlockTarget = ( await api.rpc.chain.getHeader() ).number.toNumber(); // Create a new proposal data object with the updated fields const fastProposalData = { ongoing: { ...ongoingJson, enactment: { after: 0 }, deciding: { since: proposalBlockTarget - 1, confirming: proposalBlockTarget - 1, }, tally: { ayes: totalIssuance - 1n, nays: 0, support: totalIssuance - 1n, }, alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]], }, }; // Create a new proposal object from the proposal data let fastProposal; try { fastProposal = api.registry.createType( `Option`, fastProposalData ); } catch { fastProposal = api.registry.createType( `Option`, fastProposalData ); } // Update the storage with the new proposal object const result = await api.rpc('dev_setStorage', [ [referendumKey, fastProposal.toHex()], ]); // Fast forward the nudge referendum to the next block to get the refendum to be scheduled await moveScheduledCallTo(api, 1, (call) => { if (!call.isInline) { return false; } const callData = api.createType('Call', call.asInline.toHex()); return ( callData.method == 'nudgeReferendum' && (callData.args[0] as any).toNumber() == proposalIndex ); }); // Create a new block await api.rpc('dev_newBlock', { count: 1 }); // Move the scheduled call to the next block await moveScheduledCallTo(api, 1, (call) => call.isLookup ? call.asLookup.toHex() == callHash : call.isInline ? blake2AsHex(call.asInline.toHex()) == callHash : call.asLegacy.toHex() == callHash ); // Create another new block await api.rpc('dev_newBlock', { count: 1 }); } // --8<-- [end:forceProposalExecution] // --8<-- [start:main] const main = async () => { // Connect to the forked chain const api = await connectToFork(); // Select the call to perform const call = api.tx.system.setCodeWithoutChecks('0x1234'); // Select the origin const origin = { System: 'Root', }; // Submit preimage, submit proposal, and place decision deposit const proposalIndex = await generateProposal(api, call, origin); // Force the proposal to be executed await forceProposalExecution(api, proposalIndex); process.exit(0); }; // --8<-- [end:main] // --8<-- [start:try-catch-block] try { main(); } catch (e) { console.log(e); process.exit(1); } // --8<-- [end:try-catch-block] // --8<-- [start:imports] import '@polkadot/api-augment/polkadot'; import { FrameSupportPreimagesBounded } from '@polkadot/types/lookup'; import { blake2AsHex } from '@polkadot/util-crypto'; import { ApiPromise, Keyring, WsProvider } from '@polkadot/api'; import { type SubmittableExtrinsic } from '@polkadot/api/types'; import { ISubmittableResult } from '@polkadot/types/types'; // --8<-- [end:imports] // --8<-- [start:connectToFork] /** * Establishes a connection to the local forked chain. * * @returns A promise that resolves to an `ApiPromise` instance connected to the local chain. */ async function connectToFork(): Promise { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; console.log(`Connected to chain: ${await api.rpc.system.chain()}`); return api; } // --8<-- [end:connectToFork] // --8<-- [start:generateProposal] /** * Generates a proposal by submitting a preimage, creating the proposal, and placing a deposit. 
* * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param call - The extrinsic to be executed, encapsulating the specific action to be proposed. * @param origin - The origin of the proposal, specifying the source authority (e.g., `{ System: 'Root' }`). * @returns A promise that resolves to the proposal ID of the generated proposal. * */ async function generateProposal( api: ApiPromise, call: SubmittableExtrinsic<'promise', ISubmittableResult>, origin: any ): Promise { // Initialize the keyring const keyring = new Keyring({ type: 'sr25519' }); // Set up Alice development account const alice = keyring.addFromUri('//Alice'); // Get the next available proposal index const proposalIndex = ( await api.query.referenda.referendumCount() ).toNumber(); // Execute the batch transaction await new Promise(async (resolve) => { const unsub = await api.tx.utility .batch([ // Register the preimage for your proposal api.tx.preimage.notePreimage(call.method.toHex()), // Submit your proposal to the referenda system api.tx.referenda.submit( origin as any, { Lookup: { Hash: call.method.hash.toHex(), len: call.method.encodedLength, }, }, { At: 0 } ), // Place the required decision deposit api.tx.referenda.placeDecisionDeposit(proposalIndex), ]) .signAndSend(alice, (status: any) => { if (status.blockNumber) { unsub(); resolve(); } }); }); return proposalIndex; } // --8<-- [end:generateProposal] // --8<-- [start:moveScheduledCallTo] /** * Moves a scheduled call to a specified future block if it matches the given verifier criteria. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param blockCounts - The number of blocks to move the scheduled call forward. * @param verifier - A function to verify if a scheduled call matches the desired criteria. * @throws An error if no matching scheduled call is found. 
*/ async function moveScheduledCallTo( api: ApiPromise, blockCounts: number, verifier: (call: FrameSupportPreimagesBounded) => boolean ) { // Get the current block number const blockNumber = (await api.rpc.chain.getHeader()).number.toNumber(); // Retrieve the scheduler's agenda entries const agenda = await api.query.scheduler.agenda.entries(); // Initialize a flag to track if a matching scheduled call is found let found = false; // Iterate through the scheduler's agenda entries for (const agendaEntry of agenda) { // Iterate through the scheduled entries in the current agenda entry for (const scheduledEntry of agendaEntry[1]) { // Check if the scheduled entry is valid and matches the verifier criteria if (scheduledEntry.isSome && verifier(scheduledEntry.unwrap().call)) { found = true; // Overwrite the agendaEntry item in storage const result = await api.rpc('dev_setStorage', [ [agendaEntry[0]], // require to ensure unique id [ await api.query.scheduler.agenda.key(blockNumber + blockCounts), agendaEntry[1].toHex(), ], ]); // Check if the scheduled call has an associated lookup if (scheduledEntry.unwrap().maybeId.isSome) { // Get the lookup ID const id = scheduledEntry.unwrap().maybeId.unwrap().toHex(); const lookup = await api.query.scheduler.lookup(id); // Check if the lookup exists if (lookup.isSome) { // Get the lookup key const lookupKey = await api.query.scheduler.lookup.key(id); // Create a new lookup object with the updated block number const fastLookup = api.registry.createType('Option<(u32,u32)>', [ blockNumber + blockCounts, 0, ]); // Overwrite the lookup entry in storage const result = await api.rpc('dev_setStorage', [ [lookupKey, fastLookup.toHex()], ]); } } } } } // Throw an error if no matching scheduled call is found if (!found) { throw new Error('No scheduled call found'); } } // --8<-- [end:moveScheduledCallTo] // --8<-- [start:forceProposalExecution] /** * Forces the execution of a specific proposal by updating its referendum state and ensuring the execution process is triggered. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param proposalIndex - The index of the proposal to be executed. * @throws An error if the referendum is not found or not in an ongoing state. */ async function forceProposalExecution(api: ApiPromise, proposalIndex: number) { // Retrieve the referendum data for the given proposal index const referendumData = await api.query.referenda.referendumInfoFor( proposalIndex ); // Get the storage key for the referendum data const referendumKey = api.query.referenda.referendumInfoFor.key(proposalIndex); // Check if the referendum data exists if (!referendumData.isSome) { throw new Error(`Referendum ${proposalIndex} not found`); } const referendumInfo = referendumData.unwrap(); // Check if the referendum is in an ongoing state if (!referendumInfo.isOngoing) { throw new Error(`Referendum ${proposalIndex} is not ongoing`); } // Get the ongoing referendum data const ongoingData = referendumInfo.asOngoing; // Convert the ongoing data to JSON const ongoingJson = ongoingData.toJSON(); // Support Lookup, Inline or Legacy proposals const callHash = ongoingData.proposal.isLookup ? ongoingData.proposal.asLookup.toHex() : ongoingData.proposal.isInline ? 
blake2AsHex(ongoingData.proposal.asInline.toHex()) : ongoingData.proposal.asLegacy.toHex(); // Get the total issuance of the native token const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt(); // Get the current block number const proposalBlockTarget = ( await api.rpc.chain.getHeader() ).number.toNumber(); // Create a new proposal data object with the updated fields const fastProposalData = { ongoing: { ...ongoingJson, enactment: { after: 0 }, deciding: { since: proposalBlockTarget - 1, confirming: proposalBlockTarget - 1, }, tally: { ayes: totalIssuance - 1n, nays: 0, support: totalIssuance - 1n, }, alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]], }, }; // Create a new proposal object from the proposal data let fastProposal; try { fastProposal = api.registry.createType( `Option`, fastProposalData ); } catch { fastProposal = api.registry.createType( `Option`, fastProposalData ); } // Update the storage with the new proposal object const result = await api.rpc('dev_setStorage', [ [referendumKey, fastProposal.toHex()], ]); // Fast forward the nudge referendum to the next block to get the refendum to be scheduled await moveScheduledCallTo(api, 1, (call) => { if (!call.isInline) { return false; } const callData = api.createType('Call', call.asInline.toHex()); return ( callData.method == 'nudgeReferendum' && (callData.args[0] as any).toNumber() == proposalIndex ); }); // Create a new block await api.rpc('dev_newBlock', { count: 1 }); // Move the scheduled call to the next block await moveScheduledCallTo(api, 1, (call) => call.isLookup ? call.asLookup.toHex() == callHash : call.isInline ? blake2AsHex(call.asInline.toHex()) == callHash : call.asLegacy.toHex() == callHash ); // Create another new block await api.rpc('dev_newBlock', { count: 1 }); } // --8<-- [end:forceProposalExecution] // --8<-- [start:main] const main = async () => { // Connect to the forked chain const api = await connectToFork(); // Select the call to perform const call = api.tx.system.setCodeWithoutChecks('0x1234'); // Select the origin const origin = { System: 'Root', }; // Submit preimage, submit proposal, and place decision deposit const proposalIndex = await generateProposal(api, call, origin); // Force the proposal to be executed await forceProposalExecution(api, proposalIndex); process.exit(0); }; // --8<-- [end:main] // --8<-- [start:try-catch-block] try { main(); } catch (e) { console.log(e); process.exit(1); } // --8<-- [end:try-catch-block] // --8<-- [start:imports] import '@polkadot/api-augment/polkadot'; import { FrameSupportPreimagesBounded } from '@polkadot/types/lookup'; import { blake2AsHex } from '@polkadot/util-crypto'; import { ApiPromise, Keyring, WsProvider } from '@polkadot/api'; import { type SubmittableExtrinsic } from '@polkadot/api/types'; import { ISubmittableResult } from '@polkadot/types/types'; // --8<-- [end:imports] // --8<-- [start:connectToFork] /** * Establishes a connection to the local forked chain. * * @returns A promise that resolves to an `ApiPromise` instance connected to the local chain. */ async function connectToFork(): Promise { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; console.log(`Connected to chain: ${await api.rpc.system.chain()}`); return api; } // --8<-- [end:connectToFork] // --8<-- [start:generateProposal] /** * Generates a proposal by submitting a preimage, creating the proposal, and placing a deposit. 
* * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param call - The extrinsic to be executed, encapsulating the specific action to be proposed. * @param origin - The origin of the proposal, specifying the source authority (e.g., `{ System: 'Root' }`). * @returns A promise that resolves to the proposal ID of the generated proposal. * */ async function generateProposal( api: ApiPromise, call: SubmittableExtrinsic<'promise', ISubmittableResult>, origin: any ): Promise { // Initialize the keyring const keyring = new Keyring({ type: 'sr25519' }); // Set up Alice development account const alice = keyring.addFromUri('//Alice'); // Get the next available proposal index const proposalIndex = ( await api.query.referenda.referendumCount() ).toNumber(); // Execute the batch transaction await new Promise(async (resolve) => { const unsub = await api.tx.utility .batch([ // Register the preimage for your proposal api.tx.preimage.notePreimage(call.method.toHex()), // Submit your proposal to the referenda system api.tx.referenda.submit( origin as any, { Lookup: { Hash: call.method.hash.toHex(), len: call.method.encodedLength, }, }, { At: 0 } ), // Place the required decision deposit api.tx.referenda.placeDecisionDeposit(proposalIndex), ]) .signAndSend(alice, (status: any) => { if (status.blockNumber) { unsub(); resolve(); } }); }); return proposalIndex; } // --8<-- [end:generateProposal] // --8<-- [start:moveScheduledCallTo] /** * Moves a scheduled call to a specified future block if it matches the given verifier criteria. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param blockCounts - The number of blocks to move the scheduled call forward. * @param verifier - A function to verify if a scheduled call matches the desired criteria. * @throws An error if no matching scheduled call is found. 
*/ async function moveScheduledCallTo( api: ApiPromise, blockCounts: number, verifier: (call: FrameSupportPreimagesBounded) => boolean ) { // Get the current block number const blockNumber = (await api.rpc.chain.getHeader()).number.toNumber(); // Retrieve the scheduler's agenda entries const agenda = await api.query.scheduler.agenda.entries(); // Initialize a flag to track if a matching scheduled call is found let found = false; // Iterate through the scheduler's agenda entries for (const agendaEntry of agenda) { // Iterate through the scheduled entries in the current agenda entry for (const scheduledEntry of agendaEntry[1]) { // Check if the scheduled entry is valid and matches the verifier criteria if (scheduledEntry.isSome && verifier(scheduledEntry.unwrap().call)) { found = true; // Overwrite the agendaEntry item in storage const result = await api.rpc('dev_setStorage', [ [agendaEntry[0]], // require to ensure unique id [ await api.query.scheduler.agenda.key(blockNumber + blockCounts), agendaEntry[1].toHex(), ], ]); // Check if the scheduled call has an associated lookup if (scheduledEntry.unwrap().maybeId.isSome) { // Get the lookup ID const id = scheduledEntry.unwrap().maybeId.unwrap().toHex(); const lookup = await api.query.scheduler.lookup(id); // Check if the lookup exists if (lookup.isSome) { // Get the lookup key const lookupKey = await api.query.scheduler.lookup.key(id); // Create a new lookup object with the updated block number const fastLookup = api.registry.createType('Option<(u32,u32)>', [ blockNumber + blockCounts, 0, ]); // Overwrite the lookup entry in storage const result = await api.rpc('dev_setStorage', [ [lookupKey, fastLookup.toHex()], ]); } } } } } // Throw an error if no matching scheduled call is found if (!found) { throw new Error('No scheduled call found'); } } // --8<-- [end:moveScheduledCallTo] // --8<-- [start:forceProposalExecution] /** * Forces the execution of a specific proposal by updating its referendum state and ensuring the execution process is triggered. * * @param api - An instance of the Polkadot.js API promise used to interact with the blockchain. * @param proposalIndex - The index of the proposal to be executed. * @throws An error if the referendum is not found or not in an ongoing state. */ async function forceProposalExecution(api: ApiPromise, proposalIndex: number) { // Retrieve the referendum data for the given proposal index const referendumData = await api.query.referenda.referendumInfoFor( proposalIndex ); // Get the storage key for the referendum data const referendumKey = api.query.referenda.referendumInfoFor.key(proposalIndex); // Check if the referendum data exists if (!referendumData.isSome) { throw new Error(`Referendum ${proposalIndex} not found`); } const referendumInfo = referendumData.unwrap(); // Check if the referendum is in an ongoing state if (!referendumInfo.isOngoing) { throw new Error(`Referendum ${proposalIndex} is not ongoing`); } // Get the ongoing referendum data const ongoingData = referendumInfo.asOngoing; // Convert the ongoing data to JSON const ongoingJson = ongoingData.toJSON(); // Support Lookup, Inline or Legacy proposals const callHash = ongoingData.proposal.isLookup ? ongoingData.proposal.asLookup.toHex() : ongoingData.proposal.isInline ? 
      ? blake2AsHex(ongoingData.proposal.asInline.toHex())
      : ongoingData.proposal.asLegacy.toHex();

  // Get the total issuance of the native token
  const totalIssuance = (await api.query.balances.totalIssuance()).toBigInt();

  // Get the current block number
  const proposalBlockTarget = (
    await api.rpc.chain.getHeader()
  ).number.toNumber();

  // Create a new proposal data object with the updated fields
  const fastProposalData = {
    ongoing: {
      ...ongoingJson,
      enactment: { after: 0 },
      deciding: {
        since: proposalBlockTarget - 1,
        confirming: proposalBlockTarget - 1,
      },
      tally: {
        ayes: totalIssuance - 1n,
        nays: 0,
        support: totalIssuance - 1n,
      },
      alarm: [proposalBlockTarget + 1, [proposalBlockTarget + 1, 0]],
    },
  };

  // Create a new proposal object from the proposal data
  let fastProposal;
  try {
    fastProposal = api.registry.createType(
      `Option<PalletReferendaReferendumInfoConvictionVotingTally>`,
      fastProposalData
    );
  } catch {
    fastProposal = api.registry.createType(
      `Option<PalletReferendaReferendumInfoRankedCollectiveTally>`,
      fastProposalData
    );
  }

  // Update the storage with the new proposal object
  const result = await api.rpc('dev_setStorage', [
    [referendumKey, fastProposal.toHex()],
  ]);

  // Fast forward the nudge referendum to the next block to get the referendum scheduled
  await moveScheduledCallTo(api, 1, (call) => {
    if (!call.isInline) {
      return false;
    }
    const callData = api.createType('Call', call.asInline.toHex());
    return (
      callData.method == 'nudgeReferendum' &&
      (callData.args[0] as any).toNumber() == proposalIndex
    );
  });

  // Create a new block
  await api.rpc('dev_newBlock', { count: 1 });

  // Move the scheduled call to the next block
  await moveScheduledCallTo(api, 1, (call) =>
    call.isLookup
      ? call.asLookup.toHex() == callHash
      : call.isInline
        ? blake2AsHex(call.asInline.toHex()) == callHash
        : call.asLegacy.toHex() == callHash
  );

  // Create another new block
  await api.rpc('dev_newBlock', { count: 1 });
}
// --8<-- [end:forceProposalExecution]

// --8<-- [start:main]
const main = async () => {
  // Connect to the forked chain
  const api = await connectToFork();

  // Select the call to perform
  const call = api.tx.system.setCodeWithoutChecks('0x1234');

  // Select the origin
  const origin = {
    System: 'Root',
  };

  // Submit preimage, submit proposal, and place decision deposit
  const proposalIndex = await generateProposal(api, call, origin);

  // Force the proposal to be executed
  await forceProposalExecution(api, proposalIndex);

  process.exit(0);
};
// --8<-- [end:main]

// --8<-- [start:try-catch-block]
main().catch((e) => {
  console.log(e);
  process.exit(1);
});
// --8<-- [end:try-catch-block]
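// The `dev_setStorage` and `dev_newBlock` RPC methods used above are not part
// of a production node: they are provided by Chopsticks, the tool used to run
// the local fork this script connects to. A typical way to start such a fork
// (an assumed invocation; point the endpoint at the chain you are testing):
//
//   npx @acala-network/chopsticks --endpoint wss://rpc.polkadot.io
//
// By default, Chopsticks serves the forked chain on ws://localhost:8000,
// which matches the endpoint used in `connectToFork` above.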
```
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/onchain-governance/
--- BEGIN CONTENT ---
---
title: On-Chain Governance Tutorials
description: Learn how to utilize Polkadot OpenGov with step-by-step tutorials on on-chain governance, including proposals, referenda, delegation, and voting processes.
template: index-page.html
---

# On-Chain Governance Tutorials

On-chain governance enables decentralized networks to grow and adapt through collective decision-making. For developers, understanding and implementing governance features is crucial for contributing to network improvements and supporting user interactions.

This section provides step-by-step tutorials to help you navigate the technical aspects of on-chain governance.
## In This Section

:::INSERT_IN_THIS_SECTION:::

## Additional Resources

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/
--- BEGIN CONTENT ---
---
title: Polkadot SDK Tutorials
description: Explore detailed, step-by-step tutorials designed to help you gain hands-on experience building custom solutions with the Polkadot SDK.
template: index-page.html
---

# Polkadot SDK Tutorials

The Polkadot SDK is a versatile framework for building custom blockchains, whether as standalone networks or as part of the Polkadot ecosystem. With its modular design and extensible tools, libraries, and runtime components, the SDK simplifies the process of creating parachains, system chains, and solochains.

Ready to create a parachain from the ground up? Start with the tutorials highlighted in the following section.

## Build and Deploy a Parachain

Follow these key milestones to guide you through parachain development. Each step links to detailed tutorials for a deeper dive into each stage:

- [**Install the Polkadot SDK**](/develop/parachains/install-polkadot-sdk/) - set up the necessary tools to begin building on Polkadot. This step will get your environment ready for parachain development
- [**Parachains Zero to Hero**](/tutorials/polkadot-sdk/parachains/zero-to-hero/) - a series of step-by-step guides to building, testing, and deploying custom pallets and runtimes using the Polkadot SDK

## In This Section

:::INSERT_IN_THIS_SECTION:::

## Additional Resources

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/
--- BEGIN CONTENT ---
---
title: Parachain Tutorials
description: This collection of tutorials will guide you step by step, from setting up your first local chain to deploying and customizing a fully operational parachain.
template: index-page.html
---

# Tutorials for Building Parachains with the Polkadot SDK

The Polkadot SDK enables you to build custom blockchains that can operate as part of the Polkadot network. These tutorials guide you through the essential steps of developing, testing, and deploying your own parachain.

## Parachain Zero To Hero Tutorials

Dive deep into parachain development with this comprehensive tutorial series designed to take you from a beginner to a proficient parachain developer.

## Key Takeaways

Through these tutorials, you'll gain practical experience with:

- Setting up blockchain development environments
- Creating custom runtime logic
- Implementing and testing pallets
- Deploying parachains to test networks
- Understanding Polkadot ecosystem concepts

Each tutorial builds upon previous concepts while providing flexibility to focus on your specific development goals.

## In This Section

:::INSERT_IN_THIS_SECTION:::

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/zero-to-hero/add-pallets-to-runtime/
--- BEGIN CONTENT ---
---
title: Add Pallets to the Runtime
description: Add pallets to your runtime for custom functionality. Learn to configure and integrate pallets in Polkadot SDK-based blockchains.
tutorial_badge: Beginner
categories: Basics, Parachains
---

# Add Pallets to the Runtime

## Introduction

In previous tutorials, you learned how to [create a custom pallet](/tutorials/polkadot-sdk/parachains/zero-to-hero/build-custom-pallet/){target=\_blank} and [test it](/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-unit-testing/){target=\_blank}. The next step is to include this pallet in your runtime, integrating it into the core logic of your blockchain.
This tutorial will guide you through adding two pallets to your runtime: the custom pallet you previously developed and the [utility pallet](https://paritytech.github.io/polkadot-sdk/master/pallet_utility/index.html){target=\_blank}. This standard Polkadot SDK pallet provides powerful dispatch functionality. The utility pallet offers, for example, batch dispatch, a stateless operation that enables executing multiple calls in a single transaction (a script-based sketch of batch dispatch appears at the end of this tutorial).

## Add the Pallets as Dependencies

First, update the runtime's `Cargo.toml` file to include the utility pallet and your custom pallet as dependencies. Follow these steps:

1. Open the `runtime/Cargo.toml` file and locate the `[dependencies]` section. Add the pallets with the following lines:

    ```toml hl_lines="3-4" title="Cargo.toml"
    [dependencies]
    ...
    pallet-utility = { version = "39.0.0", default-features = false }
    custom-pallet = { path = "../pallets/custom-pallet", default-features = false }
    ```

2. In the `[features]` section, add the pallets to the `std` feature list:

    ```toml hl_lines="5-6" title="Cargo.toml"
    [features]
    default = ["std"]
    std = [
        ...
        "pallet-utility/std",
        "custom-pallet/std",
    ]
    ```

3. Save the changes and close the `Cargo.toml` file

### Update the Runtime Configuration

Configure the pallets by implementing their `Config` trait and update the runtime macro to include the new pallets:

1. Add the `OriginCaller` import:

    ```rust title="mod.rs" hl_lines="2"
    // Local module imports
    use super::OriginCaller;
    ...
    ```

2. Implement the [`Config`](https://paritytech.github.io/polkadot-sdk/master/pallet_utility/pallet/trait.Config.html){target=\_blank} trait for both pallets at the end of the `runtime/src/config/mod.rs` file:

    ```rust title="mod.rs" hl_lines="7-25"
    ...
    impl pallet_parachain_template::Config for Runtime {
        type RuntimeEvent = RuntimeEvent;
        type WeightInfo = pallet_parachain_template::weights::SubstrateWeight<Runtime>;
    }

    // Configure utility pallet.
    impl pallet_utility::Config for Runtime {
        type RuntimeEvent = RuntimeEvent;
        type RuntimeCall = RuntimeCall;
        type PalletsOrigin = OriginCaller;
        type WeightInfo = pallet_utility::weights::SubstrateWeight<Runtime>;
    }

    // Define counter max value runtime constant.
    parameter_types! {
        pub const CounterMaxValue: u32 = 500;
    }

    // Configure custom pallet.
    impl custom_pallet::Config for Runtime {
        type RuntimeEvent = RuntimeEvent;
        type CounterMaxValue = CounterMaxValue;
    }
    ```

3. Locate the `#[frame_support::runtime]` macro in the `runtime/src/lib.rs` file and add the pallets:

    ```rust hl_lines="8-12" title="lib.rs"
    mod runtime {
        #[runtime::runtime]
        #[runtime::derive(
            ...
        )]
        pub struct Runtime;

        #[runtime::pallet_index(51)]
        pub type Utility = pallet_utility;

        #[runtime::pallet_index(52)]
        pub type CustomPallet = custom_pallet;
    }
    ```

## Recompile the Runtime

After adding and configuring your pallets in the runtime, the next step is to ensure everything is set up correctly. To do this, recompile the runtime with the following command (make sure you're in the project's root directory):

```bash
cargo build --release
```

This command ensures the runtime compiles without errors, validates the pallet configurations, and prepares the build for subsequent testing or deployment.

## Run Your Chain Locally

Launch your parachain locally and start producing blocks:

!!!tip
    Generated chain TestNet specifications include development accounts "Alice" and "Bob."
    These accounts are pre-funded with native parachain currency, allowing you to sign and send TestNet transactions. Take a look at the [Polkadot.js Accounts section](https://polkadot.js.org/apps/#/accounts){target=\_blank} to view the development accounts for your chain.

1. Create a new chain specification file with the updated runtime:

    ```bash
    chain-spec-builder create -t development \
      --relay-chain paseo \
      --para-id 1000 \
      --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \
      named-preset development
    ```

2. Start the omni node with the generated chain specification:

    ```bash
    polkadot-omni-node --chain ./chain_spec.json --dev
    ```

3. Verify you can interact with the new pallets using the [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/extrinsics){target=\_blank} interface. Navigate to the **Extrinsics** tab and check that you can see both pallets:

    - Utility pallet

    ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/add-pallets-to-runtime/add-pallets-to-runtime-1.webp)

    - Custom pallet

    ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/add-pallets-to-runtime/add-pallets-to-runtime-2.webp)

## Where to Go Next
- Tutorial __Deploy on Paseo TestNet__ --- Deploy your Polkadot SDK blockchain on Paseo! Follow this step-by-step guide for a seamless journey to a successful TestNet deployment. [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/)
- Tutorial __Pallet Benchmarking (Optional)__ --- Discover how to measure extrinsic costs and assign precise weights to optimize your pallet for accurate fees and runtime performance. [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-benchmarking/)
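Before moving on, you can also exercise the utility pallet's batch dispatch from a script rather than the Polkadot.js Apps UI. The following is a minimal sketch, assuming the local node started above is listening on `ws://127.0.0.1:9944` and that the custom pallet is the counter pallet from this series (so it exposes an `increment` call):

```typescript
import { ApiPromise, WsProvider, Keyring } from '@polkadot/api';

async function batchIncrement() {
  // Connect to the local development node
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:9944'),
  });
  const alice = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // utility.batch wraps several calls into a single extrinsic
  const tx = api.tx.utility.batch([
    api.tx.customPallet.increment(10),
    api.tx.customPallet.increment(5),
  ]);

  const unsub = await tx.signAndSend(alice, ({ status }) => {
    if (status.isInBlock) {
      console.log(`Batch included in block ${status.asInBlock.toHex()}`);
      unsub();
    }
  });
}

batchIncrement().catch(console.error);
```

If both inner calls succeed, the counter ends up at 15 and two `CounterIncremented` events are emitted in the same block.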
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/zero-to-hero/build-custom-pallet/
--- BEGIN CONTENT ---
---
title: Build a Custom Pallet
description: Learn how to build a custom pallet for Polkadot SDK-based blockchains with this step-by-step guide. Create and configure a simple counter pallet from scratch.
tutorial_badge: Beginner
categories: Basics, Parachains
---

# Build a Custom Pallet

## Introduction

In Polkadot SDK-based blockchains, runtime functionality is built through modular components called [pallets](/polkadot-protocol/glossary#pallet){target=\_blank}. These pallets are Rust-based runtime modules created using [FRAME (Framework for Runtime Aggregation of Modular Entities)](/develop/parachains/customize-parachain/overview/){target=\_blank}, a powerful library that simplifies blockchain development by providing specialized macros and standardized patterns for building blockchain logic.

A pallet encapsulates a specific set of blockchain functionalities, such as managing token balances, implementing governance mechanisms, or creating custom state transitions.

In this tutorial, you'll learn how to create a custom pallet from scratch. You will develop a simple counter pallet with the following features:

- Users can increment and decrement a counter
- Only a [root origin](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/type.Origin.html#variant.Root){target=\_blank} can set an arbitrary counter value

## Prerequisites

You'll use the [Polkadot SDK Parachain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/parachain){target=\_blank} created in the [Set Up a Template](/tutorials/polkadot-sdk/parachains/zero-to-hero/set-up-a-template/){target=\_blank} tutorial.

## Create a New Project

In this tutorial, you'll build a custom pallet from scratch to demonstrate the complete workflow, rather than starting with the pre-built `pallet-template`. The first step is to create a new Rust package for your pallet:

1. Navigate to the `pallets` directory in your workspace:

    ```bash
    cd pallets
    ```

2. Create a new Rust library project for your custom pallet by running the following command:

    ```bash
    cargo new --lib custom-pallet
    ```

3. Enter the new project directory:

    ```bash
    cd custom-pallet
    ```

4. Ensure the project was created successfully by checking its structure. The file layout should resemble the following:

    ```
    custom-pallet
    ├── Cargo.toml
    └── src
        └── lib.rs
    ```

If the files are in place, your project setup is complete, and you're ready to start building your custom pallet.

## Add Dependencies

To build and integrate your custom pallet into a Polkadot SDK-based runtime, you must add specific dependencies to the `Cargo.toml` file of your pallet's project. These dependencies provide essential modules and features required for pallet development. Since your custom pallet is part of a workspace that includes other components, such as the runtime, the configuration must align with the workspace structure. Follow the steps below to set up your `Cargo.toml` file properly:

1. Open your `Cargo.toml` file

2. Add the required dependencies in the `[dependencies]` section:

    ```toml
    [dependencies]
    codec = { features = ["derive"], workspace = true }
    scale-info = { features = ["derive"], workspace = true }
    frame = { features = ["experimental", "runtime"], workspace = true }
    ```

3. Enable `std` features:

    ```toml
    [features]
    default = ["std"]
    std = ["codec/std", "frame/std", "scale-info/std"]
    ```

The final `Cargo.toml` file should resemble the following:

??? code "Cargo.toml"

    ```toml
    [package]
    name = "custom-pallet"
    version = "0.1.0"
    license.workspace = true
    authors.workspace = true
    homepage.workspace = true
    repository.workspace = true
    edition.workspace = true

    [dependencies]
    codec = { features = ["derive"], workspace = true }
    scale-info = { features = ["derive"], workspace = true }
    frame = { features = ["experimental", "runtime"], workspace = true }

    [features]
    default = ["std"]
    std = ["codec/std", "frame/std", "scale-info/std"]
    runtime-benchmarks = ["frame/runtime-benchmarks"]
    ```

## Implement the Pallet Logic

In this section, you will construct the core structure of your custom pallet, starting with setting up its basic scaffold. This scaffold acts as the foundation, enabling you to later add functionality such as storage items, events, errors, and dispatchable calls.

### Add Scaffold Pallet Structure

You now have the bare minimum of package dependencies that your pallet requires specified in the `Cargo.toml` file. The next step is to prepare the scaffolding for your new pallet.

1. Open `src/lib.rs` in a text editor and delete all the content

2. Prepare the scaffolding for the pallet by adding the following:

    ```rust title="lib.rs"
    #![cfg_attr(not(feature = "std"), no_std)]

    pub use pallet::*;

    #[frame::pallet]
    pub mod pallet {
        use super::*;
        use frame::prelude::*;

        #[pallet::pallet]
        pub struct Pallet<T>(_);

        // Configuration trait for the pallet.
        #[pallet::config]
        pub trait Config: frame_system::Config {
            // Defines the event type for the pallet.
        }
    }
    ```

3. Verify that it compiles by running the following command:

    ```bash
    cargo build --package custom-pallet
    ```

### Pallet Configuration

Implementing the `#[pallet::config]` macro is mandatory and sets the module's dependency on other modules and the types and values specified by the runtime-specific settings. In this step, you will configure two essential components that are critical for the pallet's functionality:

- **`RuntimeEvent`** - since this pallet emits events, the [`RuntimeEvent`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/trait.Config.html#associatedtype.RuntimeEvent){target=\_blank} type is required to handle them. This ensures that events generated by the pallet can be correctly processed and interpreted by the runtime
- **`CounterMaxValue`** - a constant that sets an upper limit on the value of the counter, ensuring that the counter remains within a predefined range

Add the following `Config` trait definition to your pallet:

```rust title="lib.rs"
#[pallet::config]
pub trait Config: frame_system::Config {
    // Defines the event type for the pallet.
    type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;

    // Defines the maximum value the counter can hold.
    #[pallet::constant]
    type CounterMaxValue: Get<u32>;
}
```

### Add Events

Events allow the pallet to communicate with the outside world by emitting signals when specific actions occur. These events are critical for transparency, debugging, and integration with external systems such as UIs or monitoring tools.

Below are the events defined for this pallet:

- **`CounterValueSet`** - is emitted when the counter is explicitly set to a new value. This event includes the counter's updated value
- **`CounterIncremented`** - is emitted after a successful increment operation.
It includes:
    - The new counter value
    - The account responsible for the increment
    - The amount by which the counter was incremented
- **`CounterDecremented`** - is emitted after a successful decrement operation. It includes:
    - The new counter value
    - The account responsible for the decrement
    - The amount by which the counter was decremented

Define the events in the pallet as follows:

```rust title="lib.rs"
#[pallet::event]
#[pallet::generate_deposit(pub(super) fn deposit_event)]
pub enum Event<T: Config> {
    /// The counter value has been set to a new value by Root.
    CounterValueSet {
        /// The new value set.
        counter_value: u32,
    },
    /// A user has successfully incremented the counter.
    CounterIncremented {
        /// The new value set.
        counter_value: u32,
        /// The account who incremented the counter.
        who: T::AccountId,
        /// The amount by which the counter was incremented.
        incremented_amount: u32,
    },
    /// A user has successfully decremented the counter.
    CounterDecremented {
        /// The new value set.
        counter_value: u32,
        /// The account who decremented the counter.
        who: T::AccountId,
        /// The amount by which the counter was decremented.
        decremented_amount: u32,
    },
}
```

### Add Storage Items

Storage items are used to manage the pallet's state. This pallet defines two items to handle the counter's state and user interactions:

- **`CounterValue`** - a single storage value that keeps track of the current value of the counter. This value is the core state variable manipulated by the pallet's functions
- **`UserInteractions`** - a storage map that tracks the number of times each account interacts with the counter

Define the storage items as follows:

```rust title="lib.rs"
#[pallet::storage]
pub type CounterValue<T> = StorageValue<_, u32>;

/// Storage map to track the number of interactions performed by each account.
#[pallet::storage]
pub type UserInteractions<T: Config> = StorageMap<_, Twox64Concat, T::AccountId, u32>;
```

### Implement Custom Errors

The `#[pallet::error]` macro defines a custom `Error` enum to handle specific failure conditions within the pallet. Errors help provide meaningful feedback to users and external systems when an extrinsic cannot be completed successfully. They are critical for maintaining the pallet's clarity and robustness.

To add custom errors, use the `#[pallet::error]` macro to define the `Error` enum. Each variant represents a unique error that the pallet can emit, and these errors should align with the logic and constraints of the pallet.

Add the following errors to the pallet:

```rust title="lib.rs"
#[pallet::error]
pub enum Error<T> {
    /// The counter value exceeds the maximum allowed value.
    CounterValueExceedsMax,
    /// The counter value cannot be decremented below zero.
    CounterValueBelowZero,
    /// Overflow occurred in the counter.
    CounterOverflow,
    /// Overflow occurred in user interactions.
    UserInteractionOverflow,
}
```

### Implement Calls

The `#[pallet::call]` macro defines the dispatchable functions (or calls) the pallet exposes. These functions allow users or the runtime to interact with the pallet's logic and state. Each call includes comprehensive validations, modifies the state, and optionally emits events to signal successful execution.

The structure of the dispatchable calls in this pallet is as follows:

```rust title="lib.rs"
#[pallet::call]
impl<T: Config> Pallet<T> {
    /// Set the value of the counter.
    ///
    /// The dispatch origin of this call must be _Root_.
    ///
    /// - `new_value`: The new value to set for the counter.
    ///
    /// Emits `CounterValueSet` event when successful.
    #[pallet::call_index(0)]
    #[pallet::weight(0)]
    pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {
    }

    /// Increment the counter by a specified amount.
    ///
    /// This function can be called by any signed account.
    ///
    /// - `amount_to_increment`: The amount by which to increment the counter.
    ///
    /// Emits `CounterIncremented` event when successful.
    #[pallet::call_index(1)]
    #[pallet::weight(0)]
    pub fn increment(origin: OriginFor<T>, amount_to_increment: u32) -> DispatchResult {
    }

    /// Decrement the counter by a specified amount.
    ///
    /// This function can be called by any signed account.
    ///
    /// - `amount_to_decrement`: The amount by which to decrement the counter.
    ///
    /// Emits `CounterDecremented` event when successful.
    #[pallet::call_index(2)]
    #[pallet::weight(0)]
    pub fn decrement(origin: OriginFor<T>, amount_to_decrement: u32) -> DispatchResult {
    }
}
```

Expand the following items to view the implementations of each dispatchable call in this pallet.

???- code "set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult"

    This call sets the counter to a specific value. It is restricted to the Root origin, meaning it can only be invoked by privileged users or entities.

    - **Parameters**:
        - `new_value` - the value to set the counter to
    - **Validations**:
        - The new value must not exceed the maximum allowed counter value (`CounterMaxValue`)
    - **Behavior**:
        - Updates the `CounterValue` storage item
        - Emits a `CounterValueSet` event on success

    ```rust title="lib.rs"
    /// Set the value of the counter.
    ///
    /// The dispatch origin of this call must be _Root_.
    ///
    /// - `new_value`: The new value to set for the counter.
    ///
    /// Emits `CounterValueSet` event when successful.
    #[pallet::call_index(0)]
    #[pallet::weight(0)]
    pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {
        ensure_root(origin)?;

        ensure!(
            new_value <= T::CounterMaxValue::get(),
            Error::<T>::CounterValueExceedsMax
        );

        CounterValue::<T>::put(new_value);

        Self::deposit_event(Event::<T>::CounterValueSet {
            counter_value: new_value,
        });

        Ok(())
    }
    ```

???- code "increment(origin: OriginFor<T>, amount_to_increment: u32) -> DispatchResult"

    This call increments the counter by a specified amount. It is accessible to any signed account.

    - **Parameters**:
        - `amount_to_increment` - the amount to add to the counter
    - **Validations**:
        - Prevents overflow during the addition
        - Ensures the resulting counter value does not exceed `CounterMaxValue`
    - **Behavior**:
        - Updates the `CounterValue` storage item
        - Tracks the number of interactions by the user in the `UserInteractions` storage map
        - Emits a `CounterIncremented` event on success

    ```rust title="lib.rs"
    /// Increment the counter by a specified amount.
    ///
    /// This function can be called by any signed account.
    ///
    /// - `amount_to_increment`: The amount by which to increment the counter.
    ///
    /// Emits `CounterIncremented` event when successful.
    #[pallet::call_index(1)]
    #[pallet::weight(0)]
    pub fn increment(origin: OriginFor<T>, amount_to_increment: u32) -> DispatchResult {
        let who = ensure_signed(origin)?;

        let current_value = CounterValue::<T>::get().unwrap_or(0);

        let new_value = current_value
            .checked_add(amount_to_increment)
            .ok_or(Error::<T>::CounterOverflow)?;

        ensure!(
            new_value <= T::CounterMaxValue::get(),
            Error::<T>::CounterValueExceedsMax
        );

        CounterValue::<T>::put(new_value);

        UserInteractions::<T>::try_mutate(&who, |interactions| -> Result<_, Error<T>> {
            let new_interactions = interactions
                .unwrap_or(0)
                .checked_add(1)
                .ok_or(Error::<T>::UserInteractionOverflow)?;
            *interactions = Some(new_interactions); // Store the new value.

            Ok(())
        })?;

        Self::deposit_event(Event::<T>::CounterIncremented {
            counter_value: new_value,
            who,
            incremented_amount: amount_to_increment,
        });

        Ok(())
    }
    ```

???- code "decrement(origin: OriginFor<T>, amount_to_decrement: u32) -> DispatchResult"

    This call decrements the counter by a specified amount. It is accessible to any signed account.

    - **Parameters**:
        - `amount_to_decrement` - the amount to subtract from the counter
    - **Validations**:
        - Prevents underflow during the subtraction
        - Ensures the counter does not drop below zero
    - **Behavior**:
        - Updates the `CounterValue` storage item
        - Tracks the number of interactions by the user in the `UserInteractions` storage map
        - Emits a `CounterDecremented` event on success

    ```rust title="lib.rs"
    /// Decrement the counter by a specified amount.
    ///
    /// This function can be called by any signed account.
    ///
    /// - `amount_to_decrement`: The amount by which to decrement the counter.
    ///
    /// Emits `CounterDecremented` event when successful.
    #[pallet::call_index(2)]
    #[pallet::weight(0)]
    pub fn decrement(origin: OriginFor<T>, amount_to_decrement: u32) -> DispatchResult {
        let who = ensure_signed(origin)?;

        let current_value = CounterValue::<T>::get().unwrap_or(0);

        let new_value = current_value
            .checked_sub(amount_to_decrement)
            .ok_or(Error::<T>::CounterValueBelowZero)?;

        CounterValue::<T>::put(new_value);

        UserInteractions::<T>::try_mutate(&who, |interactions| -> Result<_, Error<T>> {
            let new_interactions = interactions
                .unwrap_or(0)
                .checked_add(1)
                .ok_or(Error::<T>::UserInteractionOverflow)?;
            *interactions = Some(new_interactions); // Store the new value.

            Ok(())
        })?;

        Self::deposit_event(Event::<T>::CounterDecremented {
            counter_value: new_value,
            who,
            decremented_amount: amount_to_decrement,
        });

        Ok(())
    }
    ```

## Verify Compilation

After implementing all the pallet components, verifying that the code still compiles successfully is crucial. Run the following command in your terminal to ensure there are no errors:

```bash
cargo build --package custom-pallet
```

If you encounter any errors or warnings, carefully review your code to resolve the issues. Once the build is complete without errors, your pallet implementation is ready.

## Key Takeaways

In this tutorial, you learned how to create a custom pallet by defining storage, implementing errors, adding dispatchable calls, and emitting events. These are the foundational building blocks for developing robust Polkadot SDK-based blockchain logic.

Expand the following item to review this implementation and the complete pallet code. A short client-side sketch at the end of this page also shows how the pallet's storage and events appear to applications.

???- code "src/lib.rs"

    ```rust title="lib.rs"
    #![cfg_attr(not(feature = "std"), no_std)]

    pub use pallet::*;

    #[frame::pallet]
    pub mod pallet {
        use super::*;
        use frame::prelude::*;

        #[pallet::pallet]
        pub struct Pallet<T>(_);

        // Configuration trait for the pallet.
        #[pallet::config]
        pub trait Config: frame_system::Config {
            // Defines the event type for the pallet.
            type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;

            // Defines the maximum value the counter can hold.
            #[pallet::constant]
            type CounterMaxValue: Get<u32>;
        }

        #[pallet::event]
        #[pallet::generate_deposit(pub(super) fn deposit_event)]
        pub enum Event<T: Config> {
            /// The counter value has been set to a new value by Root.
            CounterValueSet {
                /// The new value set.
                counter_value: u32,
            },
            /// A user has successfully incremented the counter.
            CounterIncremented {
                /// The new value set.
                counter_value: u32,
                /// The account who incremented the counter.
                who: T::AccountId,
                /// The amount by which the counter was incremented.
                incremented_amount: u32,
            },
            /// A user has successfully decremented the counter.
            CounterDecremented {
                /// The new value set.
                counter_value: u32,
                /// The account who decremented the counter.
                who: T::AccountId,
                /// The amount by which the counter was decremented.
                decremented_amount: u32,
            },
        }

        /// Storage for the current value of the counter.
        #[pallet::storage]
        pub type CounterValue<T> = StorageValue<_, u32>;

        /// Storage map to track the number of interactions performed by each account.
        #[pallet::storage]
        pub type UserInteractions<T: Config> = StorageMap<_, Twox64Concat, T::AccountId, u32>;

        #[pallet::error]
        pub enum Error<T> {
            /// The counter value exceeds the maximum allowed value.
            CounterValueExceedsMax,
            /// The counter value cannot be decremented below zero.
            CounterValueBelowZero,
            /// Overflow occurred in the counter.
            CounterOverflow,
            /// Overflow occurred in user interactions.
            UserInteractionOverflow,
        }

        #[pallet::call]
        impl<T: Config> Pallet<T> {
            /// Set the value of the counter.
            ///
            /// The dispatch origin of this call must be _Root_.
            ///
            /// - `new_value`: The new value to set for the counter.
            ///
            /// Emits `CounterValueSet` event when successful.
            #[pallet::call_index(0)]
            #[pallet::weight(0)]
            pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {
                ensure_root(origin)?;

                ensure!(
                    new_value <= T::CounterMaxValue::get(),
                    Error::<T>::CounterValueExceedsMax
                );

                CounterValue::<T>::put(new_value);

                Self::deposit_event(Event::<T>::CounterValueSet {
                    counter_value: new_value,
                });

                Ok(())
            }

            /// Increment the counter by a specified amount.
            ///
            /// This function can be called by any signed account.
            ///
            /// - `amount_to_increment`: The amount by which to increment the counter.
            ///
            /// Emits `CounterIncremented` event when successful.
            #[pallet::call_index(1)]
            #[pallet::weight(0)]
            pub fn increment(origin: OriginFor<T>, amount_to_increment: u32) -> DispatchResult {
                let who = ensure_signed(origin)?;

                let current_value = CounterValue::<T>::get().unwrap_or(0);

                let new_value = current_value
                    .checked_add(amount_to_increment)
                    .ok_or(Error::<T>::CounterOverflow)?;

                ensure!(
                    new_value <= T::CounterMaxValue::get(),
                    Error::<T>::CounterValueExceedsMax
                );

                CounterValue::<T>::put(new_value);

                UserInteractions::<T>::try_mutate(&who, |interactions| -> Result<_, Error<T>> {
                    let new_interactions = interactions
                        .unwrap_or(0)
                        .checked_add(1)
                        .ok_or(Error::<T>::UserInteractionOverflow)?;
                    *interactions = Some(new_interactions); // Store the new value.

                    Ok(())
                })?;

                Self::deposit_event(Event::<T>::CounterIncremented {
                    counter_value: new_value,
                    who,
                    incremented_amount: amount_to_increment,
                });

                Ok(())
            }

            /// Decrement the counter by a specified amount.
            ///
            /// This function can be called by any signed account.
            ///
            /// - `amount_to_decrement`: The amount by which to decrement the counter.
            ///
            /// Emits `CounterDecremented` event when successful.
            #[pallet::call_index(2)]
            #[pallet::weight(0)]
            pub fn decrement(origin: OriginFor<T>, amount_to_decrement: u32) -> DispatchResult {
                let who = ensure_signed(origin)?;

                let current_value = CounterValue::<T>::get().unwrap_or(0);

                let new_value = current_value
                    .checked_sub(amount_to_decrement)
                    .ok_or(Error::<T>::CounterValueBelowZero)?;

                CounterValue::<T>::put(new_value);

                UserInteractions::<T>::try_mutate(&who, |interactions| -> Result<_, Error<T>> {
                    let new_interactions = interactions
                        .unwrap_or(0)
                        .checked_add(1)
                        .ok_or(Error::<T>::UserInteractionOverflow)?;
                    *interactions = Some(new_interactions); // Store the new value.

                    Ok(())
                })?;

                Self::deposit_event(Event::<T>::CounterDecremented {
                    counter_value: new_value,
                    who,
                    decremented_amount: amount_to_decrement,
                });

                Ok(())
            }
        }
    }
    ```

For reference, the repository version of this pallet, as it looks after the follow-up unit testing and benchmarking tutorials (with a license header, test scaffolding, and benchmarked weights), is reproduced below.

???- code "src/lib.rs (with tests and benchmarking)"

    ```rust title="lib.rs"
    // This file is part of 'custom-pallet'.

    // SPDX-License-Identifier: MIT-0

    // Permission is hereby granted, free of charge, to any person obtaining a copy
    // of this software and associated documentation files (the "Software"), to deal
    // in the Software without restriction, including without limitation the rights
    // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    // copies of the Software, and to permit persons to whom the Software is
    // furnished to do so.
    //
    // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
    // SOFTWARE.

    #![cfg_attr(not(feature = "std"), no_std)]

    pub use pallet::*;

    #[cfg(test)]
    mod mock;

    #[cfg(test)]
    mod tests;

    #[cfg(feature = "runtime-benchmarks")]
    mod benchmarking;

    pub mod weights;
    use crate::weights::WeightInfo;

    #[frame::pallet]
    pub mod pallet {
        use super::*;
        use frame::prelude::*;

        #[pallet::pallet]
        pub struct Pallet<T>(_);

        // Configuration trait for the pallet.
        #[pallet::config]
        pub trait Config: frame_system::Config {
            // Defines the event type for the pallet.
            type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;

            // Defines the maximum value the counter can hold.
            #[pallet::constant]
            type CounterMaxValue: Get<u32>;

            /// A type representing the weights required by the dispatchables of this pallet.
            type WeightInfo: WeightInfo;
        }

        #[pallet::event]
        #[pallet::generate_deposit(pub(super) fn deposit_event)]
        pub enum Event<T: Config> {
            /// The counter value has been set to a new value by Root.
            CounterValueSet {
                /// The new value set.
                counter_value: u32,
            },
            /// A user has successfully incremented the counter.
            CounterIncremented {
                /// The new value set.
                counter_value: u32,
                /// The account who incremented the counter.
                who: T::AccountId,
                /// The amount by which the counter was incremented.
                incremented_amount: u32,
            },
            /// A user has successfully decremented the counter.
            CounterDecremented {
                /// The new value set.
                counter_value: u32,
                /// The account who decremented the counter.
                who: T::AccountId,
                /// The amount by which the counter was decremented.
                decremented_amount: u32,
            },
        }

        /// Storage for the current value of the counter.
        #[pallet::storage]
        pub type CounterValue<T> = StorageValue<_, u32>;

        /// Storage map to track the number of interactions performed by each account.
        #[pallet::storage]
        pub type UserInteractions<T: Config> = StorageMap<_, Twox64Concat, T::AccountId, u32>;

        #[pallet::error]
        pub enum Error<T> {
            /// The counter value exceeds the maximum allowed value.
            CounterValueExceedsMax,
            /// The counter value cannot be decremented below zero.
            CounterValueBelowZero,
            /// Overflow occurred in the counter.
            CounterOverflow,
            /// Overflow occurred in user interactions.
            UserInteractionOverflow,
        }

        #[pallet::call]
        impl<T: Config> Pallet<T> {
            /// Set the value of the counter.
            ///
            /// The dispatch origin of this call must be _Root_.
            ///
            /// - `new_value`: The new value to set for the counter.
            ///
            /// Emits `CounterValueSet` event when successful.
            #[pallet::call_index(0)]
            #[pallet::weight(T::WeightInfo::set_counter_value())]
            pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {
                ensure_root(origin)?;

                ensure!(
                    new_value <= T::CounterMaxValue::get(),
                    Error::<T>::CounterValueExceedsMax
                );

                CounterValue::<T>::put(new_value);

                Self::deposit_event(Event::<T>::CounterValueSet {
                    counter_value: new_value,
                });

                Ok(())
            }

            /// Increment the counter by a specified amount.
            ///
            /// This function can be called by any signed account.
            ///
            /// - `amount_to_increment`: The amount by which to increment the counter.
            ///
            /// Emits `CounterIncremented` event when successful.
            #[pallet::call_index(1)]
            #[pallet::weight(T::WeightInfo::increment())]
            pub fn increment(origin: OriginFor<T>, amount_to_increment: u32) -> DispatchResult {
                let who = ensure_signed(origin)?;

                let current_value = CounterValue::<T>::get().unwrap_or(0);

                let new_value = current_value
                    .checked_add(amount_to_increment)
                    .ok_or(Error::<T>::CounterOverflow)?;

                ensure!(
                    new_value <= T::CounterMaxValue::get(),
                    Error::<T>::CounterValueExceedsMax
                );

                CounterValue::<T>::put(new_value);

                UserInteractions::<T>::try_mutate(&who, |interactions| -> Result<_, Error<T>> {
                    let new_interactions = interactions
                        .unwrap_or(0)
                        .checked_add(1)
                        .ok_or(Error::<T>::UserInteractionOverflow)?;
                    *interactions = Some(new_interactions); // Store the new value.

                    Ok(())
                })?;

                Self::deposit_event(Event::<T>::CounterIncremented {
                    counter_value: new_value,
                    who,
                    incremented_amount: amount_to_increment,
                });

                Ok(())
            }

            /// Decrement the counter by a specified amount.
            ///
            /// This function can be called by any signed account.
            ///
            /// - `amount_to_decrement`: The amount by which to decrement the counter.
            ///
            /// Emits `CounterDecremented` event when successful.
            #[pallet::call_index(2)]
            #[pallet::weight(T::WeightInfo::decrement())]
            pub fn decrement(origin: OriginFor<T>, amount_to_decrement: u32) -> DispatchResult {
                let who = ensure_signed(origin)?;

                let current_value = CounterValue::<T>::get().unwrap_or(0);

                let new_value = current_value
                    .checked_sub(amount_to_decrement)
                    .ok_or(Error::<T>::CounterValueBelowZero)?;

                CounterValue::<T>::put(new_value);

                UserInteractions::<T>::try_mutate(&who, |interactions| -> Result<_, Error<T>> {
                    let new_interactions = interactions
                        .unwrap_or(0)
                        .checked_add(1)
                        .ok_or(Error::<T>::UserInteractionOverflow)?;
                    *interactions = Some(new_interactions); // Store the new value.

                    Ok(())
                })?;

                Self::deposit_event(Event::<T>::CounterDecremented {
                    counter_value: new_value,
                    who,
                    decremented_amount: amount_to_decrement,
                });

                Ok(())
            }
        }
    }
    ```

## Where to Go Next
- Tutorial __Pallet Unit Testing__ --- Learn to write effective unit tests for Polkadot SDK pallets! Use a custom pallet as a practical example in this comprehensive guide. [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-unit-testing/)
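To see the pallet from a client's perspective, you can query the storage item and watch the events you defined over RPC. The following is a minimal sketch, assuming a local node whose runtime already includes this pallet under the name `CustomPallet` (as configured in the follow-up runtime tutorial) and listening on `ws://127.0.0.1:9944`:

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api';

async function readCounter() {
  // Connect to a local node whose runtime includes the counter pallet
  const api = await ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:9944'),
  });

  // CounterValue is an Option<u32>: it is unset until the first set/increment
  const counter = await api.query.customPallet.counterValue();
  console.log(`Counter value: ${counter.isSome ? counter.unwrap() : 'not set'}`);

  // Subscribe to events and log the ones emitted by the counter pallet
  await api.query.system.events((events) => {
    events.forEach(({ event }) => {
      if (event.section === 'customPallet') {
        console.log(`${event.method}: ${event.data.toString()}`);
      }
    });
  });
}

readCounter().catch(console.error);
```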
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/
--- BEGIN CONTENT ---
---
title: Deploy on Paseo TestNet
description: This guide walks you through the journey of deploying your Polkadot SDK blockchain on Paseo, detailing each step to a successful TestNet deployment.
tutorial_badge: Advanced
categories: Parachains
---

# Deploy on Paseo TestNet

## Introduction

Previously, you learned how to [build and run a blockchain locally](/tutorials/polkadot-sdk/parachains/zero-to-hero/add-pallets-to-runtime/){target=\_blank}. Now, you'll take the next step towards a production-like environment by deploying your parachain to a public test network.

This tutorial guides you through deploying a parachain on the Paseo network, a public TestNet that provides a more realistic blockchain ecosystem. While public testnets have a higher barrier to entry compared to private networks, they are crucial for validating your parachain's functionality and preparing it for eventual mainnet deployment.

## Get Started with an Account and Tokens

To perform any action on Paseo, you need PAS tokens, which can be requested from the [Polkadot Faucet](https://faucet.polkadot.io/){target=\_blank}. To store the tokens, you must have access to a Substrate-compatible wallet. Go to the [Polkadot Wallets](https://polkadot.com/get-started/wallets/){target=\_blank} page on the Polkadot Wiki to view different options for a wallet, or use the [Polkadot.js browser extension](https://polkadot.js.org/extension/){target=\_blank}, which is suitable for development purposes.

!!!warning
    Development keys and accounts should never hold assets of actual value and should not be used for production.

The [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank} interface can be used to get you started for testing purposes.

To prepare an account, follow these steps:

1. Open the [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank} interface and connect to the Paseo network. Alternatively, use this link to connect directly to Paseo: [Polkadot.js Apps: Paseo](https://polkadot.js.org/apps/?rpc=wss://paseo.dotters.network#/explorer){target=\_blank}

    ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-1.webp)

2. Navigate to the **Accounts** section

    1. Click on the **Accounts** tab in the top menu
    2. Select the **Accounts** option from the dropdown menu

    ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-2.webp)

3. Copy the address of the account you want to use for the parachain deployment

    ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-3.webp)

4. Visit the [Polkadot Faucet](https://faucet.polkadot.io){target=\_blank} and paste the copied address in the input field. Ensure that the network is set to Paseo and click on the **Get some PASs** button

    ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-4.webp)

After a few seconds, you will receive 5000 PAS tokens in your account.

## Reserve a Parachain Identifier

You must reserve a parachain identifier (ID) before registering your parachain on Paseo. You'll be assigned the next available identifier.

To reserve a parachain identifier, follow these steps:

1. Navigate to the **Parachains** section

    1. Click on the **Network** tab in the top menu
Select the **Parachains** option from the dropdown menu ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-5.webp) 2. Register a ParaId 1. Select the **Parathreads** tab 2. Click on the **+ ParaId** button ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-6.webp) 3. Review the transaction and click on the **+ Submit** button ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-7.webp) For this case, the next available parachain identifier is `4508`. 4. After submitting the transaction, you can navigate to the **Explorer** tab and check the list of recent events for a successful `registrar.Reserved` event ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-8.webp) ## Generate Custom Keys for Your Collator To securely deploy your parachain, it is essential to generate custom keys specifically for your collators (block producers). You should generate two sets of keys for each collator: - **Account keys** - used to interact with the network and manage funds. These should be protected carefully and should never exist on the filesystem of the collator node - **Session keys** - used in block production to identify your node and its blocks on the network. These keys are stored in the parachain keystore and function as disposable "hot wallet" keys. If these keys are leaked, someone could impersonate your node, which could result in the slashing of your funds. To minimize these risks, rotating your session keys frequently is essential. Treat them with the same level of caution as you would a hot wallet to ensure the security of your node To perform this step, you can use [subkey](https://docs.rs/crate/subkey/latest){target=\_blank}, a command-line tool for generating and managing keys: ```bash docker run -it parity/subkey:latest generate --scheme sr25519 ``` The output should look similar to the following:
docker run -it parity/subkey:latest generate --scheme sr25519
Secret phrase: lemon play remain picture leopard frog mad bridge hire hazard best buddy
Network ID: substrate
Secret seed: 0xb748b501de061bae1fcab1c0b814255979d74d9637b84e06414a57a1a149c004
Public key (hex): 0xf4ec62ec6e70a3c0f8dcbe0531e2b1b8916cf16d30635bbe9232f6ed3f0bf422
Account ID: 0xf4ec62ec6e70a3c0f8dcbe0531e2b1b8916cf16d30635bbe9232f6ed3f0bf422
Public key (SS58): 5HbqmBBJ5ALUzho7tw1k1jEgKBJM7dNsQwrtfSfUskT1a3oe
SS58 Address: 5HbqmBBJ5ALUzho7tw1k1jEgKBJM7dNsQwrtfSfUskT1a3oe
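If you ever need to recover the public key or SS58 address from a secret phrase you saved earlier, you can re-derive them with subkey's `inspect` command. The following is a minimal sketch using the same Docker image; the phrase shown is the throwaway example from the output above and must be replaced with your own:

```bash
# Re-derive the public key and SS58 address from a saved secret phrase
docker run -it parity/subkey:latest inspect --scheme sr25519 \
  "lemon play remain picture leopard frog mad bridge hire hazard best buddy"
```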
Run the `generate` command twice so that you have two separate key pairs: one for the account keys and one for the session keys. Save both outputs for future reference. ## Generate the Chain Specification Polkadot SDK-based blockchains are defined by a file called the [chain specification](/develop/parachains/deployment/generate-chain-specs/){target=\_blank}, or chain spec for short. There are two types of chain spec files: - **Plain chain spec** - a human-readable JSON file that can be modified to suit your parachain's requirements. It serves as a template for initial configuration and includes human-readable keys and structures - **Raw chain spec** - a binary-encoded file used to start your parachain node. This file is generated from the plain chain spec and contains the encoded information necessary for the parachain node to synchronize with the blockchain network. It ensures compatibility across different runtime versions by providing data in a format directly interpretable by the node's runtime, regardless of upgrades since the chain's genesis The chain spec file is only required during the initial blockchain creation (genesis). You do not need to generate a new chain spec when performing runtime upgrades after your chain is already running. The files required to register a parachain must specify the correct relay chain to connect to and the parachain identifier you have been assigned. To make these changes, you must build and modify the chain specification file for your parachain. In this tutorial, the relay chain is `paseo`, and the parachain identifier is `4508`. To define your chain specification: 1. Generate the plain chain specification for the parachain template node by running the following command. Make sure to use the `*.compact.compressed.wasm` version of your compiled runtime when generating your chain specification, and replace `INSERT_PARA_ID` with the ID you obtained in the [Reserve a Parachain Identifier](#reserve-a-parachain-identifier) section: ```bash chain-spec-builder \ --chain-spec-path ./plain_chain_spec.json \ create \ --relay-chain paseo \ --para-id INSERT_PARA_ID \ --runtime target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \ named-preset local_testnet ``` 2. Edit the `plain_chain_spec.json` file: - Update the `name`, `id`, and `protocolId` fields to unique values for your parachain - Change `para_id` and `parachainInfo.parachainId` fields to the parachain ID you obtained previously. 
Make sure to use a number without quotes - Modify the `balances` field to specify the initial balances for your accounts in SS58 format - Insert the account IDs and session keys in SS58 format generated for your collators in the `collatorSelection.invulnerables` and `session.keys` fields - Modify the `sudo` value to specify the account that will have sudo access to the parachain ```json { "bootNodes": [], "chainType": "Live", "codeSubstitutes": {}, "genesis": { "runtimeGenesis": { "code": "0x...", "patch": { "aura": { "authorities": [] }, "auraExt": {}, "balances": { "balances": [["INSERT_SUDO_ACCOUNT", 1152921504606846976]] }, "collatorSelection": { "candidacyBond": 16000000000, "desiredCandidates": 0, "invulnerables": ["INSERT_ACCOUNT_ID_COLLATOR_1"] }, "parachainInfo": { "parachainId": "INSERT_PARA_ID" }, "parachainSystem": {}, "polkadotXcm": { "safeXcmVersion": 4 }, "session": { "keys": [ [ "INSERT_ACCOUNT_ID_COLLATOR_1", "INSERT_ACCOUNT_ID_COLLATOR_1", { "aura": "INSERT_SESSION_KEY_COLLATOR_1" } ] ], "nonAuthorityKeys": [] }, "sudo": { "key": "INSERT_SUDO_ACCOUNT" }, "system": {}, "transactionPayment": { "multiplier": "1000000000000000000" } } } }, "id": "INSERT_ID", "name": "INSERT_NAME", "para_id": "INSERT_PARA_ID", "properties": { "tokenDecimals": 12, "tokenSymbol": "UNIT" }, "protocolId": "INSERT_PROTOCOL_ID", "relay_chain": "paseo", "telemetryEndpoints": null } ``` For this tutorial, the `plain_chain_spec.json` file should look similar to the following. Note that the same account is used for both the collator and sudo, which should never be the case in a production environment: ??? code "View complete script" ```json title="plain_chain_spec.json" { "bootNodes": [], "chainType": "Live", "codeSubstitutes": {}, "genesis": { "runtimeGenesis": { "code": "0x...", "patch": { "aura": { "authorities": [] }, "auraExt": {}, "balances": { "balances": [ [ "5F9Zteceg3Q4ywi63AxQNVb2b2r5caFSqjQxBkCrux6j8ZpS", 1152921504606846976 ] ] }, "collatorSelection": { "candidacyBond": 16000000000, "desiredCandidates": 0, "invulnerables": [ "5F9Zteceg3Q4ywi63AxQNVb2b2r5caFSqjQxBkCrux6j8ZpS" ] }, "parachainInfo": { "parachainId": 4508 }, "parachainSystem": {}, "polkadotXcm": { "safeXcmVersion": 4 }, "session": { "keys": [ [ "5F9Zteceg3Q4ywi63AxQNVb2b2r5caFSqjQxBkCrux6j8ZpS", "5F9Zteceg3Q4ywi63AxQNVb2b2r5caFSqjQxBkCrux6j8ZpS", { "aura": "5GcAKNdYcw5ybb2kAnta8WVFyiQbGJ5od3aH9MsgYDmVcrhJ" } ] ], "nonAuthorityKeys": [] }, "sudo": { "key": "5F9Zteceg3Q4ywi63AxQNVb2b2r5caFSqjQxBkCrux6j8ZpS" }, "system": {}, "transactionPayment": { "multiplier": "1000000000000000000" } } } }, "id": "custom", "name": "Custom", "para_id": 4508, "properties": { "tokenDecimals": 12, "tokenSymbol": "UNIT" }, "protocolId": null, "relay_chain": "paseo", "telemetryEndpoints": null } ``` 3. Save your changes and close the plain text chain specification file 4. Convert the modified plain chain specification file to a raw chain specification file: ```bash chain-spec-builder \ --chain-spec-path ./raw_chain_spec.json \ convert-to-raw plain_chain_spec.json ``` You should now see your chain specification containing SCALE-encoded hex values instead of plain text. !!!note "Deprecation of `para_id` in Chain Specs" The `para_id` field in JSON chain specifications, added through the [`chain-spec-builder`](https://paritytech.github.io/polkadot-sdk/master/staging_chain_spec_builder/index.html){target=\_blank} command, is currently used by nodes for configuration purposes. 
However, beginning with Polkadot SDK release `stable2509`, the `para_id` field will no longer be required in chain specifications. Instead, runtimes need to be updated to implement the [`cumulus_primitives_core::GetParachainInfo`](https://paritytech.github.io/polkadot-sdk/master/cumulus_primitives_core/trait.GetParachainInfo.html){target=\_blank} trait to successfully operate with nodes using chain specs that omit the `para_id` field. With the upcoming `stable2512` release, the `para_id` field will be completely removed from chain specifications in favor of the new runtime API. New nodes will be unable to start with chain specs containing the `para_id` field unless the runtime implements the `GetParachainInfo` trait. Ensure your runtime is updated to maintain compatibility with future node versions. For guidance on performing runtime upgrades to implement this new trait, refer to the [runtime upgrade tutorial](/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/){target=\_blank}. ## Export Required Files To prepare the parachain collator to be registered on Paseo, follow these steps: 1. Export the Wasm runtime for the parachain by running the following command: ```bash polkadot-omni-node export-genesis-wasm \ --chain raw_chain_spec.json para-wasm ``` 2. Export the genesis state for the parachain by running the following command: ```bash polkadot-omni-node export-genesis-head \ --chain raw_chain_spec.json para-state ``` ## Register a Parathread Once you have the genesis state and runtime, you can register them with your parachain ID. 1. Go to the [Parachains > Parathreads](https://polkadot.js.org/apps/#/parachains/parathreads){target=\_blank} tab, and select **+ Parathread** 2. You should see fields to place your runtime Wasm and genesis state respectively, along with the parachain ID. Select your parachain ID, and upload `para-wasm` in the **code** field and `para-state` in the **initial state** field: ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-9.webp) 3. Confirm your details and click the **+ Submit** button. You should then see a new parathread with your parachain ID and an active **Deregister** button: ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/deploy-to-testnet-10.webp) Your parachain's runtime logic and genesis are now part of the relay chain. The next step is to ensure you are able to run a collator to produce blocks for your parachain. !!!note You may need to wait several hours for your parachain to onboard. Until it has onboarded, you will be unable to purchase coretime, and therefore will not be able to perform transactions on your network. ## Start the Collator Node Before starting a collator, you need to generate a node key. This key identifies your node when communicating with other nodes over libp2p: ```bash polkadot-omni-node key generate-node-key \ --base-path data \ --chain raw_chain_spec.json ``` After running the command, you should see the following output, indicating the base path now has a suitable node key:
polkadot-omni-node key generate-node-key --base-path data --chain raw_chain_spec.json
Generating key in "/data/chains/custom/network/secret_ed25519" 12D3KooWKGW964eG4fAwsNMFdckbj3GwhpmSGFU9dd8LFAVAa4EE
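To double-check which peer ID a stored node key corresponds to, you can inspect the key file with `subkey`. This is a minimal sketch, assuming the `parity/subkey` Docker image used earlier and the `data` base path created by the command above:

```bash
# Print the peer ID derived from the stored ed25519 node key
docker run -v "$(pwd)/data:/data" parity/subkey:latest \
  inspect-node-key --file /data/chains/custom/network/secret_ed25519
```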
The collator's ports must be publicly accessible and discoverable so that your parachain node can peer with Paseo validator nodes and produce blocks. You can specify the ports with the `--port` command-line option. You can start the collator with a command similar to the following: ```bash polkadot-omni-node --collator \ --chain raw_chain_spec.json \ --base-path data \ --port 40333 \ --rpc-port 8845 \ --force-authoring \ --node-key-file ./data/chains/custom/network/secret_ed25519 \ -- \ --sync warp \ --chain paseo \ --port 50343 \ --rpc-port 9988 ``` In this example, the first `--port` setting specifies the port for the collator node, and the second `--port` specifies the embedded relay chain node port. The first `--rpc-port` setting specifies the port you can use to connect to the collator. The second `--rpc-port` specifies the port for connecting to the embedded relay chain. Before proceeding, ensure that the collator node is running. Then, open a new terminal and insert your generated session key into the collator keystore by running the following command. Use the same port specified in the `--rpc-port` parameter when starting the collator node (`8845` in this example) to connect to it. Replace `INSERT_SECRET_PHRASE` and `INSERT_PUBLIC_KEY_HEX_FORMAT` with the values from the session key you generated in the [Generate Custom Keys for Your Collator](#generate-custom-keys-for-your-collator) section: ```bash curl -H "Content-Type: application/json" \ --data '{ "jsonrpc":"2.0", "method":"author_insertKey", "params":[ "aura", "INSERT_SECRET_PHRASE", "INSERT_PUBLIC_KEY_HEX_FORMAT" ], "id":1 }' \ http://localhost:8845 ``` If successful, you should see the following response: ```json {"jsonrpc":"2.0","result":null,"id":1} ``` Once your collator is synced with the Paseo relay chain and your parathread has finished onboarding, it will be ready to start producing blocks. This process may take some time. ## Producing Blocks With your parachain collator operational, the next step is acquiring coretime. This is essential for ensuring your parachain's security through the relay chain. [Agile Coretime](https://wiki.polkadot.network/learn/learn-agile-coretime/){target=\_blank} enhances Polkadot's resource management, offering developers greater economic adaptability. Once you have configured your parachain, you can follow two paths: - Bulk coretime is purchased via the Broker pallet on the respective coretime system parachain. You can purchase bulk coretime on the coretime chain and assign the purchased core to the registered `ParaID` - On-demand coretime is ordered via the `OnDemandAssignment` pallet, which is located on the respective relay chain Once coretime is correctly assigned to your parachain, whether bulk or on-demand, blocks should be produced (provided your collator is running). For more information on coretime, refer to the [Coretime](/polkadot-protocol/architecture/system-chains/coretime/){target=\_blank} documentation. ## Where to Go Next
- Tutorial __Obtain Coretime__ --- Get coretime for block production now! Follow this guide to explore on-demand and bulk options for seamless and efficient operations. [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/zero-to-hero/ --- BEGIN CONTENT --- --- title: Zero To Hero Parachain Tutorial Series description: A comprehensive guide for developers to build, test, and deploy custom pallets and runtimes, leveraging the full potential of the Polkadot SDK. template: index-page.html --- # Parachain Zero To Hero Tutorials The **Parachain Zero To Hero Tutorials** provide developers with a series of step-by-step guides to building, testing, and deploying custom pallets and runtimes using the Polkadot SDK. These tutorials are designed to help you gain hands-on experience and understand the core concepts necessary to create efficient and scalable blockchains. To get the most from this section, complete the guides in the order shown, starting with the [Set Up a Template](/tutorials/polkadot-sdk/parachains/zero-to-hero/set-up-a-template/){target=\_blank} guide. As you complete each guide, look for **Where to Go Next** to move to the next guide in the series. ## Parachain Development Cycle [timeline(polkadot-docs/.snippets/text/tutorials/polkadot-sdk/parachains/zero-to-hero/zero-to-hero-timeline.json)] --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/ --- BEGIN CONTENT --- --- title: Obtain Coretime description: Learn how to obtain coretime for block production with this guide, covering both on-demand and bulk options for smooth operations. tutorial_badge: Advanced categories: Parachains --- ## Introduction After deploying a parachain to the Paseo TestNet in the [Deploy to TestNet](/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/){target=\_blank} tutorial, the focus shifts to understanding Coretime, the mechanism by which validation resources are allocated from the relay chain to a given task, such as a parachain. A parachain can only produce blocks and have them finalized on the relay chain by obtaining coretime. There are two ways to obtain coretime: - **[On-demand coretime](#order-on-demand-coretime)** - on-demand coretime allows you to buy coretime on a block-by-block basis - **[Bulk coretime](#purchase-bulk-coretime)** - bulk coretime allows you to obtain a core or part of a core. It is purchased for a fixed period of up to 28 days and must be renewed once the lease finishes In this tutorial, you will: - Learn about the different coretime interfaces available - Learn how to purchase a core via bulk coretime - Assign a task (parachain) to the core for block production - Alternatively, use on-demand coretime to produce blocks as required ## Prerequisites Before proceeding, you should have the following items: - A parachain ID - A chain specification - A registered parathread with the correct genesis, runtime, and parachain ID that matches the chain specification - A properly configured and synced (with the relay chain) collator Once the above is complete, obtaining coretime is the last step to enable your parachain to start producing and finalizing blocks using the relay chain's validator set. If any of these items are missing, refer to the previous tutorial: [Deploy on Paseo TestNet](/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/){target=\_blank}. 
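Before ordering coretime, it can help to confirm that the relay chain has finished onboarding your parathread. The following is a minimal sketch using the community [`@polkadot/api-cli`](https://github.com/polkadot-js/tools){target=\_blank} tool (an assumption; any Polkadot.js-based client can run the same storage query) to read the para lifecycle for the example parachain ID:

```bash
# Query the lifecycle of para 4508 on Paseo; it should report "Parathread" once onboarded
npx @polkadot/api-cli \
  --ws wss://paseo.dotters.network \
  query.paras.paraLifecycles 4508
```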
## Order On Demand Coretime There are two extrinsics which allow you to place orders for on-demand coretime: - [**`onDemand.placeOrderAllowDeath`**](https://paritytech.github.io/polkadot-sdk/master/polkadot_runtime_parachains/on_demand/pallet/dispatchables/fn.place_order_allow_death.html){target=\_blank} - will [reap](https://wiki.polkadot.network/learn/learn-accounts/#existential-deposit-and-reaping){target=\_blank} the account once the provided funds run out - [**`onDemand.placeOrderKeepAlive`**](https://paritytech.github.io/polkadot-sdk/master/polkadot_runtime_parachains/on_demand/pallet/dispatchables/fn.place_order_keep_alive.html){target=\_blank} - includes a check that will **not** reap the account if the provided funds run out, ensuring the account is kept alive To produce a block in your parachain, navigate to Polkadot.js Apps and ensure you're connected to the Paseo relay chain. Then, access the [**Developer > Extrinsics**](https://polkadot.js.org/apps/#/extrinsics){target=\_blank} tab and execute the `onDemand.placeOrderAllowDeath` extrinsic from the account that registered the `ParaID`. For this example, `maxAmount` is set to `1000000000000` (this value may vary depending on the network conditions), and `paraId` is set to `4518`: ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/obtain-coretime-9.webp) With each successful on-demand extrinsic, the parachain will produce a new block. You can verify this by checking the collator logs. If the extrinsic is successful, you should see output similar to the following:
2024-12-11 18:03:29 [Parachain] 🙌 Starting consensus session on top of parent 0x860e5e37dbc04e736e76c4a42c64e71e069084548862d4007d32958578b26d87 (#214) 2024-12-11 18:03:30 [Parachain] 🎁 Prepared block for proposing at 215 (701 ms) hash: 0xee48b7dd559ab4cbff679f59e5cd37f2fd5b60c53a25b11d770dce999968076c; parent_hash: 0x860e…6d87; end: NoMoreTransactions; extrinsics_count: 2 2024-12-11 18:03:30 [Parachain] 🏆 Imported #215 (0x860e…6d87 → 0xee48…076c)
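The same order can also be placed from the command line instead of the Polkadot.js Apps UI. This is a sketch under the assumption that the [`@polkadot/api-cli`](https://github.com/polkadot-js/tools){target=\_blank} tool is available and that your funded account's secret phrase can be passed as a seed; the pallet and extrinsic names follow the `onDemand.placeOrderAllowDeath` call shown above and may differ on older relay chain runtimes:

```bash
# Place a single on-demand order for para 4518 (maxAmount first, then paraId)
npx @polkadot/api-cli \
  --ws wss://paseo.dotters.network \
  --seed "INSERT_SECRET_PHRASE" \
  tx.onDemand.placeOrderAllowDeath 1000000000000 4518
```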
## Purchase Bulk Coretime Purchasing bulk coretime involves purchasing a core from the [Coretime Chain](/polkadot-protocol/architecture/system-chains/coretime/){target=\_blank}, which has an instance of [`pallet_broker`](https://paritytech.github.io/polkadot-sdk/master/pallet_broker/index.html){target=\_blank} (the Broker pallet). Although this can be done by sending extrinsics through a tool like Polkadot.js Apps, the [RegionX Coretime Marketplace](https://app.regionx.tech){target=\_blank} (which includes Paseo support) also provides a user interface for purchasing and managing bulk coretime. !!!tip Obtaining a core for bulk coretime on Paseo follows a different process from Polkadot or Kusama. To apply for a core on Paseo, see the guide: [PAS-10 Onboard Paras Coretime](https://github.com/paseo-network/paseo-action-submission/blob/main/pas/PAS-10-Onboard-paras-coretime.md#summary){target=\_blank}. ### Get Coretime Funds First, ensure your wallet is connected to the [RegionX](https://app.regionx.tech){target=\_blank} interface. To do so, go to **Home** in the RegionX app and click the **Connect Wallet** button in the upper right. After connecting your wallet, you must obtain funds on the Coretime chain. You can use the [RegionX Transfer](https://app.regionx.tech/transfer){target=\_blank} page to perform a cross-chain transfer from the relay chain to the Coretime system chain. ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/obtain-coretime-1.webp) If you are purchasing a core on a TestNet, be sure to visit the [Polkadot Faucet](https://faucet.polkadot.io/westend){target=\_blank} for TestNet tokens. If successful, you should see the balance in the upper right of the **Transfer** page update with balances on the relay and Coretime chain, respectively. ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/obtain-coretime-2.webp) ### Purchase a Core For this tutorial, we will use [RegionX](https://app.regionx.tech){target=\_blank}. Once you open the app, you should be presented with the following screen: ![Screenshot of the RegionX app displaying the main interface.](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/obtain-coretime-3.webp) On the top left is a network switch. Ensure you have selected your parachain and that it is registered before purchasing a core. To purchase a core, go to the menu on the left and select the **Purchase A Core** item under **Primary Market**. Here, you should see the cores available for purchase, details regarding the sale period, and the current sale phase. Alternatively, you may use this link to visit it: [**Primary Market > Purchase A Core**](https://app.regionx.tech/purchase){target=\_blank}. ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/obtain-coretime-4.webp) At the bottom-right corner of the page, select the **Purchase a Core** button. A modal detailing the fees will appear. Review the details, then click **Ok** and sign the transaction using the wallet of your choice. ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/obtain-coretime-5.webp) Once the transaction is confirmed, click [**My Regions**](https://app.regionx.tech/regions){target=\_blank} on the left-hand menu, and you will see your purchased core. ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/obtain-coretime-6.webp) Congratulations, you just purchased a core using RegionX! 
You can assign the core to your parachain, partition, interlace, and more using RegionX. ### Assign a Core Once you have the core as shown in the dashboard, select it by clicking on it, then click the **Assign** option on the left-hand side. You will be presented with a modal in which you can add a new task. ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/obtain-coretime-7.webp) Click the **Add Task** button and input the parachain identifier, along with the name of your project, and finalize it by clicking **Add Task**. ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/obtain-coretime/obtain-coretime-8.webp) You may now select a task from the list. You must also set the core's finality, which determines whether you can renew this specific core. Provisional finality allows for interlacing and partitioning, whereas Final finality does not allow the region to be modified. A core must not be interlaced or partitioned to be renewable, so **Final** should be selected if you want to renew this specific core. Once you sign and send this transaction, your parachain will be assigned to that core. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-benchmarking/ --- BEGIN CONTENT --- --- title: Pallet Benchmarking description: Learn how to benchmark Polkadot SDK-based pallets, assigning precise weights to extrinsics for accurate fee calculation and runtime optimization. tutorial_badge: Advanced categories: Parachains --- ## Introduction After validating your pallet through testing and integrating it into your runtime, the next crucial step is benchmarking. Testing procedures were detailed in the [Pallet Unit Testing](/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-unit-testing/){target=\_blank} tutorial, while runtime integration was covered in the [Add Pallets to the Runtime](/tutorials/polkadot-sdk/parachains/zero-to-hero/add-pallets-to-runtime/){target=\_blank} guide. Benchmarking assigns a precise [weight](/polkadot-protocol/glossary/#weight){target=\_blank} to each extrinsic, measuring its computational and storage costs. These derived weights enable accurate fee calculation and resource allocation within the runtime. This tutorial demonstrates how to: - Configure your development environment for benchmarking - Create and implement benchmark tests for your extrinsics - Apply benchmark results to your pallet's extrinsics For comprehensive information about benchmarking concepts, refer to the [Benchmarking](/develop/parachains/testing/benchmarking/){target=\_blank} guide. ## Environment Setup Follow these steps to prepare your environment for pallet benchmarking: 1. Install the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\_blank} command-line tool: ```bash cargo install frame-omni-bencher@0.10.0 ``` 2. 
Update your pallet's `Cargo.toml` file in the `pallets/custom-pallet` directory by adding the `runtime-benchmarks` feature flag: ```toml hl_lines="4" title="Cargo.toml" [package] name = "custom-pallet" version = "0.1.0" license.workspace = true authors.workspace = true homepage.workspace = true repository.workspace = true edition.workspace = true [dependencies] codec = { features = ["derive"], workspace = true } scale-info = { features = ["derive"], workspace = true } frame = { features = ["experimental", "runtime"], workspace = true } [features] default = ["std"] std = ["codec/std", "frame/std", "scale-info/std"] runtime-benchmarks = ["frame/runtime-benchmarks"] ``` 3. Add your pallet to the runtime's benchmark configuration: 1. Register your pallet in `runtime/src/benchmarks.rs`: ```rust hl_lines="11" title="benchmarks.rs" polkadot_sdk::frame_benchmarking::define_benchmarks!( [frame_system, SystemBench::<Runtime>] [pallet_balances, Balances] [pallet_session, SessionBench::<Runtime>] [pallet_timestamp, Timestamp] [pallet_message_queue, MessageQueue] [pallet_sudo, Sudo] [pallet_collator_selection, CollatorSelection] [cumulus_pallet_parachain_system, ParachainSystem] [cumulus_pallet_xcmp_queue, XcmpQueue] [custom_pallet, CustomPallet] ); ``` 2. Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`: ```toml hl_lines="6" title="Cargo.toml" runtime-benchmarks = [ "cumulus-pallet-parachain-system/runtime-benchmarks", "hex-literal", "pallet-parachain-template/runtime-benchmarks", "polkadot-sdk/runtime-benchmarks", "custom-pallet/runtime-benchmarks", ] ``` 4. Set up the benchmarking module in your pallet: 1. Create a new `benchmarking.rs` file in your pallet directory: ```bash touch benchmarking.rs ``` 2. Add the benchmarking module to your pallet. In the pallet `lib.rs` file, add the following: ```rust hl_lines="9-10" title="lib.rs" pub use pallet::*; #[cfg(test)] mod mock; #[cfg(test)] mod tests; #[cfg(feature = "runtime-benchmarks")] mod benchmarking; ``` The `benchmarking` module is gated behind the `runtime-benchmarks` feature flag. It will only be compiled when this flag is explicitly enabled in your project's `Cargo.toml` or via the `--features runtime-benchmarks` compilation flag. ## Implement Benchmark Tests When writing benchmarking tests for your pallet, you'll create specialized test functions for each extrinsic, similar to unit tests. These tests use the mock runtime you created earlier for testing, allowing you to leverage its utility functions. Every benchmark test must follow a three-step pattern: 1. **Setup** - perform any necessary setup before calling the extrinsic. This might include creating accounts, setting initial states, or preparing test data 2. **Execute the extrinsic** - execute the actual extrinsic using the [`#[extrinsic_call]`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/attr.extrinsic_call.html){target=\_blank} macro. This must be a single line that calls your extrinsic function with the origin as its first argument 3. 
**Verification** - check that the extrinsic worked correctly within the benchmark context by checking the expected state changes Check the following example of how to benchmark the `increment` extrinsic: ```rust #[benchmark] fn increment() { let caller: T::AccountId = whitelisted_caller(); assert_ok!(CustomPallet::<T>::set_counter_value( RawOrigin::Root.into(), 5u32 )); #[extrinsic_call] increment(RawOrigin::Signed(caller.clone()), 1); assert_eq!(CounterValue::<T>::get(), Some(6u32.into())); assert_eq!(UserInteractions::<T>::get(caller), 1u32.into()); } ``` This benchmark test: 1. Creates a whitelisted caller and sets an initial counter value of 5 2. Calls the increment extrinsic to increase the counter by 1 3. Verifies that the counter was properly incremented to 6 and that the user's interaction was recorded in storage This example demonstrates how to properly set up state, execute an extrinsic, and verify its effects during benchmarking. Now, implement the complete set of benchmark tests. Copy the following content in the `benchmarking.rs` file: ```rust title="benchmarking.rs" // This file is part of 'custom-pallet'. // SPDX-License-Identifier: MIT-0 // Permission is hereby granted, free of charge, to any person obtaining a copy // of this software and associated documentation files (the "Software"), to deal // in the Software without restriction, including without limitation the rights // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell // copies of the Software, and to permit persons to whom the Software is // furnished to do so. // // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE // SOFTWARE. 
#![cfg(feature = "runtime-benchmarks")] use super::{Pallet as CustomPallet, *}; use frame::deps::frame_support::assert_ok; use frame::{deps::frame_benchmarking::v2::*, prelude::*}; #[benchmarks] mod benchmarks { use super::*; #[cfg(test)] use crate::pallet::Pallet as CustomPallet; use frame_system::RawOrigin; #[benchmark] fn set_counter_value() { #[extrinsic_call] set_counter_value(RawOrigin::Root, 5); assert_eq!(CounterValue::<T>::get(), Some(5u32.into())); } #[benchmark] fn increment() { let caller: T::AccountId = whitelisted_caller(); assert_ok!(CustomPallet::<T>::set_counter_value( RawOrigin::Root.into(), 5u32 )); #[extrinsic_call] increment(RawOrigin::Signed(caller.clone()), 1); assert_eq!(CounterValue::<T>::get(), Some(6u32.into())); assert_eq!(UserInteractions::<T>::get(caller), 1u32.into()); } #[benchmark] fn decrement() { let caller: T::AccountId = whitelisted_caller(); assert_ok!(CustomPallet::<T>::set_counter_value( RawOrigin::Root.into(), 5u32 )); #[extrinsic_call] decrement(RawOrigin::Signed(caller.clone()), 1); assert_eq!(CounterValue::<T>::get(), Some(4u32.into())); assert_eq!(UserInteractions::<T>::get(caller), 1u32.into()); } impl_benchmark_test_suite!(CustomPallet, crate::mock::new_test_ext(), crate::mock::Test); } ``` The [`#[benchmark]`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/attr.benchmark.html){target=\_blank} macro marks these functions as benchmark tests, while the `#[extrinsic_call]` macro specifically identifies which line contains the extrinsic being measured. For more information, see the [frame_benchmarking](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} Rust docs. ## Execute the Benchmarking After implementing your benchmark test suite, you'll need to execute the tests and generate the weights for your extrinsics. This process involves building your runtime with benchmarking features enabled and using the `frame-omni-bencher` CLI tool. To do that, follow these steps: 1. Build your runtime with the `runtime-benchmarks` feature enabled: ```bash cargo build --features runtime-benchmarks --release ``` This special build includes all the necessary benchmarking code that's normally excluded from production builds. 2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations: ```bash touch weights.rs ``` 3. Before running the benchmarking tool, you'll need a template file that defines how weight information should be formatted. Download the official template from the Polkadot SDK repository and save it in your project folders for future use: ```bash mkdir ./pallets/benchmarking && \ curl https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/heads/stable2412/substrate/.maintain/frame-umbrella-weight-template.hbs \ --output ./pallets/benchmarking/frame-umbrella-weight-template.hbs ``` 4. Execute the benchmarking process using the `frame-omni-bencher` CLI: ```bash frame-omni-bencher v1 benchmark pallet \ --runtime target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \ --pallet "custom_pallet" \ --extrinsic "" \ --template ./pallets/benchmarking/frame-umbrella-weight-template.hbs \ --output ./pallets/custom-pallet/src/weights.rs ``` When the benchmarking process completes, your `weights.rs` file will contain auto-generated code with weight calculations for each of your pallet's extrinsics. 
These weights help ensure fair and accurate fee calculations when your pallet is used in a production environment. ## Add Benchmarking Weights to the Pallet After generating the weight calculations, you need to integrate these weights into your pallet's code. This integration ensures your pallet properly accounts for computational costs in its extrinsics. First, add the necessary module imports to your pallet. These imports make the weights available to your code: ```rust hl_lines="4-5" title="lib.rs" #[cfg(feature = "runtime-benchmarks")] mod benchmarking; pub mod weights; use crate::weights::WeightInfo; ``` Next, update your pallet's `Config` trait to include weight information. Define the `WeightInfo` type: ```rust hl_lines="9-10" title="lib.rs" pub trait Config: frame_system::Config { // Defines the event type for the pallet. type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>; // Defines the maximum value the counter can hold. #[pallet::constant] type CounterMaxValue: Get<u32>; /// A type representing the weights required by the dispatchables of this pallet. type WeightInfo: WeightInfo; } ``` Now you can assign weights to your extrinsics. Here's how to add weight calculations to the `set_counter_value` function: ```rust hl_lines="1" title="lib.rs" #[pallet::weight(T::WeightInfo::set_counter_value())] pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult { ensure_root(origin)?; ensure!( new_value <= T::CounterMaxValue::get(), Error::<T>::CounterValueExceedsMax ); CounterValue::<T>::put(new_value); Self::deposit_event(Event::<T>::CounterValueSet { counter_value: new_value, }); Ok(()) } ``` You must apply similar weight annotations to the other extrinsics in your pallet. Add the `#[pallet::weight(T::WeightInfo::function_name())]` attribute to both `increment` and `decrement`, replacing `function_name` with the respective function names from your `WeightInfo` trait. For testing purposes, you must implement the weight calculations in your mock runtime. Open `custom-pallet/src/mock.rs` and add: ```rust hl_lines="4" title="mock.rs" impl custom_pallet::Config for Test { type RuntimeEvent = RuntimeEvent; type CounterMaxValue = CounterMaxValue; type WeightInfo = custom_pallet::weights::SubstrateWeight<Test>; } ``` Finally, configure the actual weight values in your production runtime. In `runtime/src/config/mod.rs`, add: ```rust hl_lines="5" title="mod.rs" // Configure custom pallet. impl custom_pallet::Config for Runtime { type RuntimeEvent = RuntimeEvent; type CounterMaxValue = CounterMaxValue; type WeightInfo = custom_pallet::weights::SubstrateWeight<Runtime>; } ``` Your pallet is now complete with full testing and benchmarking support, ready for production use. ## Where to Go Next
- Tutorial __Runtime Upgrade__ --- Learn how to safely perform runtime upgrades for your Polkadot SDK-based blockchain, including step-by-step instructions for preparing, submitting, and verifying upgrades. [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-unit-testing/ --- BEGIN CONTENT --- --- title: Pallet Unit Testing description: Discover how to create thorough unit tests for pallets built with the Polkadot SDK, using a custom pallet as a practical example. tutorial_badge: Intermediate categories: Parachains --- # Pallet Unit Testing ## Introduction You have learned how to create a new pallet in the [Build a Custom Pallet](/tutorials/polkadot-sdk/parachains/zero-to-hero/build-custom-pallet/){target=\_blank} tutorial; now you will see how to test the pallet to ensure that it works as expected. As stated in the [Pallet Testing](/develop/parachains/testing/pallet-testing/){target=\_blank} article, unit testing is crucial for ensuring the reliability and correctness of pallets in Polkadot SDK-based blockchains. Comprehensive testing helps validate pallet functionality, prevent potential bugs, and maintain the integrity of your blockchain logic. This tutorial will guide you through creating a unit testing suite for a custom pallet created in the [Build a Custom Pallet](/tutorials/polkadot-sdk/parachains/zero-to-hero/build-custom-pallet/){target=\_blank} tutorial, covering essential testing aspects and steps. ## Prerequisites To set up your testing environment for Polkadot SDK pallets, you'll need: - [Polkadot SDK dependencies](/develop/parachains/install-polkadot-sdk/){target=\_blank} installed - Basic understanding of Substrate/Polkadot SDK concepts - A custom pallet implementation; see the [Build a Custom Pallet](/tutorials/polkadot-sdk/parachains/zero-to-hero/build-custom-pallet/){target=\_blank} tutorial - Familiarity with [Rust testing frameworks](https://doc.rust-lang.org/book/ch11-01-writing-tests.html){target=\_blank} ## Set Up the Testing Environment To effectively create the test environment for your pallet, you'll need to follow these steps: 1. Move to the project directory ```bash cd custom-pallet ``` 2. Create `mock.rs` and `tests.rs` files (leave these files empty for now; they will be filled in later): ```bash touch src/mock.rs touch src/tests.rs ``` 3. Include them in your `lib.rs` module: ```rust hl_lines="5-9" title="lib.rs" #![cfg_attr(not(feature = "std"), no_std)] pub use pallet::*; #[cfg(test)] mod mock; #[cfg(test)] mod tests; ``` ## Implement Mocked Runtime The following portion of code sets up a mock runtime (`Test`) to test the `custom-pallet` in an isolated environment. Using [`frame_support`](https://paritytech.github.io/polkadot-sdk/master/frame_support/index.html){target=\_blank} macros, it defines a minimal runtime configuration with traits such as `RuntimeCall` and `RuntimeEvent` to simulate runtime behavior. The mock runtime integrates the [`System pallet`](https://paritytech.github.io/polkadot-sdk/master/frame_system/index.html){target=\_blank}, which provides core functionality, and the `custom pallet` under specific indices. Copy and paste the following snippet of code into your `mock.rs` file: ```rust title="mock.rs" use crate as custom_pallet; use frame::{prelude::*, runtime::prelude::*, testing_prelude::*}; type Block = frame_system::mocking::MockBlock<Test>; // Configure a mock runtime to test the pallet. 
#[frame_construct_runtime] mod runtime { #[runtime::runtime] #[runtime::derive( RuntimeCall, RuntimeEvent, RuntimeError, RuntimeOrigin, RuntimeFreezeReason, RuntimeHoldReason, RuntimeSlashReason, RuntimeLockId, RuntimeTask )] pub struct Test; #[runtime::pallet_index(0)] pub type System = frame_system; #[runtime::pallet_index(1)] pub type CustomPallet = custom_pallet; } ``` Once you have your mock runtime set up, you can customize it by implementing the configuration traits for the `System pallet` and your `custom-pallet`, along with additional constants and initial states for testing. Here's an example of how to extend the runtime configuration. Copy and paste the following snippet of code below the previous one you added to `mock.rs`: ```rust title="mock.rs" // System pallet configuration #[derive_impl(frame_system::config_preludes::TestDefaultConfig)] impl frame_system::Config for Test { type Block = Block; } // Custom pallet configuration parameter_types! { pub const CounterMaxValue: u32 = 10; } impl custom_pallet::Config for Test { type RuntimeEvent = RuntimeEvent; type CounterMaxValue = CounterMaxValue; type WeightInfo = custom_pallet::weights::SubstrateWeight<Test>; } // Test externalities initialization pub fn new_test_ext() -> TestExternalities { frame_system::GenesisConfig::<Test>::default() .build_storage() .unwrap() .into() } ``` Explanation of the additions: - **System pallet configuration** - implements the `frame_system::Config` trait for the mock runtime, setting up the basic system functionality and specifying the block type - **Custom pallet configuration** - defines the `Config` trait for the `custom-pallet`, including a constant (`CounterMaxValue`) to set the maximum allowed counter value. In this case, that value is set to 10 for testing purposes - **Test externalities initialization** - the `new_test_ext()` function initializes the mock runtime with default configurations, creating a controlled environment for testing ### Full Mocked Runtime Expand the following item to see the complete `mock.rs` implementation for the mock runtime. ??? code "mock.rs" ```rust title="mock.rs" // This file is part of 'custom-pallet'. // SPDX-License-Identifier: MIT-0 // Permission is hereby granted, free of charge, to any person obtaining a copy // of this software and associated documentation files (the "Software"), to deal // in the Software without restriction, including without limitation the rights // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell // copies of the Software, and to permit persons to whom the Software is // furnished to do so. // // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE // SOFTWARE. use crate as custom_pallet; use frame::{prelude::*, runtime::prelude::*, testing_prelude::*}; type Block = frame_system::mocking::MockBlock<Test>; // Configure a mock runtime to test the pallet. #[frame_construct_runtime] mod runtime { #[runtime::runtime] #[runtime::derive( RuntimeCall, RuntimeEvent, RuntimeError, RuntimeOrigin, RuntimeFreezeReason, RuntimeHoldReason, RuntimeSlashReason, RuntimeLockId, RuntimeTask )] pub struct Test; #[runtime::pallet_index(0)] pub type System = frame_system; #[runtime::pallet_index(1)] pub type CustomPallet = custom_pallet; } // System pallet configuration #[derive_impl(frame_system::config_preludes::TestDefaultConfig)] impl frame_system::Config for Test { type Block = Block; } // Custom pallet configuration parameter_types! { pub const CounterMaxValue: u32 = 10; } impl custom_pallet::Config for Test { type RuntimeEvent = RuntimeEvent; type CounterMaxValue = CounterMaxValue; type WeightInfo = custom_pallet::weights::SubstrateWeight<Test>; } // Test externalities initialization pub fn new_test_ext() -> TestExternalities { frame_system::GenesisConfig::<Test>::default() .build_storage() .unwrap() .into() } ``` ## Implement Test Cases Unit testing a pallet involves creating a comprehensive test suite that validates various scenarios. You ensure your pallet’s reliability, security, and expected behavior under different conditions by systematically testing successful operations, error handling, event emissions, state modifications, and access control. Expand the following item to see the pallet calls to be tested. ??? code "Custom pallet calls" ```rust #[pallet::call] impl<T: Config> Pallet<T> { /// Set the value of the counter. /// /// The dispatch origin of this call must be _Root_. /// /// - `new_value`: The new value to set for the counter. /// /// Emits `CounterValueSet` event when successful. #[pallet::call_index(0)] #[pallet::weight(0)] pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult { ensure_root(origin)?; ensure!( new_value <= T::CounterMaxValue::get(), Error::<T>::CounterValueExceedsMax ); CounterValue::<T>::put(new_value); Self::deposit_event(Event::<T>::CounterValueSet { counter_value: new_value, }); Ok(()) } /// Increment the counter by a specified amount. /// /// This function can be called by any signed account. /// /// - `amount_to_increment`: The amount by which to increment the counter. /// /// Emits `CounterIncremented` event when successful. #[pallet::call_index(1)] #[pallet::weight(0)] pub fn increment(origin: OriginFor<T>, amount_to_increment: u32) -> DispatchResult { let who = ensure_signed(origin)?; let current_value = CounterValue::<T>::get().unwrap_or(0); let new_value = current_value .checked_add(amount_to_increment) .ok_or(Error::<T>::CounterOverflow)?; ensure!( new_value <= T::CounterMaxValue::get(), Error::<T>::CounterValueExceedsMax ); CounterValue::<T>::put(new_value); UserInteractions::<T>::try_mutate(&who, |interactions| -> Result<_, Error<T>> { let new_interactions = interactions .unwrap_or(0) .checked_add(1) .ok_or(Error::<T>::UserInteractionOverflow)?; *interactions = Some(new_interactions); // Store the new value. Ok(()) })?; Self::deposit_event(Event::<T>::CounterIncremented { counter_value: new_value, who, incremented_amount: amount_to_increment, }); Ok(()) } /// Decrement the counter by a specified amount. /// /// This function can be called by any signed account. /// /// - `amount_to_decrement`: The amount by which to decrement the counter. /// /// Emits `CounterDecremented` event when successful. 
#[pallet::call_index(2)] #[pallet::weight(0)] pub fn decrement(origin: OriginFor<T>, amount_to_decrement: u32) -> DispatchResult { let who = ensure_signed(origin)?; let current_value = CounterValue::<T>::get().unwrap_or(0); let new_value = current_value .checked_sub(amount_to_decrement) .ok_or(Error::<T>::CounterValueBelowZero)?; CounterValue::<T>::put(new_value); UserInteractions::<T>::try_mutate(&who, |interactions| -> Result<_, Error<T>> { let new_interactions = interactions .unwrap_or(0) .checked_add(1) .ok_or(Error::<T>::UserInteractionOverflow)?; *interactions = Some(new_interactions); // Store the new value. Ok(()) })?; Self::deposit_event(Event::<T>::CounterDecremented { counter_value: new_value, who, decremented_amount: amount_to_decrement, }); Ok(()) } } ``` The following sub-sections outline various scenarios in which the `custom-pallet` can be tested. Feel free to add these snippets to your `tests.rs` while you read the examples. ### Successful Operations Verify that the counter can be successfully incremented under normal conditions, ensuring the increment works and the correct event is emitted. ```rust title="tests.rs" // Test successful counter increment #[test] fn it_works_for_increment() { new_test_ext().execute_with(|| { System::set_block_number(1); // Initialize the counter value to 0 assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 0)); // Increment the counter by 5 assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(1), 5)); // Check that the event emitted matches the increment operation System::assert_last_event( Event::CounterIncremented { counter_value: 5, who: 1, incremented_amount: 5, } .into(), ); }); } ``` ### Preventing Value Overflow Test that the pallet prevents incrementing beyond the maximum allowed value, protecting against unintended state changes. ```rust title="tests.rs" // Verify increment is blocked when it would exceed max value #[test] fn increment_fails_for_max_value_exceeded() { new_test_ext().execute_with(|| { System::set_block_number(1); // Set counter value close to max (10) assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 7)); // Ensure that incrementing by 4 exceeds max value (10) and fails assert_noop!( CustomPallet::increment(RuntimeOrigin::signed(1), 4), Error::<Test>::CounterValueExceedsMax // Expecting CounterValueExceedsMax error ); }); } ``` ### Origin and Access Control Confirm that sensitive operations like setting counter value are restricted to authorized origins, preventing unauthorized modifications. ```rust title="tests.rs" // Ensure non-root accounts cannot set counter value #[test] fn set_counter_value_fails_for_non_root() { new_test_ext().execute_with(|| { System::set_block_number(1); // Ensure only root (privileged account) can set counter value assert_noop!( CustomPallet::set_counter_value(RuntimeOrigin::signed(1), 5), // non-root account sp_runtime::traits::BadOrigin // Expecting a BadOrigin error ); }); } ``` ### Edge Case Handling Ensure the pallet gracefully handles edge cases, such as preventing increment operations that would cause overflow. 
```rust title="tests.rs" // Ensure increment fails on u32 overflow #[test] fn increment_handles_overflow() { new_test_ext().execute_with(|| { System::set_block_number(1); // Set counter to 1 so adding u32::MAX overflows assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 1)); assert_noop!( CustomPallet::increment(RuntimeOrigin::signed(1), u32::MAX), Error::<Test>::CounterOverflow ); }); } ``` ### Verify State Changes Test that pallet operations modify the internal state correctly and maintain expected storage values across different interactions. ```rust title="tests.rs" // Check that user interactions are correctly tracked #[test] fn user_interactions_increment() { new_test_ext().execute_with(|| { System::set_block_number(1); // Initialize counter value to 0 assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 0)); // Increment by 5 and decrement by 2 assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(1), 5)); assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(1), 2)); // Check if the user interactions are correctly tracked assert_eq!(UserInteractions::<Test>::get(1).unwrap_or(0), 2); // User should have 2 interactions }); } ``` ### Full Test Suite Expand the following item to see the complete `tests.rs` implementation for the custom pallet. ??? code "tests.rs" ```rust title="tests.rs" // This file is part of 'custom-pallet'. // SPDX-License-Identifier: MIT-0 // Permission is hereby granted, free of charge, to any person obtaining a copy // of this software and associated documentation files (the "Software"), to deal // in the Software without restriction, including without limitation the rights // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell // copies of the Software, and to permit persons to whom the Software is // furnished to do so. // // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE // SOFTWARE. 
use crate::{mock::*, Error, Event, UserInteractions}; use frame::deps::sp_runtime; use frame::testing_prelude::*; // Verify root can successfully set counter value #[test] fn it_works_for_set_counter_value() { new_test_ext().execute_with(|| { System::set_block_number(1); // Set counter value within max allowed (10) assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 5)); // Ensure that the correct event is emitted when the value is set System::assert_last_event(Event::CounterValueSet { counter_value: 5 }.into()); }); } // Ensure non-root accounts cannot set counter value #[test] fn set_counter_value_fails_for_non_root() { new_test_ext().execute_with(|| { System::set_block_number(1); // Ensure only root (privileged account) can set counter value assert_noop!( CustomPallet::set_counter_value(RuntimeOrigin::signed(1), 5), // non-root account sp_runtime::traits::BadOrigin // Expecting a BadOrigin error ); }); } // Check that setting value above max is prevented #[test] fn set_counter_value_fails_for_max_value_exceeded() { new_test_ext().execute_with(|| { System::set_block_number(1); // Ensure the counter value cannot be set above the max limit (10) assert_noop!( CustomPallet::set_counter_value(RuntimeOrigin::root(), 11), Error::<Test>::CounterValueExceedsMax // Expecting CounterValueExceedsMax error ); }); } // Test successful counter increment #[test] fn it_works_for_increment() { new_test_ext().execute_with(|| { System::set_block_number(1); // Initialize the counter value to 0 assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 0)); // Increment the counter by 5 assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(1), 5)); // Check that the event emitted matches the increment operation System::assert_last_event( Event::CounterIncremented { counter_value: 5, who: 1, incremented_amount: 5, } .into(), ); }); } // Verify increment is blocked when it would exceed max value #[test] fn increment_fails_for_max_value_exceeded() { new_test_ext().execute_with(|| { System::set_block_number(1); // Set counter value close to max (10) assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 7)); // Ensure that incrementing by 4 exceeds max value (10) and fails assert_noop!( CustomPallet::increment(RuntimeOrigin::signed(1), 4), Error::<Test>::CounterValueExceedsMax // Expecting CounterValueExceedsMax error ); }); } // Ensure increment fails on u32 overflow #[test] fn increment_handles_overflow() { new_test_ext().execute_with(|| { System::set_block_number(1); // Set counter to 1 so adding u32::MAX overflows assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 1)); assert_noop!( CustomPallet::increment(RuntimeOrigin::signed(1), u32::MAX), Error::<Test>::CounterOverflow ); }); } // Test successful counter decrement #[test] fn it_works_for_decrement() { new_test_ext().execute_with(|| { System::set_block_number(1); // Initialize counter value to 8 assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 8)); // Decrement counter by 3 assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(1), 3)); // Ensure the event matches the decrement action System::assert_last_event( Event::CounterDecremented { counter_value: 5, who: 1, decremented_amount: 3, } .into(), ); }); } // Verify decrement is blocked when it would go below zero #[test] fn decrement_fails_for_below_zero() { new_test_ext().execute_with(|| { System::set_block_number(1); // Set counter value to 5 assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 5)); // Ensure that decrementing by 6 fails as it would result in a negative value assert_noop!( CustomPallet::decrement(RuntimeOrigin::signed(1), 6), Error::<Test>::CounterValueBelowZero // Expecting CounterValueBelowZero error ); }); } // Check that user interactions are correctly tracked #[test] fn user_interactions_increment() { new_test_ext().execute_with(|| { System::set_block_number(1); // Initialize counter value to 0 assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 0)); // Increment by 5 and decrement by 2 assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(1), 5)); assert_ok!(CustomPallet::decrement(RuntimeOrigin::signed(1), 2)); // Check if the user interactions are correctly tracked assert_eq!(UserInteractions::<Test>::get(1).unwrap_or(0), 2); // User should have 2 interactions }); } // Ensure user interactions prevent overflow #[test] fn user_interactions_overflow() { new_test_ext().execute_with(|| { System::set_block_number(1); // Initialize counter value to 0 assert_ok!(CustomPallet::set_counter_value(RuntimeOrigin::root(), 0)); // Set user interactions to max value (u32::MAX) UserInteractions::<Test>::insert(1, u32::MAX); // Ensure that incrementing by 5 fails due to overflow in user interactions assert_noop!( CustomPallet::increment(RuntimeOrigin::signed(1), 5), Error::<Test>::UserInteractionOverflow // Expecting UserInteractionOverflow error ); }); } ``` ## Run the Tests Execute the test suite for your custom pallet using Cargo's test command. This will run all defined test cases and provide detailed output about the test results. ```bash cargo test --package custom-pallet ``` After running the test suite, you should see the following output in your terminal:
cargo test --package custom-pallet
running 12 tests
test mock::__construct_runtime_integrity_test::runtime_integrity_tests ... ok
test mock::test_genesis_config_builds ... ok
test test::set_counter_value_fails_for_max_value_exceeded ... ok
test test::set_counter_value_fails_for_non_root ... ok
test test::user_interactions_increment ... ok
test test::it_works_for_increment ... ok
test test::it_works_for_set_counter_value ... ok
test test::it_works_for_decrement ... ok
test test::increment_handles_overflow ... ok
test test::decrement_fails_for_below_zero ... ok
test test::increment_fails_for_max_value_exceeded ... ok
test test::user_interactions_overflow ... ok
test result: ok. 12 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.01s

Doc-tests custom_pallet
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
    
## Where to Go Next
- Tutorial __Add Pallets to the Runtime__ --- Learn how to add and integrate custom pallets in your Polkadot SDK-based blockchain [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/add-pallets-to-runtime/)
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/
--- BEGIN CONTENT ---
---
title: Runtime Upgrades
description: Learn how to safely perform runtime upgrades for your Polkadot SDK-based blockchain, including step-by-step instructions.
tutorial_badge: Intermediate
---

# Runtime Upgrades

## Introduction

Upgrading the runtime of your Polkadot SDK-based blockchain is a fundamental feature that allows you to add new functionality, fix bugs, or improve performance without requiring a hard fork. Runtime upgrades are performed by submitting a special extrinsic that replaces the existing on-chain WASM runtime code. This process is trustless, transparent, and can be executed either through governance or using sudo, depending on your chain's configuration. This tutorial will guide you through the steps to prepare, submit, and verify a runtime upgrade for your parachain or standalone Polkadot SDK-based chain. For this example, you'll continue from the state left by the previous tutorials, where you have a custom pallet integrated into your runtime.

## Update the Runtime

In this section, you will add a new feature to your existing custom pallet and upgrade your runtime to include this new functionality.

### Start Your Chain

Before making any changes, ensure your blockchain node is running properly:

```bash
polkadot-omni-node --chain ./chain_spec.json --dev
```

Verify your chain is operational and note the runtime version shown in Polkadot.js Apps. For more details, check the [Interact with the Node](/tutorials/polkadot-sdk/parachains/zero-to-hero/set-up-a-template/#interact-with-the-node){target=\_blank} section.

![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-01.webp)

As you can see, the runtime version is `1` since this chain has not been upgraded. Keep this chain running in the background.

### Add a New Feature

Now, you can extend your existing custom pallet by adding a new dispatchable function to reset the counter to zero. This provides a meaningful upgrade that demonstrates new functionality. Copy and paste the following code at the end of your `lib.rs` file in your custom pallet:

```rust title="custom-pallet/src/lib.rs" hl_lines="5-17"
#[pallet::call]
impl<T: Config> Pallet<T> {
    // ... existing calls like increment, decrement, etc.

    /// Reset the counter to zero.
    ///
    /// The dispatch origin of this call must be _Root_.
    ///
    /// Emits `CounterValueSet` event when successful.
    #[pallet::call_index(3)]
    #[pallet::weight(0)]
    pub fn reset_counter(origin: OriginFor<T>) -> DispatchResult {
        ensure_root(origin)?;
        CounterValue::<T>::put(0u32);
        Self::deposit_event(Event::CounterValueSet { counter_value: 0 });
        Ok(())
    }
}
```

The `reset_counter` function will be a Root-only operation that sets the counter value back to zero, regardless of its current state. This is useful for administrative purposes, such as clearing the counter after maintenance, testing, or at the start of new periods. Unlike the existing increment/decrement functions that any signed user can call, this reset function requires Root privileges, making it a controlled administrative action. Ensure that your runtime compiles by running:

```bash
cargo build --release
```

Now, you can test this new function in `pallets/custom-pallet/src/tests.rs`:

```rust title="custom-pallet/src/tests.rs" hl_lines="4-39"
// ... existing unit tests ...

#[test]
fn reset_counter_works() {
    new_test_ext().execute_with(|| {
        System::set_block_number(1);
        // First increment the counter
        assert_ok!(CustomPallet::increment(RuntimeOrigin::signed(1), 1));
        // Ensure the event matches the increment action
        System::assert_last_event(
            Event::CounterIncremented {
                counter_value: 1,
                who: 1,
                incremented_amount: 1,
            }
            .into(),
        );
        // Reset should work with root origin
        assert_ok!(CustomPallet::reset_counter(RuntimeOrigin::root()));
        // Check that the event was emitted
        System::assert_last_event(Event::CounterValueSet { counter_value: 0 }.into());
    });
}

#[test]
fn reset_counter_fails_without_root() {
    new_test_ext().execute_with(|| {
        System::set_block_number(1);
        // Should fail with non-root origin
        assert_noop!(
            CustomPallet::reset_counter(RuntimeOrigin::signed(1)),
            sp_runtime::DispatchError::BadOrigin
        );
    });
}
```

Ensure that your tests pass by running:

```bash
cargo test --package custom-pallet
```

### Update Runtime Configuration

Since you've only added new functionality without changing existing APIs, minimal runtime changes are needed. However, verify that your runtime configuration is still compatible. If you've added new configuration parameters to your pallet, update them accordingly in the `runtime/configs/mod.rs`.

### Bump the Runtime Version

This is a critical step: you must increment the runtime version numbers to signal that an upgrade has occurred. In `runtime/src/lib.rs`:

```rust title="lib.rs" hl_lines="6"
#[sp_version::runtime_version]
pub const VERSION: RuntimeVersion = RuntimeVersion {
    spec_name: alloc::borrow::Cow::Borrowed("parachain-template-runtime"),
    impl_name: alloc::borrow::Cow::Borrowed("parachain-template-runtime"),
    authoring_version: 1,
    spec_version: 2, // <-- increment this (was 1)
    impl_version: 0,
    apis: apis::RUNTIME_API_VERSIONS,
    transaction_version: 1,
    system_version: 1,
};
```

Also update the `runtime/Cargo.toml` version:

```toml title="Cargo.toml" hl_lines="4"
[package]
name = "parachain-template-runtime"
description = "A parachain runtime template built with Substrate and Cumulus, part of Polkadot Sdk."
version = "0.2.0" # <-- increment this version
# ... rest of your Cargo.toml
```

For more information about runtime versioning, check the [Runtime Upgrades](/develop/parachains/maintenance/runtime-upgrades#runtime-versioning){target=\_blank} guide.

### Build the New Runtime

Navigate to your project root:

```bash
cd /path/to/your/parachain-template
```

Build the new runtime:

```bash
cargo build --release
```

Verify that you have the proper WASM builds by executing:

```bash
ls -la target/release/wbuild/parachain-template-runtime/
```

If you see the following files listed, you are ready to submit the runtime upgrade to your running chain:
ls -la target/release/wbuild/parachain-template-runtime/
parachain_template_runtime.wasm parachain_template_runtime.compact.wasm parachain_template_runtime.compact.compressed.wasm
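The next section walks through submitting this upgrade with the Polkadot.js Apps UI. If you would rather script the submission, the same `sudo(system.setCode(...))` call can be built with `@polkadot/api`; the following is a minimal sketch, assuming the local dev node at `ws://localhost:9944`, the `//Alice` dev account as the sudo key, and the artifact path shown above:

```javascript
// upgrade.js - a sketch of submitting sudo(system.setCode(...)) from a script
const fs = require('fs');
const { ApiPromise, WsProvider, Keyring } = require('@polkadot/api');

async function main() {
  // Connect to the local dev node started earlier in this tutorial
  const api = await ApiPromise.create({ provider: new WsProvider('ws://localhost:9944') });

  // Read the compressed runtime produced by `cargo build --release`
  const wasm = fs.readFileSync(
    './target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm'
  );

  // Wrap system.setCode in sudo; on dev chains //Alice is typically the sudo key.
  // For very large runtimes, sudo.sudoUncheckedWeight is sometimes used instead.
  const sudoKey = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');
  const setCode = api.tx.system.setCode(`0x${wasm.toString('hex')}`);

  await api.tx.sudo.sudo(setCode).signAndSend(sudoKey, async ({ status }) => {
    if (status.isInBlock) {
      // Confirm the bump: specVersion should now read 2
      const last = await api.query.system.lastRuntimeUpgrade();
      console.log('Upgrade in block; lastRuntimeUpgrade:', last.toHuman());
      process.exit(0);
    }
  });
}

main().catch(console.error);
```

Either way, the upgrade takes effect as soon as the extrinsic is included in a block.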
## Submit the Runtime Upgrade

You can submit a runtime upgrade using the [Sudo pallet](https://paritytech.github.io/polkadot-sdk/master/pallet_sudo/index.html){target=\_blank} (for development chains) or via on-chain governance (for production chains). 1. Open [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank} and connect to your node 2. Click on **Developer** and select the **Extrinsics** option from the dropdown ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-02.webp) 3. Prepare the **sudo** call: 1. Select the **sudo** pallet 2. Select the **sudo(call)** extrinsic from the list ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-03.webp) 4. In the **sudo** call: 1. Select the **system** call 2. Select **setCode** extrinsic from the list ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-04.webp) 5. For the `code` parameter, click **file upload** and select your WASM runtime file: - Use `parachain_template_runtime.compact.compressed.wasm` if available (smaller file) - Otherwise, use `parachain_template_runtime.wasm` ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-05.webp) 6. Click **Submit Transaction** and sign the transaction with the sudo key ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-06.webp)

!!! info "Using Governance (Production)"
    For production chains with governance enabled, you must follow the on-chain democratic process. This involves submitting a preimage of the new runtime code (using the Democracy pallet), referencing the preimage hash in a proposal, and then following your chain's governance process (such as voting and council approval) until the proposal passes and is enacted. This ensures that runtime upgrades are transparent and subject to community oversight.

## Verify the Upgrade

After the runtime upgrade extrinsic is included in a block, verify that the upgrade was successful.

### Check Runtime Version

1. In Polkadot.js Apps, navigate to the **Chain State** section 1. Click the **Developer** dropdown 2. Click the **Chain State** option ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-07.webp) 2. Query the runtime spec version 1. Select the **System** pallet 2. Select the **lastRuntimeUpgrade()** query ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-08.webp) 3. Click the **+** button to query the current runtime version ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-09.webp) 4. Verify that the `specVersion` matches your new runtime (should be `2` if you followed the example) ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-10.webp)

### Test New Functionality

1. Navigate to **Developer > Extrinsics** 2. Select your custom pallet from the dropdown 3. You should now see the new `resetCounter` function available ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/runtime-upgrade/runtime-upgrade-11.webp) Now, you can test the new functionality: - First, increment the counter using your existing function - Then use the new reset function (note: you'll need sudo/root privileges) - Verify the counter value is reset to 0

## Where to Go Next
- Tutorial __Deploy on Paseo TestNet__ --- Deploy your Polkadot SDK blockchain on Paseo! Follow this step-by-step guide for a seamless journey to a successful TestNet deployment. [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/)
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/parachains/zero-to-hero/set-up-a-template/
--- BEGIN CONTENT ---
---
title: Set Up a Template
description: Learn to compile and run a local parachain node using Polkadot SDK. Launch, run, and interact with a pre-configured runtime template.
tutorial_badge: Beginner
categories: Basics, Parachains
---

# Set Up a Template

## Introduction

[Polkadot SDK](https://github.com/paritytech/polkadot-sdk){target=\_blank} offers a versatile and extensible blockchain development framework, enabling you to create custom blockchains tailored to your specific application or business requirements. This tutorial guides you through compiling and running a parachain node using the [Polkadot SDK Parachain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/parachain){target=\_blank}. The parachain template provides a pre-configured, functional runtime you can use in your local development environment. It includes several key components, such as user accounts and account balances. These predefined elements allow you to experiment with common blockchain operations without requiring initial template modifications. In this tutorial, you will: - Build and start a local parachain node using the node template - Explore how to use a front-end interface to: - View information about blockchain activity - Submit a transaction By the end of this tutorial, you'll have a working local parachain and understand how to interact with it, setting the foundation for further customization and development.

## Prerequisites

Before getting started, ensure you have done the following: - Completed the [Install Polkadot SDK Dependencies](/develop/parachains/install-polkadot-sdk/){target=\_blank} guide and successfully installed [Rust](https://www.rust-lang.org/){target=\_blank} and the required packages to set up your development environment For this tutorial series, you need to use Rust `1.86`. Newer versions of the compiler may not work with this parachain template version. Run the following commands to set up the correct Rust version:

```bash
rustup default 1.86
rustup target add wasm32-unknown-unknown --toolchain 1.86-aarch64-apple-darwin
rustup component add rust-src --toolchain 1.86-aarch64-apple-darwin
```

## Utility Tools

This tutorial requires two essential tools: - [**Chain spec builder**](https://crates.io/crates/staging-chain-spec-builder/{{dependencies.crates.chain_spec_builder.version}}){target=\_blank} - is a Polkadot SDK utility for generating chain specifications. Refer to the [Generate Chain Specs](/develop/parachains/deployment/generate-chain-specs/){target=\_blank} documentation for detailed usage. Install it by executing the following command:

```bash
cargo install --locked staging-chain-spec-builder@{{dependencies.crates.chain_spec_builder.version}}
```

This installs the `chain-spec-builder` binary. - [**Polkadot Omni Node**](https://crates.io/crates/polkadot-omni-node/{{dependencies.crates.polkadot_omni_node.version}}){target=\_blank} - is a white-labeled binary, released as a part of Polkadot SDK, that can act as the collator of a parachain in production, with all the related auxiliary functionalities that a normal collator node has: RPC server, archiving state, etc. Moreover, it can also run the wasm blob of the parachain locally for testing and development.
To install it, run the following command: ```bash cargo install --locked polkadot-omni-node@{{dependencies.crates.polkadot_omni_node.version}} ``` This installs the `polkadot-omni-node` binary. ## Compile the Runtime The [Polkadot SDK Parachain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/parachain){target=\_blank} provides a ready-to-use development environment for building using the [Polkadot SDK](https://github.com/paritytech/polkadot-sdk){target=\_blank}. Follow these steps to compile the runtime: 1. Clone the template repository: ```bash git clone -b stable2412 https://github.com/paritytech/polkadot-sdk-parachain-template.git parachain-template ``` 2. Navigate into the project directory: ```bash cd parachain-template ``` 3. Compile the runtime: ```bash cargo build --release --locked ``` !!!tip Initial compilation may take several minutes, depending on your machine specifications. Use the `--release` flag for improved runtime performance compared to the default `--debug` build. If you need to troubleshoot issues, the `--debug` build provides better diagnostics. For production deployments, consider using a dedicated [`--profile production`](https://github.com/paritytech/polkadot-sdk-parachain-template/blob/v0.0.4/Cargo.toml#L42-L45){target=\_blank} flag - this can provide an additional 15-30% performance improvement over the standard `--release` profile. 4. Upon successful compilation, you should see output similar to:
cargo build --release --locked
...
Finished `release` profile [optimized] target(s) in 1.79s
## Start the Local Chain

After successfully compiling your runtime, you can spin up a local chain and produce blocks. This process will start your local parachain and allow you to interact with it. First, you'll generate a chain specification, which defines your network's identity, initial connections, and genesis state: the foundational configuration for how your nodes connect and what initial state they agree upon. Then you'll run the chain. Follow these steps to launch your node in development mode: 1. Generate the chain specification file of your parachain:

```bash
chain-spec-builder create -t development \
  --relay-chain paseo \
  --para-id 1000 \
  --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \
  named-preset development
```

2. Start the omni node with the generated chain spec. You'll start it in development mode (without a relay chain config), producing and finalizing blocks:

```bash
polkadot-omni-node --chain ./chain_spec.json --dev
```

The `--dev` option does the following: - Deletes all active data (keys, blockchain database, networking information) when stopped - Ensures a clean working state each time you restart the node 3. Verify that your node is running by reviewing the terminal output. You should see something similar to:
polkadot-omni-node --chain ./chain_spec.json --dev
2024-12-12 12:44:02 polkadot-omni-node
2024-12-12 12:44:02 ✌️ version 0.1.0-da2dd9b7737
2024-12-12 12:44:02 ❤️ by Parity Technologies admin@parity.io, 2017-2024
2024-12-12 12:44:02 📋 Chain specification: Custom
2024-12-12 12:44:02 🏷 Node name: grieving-drum-1926
2024-12-12 12:44:02 👤 Role: AUTHORITY
2024-12-12 12:44:02 💾 Database: RocksDb at /var/folders/x0/xl_kjddj3ql3bx7752yr09hc0000gn/T/substrateoUrZMQ/chains/custom/db/full
2024-12-12 12:44:03 [Parachain] assembling new collators for new session 0 at #0
2024-12-12 12:44:03 [Parachain] assembling new collators for new session 1 at #0
2024-12-12 12:44:03 [Parachain] 🔨 Initializing Genesis block/state (state: 0xa6f8…5b46, header-hash: 0x0579…2153)
2024-12-12 12:44:03 [Parachain] creating SingleState txpool Limit { count: 8192, total_bytes: 20971520 }/Limit { count: 819, total_bytes: 2097152 }.
2024-12-12 12:44:03 [Parachain] Using default protocol ID "sup" because none is configured in the chain specs
2024-12-12 12:44:03 [Parachain] 🏷 Local node identity is: 12D3KooWCSXy6rBuJVsn5mx8uyNqkdfNfFzEbToi4hR31v3PwdgX
2024-12-12 12:44:03 [Parachain] Running libp2p network backend
2024-12-12 12:44:03 [Parachain] 💻 Operating system: macos
2024-12-12 12:44:03 [Parachain] 💻 CPU architecture: aarch64
2024-12-12 12:44:03 [Parachain] 📦 Highest known block at #0
2024-12-12 12:44:03 [Parachain] 〽️ Prometheus exporter started at 127.0.0.1:9615
2024-12-12 12:44:03 [Parachain] Running JSON-RPC server: addr=127.0.0.1:9944,[::1]:9944
2024-12-12 12:44:06 [Parachain] 🙌 Starting consensus session on top of parent 0x05794f9adcdaa23a5edd335e8310637d3a7e6e9393f2b0794af7d3e219f62153 (#0)
2024-12-12 12:44:06 [Parachain] 🎁 Prepared block for proposing at 1 (2 ms) hash: 0x6fbea46711e9b38bab8e7877071423cd03feab03d3f4a0d578a03ab42dcee34b; parent_hash: 0x0579…2153; end: NoMoreTransactions; extrinsics_count: 2
2024-12-12 12:44:06 [Parachain] 🏆 Imported #1 (0x0579…2153 → 0x6fbe…e34b)
...
4. Confirm that your blockchain is producing new blocks by checking if the number after `finalized` is increasing
...
2024-12-12 12:49:20 [Parachain] 💤 Idle (0 peers), best: #1 (0x6fbe…e34b), finalized #1 (0x6fbe…e34b), ⬇ 0 ⬆ 0
...
2024-12-12 12:49:25 [Parachain] 💤 Idle (0 peers), best: #3 (0x7543…bcfc), finalized #3 (0x7543…bcfc), ⬇ 0 ⬆ 0
...
2024-12-12 12:49:30 [Parachain] 💤 Idle (0 peers), best: #4 (0x0478…8d63), finalized #4 (0x0478…8d63), ⬇ 0 ⬆ 0
...
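If you prefer a programmatic check over reading the logs, a short `@polkadot/api` script can subscribe to finalized heads; this is a sketch that assumes the node's default `ws://localhost:9944` JSON-RPC endpoint:

```javascript
// watch-blocks.js - confirm the node is finalizing blocks (sketch)
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  // The template node serves JSON-RPC on ws://localhost:9944 by default
  const api = await ApiPromise.create({ provider: new WsProvider('ws://localhost:9944') });

  // Log every newly finalized block; the number should keep increasing
  await api.rpc.chain.subscribeFinalizedHeads((header) => {
    console.log(`Finalized #${header.number.toNumber()} (${header.hash.toHex()})`);
  });
}

main().catch(console.error);
```

Each printed block number should be higher than the last, mirroring the `finalized #N` values in the log above.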
The details of the log output will be explored in a later tutorial. For now, knowing that your node is running and producing blocks is sufficient.

## Interact with the Node

When running the template node, it is accessible by default at `ws://localhost:9944`. To interact with your node using the [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank} interface, follow these steps: 1. Open [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank} in your web browser and click the network icon (which should be the Polkadot logo) in the top left corner as shown in the image below: ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/set-up-a-template/set-up-a-template-1.webp) 2. Connect to your local node: 1. Scroll to the bottom and select **Development** 2. Choose **Custom** 3. Enter `ws://localhost:9944` in the input field 4. Click the **Switch** button ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/set-up-a-template/set-up-a-template-2.webp) 3. Verify connection: - Once connected, you should see **parachain-template-runtime** in the top left corner - The interface will display information about your local blockchain ![](/images/tutorials/polkadot-sdk/parachains/zero-to-hero/set-up-a-template/set-up-a-template-3.webp) You are now connected to your local node and can interact with it through the Polkadot.js Apps interface. This tool enables you to explore blocks, execute transactions, and interact with your blockchain's features. For in-depth guidance on using the interface effectively, refer to the [Polkadot.js Guides](https://wiki.polkadot.network/general/polkadotjs/){target=\_blank} available on the Polkadot Wiki.

## Stop the Node

When you're done exploring your local node, you can stop it to remove any state changes you've made. Since you started the node with the `--dev` option, stopping the node will purge all persistent block data, allowing you to start fresh the next time. To stop the local node: 1. Return to the terminal window where the node output is displayed 2. Press `Control-C` to stop the running process 3. Verify that your terminal returns to the prompt in the `parachain-template` directory

## Where to Go Next
- Tutorial __Build a Custom Pallet__ --- Build your own custom pallet for Polkadot SDK-based blockchains! Follow this step-by-step guide to create and configure a simple counter pallet from scratch. [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/build-custom-pallet/)
--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/
--- BEGIN CONTENT ---
---
title: Convert Assets on Asset Hub
description: A guide detailing the step-by-step process of converting assets on Asset Hub, helping users efficiently navigate asset management on the platform.
tutorial_badge: Intermediate
categories: dApps
---

# Convert Assets on Asset Hub

## Introduction

Asset Conversion is an Automated Market Maker (AMM) utilizing [Uniswap V2](https://github.com/Uniswap/v2-core){target=\_blank} logic and implemented as a pallet on Polkadot's Asset Hub. For more details about this feature, see the [Asset Conversion on Asset Hub](/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/){target=\_blank} page. This guide will provide detailed information about the key functionalities offered by the [Asset Conversion](https://github.com/paritytech/polkadot-sdk/tree/{{dependencies.repositories.polkadot_sdk.version}}/substrate/frame/asset-conversion){target=\_blank} pallet on Asset Hub, including: - Creating a liquidity pool - Adding liquidity to a pool - Swapping assets - Withdrawing liquidity from a pool

## Prerequisites

Before converting assets on Asset Hub, you must ensure you have: - Access to the [Polkadot.js Apps](https://polkadot.js.org/apps){target=\_blank} interface and a connection with the intended blockchain - A funded wallet containing the assets you wish to convert and enough available funds to cover the transaction fees - An asset registered on Asset Hub that you want to convert. If you haven't created an asset on Asset Hub yet, refer to the [Register a Local Asset](/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/){target=\_blank} or [Register a Foreign Asset](/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/){target=\_blank} documentation to create an asset.

## Create a Liquidity Pool

If an asset on Asset Hub does not have an existing liquidity pool, the first step is to create one. The Asset Conversion pallet provides the `createPool` extrinsic for this purpose; it creates an empty liquidity pool and a new `LP token` asset.

!!! tip
    A testing token with the asset ID `1112` and the name `PPM` was created for this example.

As stated in the [Test Environment Setup](#test-environment-setup) section, this tutorial is based on the assumption that you have an instance of Polkadot Asset Hub running locally. Therefore, the demo liquidity pool will be created between DOT and PPM tokens. However, the same steps can be applied to any other asset on Asset Hub. From the Asset Hub perspective, the Multilocation that identifies the PPM token is the following:

```javascript
{ parents: 0, interior: { X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }] } }
```

The `PalletInstance` value of `50` represents the Assets pallet on Asset Hub. The `GeneralIndex` value of `1112` is the PPM asset's asset ID. To create the liquidity pool, you can follow these steps: 1. Navigate to the **Extrinsics** section on the Polkadot.js Apps interface 1. Select **Developer** from the top menu 2. Click on **Extrinsics** from the dropdown menu ![Extrinsics Section](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-1.webp) 2. Choose the **`AssetConversion`** pallet and click on the **`createPool`** extrinsic 1. Select the **`AssetConversion`** pallet 2.
Choose the **`createPool`** extrinsic from the list of available extrinsics ![Create Pool Extrinsic](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-2.webp) 3. Fill in the required fields: 1. **`asset1`** - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents: ```javascript { parents: 0, interior: 'Here' } ``` 2. **`asset2`** - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies: ```javascript { parents: 0, interior: { X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }] } } ``` 3. Click on **Submit Transaction** to create the liquidity pool ![Create Pool Fields](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-3.webp) Signing and submitting the transaction triggers the creation of the liquidity pool. To verify the new pool's creation, check the **Explorer** section on the Polkadot.js Apps interface and ensure that the **`PoolCreated`** event was emitted. ![Pool Created Event](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-4.webp) As the preceding image shows, the **`lpToken`** ID created for this pool is 19. This ID is essential to identify the liquidity pool and associated LP tokens. ## Add Liquidity to a Pool The `addLiquidity` extrinsic allows users to provide liquidity to a pool of two assets. Users specify their preferred amounts for both assets and minimum acceptable quantities. The function determines the best asset contribution, which may vary from the amounts desired but won't fall below the specified minimums. Providers receive liquidity tokens representing their pool portion in return for their contribution. To add liquidity to a pool, follow these steps: 1. Navigate to the **Extrinsics** section on the Polkadot.js Apps interface 1. Select **Developer** from the top menu 2. Click on **Extrinsics** from the dropdown menu ![Extrinsics Section](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-1.webp) 2. Choose the **`assetConversion`** pallet and click on the **`addLiquidity`** extrinsic 1. Select the **`assetConversion`** pallet 2. Choose the **`addLiquidity`** extrinsic from the list of available extrinsics ![Add Liquidity Extrinsic](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-5.webp) 3. Fill in the required fields: 1. **`asset1`** - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents: ```javascript { parents: 0, interior: 'Here' } ``` 2. **`asset2`** - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies: ```javascript { parents: 0, interior: { X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }] } } ``` 3. **`amount1Desired`** - the amount of the first asset that will be contributed to the pool 4. **`amount2Desired`** - the quantity of the second asset intended for pool contribution 5. **`amount1Min`** - the minimum amount of the first asset that will be contributed 6. **`amount2Min`** - the lowest acceptable quantity of the second asset for contribution 7. **`mintTo`** - the account to which the liquidity tokens will be minted 8. 
Click on **Submit Transaction** to add liquidity to the pool ![Add Liquidity Fields](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-6.webp)

!!! warning
    Ensure that the appropriate amount of tokens provided has been minted previously and is available in your account before adding liquidity to the pool.

In this case, the liquidity provided to the pool is between DOT tokens and PPM tokens with the asset ID 1112 on Polkadot Asset Hub. The intention is to provide liquidity for 1 DOT token (`u128` value of 1000000000000 as it has 10 decimals) and 1 PPM token (`u128` value of 1000000000000 as it also has 10 decimals). Signing and submitting the transaction adds liquidity to the pool. To verify the liquidity addition, check the **Explorer** section on the Polkadot.js Apps interface and ensure that the **`LiquidityAdded`** event was emitted. ![Liquidity Added Event](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-7.webp)

## Swap Assets

### Swap from an Exact Amount of Tokens

The Asset Conversion pallet enables users to swap an exact quantity of one asset for another in a designated liquidity pool. It guarantees the user will receive at least a predetermined minimum amount of the second asset. This function increases trading predictability and allows users to conduct asset exchanges with confidence that they are assured a minimum return. To swap an exact amount of tokens, follow these steps: 1. Navigate to the **Extrinsics** section on the Polkadot.js Apps interface 1. Select **Developer** from the top menu 2. Click on **Extrinsics** from the dropdown menu ![Extrinsics Section](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-1.webp) 2. Choose the **`AssetConversion`** pallet and click on the **`swapExactTokensForTokens`** extrinsic 1. Select the **`AssetConversion`** pallet 2. Choose the **`swapExactTokensForTokens`** extrinsic from the list of available extrinsics ![Swap From Exact Tokens Extrinsic](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-8.webp) 3. Fill in the required fields: 1. **`path: Vec<StagingXcmV3MultiLocation>`** - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements: - **`0: StagingXcmV3MultiLocation`** - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

```javascript
{ parents: 0, interior: 'Here' }
```

- **`1: StagingXcmV3MultiLocation`** - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

```javascript
{ parents: 0, interior: { X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }] } }
```

2. **`amountIn`** - the exact amount of the first asset that will be swapped 3. **`amountOutMin`** - the minimum amount of the second asset that the user expects to receive 4. **`sendTo`** - the account to which the swapped assets will be sent 5. **`keepAlive`** - a boolean value that determines whether the pool should be kept alive after the swap 6. Click on **Submit Transaction** to swap an exact amount of tokens ![Swap For Exact Tokens Fields](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-9.webp)

!!! warning
    Ensure that the appropriate amount of tokens provided has been minted previously and is available in your account before performing the swap.

In this case, the intention is to swap exactly 0.01 DOT token (`u128` value of 100000000000 as it has 10 decimals) for at least 0.04 PPM token (`u128` value of 400000000000 as it also has 10 decimals). Signing and submitting the transaction will execute the swap. To verify execution, check the **Explorer** section on the Polkadot.js Apps interface and make sure that the **`SwapExecuted`** event was emitted. ![Swap From Exact Tokens Event](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-10.webp)

### Swap to an Exact Amount of Tokens

Conversely, the Asset Conversion pallet comes with a function that allows users to trade a variable amount of one asset to acquire a precise quantity of another. It ensures that users stay within a set maximum of the initial asset to obtain the desired amount of the second asset. This provides a method to control transaction costs while achieving the intended result. To swap assets for an exact amount of tokens, follow these steps: 1. Navigate to the **Extrinsics** section on the Polkadot.js Apps interface 1. Select **Developer** from the top menu 2. Click on **Extrinsics** from the dropdown menu ![Extrinsics Section](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-1.webp) 2. Choose the **`AssetConversion`** pallet and click on the **`swapTokensForExactTokens`** extrinsic: 1. Select the **`AssetConversion`** pallet 2. Choose the **`swapTokensForExactTokens`** extrinsic from the list of available extrinsics ![Swap Tokens For Exact Tokens Extrinsic](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-11.webp) 3. Fill in the required fields: 1. **`path: Vec<StagingXcmV3MultiLocation>`** - an array of Multilocations representing the path of the swap. The first and last elements of the array are the input and output assets, respectively. In this case, the path consists of two elements: - **`0: StagingXcmV3MultiLocation`** - the Multilocation of the first asset in the pool. In this case, it is the PPM token, which the following Multilocation represents:

```javascript
{ parents: 0, interior: { X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }] } }
```

- **`1: StagingXcmV3MultiLocation`** - the second asset's Multilocation within the pool. This refers to the DOT token, which the following Multilocation identifies:

```javascript
{ parents: 0, interior: 'Here' }
```

2. **`amountOut`** - the exact amount of the second asset that the user wants to receive 3. **`amountInMax`** - the maximum amount of the first asset that the user is willing to swap 4. **`sendTo`** - the account to which the swapped assets will be sent 5. **`keepAlive`** - a boolean value that determines whether the pool should be kept alive after the swap 6. Click on **Submit Transaction** to swap assets for an exact amount of tokens ![Swap Tokens For Exact Tokens Fields](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-12.webp)

!!! warning
    Before swapping assets, ensure that the tokens provided have been minted previously and are available in your account.

In this case, the intention is to swap at most 0.04 PPM token (`u128` value of 400000000000 as it has 10 decimals) for exactly 0.01 DOT token (`u128` value of 100000000000 as it also has 10 decimals). Signing and submitting the transaction will execute the swap.
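For reference, the same swap can also be composed programmatically. The following is a minimal sketch using `@polkadot/api`; the `ws://127.0.0.1:8000` endpoint matches the Chopsticks fork from the [Test Environment Setup](#test-environment-setup) section, and the `//Alice` account is a placeholder for any funded account you control:

```javascript
// swap.js - a sketch composing the exact-output swap described above
const { ApiPromise, WsProvider, Keyring } = require('@polkadot/api');

// Multilocations used throughout this guide
const DOT = { parents: 0, interior: 'Here' };
const PPM = { parents: 0, interior: { X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }] } };

async function main() {
  // Local Chopsticks fork of Polkadot Asset Hub (see Test Environment Setup)
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:8000') });
  // Placeholder signer; replace with a funded account on your fork
  const signer = new Keyring({ type: 'sr25519' }).addFromUri('//Alice');

  // Spend at most 0.04 PPM to receive exactly 0.01 DOT
  const tx = api.tx.assetConversion.swapTokensForExactTokens(
    [PPM, DOT],       // path: input asset first, output asset last
    100_000_000_000n, // amountOut: exactly 0.01 DOT (10 decimals)
    400_000_000_000n, // amountInMax: at most 0.04 PPM (10 decimals)
    signer.address,   // sendTo
    true              // keepAlive
  );

  await tx.signAndSend(signer, ({ status }) => {
    if (status.isInBlock) {
      console.log('Swap included in block', status.asInBlock.toHex());
      process.exit(0);
    }
  });
}

main().catch(console.error);
```

Swapping the extrinsic name and amount arguments for `swapExactTokensForTokens(path, amountIn, amountOutMin, sendTo, keepAlive)` yields the exact-input variant.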
To verify execution, check the **Explorer** section on the Polkadot.js Apps interface and make sure that the **`SwapExecuted`** event was emitted. ![Swap Tokens For Exact Tokens Event](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-13.webp)

## Withdraw Liquidity from a Pool

The Asset Conversion pallet provides the `removeLiquidity` extrinsic to remove liquidity from a pool. This function allows users to withdraw the liquidity they previously provided to a pool, returning the original assets. When calling this function, users specify the number of liquidity tokens (representing their share in the pool) they wish to burn. They also set minimum acceptable amounts for the assets they expect to receive back. This mechanism ensures that users can control the minimum value they receive, protecting against unfavorable price movements during the withdrawal process. To withdraw liquidity from a pool, follow these steps: 1. Navigate to the **Extrinsics** section on the Polkadot.js Apps interface 1. Select **Developer** from the top menu 2. Click on **Extrinsics** from the dropdown menu ![Extrinsics Section](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-1.webp) 2. Choose the **`AssetConversion`** pallet and click on the **`removeLiquidity`** extrinsic 1. Select the **`AssetConversion`** pallet 2. Choose the **`removeLiquidity`** extrinsic from the list of available extrinsics ![Remove Liquidity Extrinsic](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-14.webp) 3. Fill in the required fields: 1. **`asset1`** - the Multilocation of the first asset in the pool. In this case, it is the DOT token, which the following Multilocation represents:

```javascript
{ parents: 0, interior: 'Here' }
```

2. **`asset2`** - the second asset's Multilocation within the pool. This refers to the PPM token, which the following Multilocation identifies:

```javascript
{ parents: 0, interior: { X2: [{ PalletInstance: 50 }, { GeneralIndex: 1112 }] } }
```

3. **`lpTokenBurn`** - the number of liquidity tokens to burn 4. **`amount1MinReceived`** - the minimum amount of the first asset that the user expects to receive 5. **`amount2MinReceived`** - the minimum quantity of the second asset the user expects to receive 6. **`withdrawTo`** - the account to which the withdrawn assets will be sent 7. Click on **Submit Transaction** to withdraw liquidity from the pool ![Remove Liquidity Fields](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-15.webp)

!!! warning
    Ensure that your account holds enough LP tokens before withdrawing liquidity from the pool.

In this case, the intention is to withdraw 0.05 liquidity tokens from the pool, expecting to receive 0.004 DOT token (`u128` value of 40000000000 as it has 10 decimals) and 0.04 PPM token (`u128` value of 400000000000 as it also has 10 decimals). Signing and submitting the transaction will initiate the withdrawal of liquidity from the pool. To verify the withdrawal, check the **Explorer** section on the Polkadot.js Apps interface and ensure that the **`LiquidityRemoved`** event was emitted. ![Remove Liquidity Event](/images/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/asset-conversion-16.webp)

## Test Environment Setup

To test the Asset Conversion pallet, you can set up a local test environment to simulate different scenarios.
This guide uses Chopsticks to spin up an instance of Polkadot Asset Hub. For further details on using Chopsticks, please refer to the [Chopsticks documentation](/develop/toolkit/parachains/fork-chains/chopsticks/get-started){target=\_blank}. To set up a local test environment, execute the following command:

```bash
npx @acala-network/chopsticks \
  --config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml
```

This command initiates a lazy fork of Polkadot Asset Hub, including the most recent block information from the network. For Kusama Asset Hub testing, simply switch out `polkadot-asset-hub.yml` with `kusama-asset-hub.yml` in the command. You now have a local Asset Hub instance up and running, ready for you to test various asset conversion procedures. The process here mirrors what you'd do on MainNet. After completing a transaction on TestNet, you can apply the same steps to convert assets on MainNet.

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/asset-hub/
--- BEGIN CONTENT ---
---
title: Asset Hub Tutorials
description: Learn how to manage assets on Asset Hub, including registering local and foreign assets and converting between different asset types.
template: index-page.html
---

# Asset Hub Tutorials

## Benefits of Asset Hub

Polkadot SDK-based relay chains focus on security and consensus, leaving asset management to an external component, such as a [system chain](/polkadot-protocol/architecture/system-chains/){target=\_blank}. The [Asset Hub](/polkadot-protocol/architecture/system-chains/asset-hub/){target=\_blank} is one example of a system chain and is vital to managing tokens which aren't native to the Polkadot ecosystem. Developers opting to integrate with Asset Hub can expect the following benefits: - **Support for non-native on-chain assets** - create and manage your own tokens or NFTs with Polkadot ecosystem compatibility available out of the box - **Lower transaction fees** - approximately 1/10th of the cost of using the relay chain - **Reduced deposit requirements** - approximately 1/100th of the deposit required for the relay chain - **Payment of fees with non-native assets** - no need to buy native tokens for gas, increasing flexibility for developers and users

## Get Started

Through these tutorials, you'll learn how to manage cross-chain assets, including: - Asset registration and configuration - Cross-chain asset representation - Liquidity pool creation and management - Asset swapping and conversion - Transaction parameter optimization

## In This Section

:::INSERT_IN_THIS_SECTION:::

## Additional Resources

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/
--- BEGIN CONTENT ---
---
title: Register a Foreign Asset on Asset Hub
description: An in-depth guide to registering a foreign asset on the Asset Hub parachain, providing clear, step-by-step instructions.
tutorial_badge: Intermediate
categories: dApps
---

# Register a Foreign Asset on Asset Hub

## Introduction

As outlined in the [Asset Hub Overview](/polkadot-protocol/architecture/system-chains/asset-hub){target=\_blank}, Asset Hub supports two categories of assets: local and foreign. Local assets are created on the Asset Hub system parachain and are identified by integer IDs.
On the other hand, foreign assets, which originate outside of Asset Hub, are recognized by [Multilocations](https://wiki.polkadot.network/docs/learn/xcm/fundamentals/multilocation-summary){target=\_blank}. When registering a foreign asset on Asset Hub, it's essential to note that the process involves communication between two parachains. The Asset Hub parachain will be the destination of the foreign asset, while the source parachain will be the origin of the asset. The communication between the two parachains is facilitated by the [Cross-Chain Message Passing (XCMP)](/develop/interoperability/intro-to-xcm/){target=\_blank} protocol. This guide will take you through the process of registering a foreign asset on the Asset Hub parachain.

## Prerequisites

The Asset Hub parachain is one of the system parachains on a relay chain, such as [Polkadot](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fpolkadot.api.onfinality.io%2Fpublic-ws#/explorer){target=\_blank} or [Kusama](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fkusama.api.onfinality.io%2Fpublic-ws#/explorer){target=\_blank}. To interact with these parachains, you can use the [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank} interface for: - [Polkadot Asset Hub](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fasset-hub-polkadot-rpc.dwellir.com#/explorer){target=\_blank} - [Kusama Asset Hub](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fsys.ibp.network%2Fstatemine#/explorer){target=\_blank} For testing purposes, you can also interact with the Asset Hub instance on the following test networks: - [Paseo Asset Hub](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fpas-rpc.stakeworld.io%2Fassethub#/explorer){target=\_blank} Before you start, ensure that you have: - Access to the Polkadot.js Apps interface, and you are connected to the desired chain - A parachain that supports the XCMP protocol to interact with the Asset Hub parachain - A funded wallet to pay for the transaction fees and subsequent registration of the foreign asset This guide will use Polkadot, its local Asset Hub instance, and the [Astar](https://astar.network/){target=\_blank} parachain (`ID` 2006), as stated in the [Test Environment Setup](#test-environment-setup) section. However, the process is the same for other relay chains and their respective Asset Hub parachain, regardless of the network you are using and the parachain owner of the foreign asset.

## Steps to Register a Foreign Asset

### Asset Hub

1. Open the [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank} interface and connect to the Asset Hub parachain using the network selector in the top left corner - Testing foreign asset registration is recommended on TestNet before proceeding to MainNet. If you haven't set up a local testing environment yet, consult the [Environment setup](#test-environment-setup) guide. After setting up, connect to the Local Node (Chopsticks) at `ws://127.0.0.1:8000` - For live network operations, connect to the Asset Hub parachain. You can choose either Polkadot or Kusama Asset Hub from the dropdown menu, selecting your preferred RPC provider 2. Navigate to the **Extrinsics** page 1. Click on the **Developer** tab from the top navigation bar 2. Select **Extrinsics** from the dropdown ![Access to Developer Extrinsics section](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-assets/register-a-foreign-asset-1.webp) 3. Select the Foreign Assets pallet 1. Select the **`foreignAssets`** pallet from the dropdown list 2. Choose the **`create`** extrinsic ![Select the Foreign Asset pallet](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-assets/register-a-foreign-asset-2.webp) 4. Fill out the required fields and click on the copy icon to copy the **encoded call data** to your clipboard. The fields to be filled are: - **`id`** - as this is a foreign asset, the ID will be represented by a Multilocation that reflects its origin. For this case, the Multilocation of the asset will be from the source parachain perspective:

```javascript
{ parents: 1, interior: { X1: [{ Parachain: 2006 }] } }
```

- **`admin`** - refers to the account that will be the admin of this asset. This account will be able to manage the asset, including updating its metadata. As the registered asset corresponds to a native asset of the source parachain, the admin account should be the sovereign account of the source parachain. The sovereign account can be obtained through [Substrate Utilities](https://www.shawntabrizi.com/substrate-js-utilities/){target=\_blank}. Ensure that **Sibling** is selected and that the **Para ID** corresponds to the source parachain. In this case, since the guide follows the test setup stated in the [Test Environment Setup](#test-environment-setup) section, the **Para ID** is `2006`. ![Get parachain sovereign account](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-assets/register-a-foreign-asset-3.webp) - **`minBalance`** - the minimum balance required to hold this asset ![Fill out the required fields](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-assets/register-a-foreign-asset-4.webp)

!!! tip
    If you need an example of the encoded call data, you can copy the following:

    ```
    0x3500010100591f007369626cd6070000000000000000000000000000000000000000000000000000a0860100000000000000000000000000
    ```

### Source Parachain

1. Navigate to the **Developer > Extrinsics** section 2. Create the extrinsic to register the foreign asset through XCM 1. Paste the **encoded call data** copied in the previous step 2. Click the **Submit Transaction** button ![Register foreign asset through XCM](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-assets/register-a-foreign-asset-5.webp) This XCM call involves withdrawing DOT from the parachain's sibling (sovereign) account and using it to pay for execution. The transaction is carried out with XCM as the origin kind and carries a hex-encoded call to create a foreign asset on Asset Hub for the specified parachain asset multilocation. Any surplus is refunded, and the asset is deposited into the sibling account.

!!! warning
    Note that the sovereign account on the Asset Hub parachain must have a sufficient balance to cover the XCM `BuyExecution` instruction. If the account does not have enough balance, the transaction will fail.

If you want to have the whole XCM call ready to be copied, go to the **Developer > Extrinsics > Decode** section and paste the following hex-encoded call data:

```
0x6300330003010100a10f030c000400010000070010a5d4e81300010000070010a5d4e80006030700b4f13501419ce03500010100591f007369626cd607000000000000000000000000000000000000000000000000000000000000000000000000000000000000
```

Be sure to replace the encoded call data with the one you copied in the previous step. After the transaction is successfully executed, the foreign asset will be registered on the Asset Hub parachain.
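For reference, the encoded call data pasted into the source parachain's extrinsic can also be produced with a short script. The following is a sketch using `@polkadot/api`; the `ws://127.0.0.1:8000` endpoint matches the Chopsticks setup below, and the `admin` address and `minBalance` value are placeholders you must replace with the para 2006 sovereign account and your chosen minimum:

```javascript
// encode-create.js - a sketch reproducing the encoded foreignAssets.create call data
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main() {
  // Local Chopsticks fork of Polkadot Asset Hub (see Test Environment Setup)
  const api = await ApiPromise.create({ provider: new WsProvider('ws://127.0.0.1:8000') });

  // The foreign asset ID is its Multilocation from Asset Hub's perspective
  const id = { parents: 1, interior: { X1: [{ Parachain: 2006 }] } };
  // Placeholder admin; use the sovereign account of parachain 2006 as obtained above
  const admin = '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY';
  const minBalance = 100_000n; // placeholder minimum balance

  const call = api.tx.foreignAssets.create(id, admin, minBalance);
  // This hex string is what gets pasted into the source parachain's XCM call
  console.log(call.method.toHex());

  process.exit(0);
}

main().catch(console.error);
```

The output of `call.method.toHex()` should have the same structure as the example encoded call data shown in the tip above.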
## Asset Registration Verification

To confirm that a foreign asset has been successfully accepted and registered on the Asset Hub parachain, you can navigate to the **Network > Explorer** section of the Polkadot.js Apps interface for Asset Hub. You should be able to see an event that includes the following details: ![Asset registration event](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-assets/register-a-foreign-asset-6.webp) In the image above, the **success** field indicates whether the asset registration was successful.

## Test Environment Setup

To test the foreign asset registration process before deploying it on a live network, you can set up a local parachain environment. This guide uses Chopsticks to simulate that process. For more information on using Chopsticks, please refer to the [Chopsticks documentation](/develop/toolkit/parachains/fork-chains/chopsticks/get-started){target=\_blank}. To set up a test environment, run the following command:

```bash
npx @acala-network/chopsticks xcm \
  --r polkadot \
  --p polkadot-asset-hub \
  --p astar
```

The preceding command will create a lazy fork of Polkadot as the relay chain, its Asset Hub instance, and the Astar parachain. The `xcm` parameter enables communication through the XCMP protocol between the relay chain and the parachains, allowing the registration of foreign assets on Asset Hub. For further information on Chopsticks' XCMP support, refer to the [XCM Testing](/tutorials/polkadot-sdk/testing/fork-live-chains/#xcm-testing){target=\_blank} section of the Chopsticks documentation. After executing the command, the terminal will display output indicating the Polkadot relay chain, the Polkadot Asset Hub, and the Astar parachain are running locally and connected through XCM. You can access them individually via the Polkadot.js Apps interface. - [Polkadot Relay Chain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Flocalhost%3A8002#/explorer){target=\_blank} - [Polkadot Asset Hub](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Flocalhost%3A8000#/explorer){target=\_blank} - [Astar Parachain](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Flocalhost%3A8001#/explorer){target=\_blank}

--- END CONTENT ---

Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/
--- BEGIN CONTENT ---
---
title: Register a Local Asset
description: Comprehensive guide to registering a local asset on the Asset Hub system parachain, including step-by-step instructions.
tutorial_badge: Beginner
categories: Basics, dApps
---

# Register a Local Asset on Asset Hub

## Introduction

As detailed in the [Asset Hub Overview](/polkadot-protocol/architecture/system-chains/asset-hub){target=\_blank} page, Asset Hub accommodates two types of assets: local and foreign. Local assets are those that were created in Asset Hub and are identifiable by an integer ID. On the other hand, foreign assets originate from a sibling parachain and are identified by a Multilocation. This guide will take you through the steps of registering a local asset on the Asset Hub parachain.

## Prerequisites

Before you begin, ensure you have access to the [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank} interface and a funded wallet with DOT or KSM.
- For Polkadot Asset Hub, you would need a deposit of 10 DOT and around 0.201 DOT for the metadata - For Kusama Asset Hub, the deposit is 0.1 KSM and around 0.000669 KSM for the metadata Ensure that your Asset Hub account balance is a bit more than the sum of those two deposits, so that it comfortably covers the required deposits and the transaction fees.

## Steps to Register a Local Asset

To register a local asset on the Asset Hub parachain, follow these steps: 1. Open the [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank} interface and connect to the Asset Hub parachain using the network selector in the top left corner - You may prefer to test local asset registration on TestNet before registering the asset on a MainNet hub. If you still need to set up a local testing environment, review the [Environment setup](#test-setup-environment) section for instructions. Once the local environment is set up, connect to the Local Node (Chopsticks) available on `ws://127.0.0.1:8000` - For the live network, connect to the **Asset Hub** parachain. Either Polkadot or Kusama Asset Hub can be selected from the dropdown list, choosing the desired RPC provider 2. Click on the **Network** tab on the top navigation bar and select **Assets** from the dropdown list ![Access to Asset Hub through Polkadot.JS](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-assets/register-a-local-asset-1.webp) 3. Now, you need to examine all the registered asset IDs. This step is crucial to ensure that the asset ID you are about to register is unique. Asset IDs are displayed in the **assets** column ![Asset IDs on Asset Hub](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-assets/register-a-local-asset-2.webp) 4. Once you have confirmed that the asset ID is unique, click on the **Create** button on the top right corner of the page ![Create a new asset](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-assets/register-a-local-asset-3.webp) 5. Fill in the required fields in the **Create Asset** form: 1. **creator account** - the account to be used for creating this asset and setting up the initial metadata 2. **asset name** - the descriptive name of the asset you are registering 3. **asset symbol** - the symbol that will be used to represent the asset 4. **asset decimals** - the number of decimal places for this token, with a maximum of 20 allowed through the user interface 5. **minimum balance** - the minimum balance for the asset. This is specified in the units and decimals as requested 6. **asset ID** - the selected ID for the asset. This must not match an already-existing asset ID 7. Click on the **Next** button ![Create Asset Form](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-assets/register-a-local-asset-4.webp) 6. Choose the accounts for the roles listed below: 1. **admin account** - the account designated for continuous administration of the token 2. **issuer account** - the account that will be used for issuing this token 3. **freezer account** - the account that will be used for performing token freezing operations 4. Click on the **Create** button ![Admin, Issuer, Freezer accounts](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-assets/register-a-local-asset-5.webp) 7.
Click on the **Sign and Submit** button to complete the asset registration process ![Sign and Submit](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-assets/register-a-local-asset-6.webp) ## Verify Asset Registration After completing these steps, the asset will be successfully registered. You can now view your asset listed on the [**Assets**](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fasset-hub-polkadot-rpc.dwellir.com#/assets){target=\_blank} section of the Polkadot.js Apps interface. ![Asset listed on Polkadot.js Apps](/images/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-assets/register-a-local-asset-7.webp) !!! tip Note that the **Assets** section link may differ depending on the network you are using. For the local environment, enter `ws://127.0.0.1:8000` into the **Custom Endpoint** field. With that, you have successfully registered a local asset on the Asset Hub parachain. For an in-depth explanation about Asset Hub and its features, see the [Asset Hub](/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/){target=\_blank} documentation. ## Test Setup Environment You can set up a local parachain environment to test the asset registration process before deploying it on the live network. This guide uses Chopsticks to simulate that process. For further information on Chopsticks usage, refer to the [Chopsticks](/develop/toolkit/parachains/fork-chains/chopsticks/get-started){target=\_blank} documentation. To set up a test environment, execute the following command: ```bash npx @acala-network/chopsticks \ --config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot-asset-hub.yml ``` The above command will spawn a lazy fork of Polkadot Asset Hub with the latest block data from the network. If you need to test Kusama Asset Hub, replace `polkadot-asset-hub.yml` with `kusama-asset-hub.yml` in the command. An Asset Hub instance is now running locally, and you can proceed with the asset registration process. Note that the local registration process does not differ from the live network process. Once you have a successful TestNet transaction, you can use the same steps to register the asset on MainNet. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/system-chains/ --- BEGIN CONTENT --- --- title: System Chains Tutorials description: Explore step-by-step tutorials on how to integrate with system parachains, such as the Asset Hub chain, within the Polkadot ecosystem. template: index-page.html --- # System Chains Tutorials In this section, you'll gain hands-on experience building solutions that integrate with [system chains](/polkadot-protocol/architecture/system-chains/){target=\_blank} on Polkadot using the Polkadot SDK. System chains like the [Asset Hub](/polkadot-protocol/architecture/system-chains/asset-hub/){target=\_blank} provide essential infrastructure for enabling cross-chain interoperability and asset management across the Polkadot ecosystem. Through these tutorials, you'll learn how to leverage these system chains to enhance the functionality and security of your blockchain applications.
## For Parachain Integrators Enhance cross-chain interoperability and expand your parachain’s functionality: - **[Register your parachain's asset on Asset Hub](/tutorials/polkadot-sdk/system-chains/asset-hub/register-foreign-asset/)** - connect your parachain’s assets to Asset Hub as a foreign asset using XCM, enabling seamless cross-chain transfers and integration ## For Developers Leveraging System Chains Unlock new possibilities by tapping into Polkadot’s system chains: - **[Register a new asset on Asset Hub](/tutorials/polkadot-sdk/system-chains/asset-hub/register-local-asset/)** - create and customize assets directly on Asset Hub (local assets) with parameters like metadata, minimum balances, and more - **[Convert Assets](/tutorials/polkadot-sdk/system-chains/asset-hub/asset-conversion/)** - use Asset Hub's AMM functionality to swap between different assets, provide liquidity to pools, and manage LP tokens ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/testing/fork-live-chains/ --- BEGIN CONTENT --- --- title: Fork a Chain with Chopsticks description: Learn how to fork live Polkadot SDK chains with Chopsticks. Configure forks, replay blocks, test XCM, and interact programmatically or via UI. tutorial_badge: Beginner categories: Basics, dApps, Tooling --- # Fork a Chain with Chopsticks ## Introduction Chopsticks is an innovative tool that simplifies the process of forking live Polkadot SDK chains. This guide provides step-by-step instructions to configure and fork chains, enabling developers to: - Replay blocks for state analysis - Test cross-chain messaging (XCM) - Simulate blockchain environments for debugging and experimentation With support for both configuration files and CLI commands, Chopsticks offers flexibility for diverse development workflows. Whether you're testing locally or exploring complex blockchain scenarios, Chopsticks empowers developers to gain deeper insights and accelerate application development. Chopsticks uses the [Smoldot](https://github.com/smol-dot/smoldot){target=\_blank} light client, which does not support calls made through the Ethereum JSON-RPC. As a result, you can't fork your chain using Chopsticks and then interact with it using tools like MetaMask. For additional support and information, please reach out through [GitHub Issues](https://github.com/AcalaNetwork/chopsticks/issues){target=\_blank}. ## Prerequisites To follow this tutorial, ensure you have completed the following: - **Installed Chopsticks** - if you still need to do so, see the [Install Chopsticks](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/#install-chopsticks){target=\_blank} guide for assistance - **Reviewed** [**Configure Chopsticks**](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/#configure-chopsticks){target=\_blank} - and understand how forked chains are configured ## Configuration File To run Chopsticks using a configuration file, utilize the `--config` flag. You can use a raw GitHub URL, a path to a local file, or simply the chain's name. 
The following commands look different, but each loads the same `polkadot` configuration: === "GitHub URL" ```bash npx @acala-network/chopsticks \ --config=https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml ``` === "Local File Path" ```bash npx @acala-network/chopsticks --config=configs/polkadot.yml ``` === "Chain Name" ```bash npx @acala-network/chopsticks --config=polkadot ``` Regardless of which method you choose, you'll see output similar to the following:
npx @acala-network/chopsticks --config=polkadot
[18:38:26.155] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml app: "chopsticks" chopsticks::executor TRACE: Calling Metadata_metadata chopsticks::executor TRACE: Completed Metadata_metadata [18:38:28.186] INFO: Polkadot RPC listening on port 8000 app: "chopsticks"
If using a file path, make sure you've downloaded the [Polkadot configuration file](https://github.com/AcalaNetwork/chopsticks/blob/master/configs/polkadot.yml){target=\_blank}, or have created your own. ## Create a Fork Once you've configured Chopsticks, use the following command to fork Polkadot at block 100: ```bash npx @acala-network/chopsticks \ --endpoint wss://polkadot-rpc.dwellir.com \ --block 100 ``` If the fork is successful, you will see output similar to the following: -8<-- 'code/tutorials/polkadot-sdk/testing/fork-live-chains/polkadot-fork-01.html' Access the running Chopsticks fork using the default address. ```bash ws://localhost:8000 ``` ## Interact with a Fork You can interact with the forked chain using various [libraries](/develop/toolkit/#libraries){target=\_blank} such as [Polkadot.js](https://polkadot.js.org/docs/){target=\_blank} and its user interface, [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank}. ### Use Polkadot.js Apps To interact with Chopsticks via the hosted user interface, visit [Polkadot.js Apps](https://polkadot.js.org/apps/#/explorer){target=\_blank} and follow these steps: 1. Select the network icon in the top left corner ![](/images/tutorials/polkadot-sdk/testing/fork-live-chains/chopsticks-1.webp) 2. Scroll to the bottom and select **Development** 3. Choose **Custom** 4. Enter `ws://localhost:8000` in the input field 5. Select the **Switch** button ![](/images/tutorials/polkadot-sdk/testing/fork-live-chains/chopsticks-2.webp) You should now be connected to your local fork and can interact with it as you would with a real chain. ### Use Polkadot.js Library For programmatic interaction, you can use the Polkadot.js library. The following is a basic example: ```js import { ApiPromise, WsProvider } from '@polkadot/api'; async function connectToFork() { const wsProvider = new WsProvider('ws://localhost:8000'); const api = await ApiPromise.create({ provider: wsProvider }); await api.isReady; // Now you can use 'api' to interact with your fork console.log(`Connected to chain: ${await api.rpc.system.chain()}`); } connectToFork(); ``` ## Replay Blocks Chopsticks allows you to replay specific blocks from a chain, which is useful for debugging and analyzing state changes. You can use the parameters in the [Configuration](/develop/toolkit/parachains/fork-chains/chopsticks/get-started/#configure-chopsticks){target=\_blank} section to set up the chain configuration, and then use the run-block subcommand with the following additional options: - `output-path` - path to print output - `html` - generate HTML with storage diff - `open` - open generated HTML For example, the command to replay block 1000 from Polkadot and save the output to a JSON file would be as follows: ```bash npx @acala-network/chopsticks run-block \ --endpoint wss://polkadot-rpc.dwellir.com \ --output-path ./polkadot-output.json \ --block 1000 ``` ??? 
code "polkadot-output.json" ```json { "Call": { "result": "0xba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44a10f6fc59a4d90c3b78e38fac100fc6adc6f9e69a07565ec8abce6165bd0d24078cc7bf34f450a2cc7faacc1fa1e244b959f0ed65437f44208876e1e5eefbf8dd34c040642414245b501030100000083e2cc0f00000000d889565422338aa58c0fd8ebac32234149c7ce1f22ac2447a02ef059b58d4430ca96ba18fbf27d06fe92ec86d8b348ef42f6d34435c791b952018d0a82cae40decfe5faf56203d88fdedee7b25f04b63f41f23da88c76c876db5c264dad2f70c", "storageDiff": [ [ "0x0b76934f4cc08dee01012d059e1b83eebbd108c4899964f707fdaffb82636065", "0x00" ], [ "0x1cb6f36e027abb2091cfb5110ab5087f0323475657e0890fbdbf66fb24b4649e", null ], [ "0x1cb6f36e027abb2091cfb5110ab5087f06155b3cd9a8c9e5e9a23fd5dc13a5ed", "0x83e2cc0f00000000" ], [ "0x1cb6f36e027abb2091cfb5110ab5087ffa92de910a7ce2bd58e99729c69727c1", null ], [ "0x26aa394eea5630e07c48ae0c9558cef702a5c1b19ab7a04f536c519aca4983ac", null ], [ "0x26aa394eea5630e07c48ae0c9558cef70a98fdbe9ce6c55837576c60c7af3850", "0x02000000" ], [ "0x26aa394eea5630e07c48ae0c9558cef734abf5cb34d6244378cddbf18e849d96", "0xc03b86ae010000000000000000000000" ], [ "0x26aa394eea5630e07c48ae0c9558cef780d41e5e16056765bc8461851072c9d7", "0x080000000000000080e36a09000000000200000001000000000000ca9a3b00000000020000" ], [ "0x26aa394eea5630e07c48ae0c9558cef78a42f33323cb5ced3b44dd825fda9fcc", null ], [ "0x26aa394eea5630e07c48ae0c9558cef799e7f93fc6a98f0874fd057f111c4d2d", null ], [ "0x26aa394eea5630e07c48ae0c9558cef7a44704b568d21667356a5a050c118746d366e7fe86e06375e7030000", "0xba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44" ], [ "0x26aa394eea5630e07c48ae0c9558cef7a86da5a932684f199539836fcb8c886f", null ], [ "0x26aa394eea5630e07c48ae0c9558cef7b06c3320c6ac196d813442e270868d63", null ], [ "0x26aa394eea5630e07c48ae0c9558cef7bdc0bd303e9855813aa8a30d4efc5112", null ], [ "0x26aa394eea5630e07c48ae0c9558cef7df1daeb8986837f21cc5d17596bb78d15153cb1f00942ff401000000", null ], [ "0x26aa394eea5630e07c48ae0c9558cef7df1daeb8986837f21cc5d17596bb78d1b4def25cfda6ef3a00000000", null ], [ "0x26aa394eea5630e07c48ae0c9558cef7ff553b5a9862a516939d82b3d3d8661a", null ], [ "0x2b06af9719ac64d755623cda8ddd9b94b1c371ded9e9c565e89ba783c4d5f5f9b4def25cfda6ef3a000000006f3d6b177c8acbd8dc9974cdb3cebfac4d31333c30865ff66c35c1bf898df5c5dd2924d3280e7201", "0x9b000000" ], ["0x3a65787472696e7369635f696e646578", null], [ "0x3f1467a096bcd71a5b6a0c8155e208103f2edf3bdf381debe331ab7446addfdc", "0x550057381efedcffffffffffffffffff" ], [ "0x3fba98689ebed1138735e0e7a5a790ab0f41321f75df7ea5127be2db4983c8b2", "0x00" ], [ "0x3fba98689ebed1138735e0e7a5a790ab21a5051453bd3ae7ed269190f4653f3b", "0x080000" ], [ "0x3fba98689ebed1138735e0e7a5a790abb984cfb497221deefcefb70073dcaac1", "0x00" ], [ "0x5f3e4907f716ac89b6347d15ececedca80cc6574281671b299c1727d7ac68cabb4def25cfda6ef3a00000000", "0x204e0000183887050ecff59f58658b3df63a16d03a00f92890f1517f48c2f6ccd215e5450e380e00005809fd84af6483070acbb92378e3498dbc02fb47f8e97f006bb83f60d7b2b15d980d000082104c22c383925323bf209d771dec6e1388285abe22c22d50de968467e0bb6ce00b000088ee494d719d68a18aade04903839ea37b6be99552ceceb530674b237afa9166480d0000dc9974cdb3cebfac4d31333c30865ff66c35c1bf898df5c5dd2924d3280e72011c0c0000e240d12c7ad07bb0e7785ee6837095ddeebb7aef84d6ed7ea87da197805b343a0c0d0000" ], [ "0xae394d879ddf7f99595bc0dd36e355b5bbd108c4899964f707fdaffb82636065", null ], [ "0xbd2a529379475088d3e29a918cd478721a39ec767bd5269111e6492a1675702a", 
"0x4501407565175cfbb5dca18a71e2433f838a3d946ef532c7bff041685db1a7c13d74252fffe343a960ef84b15187ea0276687d8cb3168aeea5202ea6d651cb646517102b81ff629ee6122430db98f2cadf09db7f298b49589b265dae833900f24baa8fb358d87e12f3e9f7986a9bf920c2fb48ce29886199646d2d12c6472952519463e80b411adef7e422a1595f1c1af4b5dd9b30996fba31fa6a30bd94d2022d6b35c8bc5a8a51161d47980bf4873e01d15afc364f8939a6ce5a09454ab7f2dd53bf4ee59f2c418e85aa6eb764ad218d0097fb656900c3bdd859771858f87bf7f06fc9b6db154e65d50d28e8b2374898f4f519517cd0bedc05814e0f5297dc04beb307b296a93cc14d53afb122769dfd402166568d8912a4dff9c2b1d4b6b34d811b40e5f3763e5f3ab5cd1da60d75c0ff3c12bcef3639f5f792a85709a29b752ffd1233c2ccae88ed3364843e2fa92bdb49021ee36b36c7cdc91b3e9ad32b9216082b6a2728fccd191a5cd43896f7e98460859ca59afbf7c7d93cd48da96866f983f5ff8e9ace6f47ee3e6c6edb074f578efbfb0907673ebca82a7e1805bc5c01cd2fa5a563777feeb84181654b7b738847c8e48d4f575c435ad798aec01631e03cf30fe94016752b5f087f05adf1713910767b7b0e6521013be5370776471191641c282fdfe7b7ccf3b2b100a83085cd3af2b0ad4ab3479448e71fc44ff987ec3a26be48161974b507fb3bc8ad23838f2d0c54c9685de67dc6256e71e739e9802d0e6e3b456f6dca75600bc04a19b3cc1605784f46595bfb10d5e077ce9602ae3820436166aa1905a7686b31a32d6809686462bc9591c0bc82d9e49825e5c68352d76f1ac6e527d8ac02db3213815080afad4c2ecb95b0386e3e9ab13d4f538771dac70d3059bd75a33d0b9b581ec33bb16d0e944355d4718daccb35553012adfcdacb1c5200a2aec3756f6ad5a2beffd30018c439c1b0c4c0f86dbf19d0ad59b1c9efb7fe90906febdb9001af1e7e15101089c1ab648b199a40794d30fe387894db25e614b23e833291a604d07eec2ade461b9b139d51f9b7e88475f16d6d23de6fe7831cc1dbba0da5efb22e3b26cd2732f45a2f9a5d52b6d6eaa38782357d9ae374132d647ef60816d5c98e6959f8858cfa674c8b0d340a8f607a68398a91b3a965585cc91e46d600b1310b8f59c65b7c19e9d14864a83c4ad6fa4ba1f75bba754e7478944d07a1f7e914422b4d973b0855abeb6f81138fdca35beb474b44c7736fc3ab2969878810153aa3c93fc08c99c478ed1bb57f647d3eb02f25cee122c70424643f4b106a7643acaa630a5c4ac39364c3cb14453055170c01b44e8b1ef007c7727494411958932ae8b3e0f80d67eec8e94dd2ff7bbe8c9e51ba7e27d50bd9f52cbaf9742edecb6c8af1aaf3e7c31542f7d946b52e0c37d194b3dd13c3fddd39db0749755c7044b3db1143a027ad428345d930afcefc0d03c3a0217147900bdea1f5830d826f7e75ecd1c4e2bc8fd7de3b35c6409acae1b2215e9e4fd7e360d6825dc712cbf9d87ae0fd4b349b624d19254e74331d66a39657da81e73d7b13adc1e5efa8efd65aa32c1a0a0315913166a590ae551c395c476116156cf9d872fd863893edb41774f33438161f9b973e3043f819d087ba18a0f1965e189012496b691f342f7618fa9db74e8089d4486c8bd1993efd30ff119976f5cc0558e29b417115f60fd8897e13b6de1a48fbeee38ed812fd267ae25bffea0caa71c09309899b34235676d5573a8c3cf994a3d7f0a5dbd57ab614c6caf2afa2e1a860c6307d6d9341884f1b16ef22945863335bb4af56e5ef5e239a55dbd449a4d4d3555c8a3ec5bd3260f88cabca88385fe57920d2d2dfc5d70812a8934af5691da5b91206e29df60065a94a0a8178d118f1f7baf768d934337f570f5ec68427506391f51ab4802c666cc1749a84b5773b948fcbe460534ed0e8d48a15c149d27d67deb8ea637c4cc28240ee829c386366a0b1d6a275763100da95374e46528a0adefd4510c38c77871e66aeda6b6bfd629d32af9b2fad36d392a1de23a683b7afd13d1e3d45dad97c740106a71ee308d8d0f94f6771164158c6cd3715e72ccfbc49a9cc49f21ead8a3c5795d64e95c15348c6bf8571478650192e52e96dd58f95ec2c0fb4f2ccc05b0ab749197db8d6d1c6de07d6e8cb2620d5c308881d1059b50ffef3947c273eaed7e56c73848e0809c4bd93619edd9fd08c8c5c88d5f230a55d2c6a354e5dd94440e7b5bf99326cf4a112fe843e7efdea56e97af845761d98f40ed2447bd04a424976fcf0fe0a0c72b97619f85cf431fe4c3aa6b3a4f61df8bc1179c11e77783bfedb7d374bd1668d0969333cb518bd20add8329462f2c9a9f04d150d60413fdd27271586405fd85048481fc2ae25b6826cb2c947e4231dc7b9a0d02a9a03f88460bced3fef5d78f732684bd218a1954a4acfc237d79ccf397913ab6864cd8a07e275b82a8a72
520624738368d1c5f7e0eaa2b445cf6159f2081d3483618f7fc7b16ec4e6e4d67ab5541bcda0ca1af40efd77ef8653e223191448631a8108c5e50e340cd405767ecf932c1015aa8856b834143dc81fa0e8b9d1d8c32278fca390f2ff08181df0b74e2d13c9b7b1d85543416a0dae3a77530b9cd1366213fcf3cd12a9cd3ae0a006d6b29b5ffc5cdc1ab24343e2ab882abfd719892fca5bf2134731332c5d3bef6c6e4013d84a853cb03d972146b655f0f8541bcd36c3c0c8a775bb606edfe50d07a5047fd0fe01eb125e83673930bc89e91609fd6dfe97132679374d3de4a0b3db8d3f76f31bed53e247da591401d508d65f9ee01d3511ee70e3644f3ab5d333ca7dbf737fe75217b4582d50d98b5d59098ea11627b7ed3e3e6ee3012eadd326cf74ec77192e98619427eb0591e949bf314db0fb932ed8be58258fb4f08e0ccd2cd18b997fb5cf50c90d5df66a9f3bb203bd22061956128b800e0157528d45c7f7208c65d0592ad846a711fa3c5601d81bb318a45cc1313b122d4361a7d7a954645b04667ff3f81d3366109772a41f66ece09eb93130abe04f2a51bb30e767dd37ec6ee6a342a4969b8b342f841193f4f6a9f0fac4611bc31b6cab1d25262feb31db0b8889b6f8d78be23f033994f2d3e18e00f3b0218101e1a7082782aa3680efc8502e1536c30c8c336b06ae936e2bcf9bbfb20dd514ed2867c03d4f44954867c97db35677d30760f37622b85089cc5d182a89e29ab0c6b9ef18138b16ab91d59c2312884172afa4874e6989172014168d3ed8db3d9522d6cbd631d581d166787c93209bec845d112e0cbd825f6df8b64363411270921837cfb2f9e7f2e74cdb9cd0d2b02058e5efd9583e2651239654b887ea36ce9537c392fc5dfca8c5a0facbe95b87dfc4232f229bd12e67937d32b7ffae2e837687d2d292c08ff6194a2256b17254748857c7e3c871c3fff380115e6f7faf435a430edf9f8a589f6711720cfc5cec6c8d0d94886a39bb9ac6c50b2e8ef6cf860415192ca4c1c3aaa97d36394021a62164d5a63975bcd84b8e6d74f361c17101e3808b4d8c31d1ee1a5cf3a2feda1ca2c0fd5a50edc9d95e09fb5158c9f9b0eb5e2c90a47deb0459cea593201ae7597e2e9245aa5848680f546256f3" ], [ "0xd57bce545fb382c34570e5dfbf338f5e326d21bc67a4b34023d577585d72bfd7", null ], [ "0xd57bce545fb382c34570e5dfbf338f5ea36180b5cfb9f6541f8849df92a6ec93", "0x00" ], [ "0xd57bce545fb382c34570e5dfbf338f5ebddf84c5eb23e6f53af725880d8ffe90", null ], [ "0xd5c41b52a371aa36c9254ce34324f2a53b996bb988ea8ee15bad3ffd2f68dbda", "0x00" ], [ "0xf0c365c3cf59d671eb72da0e7a4113c49f1f0515f462cdcf84e0f1d6045dfcbb", "0x50defc5172010000" ], [ "0xf0c365c3cf59d671eb72da0e7a4113c4bbd108c4899964f707fdaffb82636065", null ], [ "0xf68f425cf5645aacb2ae59b51baed90420d49a14a763e1cbc887acd097f92014", "0x9501800300008203000082030000840300008503000086030000870300008703000089030000890300008b0300008b0300008d0300008d0300008f0300008f0300009103000092030000920300009403000094030000960300009603000098030000990300009a0300009b0300009b0300009d0300009d0300009f0300009f030000a1030000a2030000a3030000a4030000a5030000a6030000a6030000a8030000a8030000aa030000ab030000ac030000ad030000ae030000af030000b0030000b1030000b1030000b3030000b3030000b5030000b6030000b7030000b8030000b9030000ba030000ba030000bc030000bc030000be030000be030000c0030000c1030000c2030000c2030000c4030000c5030000c5030000c7030000c7030000c9030000c9030000cb030000cc030000cd030000ce030000cf030000d0030000d0030000d2030000d2030000d4030000d4030000d6030000d7030000d8030000d9030000da030000db030000db030000dd030000dd030000df030000e0030000e1030000e2030000e3030000e4030000e4030000" ], [ "0xf68f425cf5645aacb2ae59b51baed9049b58374218f48eaf5bc23b7b3e7cf08a", "0xb3030000" ], [ "0xf68f425cf5645aacb2ae59b51baed904b97380ce5f4e70fbf9d6b5866eb59527", 
"0x9501800300008203000082030000840300008503000086030000870300008703000089030000890300008b0300008b0300008d0300008d0300008f0300008f0300009103000092030000920300009403000094030000960300009603000098030000990300009a0300009b0300009b0300009d0300009d0300009f0300009f030000a1030000a2030000a3030000a4030000a5030000a6030000a6030000a8030000a8030000aa030000ab030000ac030000ad030000ae030000af030000b0030000b1030000b1030000b3030000b3030000b5030000b6030000b7030000b8030000b9030000ba030000ba030000bc030000bc030000be030000be030000c0030000c1030000c2030000c2030000c4030000c5030000c5030000c7030000c7030000c9030000c9030000cb030000cc030000cd030000ce030000cf030000d0030000d0030000d2030000d2030000d4030000d4030000d6030000d7030000d8030000d9030000da030000db030000db030000dd030000dd030000df030000e0030000e1030000e2030000e3030000e4030000e4030000" ] ], "offchainStorageDiff": [], "runtimeLogs": [] } } ``` ## XCM Testing To test XCM (Cross-Consensus Messaging) messages between networks, you can fork multiple parachains and a relay chain locally using Chopsticks. - `relaychain` - relay chain config file - `parachain` - parachain config file For example, to fork Moonbeam, Astar, and Polkadot enabling XCM between them, you can use the following command: ```bash npx @acala-network/chopsticks xcm \ --r polkadot \ --p moonbeam \ --p astar ``` After running it, you should see output similar to the following:
npx @acala-network/chopsticks xcm \ --r polkadot \ --p moonbeam \ --p astar
[13:46:07.901] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/moonbeam.yml app: "chopsticks" [13:46:12.631] INFO: Moonbeam RPC listening on port 8000 app: "chopsticks" [13:46:12.632] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/astar.yml app: "chopsticks" chopsticks::executor TRACE: Calling Metadata_metadata chopsticks::executor TRACE: Completed Metadata_metadata [13:46:23.669] INFO: Astar RPC listening on port 8001 app: "chopsticks" [13:46:25.144] INFO (xcm): Connected parachains [2004,2006] app: "chopsticks" [13:46:25.144] INFO: Loading config file https://raw.githubusercontent.com/AcalaNetwork/chopsticks/master/configs/polkadot.yml app: "chopsticks" chopsticks::executor TRACE: Calling Metadata_metadata chopsticks::executor TRACE: Completed Metadata_metadata [13:46:53.320] INFO: Polkadot RPC listening on port 8002 app: "chopsticks" [13:46:54.038] INFO (xcm): Connected relaychain 'Polkadot' with parachain 'Moonbeam' app: "chopsticks" [13:46:55.028] INFO (xcm): Connected relaychain 'Polkadot' with parachain 'Astar' app: "chopsticks"
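Before going further, you can optionally confirm that all three forks are reachable. The following is a minimal sketch using the Polkadot.js API (the same library used earlier in this guide); the ports follow the example output above and may differ on your machine:

```js
import { ApiPromise, WsProvider } from '@polkadot/api';

// Ports follow the example output above: Moonbeam on 8000, Astar on 8001,
// and the Polkadot relay chain on 8002. Adjust them if your output differs.
const endpoints = [
  'ws://localhost:8000',
  'ws://localhost:8001',
  'ws://localhost:8002',
];

async function checkForks() {
  for (const endpoint of endpoints) {
    // Connect to the fork and report its chain name and best block
    const api = await ApiPromise.create({ provider: new WsProvider(endpoint) });
    const chain = await api.rpc.system.chain();
    const header = await api.rpc.chain.getHeader();
    console.log(`${chain} at ${endpoint} is at block #${header.number}`);
    await api.disconnect();
  }
}

checkForks().catch(console.error);
```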
Now you can interact with your forked chains using the ports specified in the output. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/testing/ --- BEGIN CONTENT --- --- title: Blockchain Testing Tutorials description: Follow hands-on tutorials to set up, test, and validate the functionality of Polkadot-SDK blockchains, using tools and methods that streamline testing. template: index-page.html --- # Blockchain Testing Tutorials Polkadot offers specialized tools that make it simple to create realistic testing environments, particularly for cross-chain interactions. These purpose-built tools enable developers to quickly spin up test networks that accurately simulate real-world scenarios. Learn to create controlled testing environments using powerful tools designed for Polkadot SDK development. ## Get Started Through these tutorials, you'll learn important testing techniques including: - Setting up local test environments - Spawning ephemeral testing networks - Forking live chains for testing - Simulating cross-chain interactions - Debugging blockchain behavior Each tutorial provides step-by-step guidance for using these tools effectively in your development workflow. ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/polkadot-sdk/testing/spawn-basic-chain/ --- BEGIN CONTENT --- --- title: Spawn a Basic Chain with Zombienet description: Learn to spawn, connect to and monitor a basic blockchain network with Zombienet, using customizable configurations for streamlined development and debugging. tutorial_badge: Beginner categories: Basics, dApps, Tooling --- # Spawn a Basic Chain with Zombienet ## Introduction Zombienet simplifies blockchain development by enabling developers to create temporary, customizable networks for testing and validation. These ephemeral chains are ideal for experimenting with configurations, debugging applications, and validating functionality in a controlled environment. In this guide, you'll learn how to define a basic network configuration file, spawn a blockchain network using Zombienet's CLI, interact with nodes, and monitor network activity using tools like Polkadot.js Apps and Prometheus. By the end of this tutorial, you'll be equipped to deploy and test your own blockchain networks, paving the way for more advanced setups and use cases. ## Prerequisites To successfully complete this tutorial, you must ensure you've first: - [Installed Zombienet](/develop/toolkit/parachains/spawn-chains/zombienet/get-started/#install-zombienet){target=\_blank}. This tutorial requires Zombienet version `{{ dependencies.repositories.zombienet.version }}`. Verify that you're using the specified version to ensure compatibility with the instructions. - Reviewed the information in [Configure Zombienet](/develop/toolkit/parachains/spawn-chains/zombienet/get-started/#configure-zombienet){target=\_blank} and understand how to customize a spawned network ## Set Up Local Provider In this tutorial, you will use the Zombienet [local provider](/develop/toolkit/parachains/spawn-chains/zombienet/get-started/#local-provider){target=\_blank} (also called native provider) that enables you to run nodes as local processes in your development environment. You must have the necessary binaries installed (such as `polkadot` and `polkadot-parachain`) to spin up your network successfully.
To install the required binaries, use the following Zombienet CLI command: ```bash zombienet setup polkadot polkadot-parachain ``` This command downloads the following binaries: - `polkadot` - `polkadot-execute-worker` - `polkadot-parachain` - `polkadot-prepare-worker` Finally, add these binaries to your PATH environment variable to ensure Zombienet can locate them when spawning the network. For example, you can move the binaries to a directory in your PATH, such as `/usr/local/bin`: ```bash sudo mv ./polkadot ./polkadot-execute-worker ./polkadot-parachain ./polkadot-prepare-worker /usr/local/bin ``` ## Define the Network Zombienet uses a [configuration file](/develop/toolkit/parachains/spawn-chains/zombienet/get-started/#configuration-files){target=\_blank} to define the ephemeral network that will be spawned. Follow these steps to create and define the configuration file: 1. Create a file named `spawn-a-basic-network.toml` ```bash touch spawn-a-basic-network.toml ``` 2. Add the following code to the file you just created: ```toml title="spawn-a-basic-network.toml" [settings] timeout = 120 [relaychain] [[relaychain.nodes]] name = "alice" validator = true [[relaychain.nodes]] name = "bob" validator = true [[parachains]] id = 100 [parachains.collator] name = "collator01" ``` This configuration file defines a network with the following chains: - **relaychain** - with two nodes named `alice` and `bob` - **parachain** - with a collator named `collator01` The `[settings]` section also defines a timeout of 120 seconds for the network to be ready. ## Spawn the Network To spawn the network, run the following command: ```bash zombienet -p native spawn spawn-a-basic-network.toml ``` This command will spawn the network defined in the `spawn-a-basic-network.toml` configuration file. The `-p native` flag specifies that the network will be spawned using the native provider. If successful, you will see the following output:
zombienet -p native spawn spawn-a-basic-network.toml
Network launched 🚀🚀
Namespace zombie-75a01b93c92d571f6198a67bcb380fcd
Provider native
Node Information
Name alice
Direct Link https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55308#explorer
Prometheus Link http://127.0.0.1:55310/metrics
Log Cmd tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/alice.log
Node Information
Name bob
Direct Link https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55312#explorer
Prometheus Link http://127.0.0.1:50634/metrics
Log Cmd tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/bob.log
Node Information
Name collator01
Direct Link https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55316#explorer
Prometheus Link http://127.0.0.1:55318/metrics
Log Cmd tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/collator01.log
Parachain ID 100
ChainSpec Path /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/100-rococo-local.json
!!! note If the IPs and ports aren't explicitly defined in the configuration file, they may change each time the network is started, causing the links provided in the output to differ from the example. ## Interact with the Spawned Network After the network is launched, you can interact with it using [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank}. To do so, open your browser and use the links listed in the output as `Direct Link`. ### Connect to the Nodes Use the [55308 port address](https://polkadot.js.org/apps/?rpc=ws://127.0.0.1:55308#explorer){target=\_blank} to interact with the same `alice` node used for this tutorial. Ports can change from spawn to spawn, so be sure to locate the link in the output when spawning your own node to ensure you are accessing the correct port. If you want to interact with the nodes more programmatically, you can also use the [Polkadot.js API](https://polkadot.js.org/docs/api/){target=\_blank}. For example, the following code snippet shows how to connect to the `alice` node using the Polkadot.js API and log some information about the chain and node: ```typescript import { ApiPromise, WsProvider } from '@polkadot/api'; async function main() { const wsProvider = new WsProvider('ws://127.0.0.1:55308'); const api = await ApiPromise.create({ provider: wsProvider }); // Retrieve the chain & node information via rpc calls const [chain, nodeName, nodeVersion] = await Promise.all([ api.rpc.system.chain(), api.rpc.system.name(), api.rpc.system.version(), ]); console.log( `You are connected to chain ${chain} using ${nodeName} v${nodeVersion}` ); } main() .catch(console.error) .finally(() => process.exit()); ``` Both methods allow you to interact easily with the network and its nodes. ### Check Metrics You can also check the metrics of the nodes by accessing the links provided in the output as `Prometheus Link`. [Prometheus](https://prometheus.io/){target=\_blank} is a monitoring and alerting toolkit that collects metrics from the nodes. By accessing the provided links, you can see the metrics of the nodes in a web interface. For example, the following image shows the Prometheus metrics for Bob's node from the Zombienet test: ![](/images/tutorials/polkadot-sdk/testing/spawn-basic-chain/spawn-basic-network-01.webp) ### Check Logs To view individual node logs, locate the `Log Cmd` command in Zombienet's startup output. For example, to see what the `alice` node is doing, find the log command that references `alice.log` in its file path. Note that Zombienet will show you the correct path for your instance when it starts up, so use that path rather than copying from the example below: ```bash tail -f /tmp/zombie-794af21178672e1ff32c612c3c7408dc_-2397036-6717MXDxcS55/alice.log ``` After running this command, you will see the logs of the `alice` node in real time, which can be useful for debugging purposes. The logs of the `bob` and `collator01` nodes can be checked similarly. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/demo-aplications/deploying-uniswap-v2/ --- BEGIN CONTENT --- --- title: Deploying Uniswap V2 on Polkadot description: Learn how to deploy and test Uniswap V2 on Polkadot Hub using Hardhat, bringing AMM-based token swaps to the Polkadot ecosystem. categories: dApps, Tooling --- # Deploy Uniswap V2 !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**.
## Introduction Decentralized exchanges (DEXs) are a cornerstone of the DeFi ecosystem, allowing for permissionless token swaps without intermediaries. [Uniswap V2](https://docs.uniswap.org/contracts/v2/overview){target=\_blank}, with its Automated Market Maker (AMM) model, revolutionized DEXs by enabling liquidity provision for any ERC-20 token pair. This tutorial will guide you through how Uniswap V2 works so you can take advantage of it in your projects deployed to Polkadot Hub. By understanding these contracts, you'll gain hands-on experience with one of the most influential DeFi protocols and understand how it functions across blockchain ecosystems. ## Prerequisites Before starting, make sure you have: - Node.js (v16.0.0 or later) and npm installed - Basic understanding of Solidity and JavaScript - Familiarity with the [`hardhat-polkadot`](/develop/smart-contracts/dev-environments/hardhat){target=\_blank} development environment - Some PAS test tokens to cover transaction fees (obtained from the [Polkadot faucet](https://faucet.polkadot.io/?parachain=1111){target=\_blank}) - Basic understanding of how AMMs and liquidity pools work ## Set Up the Project Let's start by cloning the Uniswap V2 project: 1. Clone the Uniswap V2 repository: ```bash git clone https://github.com/polkadot-developers/polkavm-hardhat-examples.git -b v0.0.6 cd polkavm-hardhat-examples/uniswap-v2-polkadot/ ``` 2. Install the required dependencies: ```bash npm install ``` 3. Update the `hardhat.config.js` file so the paths for the Substrate node and the ETH-RPC adapter match the paths on your machine. For more info, check the [Testing your Contract](/develop/smart-contracts/dev-environments/hardhat/#testing-your-contract){target=\_blank} section in the Hardhat guide ```js title="hardhat.config.js" hardhat: { polkavm: true, nodeConfig: { nodeBinaryPath: '../bin/substrate-node', rpcPort: 8000, dev: true, }, adapterConfig: { adapterBinaryPath: '../bin/eth-rpc', dev: true, }, }, ``` 4. Create a `.env` file in your project root to store your private keys (you can use the `env.example` file as a template): ```text title=".env" LOCAL_PRIV_KEY="INSERT_LOCAL_PRIVATE_KEY" AH_PRIV_KEY="INSERT_AH_PRIVATE_KEY" ``` Be sure to replace `"INSERT_LOCAL_PRIVATE_KEY"` with a private key available in the local environment (you can get one from this [file](https://github.com/paritytech/hardhat-polkadot/blob/main/packages/hardhat-polkadot-node/src/constants.ts#L22){target=\_blank}), and `"INSERT_AH_PRIVATE_KEY"` with the private key of the account you want to use to deploy the contracts. You can get this by exporting the private key from your wallet (e.g., MetaMask). !!!warning Keep your private key safe, and never share it with anyone. If it is compromised, your funds can be stolen. 5. Compile the contracts: ```bash npx hardhat compile ``` If the compilation is successful, you should see the following output:
npx hardhat compile Compiling 12 Solidity files Successfully compiled 12 Solidity files
After running the above command, you should see the compiled contracts in the `artifacts-pvm` directory. This directory contains the ABI and bytecode of your contracts. ## Understanding Uniswap V2 Architecture Before interacting with the contracts, it's essential to understand the core architecture that powers Uniswap V2. This model forms the basis of nearly every modern DEX implementation and operates on the principles of automated market making, token-pair liquidity pools, and deterministic pricing. At the heart of Uniswap V2 lies a simple but powerful system composed of two major smart contracts: - **Factory Contract** - the factory acts as a registry and creator of new trading pairs. When two ERC-20 tokens are to be traded, the Factory contract is responsible for generating a new Pair contract that will manage that specific token pair’s liquidity pool. It keeps track of all deployed pairs and ensures uniqueness; no duplicate pools can exist for the same token combination - **Pair Contract** - each pair contract is a decentralized liquidity pool that holds reserves of two ERC-20 tokens. These contracts implement the core logic of the AMM, maintaining a constant product invariant (x \* y = k) to facilitate swaps and price determination. Users can contribute tokens to these pools in return for LP (liquidity provider) tokens, which represent their proportional share of the reserves This minimal architecture enables Uniswap to be highly modular, trustless, and extensible. By distributing responsibilities across these components, developers and users can engage with the protocol in a composable and predictable manner, making it an ideal foundation for DEX functionality across ecosystems, including Polkadot Hub. The project scaffolding is as follows: ```bash uniswap-V2-polkadot ├── bin/ ├── contracts/ │ ├── interfaces/ │ │ ├── IERC20.sol │ │ ├── IUniswapV2Callee.sol │ │ ├── IUniswapV2ERC20.sol │ │ ├── IUniswapV2Factory.sol │ │ └── IUniswapV2Pair.sol │ ├── libraries/ │ │ ├── Math.sol │ │ ├── SafeMath.sol │ │ └── UQ112x112.sol │ ├── test/ │ │ └── ERC20.sol │ ├── UniswapV2ERC20.sol │ ├── UniswapV2Factory.sol │ └── UniswapV2Pair.sol ├── ignition/ ├── scripts/ │ └── deploy.js ├── node_modules/ ├── test/ │ ├── shared/ │ │ ├── fixtures.js │ │ └── utilities.js │ ├── UniswapV2ERC20.js │ ├── UniswapV2Factory.js │ └── UniswapV2Pair.js ├── .env.example ├── .gitignore ├── hardhat.config.js ├── package.json └── README.md ``` ## Test the Contracts You can run the provided test suite to ensure the contracts are working as expected. The tests cover various scenarios, including creating pairs, adding liquidity, and executing swaps. To test it locally, you can run the following commands: 1. Spawn a local node for testing: ```bash npx hardhat node ``` This command will spawn a local Substrate node along with the ETH-RPC adapter. The node will be available at `ws://127.0.0.1:8000` and the ETH-RPC adapter at `http://localhost:8545`. 2. In a new terminal, run the tests: ```bash npx hardhat test --network localNode ``` The result should look like this:
npx hardhat test --network localNode Compiling 12 Solidity files Successfully compiled 12 Solidity files UniswapV2ERC20 ✔ name, symbol, decimals, totalSupply, balanceOf, DOMAIN_SEPARATOR, PERMIT_TYPEHASH (44ms) ✔ approve (5128ms) ✔ transfer (5133ms) ✔ transfer:fail ✔ transferFrom (6270ms) ✔ transferFrom:max (6306ms) UniswapV2Factory ✔ feeTo, feeToSetter, allPairsLength ✔ createPair (176ms) ✔ createPair:reverse (1224ms) ✔ setFeeTo (1138ms) ✔ setFeeToSetter (1125ms) UniswapV2Pair ✔ mint (11425ms) ✔ getInputPrice:0 (12590ms) ✔ getInputPrice:1 (17600ms) ✔ getInputPrice:2 (17618ms) ✔ getInputPrice:3 (17704ms) ✔ getInputPrice:4 (17649ms) ✔ getInputPrice:5 (17594ms) ✔ getInputPrice:6 (13643ms) ✔ optimistic:0 (17647ms) ✔ optimistic:1 (17946ms) ✔ optimistic:2 (17657ms) ✔ optimistic:3 (21625ms) ✔ swap:token0 (12665ms) ✔ swap:token1 (17631ms) ✔ burn (17690ms) ✔ feeTo:off (23900ms) ✔ feeTo:on (24991ms) 28 passing (12m)
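If the tests cannot reach the node, it may help to confirm the ETH-RPC adapter is actually serving requests before re-running the suite. The following is a minimal sketch, assuming ethers v6 is available in the project (Hardhat setups commonly bundle it) and the adapter is at `http://localhost:8545` as noted in step 1:

```js
import { ethers } from 'ethers';

// The ETH-RPC adapter spawned by `npx hardhat node` listens on port 8545
const provider = new ethers.JsonRpcProvider('http://localhost:8545');

async function checkAdapter() {
  // Two cheap calls that fail fast if the adapter is not up yet
  const network = await provider.getNetwork();
  const blockNumber = await provider.getBlockNumber();
  console.log(`Connected to chain ${network.chainId} at block ${blockNumber}`);
}

checkAdapter().catch(console.error);
```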
## Deploy the Contracts After successfully testing the contracts, you can deploy them to the local node or Polkadot Hub. The deployment script is located in the `scripts` directory and is named `deploy.js`. This script deploys the `Factory` and `Pair` contracts to the network. To deploy the contracts, run the following command: ```bash npx hardhat run scripts/deploy.js --network localNode ``` This command deploys the contracts to your local blockchain for development and testing. If you want to deploy to Polkadot Hub, you can use the following command: ```bash npx hardhat run scripts/deploy.js --network passetHub ``` The command above deploys to the actual Polkadot TestNet. It requires PAS test tokens, persists on the network, and operates under real network conditions. The deployment script will output the addresses of the deployed contracts. Save these addresses, as you will need them to interact with the contracts. For example, the output should look like this:
npx hardhat run scripts/deploy.js --network localNode Successfully compiled 12 Solidity files Deploying contracts using 0xf24FF3a9CF04c71Dbc94D0b566f7A27B94566cac Deploying UniswapV2ERC20... ETH deployed to : 0x7acc1aC65892CF3547b1b0590066FB93199b430D Deploying UniswapV2Factory... Factory deployed to : 0x85b108660f47caDfAB9e0503104C08C1c96e0DA9 Deploying UniswapV2Pair with JsonRpcProvider workaround... Pair deployed to : 0xF0e46847c8bFD122C4b5EEE1D4494FF7C5FC5104
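With the addresses in hand, you can sanity-check the deployment with a couple of read-only calls against the Factory. The following is a minimal sketch, assuming ethers v6 and the local node from the testing section; the Factory address below is the one from the example output, so substitute the address printed by your own deployment:

```js
import { ethers } from 'ethers';

// Factory address from the example output above; substitute the address
// printed by your own deployment run.
const FACTORY_ADDRESS = '0x85b108660f47caDfAB9e0503104C08C1c96e0DA9';

// Minimal fragment of the canonical UniswapV2Factory interface
const FACTORY_ABI = [
  'function feeToSetter() view returns (address)',
  'function allPairsLength() view returns (uint256)',
];

const provider = new ethers.JsonRpcProvider('http://localhost:8545');
const factory = new ethers.Contract(FACTORY_ADDRESS, FACTORY_ABI, provider);

async function inspectFactory() {
  // Read-only calls: no signer or gas required
  const feeToSetter = await factory.feeToSetter();
  const pairCount = await factory.allPairsLength();
  console.log(`feeToSetter: ${feeToSetter}`);
  console.log(`pairs created so far: ${pairCount}`);
}

inspectFactory().catch(console.error);
```

The same approach works against Polkadot Hub; point the provider at the corresponding RPC endpoint instead of the local adapter.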
## Conclusion This tutorial guided you through deploying Uniswap V2 contracts to Polkadot Hub. This implementation brings the powerful AMM architecture to the Polkadot ecosystem, laying the foundation for the decentralized trading of ERC-20 token pairs. By following this guide, you've gained practical experience with: - Setting up a Hardhat project for deploying to Polkadot Hub - Understanding the Uniswap V2 architecture - Testing Uniswap V2 contracts in a local environment - Deploying contracts to both local and testnet environments To build on this foundation, you could extend this project by implementing functionality to create liquidity pools, execute token swaps, and build a user interface for interacting with your deployment. This knowledge can be leveraged to build more complex DeFi applications or to integrate Uniswap V2 functionality into your existing projects on Polkadot. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/demo-aplications/ --- BEGIN CONTENT --- --- title: Demo Applications description: Explore working demo applications that can be deployed to Polkadot Hub, showcasing common use cases and integration patterns. template: index-page.html --- # Demo Applications !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. This section highlights demo applications that can be deployed to Polkadot Hub. These examples illustrate practical use cases and provide guidance for developers looking to launch and test applications within the Polkadot ecosystem. ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/deploy-erc20/ --- BEGIN CONTENT --- --- title: Deploy an ERC-20 to Polkadot Hub description: Deploy an ERC-20 token on Polkadot Hub using PolkaVM. This guide covers contract creation, compilation, deployment, and interaction via Polkadot Remix IDE. tutorial_badge: Beginner categories: Basics, dApps, Smart Contracts --- # Deploy an ERC-20 to Polkadot Hub !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction [ERC-20](https://eips.ethereum.org/EIPS/eip-20){target=\_blank} tokens are fungible tokens commonly used for creating cryptocurrencies, governance tokens, and staking mechanisms. Polkadot Hub enables easy token deployment with Ethereum-compatible smart contracts via PolkaVM. This tutorial covers deploying an ERC-20 contract on the Polkadot Hub TestNet using [Polkadot Remix IDE](https://remix.polkadot.io){target=\_blank}, a web-based development tool. [OpenZeppelin's ERC-20 contracts]({{ dependencies.repositories.open_zeppelin_contracts.repository_url}}/tree/{{ dependencies.repositories.open_zeppelin_contracts.version}}/contracts/token/ERC20){target=\_blank} are used for security and compliance. ## Prerequisites Before starting, make sure you have: - [MetaMask](https://metamask.io/){target=\_blank} installed and connected to Polkadot Hub. For detailed instructions, see the [Connect Your Wallet](/develop/smart-contracts/wallets){target=\_blank} section - A funded account with some PAS tokens (you can get them from the [Polkadot Faucet](https://faucet.polkadot.io/?parachain=1111){target=\_blank}). 
To learn how to get test tokens, check out the [Test Tokens](/develop/smart-contracts/connect-to-polkadot#test-tokens){target=\_blank} section - Basic understanding of Solidity and fungible tokens ## Create the ERC-20 Contract To create the ERC-20 contract, you can follow the steps below: 1. Navigate to the [Polkadot Remix IDE](https://remix.polkadot.io){target=\_blank} 2. Click the **Create new file** button under the **contracts** folder, and name your contract `MyToken.sol` ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-1.webp) 3. Now, paste the following ERC-20 contract code into the editor ```solidity title="MyToken.sol" // SPDX-License-Identifier: MIT // Compatible with OpenZeppelin Contracts ^5.0.0 pragma solidity ^0.8.22; import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol"; import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol"; contract MyToken is ERC20, Ownable { constructor(address initialOwner) ERC20("MyToken", "MTK") Ownable(initialOwner) {} function mint(address to, uint256 amount) public onlyOwner { _mint(to, amount); } } ``` The key components of the code above are: - Contract imports - [**`ERC20.sol`**]({{ dependencies.repositories.open_zeppelin_contracts.repository_url}}/tree/{{ dependencies.repositories.open_zeppelin_contracts.version}}/contracts/token/ERC20/ERC20.sol){target=\_blank} - the base contract for fungible tokens, implementing core functionality like transfers, approvals, and balance tracking - [**`Ownable.sol`**]({{ dependencies.repositories.open_zeppelin_contracts.repository_url}}/tree/{{ dependencies.repositories.open_zeppelin_contracts.version}}/contracts/access/Ownable.sol){target=\_blank} - provides basic authorization control, ensuring only the contract owner can mint new tokens - Constructor parameters - **`initialOwner`** - sets the address that will have administrative rights over the contract - **`"MyToken"`** - the full name of your token - **`"MTK"`** - the symbol representing your token in wallets and exchanges - Key functions - **`mint(address to, uint256 amount)`** - allows the contract owner to create new tokens for any address. The amount should include 18 decimals (e.g., 1 token = 1000000000000000000) - Inherited [Standard ERC-20](https://ethereum.org/en/developers/docs/standards/tokens/erc-20/){target=\_blank} functions: - **`transfer(address recipient, uint256 amount)`** - sends a specified amount of tokens to another address - **`approve(address spender, uint256 amount)`** - grants permission for another address to spend a specific amount of tokens on behalf of the token owner - **`transferFrom(address sender, address recipient, uint256 amount)`** - transfers tokens from one address to another, if previously approved - **`balanceOf(address account)`** - returns the token balance of a specific address - **`allowance(address owner, address spender)`** - checks how many tokens an address is allowed to spend on behalf of another address !!! tip Use the [OpenZeppelin Contracts Wizard](https://wizard.openzeppelin.com/){target=\_blank} to quickly generate customized smart contracts. Simply configure your contract, copy the generated code, and paste it into Polkadot Remix IDE for deployment.
Below is an example of an ERC-20 token contract created with it: ![Screenshot of the OpenZeppelin Contracts Wizard showing an ERC-20 contract configuration.](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-2.webp) ## Compile the Contract The compilation transforms your Solidity source code into bytecode that can be deployed on the blockchain. During this process, the compiler checks your contract for syntax errors, ensures type safety, and generates the machine-readable instructions needed for blockchain execution. To compile your contract, follow the instructions below: 1. Select the **Solidity Compiler** plugin from the left panel ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-3.webp) 2. Click the **Compile MyToken.sol** button ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-4.webp) 3. If the compilation succeeded, you'll see a green checkmark indicating success in the **Solidity Compiler** icon ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-5.webp) ## Deploy the Contract Deployment is the process of publishing your compiled smart contract to the blockchain, making it permanently available for interaction. During deployment, you'll create a new instance of your contract on the blockchain, which involves: 1. Select the **Deploy & Run Transactions** plugin from the left panel ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-6.webp) 2. Configure the deployment settings 1. From the **ENVIRONMENT** dropdown, select **Injected Provider - Talisman** (check the [Deploying Contracts](/develop/smart-contracts/dev-environments/remix/#deploying-contracts){target=\_blank} section of the Remix IDE guide for more details) 2. From the **ACCOUNT** dropdown, select the account you want to use for the deployment ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-7.webp) 3. Configure the contract parameters 1. Enter the address that will own the deployed token contract 2. Click the **Deploy** button to initiate the deployment ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-8.webp) 4. Talisman will pop up - review the transaction details. Click **Approve** to deploy your contract ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-9.webp){: .browser-extension} If the deployment process succeeded, you will see the transaction details in the terminal, including the contract address and deployment transaction hash: ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-10.webp) ## Interact with Your ERC-20 Contract Once deployed, you can interact with your contract through Remix: 1. Find your contract under **Deployed/Unpinned Contracts**, and click it to expand the available methods ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-11.webp) 2. To mint new tokens: 1. Click on the contract to expand its associated methods 2. Expand the **mint** function 3. Enter: - The recipient address - The amount (remember to add 18 zeros for 1 whole token) 4. Click **Transact** ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-12.webp) 3.
Click **Approve** to confirm the transaction in the Talisman popup ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-13.webp){: .browser-extension} If the transaction succeeds, you will see the following output in the terminal: ![](/images/tutorials/smart-contracts/deploy-erc20/deploy-erc20-14.webp) Other common functions you can use: - **`balanceOf(address)`** - check token balance of any address - **`transfer(address to, uint256 amount)`** - send tokens to another address - **`approve(address spender, uint256 amount)`** - allow another address to spend your tokens Feel free to explore and interact with the contract's other functions using the same approach - selecting the method, providing any required parameters, and confirming the transaction through Talisman when needed. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/deploy-nft/ --- BEGIN CONTENT --- --- title: Deploy an NFT to Polkadot Hub description: Deploy an NFT on Polkadot Hub using PolkaVM and OpenZeppelin. Learn how to compile, deploy, and interact with your contract using Polkadot Remix IDE. tutorial_badge: Beginner categories: Basics, dApps, Smart Contracts --- # Deploy an NFT to Polkadot Hub !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Non-Fungible Tokens (NFTs) represent unique digital assets commonly used for digital art, collectibles, gaming, and identity verification. Polkadot Hub supports Ethereum-compatible smart contracts through PolkaVM, enabling straightforward NFT deployment. This tutorial guides you through deploying an [ERC-721](https://eips.ethereum.org/EIPS/eip-721){target=\_blank} NFT contract on the Polkadot Hub TestNet using the [Polkadot Remix IDE](https://remix.polkadot.io){target=\_blank}, a web-based development environment. To ensure security and standard compliance, it uses the [OpenZeppelin NFT contracts]({{ dependencies.repositories.open_zeppelin_contracts.repository_url}}/tree/{{ dependencies.repositories.open_zeppelin_contracts.version}}){target=\_blank} implementation. ## Prerequisites Before starting, make sure you have: - [Talisman](https://talisman.xyz/){target=\_blank} installed and connected to the Polkadot Hub TestNet. Check the [Connect to Polkadot](/develop/smart-contracts/connect-to-polkadot/){target=\_blank} guide for more information - A funded account with some PAS tokens (you can get them from the [Faucet](https://faucet.polkadot.io/?parachain=1111){target=\_blank}, noting that the faucet imposes a daily token limit, which may require multiple requests to obtain sufficient funds for testing) - Basic understanding of Solidity and NFTs; see the [Solidity Basics](https://soliditylang.org/){target=\_blank} and the [NFT Overview](https://ethereum.org/en/nft/){target=\_blank} guides for more details ## Create the NFT Contract To create the NFT contract, you can follow the steps below: 1. Navigate to the [Polkadot Remix IDE](https://remix.polkadot.io/){target=\_blank} 2. Click the **Create new file** button under the **contracts** folder, and name your contract `MyNFT.sol` ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-1.webp) 3.
Now, paste the following NFT contract code into the editor ```solidity title="MyNFT.sol" // SPDX-License-Identifier: MIT // Compatible with OpenZeppelin Contracts ^5.0.0 pragma solidity ^0.8.22; import {ERC721} from "@openzeppelin/contracts/token/ERC721/ERC721.sol"; import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol"; contract MyToken is ERC721, Ownable { uint256 private _nextTokenId; constructor(address initialOwner) ERC721("MyToken", "MTK") Ownable(initialOwner) {} function safeMint(address to) public onlyOwner { uint256 tokenId = _nextTokenId++; _safeMint(to, tokenId); } } ``` The key components of the code above are: - Contract imports - [**`ERC721.sol`**]({{ dependencies.repositories.open_zeppelin_contracts.repository_url }}/blob/{{ dependencies.repositories.open_zeppelin_contracts.version }}/contracts/token/ERC721/ERC721.sol){target=\_blank} - the base contract for non-fungible tokens, implementing core NFT functionality like transfers and approvals - [**`Ownable.sol`**]({{ dependencies.repositories.open_zeppelin_contracts.repository_url }}/blob/{{ dependencies.repositories.open_zeppelin_contracts.version }}/contracts/access/Ownable.sol){target=\_blank} - provides basic authorization control, ensuring only the contract owner can mint new tokens - Constructor parameters - **`initialOwner`** - sets the address that will have administrative rights over the contract - **`"MyToken"`** - the full name of your NFT collection - **`"MTK"`** - the symbol representing your token in wallets and marketplaces - Key functions - [**`_safeMint(to, tokenId)`**]({{ dependencies.repositories.open_zeppelin_contracts.repository_url }}/blob/{{ dependencies.repositories.open_zeppelin_contracts.version }}/contracts/token/ERC721/ERC721.sol#L304){target=\_blank} - an internal function from `ERC721` that safely mints new tokens. It includes checks to ensure the recipient can handle `ERC721` tokens, with the `_nextTokenId` mechanism automatically generating unique sequential token IDs and the `onlyOwner` modifier restricting minting rights to the contract owner - Inherited [Standard ERC721](https://ethereum.org/en/developers/docs/standards/tokens/erc-721/){target=\_blank} functions provide a standardized set of methods that enable interoperability across different platforms, wallets, and marketplaces, ensuring that your NFT can be easily transferred, traded, and managed by any system that supports the `ERC721` standard: - **`transferFrom(address from, address to, uint256 tokenId)`** - transfers a specific NFT from one address to another - **`safeTransferFrom(address from, address to, uint256 tokenId)`** - safely transfers an NFT, including additional checks to prevent loss - **`approve(address to, uint256 tokenId)`** - grants permission for another address to transfer a specific NFT - **`setApprovalForAll(address operator, bool approved)`** - allows an address to manage all of the owner's NFTs - **`balanceOf(address owner)`** - returns the number of NFTs owned by a specific address - **`ownerOf(uint256 tokenId)`** - returns the current owner of a specific NFT !!! tip Use the [OpenZeppelin Contracts Wizard](https://wizard.openzeppelin.com/){target=\_blank} to generate customized smart contracts quickly. Simply configure your contract, copy the generated code, and paste it into Polkadot Remix IDE for deployment. 
Below is an example of an ERC-721 token contract created with the wizard: ![Screenshot of the OpenZeppelin Contracts Wizard showing an ERC-721 contract configuration.](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-2.webp) ## Compile the Contract Compilation converts your Solidity source code into bytecode suitable for deployment on the blockchain. During this process, the compiler checks your contract for syntax errors, verifies type safety, and produces machine-readable instructions for execution on the blockchain. 1. Select the **Solidity Compiler** plugin from the left panel ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-3.webp) 2. Click the **Compile MyNFT.sol** button ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-4.webp) 3. If compilation succeeds, a green checkmark will appear on the **Solidity Compiler** icon ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-5.webp) ## Deploy the Contract Deployment uploads your compiled smart contract to the blockchain so that it can be interacted with. To deploy your contract: 1. Select the **Deploy & Run Transactions** plugin from the left panel ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-6.webp) 2. Configure the deployment settings 1. From the **ENVIRONMENT** dropdown, select **Injected Provider - Talisman** (check the [Deploying Contracts](/develop/smart-contracts/dev-environments/remix/#deploying-contracts){target=\_blank} section of the Remix IDE guide for more details) 2. From the **ACCOUNT** dropdown, select the account you want to use for the deployment ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-7.webp) 3. Configure the contract parameters 1. Enter the address that will own the deployed NFT 2. Click the **Deploy** button to initiate the deployment ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-8.webp) 4. Talisman will pop up - review the transaction details, then click **Approve** to deploy your contract ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-9.webp){: .browser-extension} Deploying this contract requires paying gas fees in PAS tokens on the Polkadot Hub TestNet. Ensure your Talisman account is funded with sufficient PAS tokens from the faucet before confirming the transaction; check the [Test Tokens](/develop/smart-contracts/connect-to-polkadot/#test-tokens){target=\_blank} section for more information. Gas fees cover the computational resources needed to deploy and execute the smart contract on the blockchain. If the deployment succeeds, you will see the following output in the terminal: ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-10.webp) ## Interact with Your NFT Contract Once deployed, you can interact with your contract through Remix: 1. Find your contract under **Deployed/Unpinned Contracts**, and click it to expand the available methods for the contract ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-11.webp) 2. To mint an NFT 1. Click on the contract to expand its associated methods 2. Expand the **safeMint** function 3. Enter the recipient address 4. Click **Transact** ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-12.webp) 3.
Click **Approve** to confirm the transaction in the Talisman popup ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-13.webp){: .browser-extension} If the transaction is successful, the terminal will display the following output, which details the information about the transaction, including the transaction hash, the block number, the associated logs, and so on. ![](/images/tutorials/smart-contracts/deploy-nft/deploy-nft-14.webp) Feel free to explore and interact with the contract's other functions using the same approach - selecting the method, providing any required parameters, and confirming the transaction through Talisman when needed. --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/ --- BEGIN CONTENT --- --- title: Smart Contracts description: Learn how to create, deploy, and manage smart contracts in the Polkadot ecosystem with detailed, step-by-step tutorials. template: index-page.html --- # Smart Contracts Tutorials !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. Get started with deploying and interacting with smart contracts on Polkadot through practical, hands-on tutorials. Whether you're a beginner or an experienced developer, these guides will help you navigate the entire development lifecycle. ## What to Expect from These Tutorials - **Beginner to advanced** – suitable for developers of all levels - **Complete workflows** – covers the entire process from writing code to on-chain deployment - **Interactive examples** – follow along with real, working code that you can modify and expand ## Start Building Jump into the tutorials and learn how to: - Write and compile smart contracts - Deploy contracts to the Polkadot network - Interact with deployed contracts using libraries like Ethers.js and viem Choose a tutorial below and start coding today! ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/launch-your-first-project/create-contracts/ --- BEGIN CONTENT --- --- title: Create a Smart Contract description: Learn how to write a basic smart contract using just a text editor. This guide covers creating and preparing a contract for deployment on Polkadot Hub. tutorial_badge: Beginner categories: Basics, Smart Contracts --- # Create a Smart Contract !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Creating [smart contracts](/develop/smart-contracts/overview/){target=\_blank} is fundamental to blockchain development. While many frameworks and tools are available, understanding how to write a contract from scratch with just a text editor is essential knowledge. This tutorial will guide you through creating a basic smart contract that can be used with other tutorials for deployment and integration on Polkadot Hub. To understand how smart contracts work in Polkadot Hub, check the [Smart Contract Basics](/polkadot-protocol/smart-contract-basics/){target=\_blank} guide for more information. ## Prerequisites Before starting, make sure you have: - A text editor of your choice ([VS Code](https://code.visualstudio.com/){target=\_blank}, [Sublime Text](https://www.sublimetext.com/){target=\_blank}, etc.) - Basic understanding of programming concepts - Familiarity with the Solidity programming language syntax. 
For further references, check the official [Solidity documentation](https://docs.soliditylang.org/en/latest/){target=\_blank} ## Understanding Smart Contract Structure Let's explore these components before building the contract: - [**SPDX license identifier**](https://docs.soliditylang.org/en/v0.6.8/layout-of-source-files.html){target=\_blank} - a standardized way to declare the license under which your code is released. This helps with legal compliance and is required by the Solidity compiler to avoid warnings - **Pragma directive** - specifies which version of Solidity compiler should be used for your contract - **Contract declaration** - similar to a class in object-oriented programming, it defines the boundaries of your smart contract - **State variables** - data stored directly in the contract that persists between function calls. These represent the contract's "state" on the blockchain - **Functions** - executable code that can read or modify the contract's state variables - **Events** - notification mechanisms that applications can subscribe to in order to track blockchain changes ## Create the Smart Contract In this section, you'll build a simple storage contract step by step. This basic Storage contract is a great starting point for beginners. It introduces key concepts like state variables, functions, and events in a simple way, demonstrating how data is stored and updated on the blockchain. Later, you'll explore each component in more detail to understand what's happening behind the scenes. This contract will: - Store a number - Allow updating the stored number - Emit an event when the number changes To build the smart contract, follow the steps below: 1. Create a new file named `Storage.sol` 2. Add the SPDX license identifier at the top of the file: ```solidity // SPDX-License-Identifier: MIT ``` This line tells users and tools which license governs your code. The [MIT license](https://opensource.org/license/mit){target=\_blank} is commonly used for open-source projects. The Solidity compiler requires this line to avoid licensing-related warnings. 3. Specify the Solidity version: ```solidity pragma solidity ^0.8.28; ``` The caret `^` means "this version or any compatible newer version." This helps ensure your contract compiles correctly with the intended compiler features. 4. Create the contract structure: ```solidity contract Storage { // Contract code will go here } ``` This defines a contract named "Storage", similar to how you would define a class in other programming languages. 5. Add the state variables and event: ```solidity contract Storage { // State variable to store a number uint256 private number; // Event to notify when the number changes event NumberChanged(uint256 newNumber); } ``` Here, you're defining: - A state variable named `number` of type `uint256` (unsigned integer with 256 bits), which is marked as `private` so it can only be accessed via functions within this contract - An event named `NumberChanged` that will be triggered whenever the number changes. The event includes the new value as data 6. 
Add the getter and setter functions: ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.28; contract Storage { // State variable to store our number uint256 private number; // Event to notify when the number changes event NumberChanged(uint256 newNumber); // Function to store a new number function store(uint256 newNumber) public { number = newNumber; emit NumberChanged(newNumber); } // Function to retrieve the stored number function retrieve() public view returns (uint256) { return number; } } ``` ??? code "Complete Storage.sol contract" ```solidity title="Storage.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.28; contract Storage { // State variable to store our number uint256 private number; // Event to notify when the number changes event NumberChanged(uint256 newNumber); // Function to store a new number function store(uint256 newNumber) public { number = newNumber; emit NumberChanged(newNumber); } // Function to retrieve the stored number function retrieve() public view returns (uint256) { return number; } } ``` ## Understanding the Code Let's break down the key components of the contract: - **State Variable** - `uint256 private number` - a private variable that can only be accessed through the contract's functions - The `private` keyword prevents direct access from other contracts, but it's important to note that while other contracts cannot read this variable directly, the data itself is still visible on the blockchain and can be read by external tools or applications that interact with the blockchain. "Private" in Solidity doesn't mean the data is encrypted or truly hidden - State variables in Solidity are permanent storage on the blockchain, making them different from variables in traditional programming. Every change to a state variable requires a transaction and costs gas (the fee paid for blockchain operations) - **Event** - `event NumberChanged(uint256 newNumber)` - emitted when the stored number changes - When triggered, events write data to the blockchain's log, which can be efficiently queried by applications - Unlike state variables, events cannot be read by smart contracts, only by external applications - Events are much more gas-efficient than storing data when you only need to notify external systems of changes - **Functions** - `store(uint256 newNumber)` - updates the stored number and emits an event - This function changes the state of the contract and requires a transaction to execute - The `emit` keyword is used to trigger the defined event - `retrieve()` - returns the current stored number - The `view` keyword indicates that this function only reads data and doesn't modify the contract's state - View functions don't require a transaction and don't cost gas when called externally For those new to Solidity, this getter/setter arrangement is a common design pattern. Instead of directly accessing state variables, the convention is to use functions to control access and add additional logic if needed. This basic contract serves as a foundation for learning smart contract development. Real-world contracts often require additional security considerations, more complex logic, and thorough testing before deployment. For more detailed information about Solidity types, functions, and best practices, refer to the [Solidity documentation](https://docs.soliditylang.org/en/latest/){target=\_blank} or this [beginner's guide to Solidity](https://www.tutorialspoint.com/solidity/index.htm){target=\_blank}.
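Since events can only be consumed off-chain, it may help to see that side of the flow. The following is a minimal, hypothetical ethers.js sketch of an external application reacting to `NumberChanged`; the contract address is a placeholder, and the RPC endpoint is the Polkadot Hub TestNet endpoint used in later tutorials.

```javascript
// Hypothetical standalone script (e.g., watch-events.mjs); assumes Node 18+ and ethers installed
import { JsonRpcProvider, Contract } from 'ethers';

const provider = new JsonRpcProvider(
  'https://testnet-passet-hub-eth-rpc.polkadot.io',
);
const storage = new Contract(
  '0xINSERT_DEPLOYED_CONTRACT_ADDRESS', // placeholder
  [
    'event NumberChanged(uint256 newNumber)',
    'function retrieve() view returns (uint256)',
  ],
  provider,
);

// Subscribe to future NumberChanged events
storage.on('NumberChanged', (newNumber) => {
  console.log('Stored number changed to', newNumber.toString());
});

// Unlike state variables, past events can also be queried from the logs
const pastEvents = await storage.queryFilter('NumberChanged');
console.log(`NumberChanged has been emitted ${pastEvents.length} times`);
```

Note how the contract itself never reads its own events; they exist purely for external consumers like this script.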
## Where to Go Next

- Tutorial __Test and Deploy with Hardhat__ --- Learn how to test and deploy the smart contract you created by using Hardhat. [:octicons-arrow-right-24: Get Started](/tutorials/smart-contracts/launch-your-first-project/test-and-deploy-with-hardhat/)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/launch-your-first-project/create-dapp-ethers-js/ --- BEGIN CONTENT --- --- title: Create a dApp With Ethers.js description: Learn how to build a decentralized application on Polkadot Hub using Ethers.js and Next.js by creating a simple dApp that interacts with a smart contract. tutorial_badge: Intermediate categories: dApp, Tooling --- # Create a DApp With Ethers.js !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction Decentralized applications (dApps) have become a cornerstone of the Web3 ecosystem, allowing developers to create applications that interact directly with blockchain networks. Polkadot Hub, a blockchain that supports smart contract functionality, provides an excellent platform for deploying and interacting with dApps. In this tutorial, you'll build a complete dApp that interacts with a smart contract deployed on the Polkadot Hub TestNet. It will use [Ethers.js](/develop/smart-contracts/libraries/ethers-js){target=\_blank} to interact with the blockchain and [Next.js](https://nextjs.org/){target=\_blank} as the frontend framework. By the end of this tutorial, you'll have a functional dApp that allows users to connect their wallets, read data from the blockchain, and execute transactions. ## Prerequisites Before you begin, make sure you have: - [Node.js](https://nodejs.org/en){target=\_blank} v16 or newer installed on your machine - A crypto wallet (like MetaMask) with some test tokens. For further information, check the [Connect to Polkadot](/develop/smart-contracts/connect-to-polkadot){target=\_blank} guide - Basic understanding of React and JavaScript - Familiarity with blockchain concepts and Solidity (helpful but not mandatory) ## Project Overview The dApp will interact with a simple Storage contract. For a step-by-step guide on creating it, refer to the [Create Contracts](/tutorials/smart-contracts/launch-your-first-project/create-contracts){target=\_blank} tutorial. This contract allows: - Reading a stored number from the blockchain - Updating the stored number with a new value The contract has already been deployed to the Polkadot Hub TestNet for testing purposes: `0x58053f0e8ede1a47a1af53e43368cd04ddcaf66f`. If you want to deploy your own, follow the [Deploying Contracts](/develop/smart-contracts/dev-environments/remix/#deploying-contracts){target=\_blank} section. Here's a simplified view of what you'll be building: ![](/images/tutorials/smart-contracts/launch-your-first-project/create-dapp-ethers-js/create-dapp-ethers-js-1.webp) The general structure of the project should end up as follows: ```bash ethers-dapp ├── abis │ └── Storage.json └── app ├── components │ ├── ReadContract.js │ ├── WalletConnect.js │ └── WriteContract.js ├── favicon.ico ├── globals.css ├── layout.js ├── page.js └── utils ├── contract.js └── ethers.js ``` ## Set Up the Project Let's start by creating a new Next.js project: ```bash npx create-next-app ethers-dapp --js --eslint --tailwind --app --yes cd ethers-dapp ``` Next, install the needed dependencies: ```bash npm install ethers@{{ dependencies.javascript_packages.ethersjs.version }} ``` ## Connect to Polkadot Hub To interact with the Polkadot Hub, you need to set up an [Ethers.js Provider](/develop/smart-contracts/libraries/ethers-js/#set-up-the-ethersjs-provider){target=\_blank} that connects to the blockchain. 
In this example, you will interact with the Polkadot Hub TestNet, so you can experiment safely. Start by creating a new file called `utils/ethers.js` and add the following code: ```javascript title="app/utils/ethers.js" import { BrowserProvider, JsonRpcProvider } from 'ethers'; export const PASSET_HUB_CONFIG = { name: 'Passet Hub', rpc: 'https://testnet-passet-hub-eth-rpc.polkadot.io/', // Passet Hub testnet RPC chainId: 420420422, // Passet Hub testnet chainId blockExplorer: 'https://blockscout-passet-hub.parity-testnet.parity.io/', }; export const getProvider = () => { return new JsonRpcProvider(PASSET_HUB_CONFIG.rpc, { chainId: PASSET_HUB_CONFIG.chainId, name: PASSET_HUB_CONFIG.name, }); }; // Helper to get a signer from the connected browser wallet export const getSigner = async () => { if (window.ethereum) { await window.ethereum.request({ method: 'eth_requestAccounts' }); const ethersProvider = new BrowserProvider(window.ethereum); return ethersProvider.getSigner(); } throw new Error('No Ethereum browser provider detected'); }; ``` This file establishes a connection to the Polkadot Hub TestNet and provides helper functions for obtaining a [Provider](https://docs.ethers.org/v5/api/providers/provider/){target=_blank} and [Signer](https://docs.ethers.org/v5/api/signer/){target=_blank}. The provider allows you to read data from the blockchain, while the signer enables users to send transactions and modify the blockchain state. ## Set Up the Smart Contract Interface For this dApp, you'll use a simple Storage contract that's already deployed, so you need to create an interface to interact with it. First, create a folder called `abis` at the root of your project, create a file `Storage.json`, and paste the corresponding ABI (Application Binary Interface) of the Storage contract. You can copy and paste the following: ???+ code "Storage.sol ABI" ```json title="abis/Storage.json" [ { "inputs": [ { "internalType": "uint256", "name": "_newNumber", "type": "uint256" } ], "name": "setNumber", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [], "name": "storedNumber", "outputs": [ { "internalType": "uint256", "name": "", "type": "uint256" } ], "stateMutability": "view", "type": "function" } ] ``` Now, create a file called `app/utils/contract.js`: ```javascript title="app/utils/contract.js" import { Contract } from 'ethers'; import { getProvider } from './ethers'; import StorageABI from '../../abis/Storage.json'; export const CONTRACT_ADDRESS = '0x58053f0e8ede1a47a1af53e43368cd04ddcaf66f'; export const CONTRACT_ABI = StorageABI; export const getContract = () => { const provider = getProvider(); return new Contract(CONTRACT_ADDRESS, CONTRACT_ABI, provider); }; export const getSignedContract = async (signer) => { return new Contract(CONTRACT_ADDRESS, CONTRACT_ABI, signer); }; ``` This file defines the contract address, ABI, and functions to create instances of the contract for reading and writing. ## Create the Wallet Connection Component Next, let's create a component to handle wallet connections.
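As a quick aside before building the component: you can verify the provider and contract setup above from a standalone script outside the Next.js app. A minimal sketch, assuming Node 18+ and `ethers` installed (the file name is hypothetical):

```javascript
// check-contract.mjs - throwaway sanity check, not part of the app
import { JsonRpcProvider, Contract } from 'ethers';

const provider = new JsonRpcProvider(
  'https://testnet-passet-hub-eth-rpc.polkadot.io/',
);
const storage = new Contract(
  '0x58053f0e8ede1a47a1af53e43368cd04ddcaf66f',
  ['function storedNumber() view returns (uint256)'],
  provider,
);

// Read-only call through the provider; no wallet or gas required
console.log('Current stored number:', (await storage.storedNumber()).toString());
```

Running `node check-contract.mjs` should print the number currently stored on-chain.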
Create a new file called `app/components/WalletConnect.js`: ```javascript title="app/components/WalletConnect.js" 'use client'; import React, { useState, useEffect } from 'react'; import { PASSET_HUB_CONFIG } from '../utils/ethers'; const WalletConnect = ({ onConnect }) => { const [account, setAccount] = useState(null); const [chainId, setChainId] = useState(null); const [error, setError] = useState(null); useEffect(() => { // Check if user already has an authorized wallet connection const checkConnection = async () => { if (window.ethereum) { try { // eth_accounts doesn't trigger the wallet popup const accounts = await window.ethereum.request({ method: 'eth_accounts', }); if (accounts.length > 0) { setAccount(accounts[0]); const chainIdHex = await window.ethereum.request({ method: 'eth_chainId', }); setChainId(parseInt(chainIdHex, 16)); } } catch (err) { console.error('Error checking connection:', err); setError('Failed to check wallet connection'); } } }; checkConnection(); if (window.ethereum) { // Setup wallet event listeners window.ethereum.on('accountsChanged', (accounts) => { setAccount(accounts[0] || null); if (accounts[0] && onConnect) onConnect(accounts[0]); }); window.ethereum.on('chainChanged', (chainIdHex) => { setChainId(parseInt(chainIdHex, 16)); }); } return () => { // Cleanup event listeners if (window.ethereum) { window.ethereum.removeListener('accountsChanged', () => {}); window.ethereum.removeListener('chainChanged', () => {}); } }; }, [onConnect]); const connectWallet = async () => { if (!window.ethereum) { setError( 'MetaMask not detected! Please install MetaMask to use this dApp.' ); return; } try { // eth_requestAccounts triggers the wallet popup const accounts = await window.ethereum.request({ method: 'eth_requestAccounts', }); setAccount(accounts[0]); const chainIdHex = await window.ethereum.request({ method: 'eth_chainId', }); const currentChainId = parseInt(chainIdHex, 16); setChainId(currentChainId); // Prompt user to switch networks if needed if (currentChainId !== PASSET_HUB_CONFIG.chainId) { await switchNetwork(); } if (onConnect) onConnect(accounts[0]); } catch (err) { console.error('Error connecting to wallet:', err); setError('Failed to connect wallet'); } }; const switchNetwork = async () => { try { await window.ethereum.request({ method: 'wallet_switchEthereumChain', params: [{ chainId: `0x${PASSET_HUB_CONFIG.chainId.toString(16)}` }], }); } catch (switchError) { // Error 4902 means the chain hasn't been added to MetaMask if (switchError.code === 4902) { try { await window.ethereum.request({ method: 'wallet_addEthereumChain', params: [ { chainId: `0x${PASSET_HUB_CONFIG.chainId.toString(16)}`, chainName: PASSET_HUB_CONFIG.name, rpcUrls: [PASSET_HUB_CONFIG.rpc], blockExplorerUrls: [PASSET_HUB_CONFIG.blockExplorer], }, ], }); } catch (addError) { setError('Failed to add network to wallet'); } } else { setError('Failed to switch network'); } } }; // UI-only disconnection - MetaMask doesn't support programmatic disconnection const disconnectWallet = () => { setAccount(null); }; return (
      <div>
        {/* Markup reconstructed: the original styling markup was lost in extraction */}
        {error && <p className="error">{error}</p>}
        {!account ? (
          <button onClick={connectWallet}>Connect Wallet</button>
        ) : (
          <div>
            <span>
              {`${account.substring(0, 6)}...${account.substring(38)}`}
            </span>
            {chainId !== PASSET_HUB_CONFIG.chainId && (
              <button onClick={switchNetwork}>Switch Network</button>
            )}
            <button onClick={disconnectWallet}>Disconnect</button>
          </div>
        )}
      </div>
); }; export default WalletConnect; ``` This component handles connecting to the wallet, switching networks if necessary, and keeping track of the connected account. To integrate this component into your dApp, overwrite the existing boilerplate in `app/page.js` with the following code: ```javascript title="app/page.js" 'use client'; import { useState } from 'react'; import WalletConnect from './components/WalletConnect'; export default function Home() { const [account, setAccount] = useState(null); const handleConnect = (connectedAccount) => { setAccount(connectedAccount); }; return (

    <main>
      {/* Markup reconstructed; original styling omitted */}
      <h1>Ethers.js dApp - Passet Hub Smart Contracts</h1>
      <WalletConnect onConnect={handleConnect} />
    </main>
); } ``` In your terminal, you can launch your project by running: ```bash npm run dev ``` And you will see the following: ![](/images/tutorials/smart-contracts/launch-your-first-project/create-dapp-ethers-js/create-dapp-ethers-js-2.webp) ## Read Data from the Blockchain Now, let's create a component to read data from the contract. Create a file called `app/components/ReadContract.js`: ```javascript title="app/components/ReadContract.js" 'use client'; import React, { useState, useEffect } from 'react'; import { getContract } from '../utils/contract'; const ReadContract = () => { const [storedNumber, setStoredNumber] = useState(null); const [loading, setLoading] = useState(true); const [error, setError] = useState(null); useEffect(() => { // Function to read data from the blockchain const fetchData = async () => { try { setLoading(true); const contract = getContract(); // Call the smart contract's storedNumber function const number = await contract.storedNumber(); setStoredNumber(number.toString()); setError(null); } catch (err) { console.error('Error fetching stored number:', err); setError('Failed to fetch data from the contract'); } finally { setLoading(false); } }; fetchData(); // Poll for updates every 10 seconds to keep UI in sync with blockchain const interval = setInterval(fetchData, 10000); // Clean up interval on component unmount return () => clearInterval(interval); }, []); return (

    <div>
      {/* Markup reconstructed; original styling omitted */}
      <h2>Contract Data</h2>
      {loading ? (
        <p>Loading...</p>
      ) : error ? (
        <p className="error">{error}</p>
      ) : (
        <p>Stored Number: {storedNumber}</p>
      )}
    </div>
); }; export default ReadContract; ``` This component reads the `storedNumber` value from the contract and displays it to the user. It also sets up a polling interval to refresh the data periodically. To see this change in your dApp, you need to integrate this component into the `app/page.js` file: ```javascript title="app/page.js" 'use client'; import { useState } from 'react'; import WalletConnect from './components/WalletConnect'; import ReadContract from './components/ReadContract'; export default function Home() { const [account, setAccount] = useState(null); const handleConnect = (connectedAccount) => { setAccount(connectedAccount); }; return (

    <main>
      {/* Markup reconstructed; original styling omitted */}
      <h1>Ethers.js dApp - Passet Hub Smart Contracts</h1>
      <WalletConnect onConnect={handleConnect} />
      <ReadContract />
    </main>
); } ``` Your dApp will automatically be updated to the following: ![](/images/tutorials/smart-contracts/launch-your-first-project/create-dapp-ethers-js/create-dapp-ethers-js-3.webp) ## Write Data to the Blockchain Finally, let's create a component that allows users to update the stored number. Create a file called `app/components/WriteContract.js`: ```javascript title="app/components/WriteContract.js" 'use client'; import { useState } from 'react'; import { getSignedContract } from '../utils/contract'; import { ethers } from 'ethers'; const WriteContract = ({ account }) => { const [newNumber, setNewNumber] = useState(''); const [status, setStatus] = useState({ type: null, message: '' }); const [isSubmitting, setIsSubmitting] = useState(false); const handleSubmit = async (e) => { e.preventDefault(); // Validation checks if (!account) { setStatus({ type: 'error', message: 'Please connect your wallet first' }); return; } if (!newNumber || isNaN(Number(newNumber))) { setStatus({ type: 'error', message: 'Please enter a valid number' }); return; } try { setIsSubmitting(true); setStatus({ type: 'info', message: 'Initiating transaction...' }); // Get a signer from the connected wallet const provider = new ethers.BrowserProvider(window.ethereum); const signer = await provider.getSigner(); const contract = await getSignedContract(signer); // Send transaction to blockchain and wait for user confirmation in wallet setStatus({ type: 'info', message: 'Please confirm the transaction in your wallet...', }); // Call the contract's setNumber function const tx = await contract.setNumber(newNumber); // Wait for transaction to be mined setStatus({ type: 'info', message: 'Transaction submitted. Waiting for confirmation...', }); const receipt = await tx.wait(); setStatus({ type: 'success', message: `Transaction confirmed! Transaction hash: ${receipt.hash}`, }); setNewNumber(''); } catch (err) { console.error('Error updating number:', err); // Error code 4001 is MetaMask's code for user rejection if (err.code === 4001) { setStatus({ type: 'error', message: 'Transaction rejected by user.' }); } else { setStatus({ type: 'error', message: `Error: ${err.message || 'Failed to send transaction'}`, }); } } finally { setIsSubmitting(false); } }; return (

    <div>
      {/* Markup reconstructed; original styling mostly omitted */}
      <h2>Update Stored Number</h2>
      {status.message && <p>{status.message}</p>}
      <form onSubmit={handleSubmit}>
        <input
          type="number"
          value={newNumber}
          onChange={(e) => setNewNumber(e.target.value)}
          disabled={isSubmitting || !account}
          className="w-full p-2 border rounded-md focus:outline-none focus:ring-2 focus:ring-pink-400"
        />
        <button type="submit" disabled={isSubmitting || !account}>
          {isSubmitting ? 'Updating...' : 'Update'}
        </button>
      </form>
      {!account && (
        <p>Connect your wallet to update the stored number.</p>
      )}
    </div>
); }; export default WriteContract; ``` This component allows users to input a new number and send a transaction to update the value stored in the contract. When the transaction is successful, users will see the stored value update in the `ReadContract` component after the transaction is confirmed. Update the `app/page.js` file to integrate all components: ```javascript title="app/page.js" 'use client'; import { useState } from 'react'; import WalletConnect from './components/WalletConnect'; import ReadContract from './components/ReadContract'; import WriteContract from './components/WriteContract'; export default function Home() { const [account, setAccount] = useState(null); const handleConnect = (connectedAccount) => { setAccount(connectedAccount); }; return (

    <main>
      {/* Markup reconstructed; original styling omitted */}
      <h1>Ethers.js dApp - Passet Hub Smart Contracts</h1>
      <WalletConnect onConnect={handleConnect} />
      <ReadContract />
      <WriteContract account={account} />
    </main>
); } ``` The completed UI will display: ![](/images/tutorials/smart-contracts/launch-your-first-project/create-dapp-ethers-js/create-dapp-ethers-js-4.webp) ## Conclusion Congratulations! You've built a complete dApp that interacts with a smart contract on the Polkadot Hub TestNet using Ethers.js and Next.js. Your application can now: - Connect to a user's wallet - Read data from a smart contract - Send transactions to update the contract state These fundamental skills provide the foundation for building more complex dApps on Polkadot Hub. With these building blocks, you can extend your application to interact with more sophisticated smart contracts and create more advanced user interfaces. To get started right away with a working example, you can clone the repository and navigate to the implementation: ``` git clone https://github.com/polkadot-developers/polkavm-storage-contract-dapps.git -b v0.0.2 cd polkavm-storage-contract-dapps/ethers-dapp ``` --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/launch-your-first-project/create-dapp-viem/ --- BEGIN CONTENT --- --- title: Create a dApp With Viem description: Learn how to build a decentralized application on Polkadot Hub using Viem and Next.js by creating a simple dApp that interacts with a smart contract. tutorial_badge: Intermediate categories: dApp, Tooling --- # Create a DApp with Viem !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. Decentralized applications (dApps) are a key component of the Web3 ecosystem, enabling developers to build applications that communicate directly with blockchain networks. Polkadot Hub, a blockchain with smart contract support, serves as a robust platform for deploying and interacting with dApps. This tutorial will guide you through building a fully functional dApp that interacts with a smart contract on Polkadot Hub. You'll use [Viem](https://viem.sh/){target=\_blank} for blockchain interactions and [Next.js](https://nextjs.org/){target=\_blank} for the frontend. By the end, you'll have a dApp that lets users connect their wallets, retrieve on-chain data, and execute transactions. ## Prerequisites Before getting started, ensure you have the following: - [Node.js](https://nodejs.org/en){target=\_blank} v16 or later installed on your system - A crypto wallet (such as MetaMask) funded with test tokens. Refer to the [Connect to Polkadot](/develop/smart-contracts/connect-to-polkadot){target=\_blank} guide for more details - A basic understanding of React and JavaScript - Some familiarity with blockchain fundamentals and Solidity (useful but not required) ## Project Overview This dApp will interact with a basic Storage contract. Refer to the [Create Contracts](/tutorials/smart-contracts/launch-your-first-project/create-contracts){target=\_blank} tutorial for a step-by-step guide on creating this contract. 
The contract allows: - Retrieving a stored number from the blockchain - Updating the stored number with a new value Below is a high-level overview of what you'll be building: ![](/images/tutorials/smart-contracts/launch-your-first-project/create-dapp-viem/create-dapp-viem-1.webp) Your project directory will be organized as follows: ```bash viem-dapp ├── abis │ └── Storage.json └── app ├── components │ ├── ReadContract.tsx │ ├── WalletConnect.tsx │ └── WriteContract.tsx ├── favicon.ico ├── globals.css ├── layout.tsx ├── page.tsx └── utils ├── contract.ts └── viem.ts ``` ## Set Up the Project Create a new Next.js project: ```bash npx create-next-app viem-dapp --ts --eslint --tailwind --app --yes cd viem-dapp ``` ## Install Dependencies Install viem and related packages: ```bash npm install viem@{{dependencies.javascript_packages.viem.version}} npm install --save-dev typescript @types/node ``` ## Connect to Polkadot Hub To interact with Polkadot Hub, you need to set up a [Public Client](https://viem.sh/docs/clients/public#public-client){target=\_blank} that connects to the blockchain. In this example, you will interact with the Polkadot Hub TestNet, so you can experiment safely. Start by creating a new file called `utils/viem.ts` and add the following code: ```typescript title="viem.ts" import { createPublicClient, http, createWalletClient, custom } from 'viem' import 'viem/window'; const transport = http('https://testnet-passet-hub-eth-rpc.polkadot.io') // Configure the Passet Hub chain export const passetHub = { id: 420420422, name: 'Passet Hub', network: 'passet-hub', nativeCurrency: { decimals: 18, name: 'PAS', symbol: 'PAS', }, rpcUrls: { default: { http: ['https://testnet-passet-hub-eth-rpc.polkadot.io'], }, }, } as const // Create a public client for reading data export const publicClient = createPublicClient({ chain: passetHub, transport }) // Create a wallet client for signing transactions export const getWalletClient = async () => { if (typeof window !== 'undefined' && window.ethereum) { const [account] = await window.ethereum.request({ method: 'eth_requestAccounts' }); return createWalletClient({ chain: passetHub, transport: custom(window.ethereum), account, }); } throw new Error('No Ethereum browser provider detected'); }; ``` This file initializes a viem client, providing helper functions for obtaining a Public Client and a [Wallet Client](https://viem.sh/docs/clients/wallet#wallet-client){target=\_blank}. The Public Client enables reading blockchain data, while the Wallet Client allows users to sign and send transactions. Also, note that by importing `'viem/window'` the global `window.ethereum` will be typed as an `EIP1193Provider`, check the [`window` Polyfill](https://viem.sh/docs/typescript#window-polyfill){target=\_blank} reference for more information. ## Set Up the Smart Contract Interface For this dApp, you'll use a simple [Storage contract](/tutorials/smart-contracts/launch-your-first-project/create-contracts){target=\_blank} that's already deployed in the Polkadot Hub TestNet: `0x58053f0e8ede1a47a1af53e43368cd04ddcaf66f`. To interact with it, you need to define the contract interface. Create a folder called `abis` at the root of your project, then create a file named `Storage.json` and paste the corresponding ABI (Application Binary Interface) of the Storage contract. You can copy and paste the following: ??? 
code "Storage.sol ABI" ```json title="Storage.json" [ { "inputs": [ { "internalType": "uint256", "name": "_newNumber", "type": "uint256" } ], "name": "setNumber", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [], "name": "storedNumber", "outputs": [ { "internalType": "uint256", "name": "", "type": "uint256" } ], "stateMutability": "view", "type": "function" } ] ``` Next, create a file called `utils/contract.ts`: ```typescript title="contract.ts" import { getContract } from 'viem'; import { publicClient, getWalletClient } from './viem'; import StorageABI from '../../abis/Storage.json'; export const CONTRACT_ADDRESS = '0x58053f0e8ede1a47a1af53e43368cd04ddcaf66f'; export const CONTRACT_ABI = StorageABI; // Create a function to get a contract instance for reading export const getContractInstance = () => { return getContract({ address: CONTRACT_ADDRESS, abi: CONTRACT_ABI, client: publicClient, }); }; // Create a function to get a contract instance with a signer for writing export const getSignedContract = async () => { const walletClient = await getWalletClient(); return getContract({ address: CONTRACT_ADDRESS, abi: CONTRACT_ABI, client: walletClient, }); }; ``` This file defines the contract address, ABI, and functions to create a viem [contract instance](https://viem.sh/docs/contract/getContract#contract-instances){target=\_blank} for reading and writing operations. viem's contract utilities ensure a more efficient and type-safe interaction with smart contracts. ## Create the Wallet Connection Component Now, let's create a component to handle wallet connections. Create a new file called `components/WalletConnect.tsx`: ```typescript title="WalletConnect.tsx" "use client"; import React, { useState, useEffect } from "react"; import { passetHub } from "../utils/viem"; interface WalletConnectProps { onConnect: (account: string) => void; } const WalletConnect: React.FC = ({ onConnect }) => { const [account, setAccount] = useState(null); const [chainId, setChainId] = useState(null); const [error, setError] = useState(null); useEffect(() => { // Check if user already has an authorized wallet connection const checkConnection = async () => { if (typeof window !== 'undefined' && window.ethereum) { try { // eth_accounts doesn't trigger the wallet popup const accounts = await window.ethereum.request({ method: 'eth_accounts', }) as string[]; if (accounts.length > 0) { setAccount(accounts[0]); const chainIdHex = await window.ethereum.request({ method: 'eth_chainId', }) as string; setChainId(parseInt(chainIdHex, 16)); onConnect(accounts[0]); } } catch (err) { console.error('Error checking connection:', err); setError('Failed to check wallet connection'); } } }; checkConnection(); if (typeof window !== 'undefined' && window.ethereum) { // Setup wallet event listeners window.ethereum.on('accountsChanged', (accounts: string[]) => { setAccount(accounts[0] || null); if (accounts[0]) onConnect(accounts[0]); }); window.ethereum.on('chainChanged', (chainIdHex: string) => { setChainId(parseInt(chainIdHex, 16)); }); } return () => { // Cleanup event listeners if (typeof window !== 'undefined' && window.ethereum) { window.ethereum.removeListener('accountsChanged', () => {}); window.ethereum.removeListener('chainChanged', () => {}); } }; }, [onConnect]); const connectWallet = async () => { if (typeof window === 'undefined' || !window.ethereum) { setError( 'MetaMask not detected! Please install MetaMask to use this dApp.' 
); return; } try { // eth_requestAccounts triggers the wallet popup const accounts = await window.ethereum.request({ method: 'eth_requestAccounts', }) as string[]; setAccount(accounts[0]); const chainIdHex = await window.ethereum.request({ method: 'eth_chainId', }) as string; const currentChainId = parseInt(chainIdHex, 16); setChainId(currentChainId); // Prompt user to switch networks if needed if (currentChainId !== passetHub.id) { await switchNetwork(); } onConnect(accounts[0]); } catch (err) { console.error('Error connecting to wallet:', err); setError('Failed to connect wallet'); } }; const switchNetwork = async () => { console.log('Switch network') try { await window.ethereum.request({ method: 'wallet_switchEthereumChain', params: [{ chainId: `0x${passetHub.id.toString(16)}` }], }); } catch (switchError: any) { // Error 4902 means the chain hasn't been added to MetaMask if (switchError.code === 4902) { try { await window.ethereum.request({ method: 'wallet_addEthereumChain', params: [ { chainId: `0x${passetHub.id.toString(16)}`, chainName: passetHub.name, rpcUrls: [passetHub.rpcUrls.default.http[0]], nativeCurrency: { name: passetHub.nativeCurrency.name, symbol: passetHub.nativeCurrency.symbol, decimals: passetHub.nativeCurrency.decimals, }, }, ], }); } catch (addError) { setError('Failed to add network to wallet'); } } else { setError('Failed to switch network'); } } }; // UI-only disconnection - MetaMask doesn't support programmatic disconnection const disconnectWallet = () => { setAccount(null); }; return (
      <div>
        {/* Markup reconstructed: the original styling markup was lost in extraction */}
        {error && <p className="error">{error}</p>}
        {!account ? (
          <button onClick={connectWallet}>Connect Wallet</button>
        ) : (
          <div>
            <span>
              {`${account.substring(0, 6)}...${account.substring(38)}`}
            </span>
            {chainId !== passetHub.id && (
              <button onClick={switchNetwork}>Switch Network</button>
            )}
            <button onClick={disconnectWallet}>Disconnect</button>
          </div>
        )}
      </div>
); }; export default WalletConnect; ``` This component handles connecting to the wallet, switching networks if necessary, and keeping track of the connected account. It provides a button for users to connect their wallet and displays the connected account address once connected. To use this component in your dApp, replace the existing boilerplate in `app/page.tsx` with the following code: ```typescript title="page.tsx" "use client"; import { useState } from "react"; import WalletConnect from "./components/WalletConnect"; export default function Home() { const [account, setAccount] = useState<string | null>(null); const handleConnect = (connectedAccount: string) => { setAccount(connectedAccount); }; return (

    <main>
      {/* Markup reconstructed; original styling omitted */}
      <h1>Viem dApp - Passet Hub Smart Contracts</h1>
      <WalletConnect onConnect={handleConnect} />
    </main>
); } ``` Now you're ready to run your dApp. From your project directory, execute: ```bash npm run dev ``` Navigate to `http://localhost:3000` in your browser, and you should see your dApp with the wallet connection button, the stored number display, and the form to update the number. ![](/images/tutorials/smart-contracts/launch-your-first-project/create-dapp-viem/create-dapp-viem-2.webp) ## Create the Read Contract Component Now, let's create a component to read data from the contract. Create a file called `components/ReadContract.tsx`: ```typescript title="ReadContract.tsx" 'use client'; import React, { useState, useEffect } from 'react'; import { publicClient } from '../utils/viem'; import { CONTRACT_ADDRESS, CONTRACT_ABI } from '../utils/contract'; const ReadContract: React.FC = () => { const [storedNumber, setStoredNumber] = useState(null); const [loading, setLoading] = useState(true); const [error, setError] = useState(null); useEffect(() => { // Function to read data from the blockchain const fetchData = async () => { try { setLoading(true); // Call the smart contract's storedNumber function const number = await publicClient.readContract({ address: CONTRACT_ADDRESS, abi: CONTRACT_ABI, functionName: 'storedNumber', args: [], }) as bigint; setStoredNumber(number.toString()); setError(null); } catch (err) { console.error('Error fetching stored number:', err); setError('Failed to fetch data from the contract'); } finally { setLoading(false); } }; fetchData(); // Poll for updates every 10 seconds to keep UI in sync with blockchain const interval = setInterval(fetchData, 10000); // Clean up interval on component unmount return () => clearInterval(interval); }, []); return (

    <div>
      {/* Markup reconstructed; original styling omitted */}
      <h2>Contract Data</h2>
      {loading ? (
        <p>Loading...</p>
      ) : error ? (
        <p className="error">{error}</p>
      ) : (
        <p>Stored Number: {storedNumber}</p>
      )}
    </div>
); }; export default ReadContract; ``` This component reads the `storedNumber` value from the contract and displays it to the user. It also sets up a polling interval to refresh the data periodically, ensuring that the UI stays in sync with the blockchain state. To reflect this change in your dApp, incorporate this component into the `app/page.tsx` file. ```typescript title="page.tsx" "use client"; import { useState } from "react"; import WalletConnect from "./components/WalletConnect"; import ReadContract from "./components/ReadContract"; export default function Home() { const [account, setAccount] = useState<string | null>(null); const handleConnect = (connectedAccount: string) => { setAccount(connectedAccount); }; return (

    <main>
      {/* Markup reconstructed; original styling omitted */}
      <h1>Viem dApp - Passet Hub Smart Contracts</h1>
      <WalletConnect onConnect={handleConnect} />
      <ReadContract />
    </main>
); } ``` And you will see in your browser: ![](/images/tutorials/smart-contracts/launch-your-first-project/create-dapp-viem/create-dapp-viem-3.webp) ## Create the Write Contract Component Finally, let's create a component that allows users to update the stored number. Create a file called `components/WriteContract.tsx`: ```typescript title="WriteContract.tsx" "use client"; import React, { useState, useEffect } from "react"; import { publicClient, getWalletClient } from "../utils/viem"; import { CONTRACT_ADDRESS, CONTRACT_ABI } from "../utils/contract"; interface WriteContractProps { account: string | null; } const WriteContract: React.FC = ({ account }) => { const [newNumber, setNewNumber] = useState(""); const [status, setStatus] = useState<{ type: string | null; message: string; }>({ type: null, message: "", }); const [isSubmitting, setIsSubmitting] = useState(false); const [isCorrectNetwork, setIsCorrectNetwork] = useState(true); // Check if the account is on the correct network useEffect(() => { const checkNetwork = async () => { if (!account) return; try { // Get the chainId from the public client const chainId = await publicClient.getChainId(); // Get the user's current chainId from their wallet const walletClient = await getWalletClient(); if (!walletClient) return; const walletChainId = await walletClient.getChainId(); // Check if they match setIsCorrectNetwork(chainId === walletChainId); } catch (err) { console.error("Error checking network:", err); setIsCorrectNetwork(false); } }; checkNetwork(); }, [account]); const handleSubmit = async (e: React.FormEvent) => { e.preventDefault(); // Validation checks if (!account) { setStatus({ type: "error", message: "Please connect your wallet first" }); return; } if (!isCorrectNetwork) { setStatus({ type: "error", message: "Please switch to the correct network in your wallet", }); return; } if (!newNumber || isNaN(Number(newNumber))) { setStatus({ type: "error", message: "Please enter a valid number" }); return; } try { setIsSubmitting(true); setStatus({ type: "info", message: "Initiating transaction..." }); // Get wallet client for transaction signing const walletClient = await getWalletClient(); if (!walletClient) { setStatus({ type: "error", message: "Wallet client not available" }); return; } // Check if account matches if ( walletClient.account?.address.toLowerCase() !== account.toLowerCase() ) { setStatus({ type: "error", message: "Connected wallet account doesn't match the selected account", }); return; } // Prepare transaction and wait for user confirmation in wallet setStatus({ type: "info", message: "Please confirm the transaction in your wallet...", }); // Simulate the contract call first console.log('newNumber', newNumber); const { request } = await publicClient.simulateContract({ address: CONTRACT_ADDRESS, abi: CONTRACT_ABI, functionName: "setNumber", args: [BigInt(newNumber)], account: walletClient.account, }); // Send the transaction with wallet client const hash = await walletClient.writeContract(request); // Wait for transaction to be mined setStatus({ type: "info", message: "Transaction submitted. Waiting for confirmation...", }); const receipt = await publicClient.waitForTransactionReceipt({ hash, }); setStatus({ type: "success", message: `Transaction confirmed! 
Transaction hash: ${receipt.transactionHash}`, }); setNewNumber(""); } catch (err: any) { console.error("Error updating number:", err); // Handle specific errors if (err.code === 4001) { // User rejected transaction setStatus({ type: "error", message: "Transaction rejected by user." }); } else if (err.message?.includes("Account not found")) { // Account not found on the network setStatus({ type: "error", message: "Account not found on current network. Please check your wallet is connected to the correct network.", }); } else if (err.message?.includes("JSON is not a valid request object")) { // JSON error - specific to your current issue setStatus({ type: "error", message: "Invalid request format. Please try again or contact support.", }); } else { // Other errors setStatus({ type: "error", message: `Error: ${err.message || "Failed to send transaction"}`, }); } } finally { setIsSubmitting(false); } }; return (

    <div>
      {/* Markup reconstructed; original styling mostly omitted */}
      <h2>Update Stored Number</h2>
      {!isCorrectNetwork && account && (
        <p>
          ⚠️ You are not connected to the correct network. Please switch
          networks in your wallet.
        </p>
      )}
      {status.message && <p>{status.message}</p>}
      <form onSubmit={handleSubmit}>
        <input
          type="number"
          value={newNumber}
          onChange={(e) => setNewNumber(e.target.value)}
          disabled={isSubmitting || !account}
          className="w-full p-2 border rounded-md focus:outline-none focus:ring-2 focus:ring-pink-400"
        />
        <button type="submit" disabled={isSubmitting || !account}>
          {isSubmitting ? 'Updating...' : 'Update'}
        </button>
      </form>
      {!account && (
        <p>Connect your wallet to update the stored number.</p>
      )}
    </div>
); }; export default WriteContract; ``` This component allows users to input a new number and send a transaction to update the value stored in the contract. It provides appropriate feedback during each step of the transaction process and handles error scenarios. Update the `app/page.tsx` file to integrate all components: ```typescript title="page.tsx" "use client"; import { useState } from "react"; import WalletConnect from "./components/WalletConnect"; import ReadContract from "./components/ReadContract"; import WriteContract from "./components/WriteContract"; export default function Home() { const [account, setAccount] = useState<string | null>(null); const handleConnect = (connectedAccount: string) => { setAccount(connectedAccount); }; return (

    <main>
      {/* Markup reconstructed; original styling omitted */}
      <h1>Viem dApp - Passet Hub Smart Contracts</h1>
      <WalletConnect onConnect={handleConnect} />
      <ReadContract />
      <WriteContract account={account} />
    </main>
); } ``` After that, you will see: ![](/images/tutorials/smart-contracts/launch-your-first-project/create-dapp-viem/create-dapp-viem-4.webp) ## How It Works Let's examine how the dApp interacts with the blockchain: 1. **Wallet Connection**: - The `WalletConnect` component uses the browser's Ethereum provider (MetaMask) to connect to the user's wallet - It handles network switching to ensure the user is connected to the Polkadot Hub TestNet - Once connected, it provides the user's account address to the parent component 2. **Reading Data**: - The `ReadContract` component uses viem's `readContract` function to call the `storedNumber` view function - It periodically polls for updates to keep the UI in sync with the blockchain state - The component displays a loading indicator while fetching data and handles error states 3. **Writing Data**: - The `WriteContract` component uses viem's `writeContract` function to send a transaction to the `setNumber` function - It ensures the wallet is connected before allowing a transaction - The component shows detailed feedback during transaction submission and confirmation - After a successful transaction, the value displayed in the `ReadContract` component will update on the next poll ## Conclusion Congratulations! You've successfully built a fully functional dApp that interacts with a smart contract on Polkadot Hub using viem and Next.js. Your application can now: - Connect to a user's wallet and handle network switching - Read data from a smart contract and keep it updated - Write data to the blockchain through transactions These fundamental skills provide the foundation for building more complex dApps on Polkadot Hub. With this knowledge, you can extend your application to interact with more sophisticated smart contracts and create advanced user interfaces. To get started right away with a working example, you can clone the repository and navigate to the implementation: ``` git clone https://github.com/polkadot-developers/polkavm-storage-contract-dapps.git -b v0.0.2 cd polkavm-storage-contract-dapps/viem-dapp ``` ## Where to Go Next
- Guide __Create a dApp with Wagmi__ --- Learn how to build a decentralized application by using the Wagmi framework. [:octicons-arrow-right-24: Get Started](/develop/smart-contracts/libraries/wagmi)
--- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/launch-your-first-project/ --- BEGIN CONTENT --- --- title: Launch Your First Project description: Follow a step-by-step guide to creating, deploying, and managing your first smart contract project on Polkadot, from coding to execution. template: index-page.html --- # Launch Your First Smart Contract Project !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. Kickstart your journey into smart contract development with this comprehensive guide. Learn how to create, deploy, and interact with contracts on Polkadot. Whether you're new to smart contracts or refining your skills, these guides provide a structured approach to launching your project. Start building your first smart contract today: - **Set up your development environment** with the right tools and frameworks - **Write and compile** your first smart contract - **Deploy and interact** with your contract on Polkadot - **Test and optimize** your code for production readiness Follow the step-by-step tutorials to confidently launch your project. ## Development Pathway - **Beginner-friendly** – step-by-step instructions suitable for newcomers to smart contract development - **Hands-on learning** – practical exercises that build real-world skills - **Production-ready** – progress from basic concepts to deployment-ready contracts ## In This Section :::INSERT_IN_THIS_SECTION::: --- END CONTENT --- Doc-Content: https://docs.polkadot.com/tutorials/smart-contracts/launch-your-first-project/test-and-deploy-with-hardhat/ --- BEGIN CONTENT --- --- title: Test and Deploy with Hardhat description: Learn how to set up a Hardhat development environment, write comprehensive tests for Solidity smart contracts, and deploy to local and Polkadot Hub networks. tutorial_badge: Intermediate categories: dApp, Tooling --- # Test and Deploy with Hardhat !!! smartcontract "PolkaVM Preview Release" PolkaVM smart contracts with Ethereum compatibility are in **early-stage development and may be unstable or incomplete**. ## Introduction After creating a smart contract, the next crucial steps are testing and deployment. Proper testing ensures your contract behaves as expected, while deployment makes your contract available on the blockchain. This tutorial will guide you through using Hardhat, a popular development environment, to test and deploy the `Storage.sol` contract you created in the [Create a Smart Contract](/tutorials/smart-contracts/launch-your-first-project/create-contracts/){target=\_blank} tutorial. For more information about Hardhat usage, check the [Hardhat guide](/develop/smart-contracts/dev-environments/hardhat/){target=\_blank}. ## Prerequisites Before starting, make sure you have: - The [`Storage.sol` contract](/tutorials/smart-contracts/launch-your-first-project/create-contracts/#create-the-smart-contract){target=\_blank} created in the previous tutorial - [Node.js](https://nodejs.org/){target=\_blank} (v16.0.0 or later) and npm installed - Basic understanding of JavaScript for writing tests - Some PAS test tokens to cover transaction fees (obtained from the [Polkadot faucet](https://faucet.polkadot.io/?parachain=1111){target=\_blank}) ## Setting Up the Development Environment Let's start by setting up Hardhat for your Storage contract project: 1. 
Create a new directory for your project and navigate into it: ```bash mkdir storage-hardhat cd storage-hardhat ``` 2. Initialize a new npm project: ```bash npm init -y ``` 3. Install `hardhat-polkadot` and all required plugins: ```bash npm install --save-dev @parity/hardhat-polkadot solc@0.8.28 ``` 4. For dependency compatibility, install `@nomicfoundation/hardhat-toolbox` with the `--force` flag: ```bash npm install --force @nomicfoundation/hardhat-toolbox ``` 5. Initialize a Hardhat project: ```bash npx hardhat-polkadot init ``` Select **Create an empty hardhat.config.js** when prompted. 6. Configure Hardhat by updating the `hardhat.config.js` file: ```javascript title="hardhat.config.js" require("@nomicfoundation/hardhat-toolbox"); require("@parity/hardhat-polkadot"); const { vars } = require("hardhat/config"); /** @type import('hardhat/config').HardhatUserConfig */ module.exports = { solidity: "0.8.28", resolc: { version: "1.5.2", compilerSource: "npm", }, networks: { hardhat: { polkavm: true, nodeConfig: { nodeBinaryPath: 'INSERT_PATH_TO_SUBSTRATE_NODE', rpcPort: 8000, dev: true, }, adapterConfig: { adapterBinaryPath: 'INSERT_PATH_TO_ETH_RPC_ADAPTER', dev: true, }, }, localNode: { polkavm: true, url: `http://127.0.0.1:8545`, }, passetHub: { polkavm: true, url: 'https://testnet-passet-hub-eth-rpc.polkadot.io', accounts: [vars.get("PRIVATE_KEY")], }, }, }; ``` Ensure that `INSERT_PATH_TO_SUBSTRATE_NODE` and `INSERT_PATH_TO_ETH_RPC_ADAPTER` are replaced with the proper paths to the compiled binaries. If you need to build these binaries, follow the [Installation](/develop/smart-contracts/local-development-node#install-the-substrate-node-and-eth-rpc-adapter){target=\_blank} section on the Local Development Node page. The configuration also defines two network settings: - `localNode` - runs a PolkaVM instance on `http://127.0.0.1:8545` for local development and testing - `passetHub` - connects to the Polkadot Hub TestNet using a predefined RPC URL and a private key stored with Hardhat's configuration variables 7. Export your private key and save it in your Hardhat environment: ```bash npx hardhat vars set PRIVATE_KEY "INSERT_PRIVATE_KEY" ``` Replace `INSERT_PRIVATE_KEY` with your actual private key. For further details on exporting an account's private key, refer to the article [How to export an account's private key](https://support.metamask.io/configure/accounts/how-to-export-an-accounts-private-key/){target=\_blank}. !!! warning Keep your private key safe, and never share it with anyone. If it is compromised, your funds can be stolen. ## Adding the Smart Contract 1. Create a new folder called `contracts` and create a `Storage.sol` file. Add the contract code from the previous tutorial: ```solidity title="Storage.sol" // SPDX-License-Identifier: MIT pragma solidity ^0.8.28; contract Storage { // State variable to store our number uint256 private number; // Event to notify when the number changes event NumberChanged(uint256 newNumber); // Function to store a new number function store(uint256 newNumber) public { number = newNumber; emit NumberChanged(newNumber); } // Function to retrieve the stored number function retrieve() public view returns (uint256) { return number; } } ``` 2. Compile the contract: ```bash npx hardhat compile ``` 3. If successful, you will see the following output in your terminal:
npx hardhat compile Compiling 1 Solidity file Successfully compiled 1 Solidity file
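Before moving on, you can optionally sanity-check the build output. Compilation writes each contract's ABI and bytecode into a JSON artifact (the output folders are described just below). The snippet that follows is a minimal sketch — the script name `check-artifact.js` is our own, and the path assumes Hardhat's standard `artifacts-pvm/contracts/Storage.sol/Storage.json` layout — that loads the artifact and lists its ABI entries:

```javascript
// check-artifact.js - optional sanity check of the compiled artifact.
// The path assumes Hardhat's standard artifact layout; adjust if yours differs.
const fs = require('fs');

const artifact = JSON.parse(
  fs.readFileSync('artifacts-pvm/contracts/Storage.sol/Storage.json', 'utf8'),
);

// For Storage.sol, expect the store and retrieve functions
// plus the NumberChanged event to appear in the ABI
for (const entry of artifact.abi) {
  console.log(`${entry.type}: ${entry.name}`);
}
```

Run it with `node check-artifact.js` from the project root.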
After compilation, the `artifacts-pvm` and `cache-pvm` folders, containing the metadata and binary files of your compiled contract, will be created in the root of your project. ## Writing Tests Testing is a critical part of smart contract development. Hardhat makes it easy to write tests in JavaScript using frameworks like [Mocha](https://mochajs.org/){target=\_blank} and [Chai](https://www.chaijs.com/){target=\_blank}. 1. Create a folder for testing called `test`. Inside that directory, create a file named `Storage.js` and add the following code: ```javascript title="Storage.js" const { expect } = require('chai'); const { ethers } = require('hardhat'); describe('Storage', function () { let storage; let owner; let addr1; beforeEach(async function () { // Get signers [owner, addr1] = await ethers.getSigners(); // Deploy the Storage contract const Storage = await ethers.getContractFactory('Storage'); storage = await Storage.deploy(); await storage.waitForDeployment(); }); describe('Basic functionality', function () { // Add your logic here }); }); ``` The `beforeEach` hook redeploys a fresh instance of the Storage contract before each test case, so every test starts from a clean, independent contract state: `ethers.getSigners()` obtains the test accounts, and `ethers.getContractFactory('Storage').deploy()` creates the new contract instance. Now, you can add custom unit tests to check your contract functionality. Some example tests are available below: a. **Initial state verification** - ensures that the contract starts with a default value of zero, which is a fundamental expectation for the `Storage.sol` contract ```javascript title="Storage.js" it('Should return 0 initially', async function () { expect(await storage.retrieve()).to.equal(0); }); ``` Explanation: - Checks the initial state of the contract - Verifies that a newly deployed contract has a default value of 0 - Confirms the `retrieve()` method works correctly for a new contract b. **Value storage test** - validates the core functionality of storing and retrieving a value in the contract ```javascript title="Storage.js" it('Should update when store is called', async function () { const testValue = 42; // Store a value await storage.store(testValue); // Check if the value was updated expect(await storage.retrieve()).to.equal(testValue); }); ``` Explanation: - Demonstrates the ability to store a specific value - Checks that the stored value can be retrieved correctly - Verifies the basic write and read functionality of the contract c. **Event emission verification** - confirms that the contract emits the correct event when storing a value, which is crucial for off-chain tracking ```javascript title="Storage.js" it('Should emit an event when storing a value', async function () { const testValue = 100; // Check if the NumberChanged event is emitted with the correct value await expect(storage.store(testValue)) .to.emit(storage, 'NumberChanged') .withArgs(testValue); }); ``` Explanation: - Ensures the `NumberChanged` event is emitted during storage - Verifies that the event contains the correct stored value - Validates the contract's event logging mechanism d.
**Sequential value storage test** - checks the contract's ability to store multiple values sequentially and maintain the most recent value ```javascript title="Storage.js" it('Should allow storing sequentially increasing values', async function () { const values = [10, 20, 30, 40]; for (const value of values) { await storage.store(value); expect(await storage.retrieve()).to.equal(value); } }); ``` Explanation: - Verifies that multiple values can be stored in sequence - Confirms that each new store operation updates the contract's state - Demonstrates that the contract always reflects the most recently stored value The complete `test/Storage.js` should look like this: ??? code "View complete script" ```javascript title="Storage.js" const { expect } = require('chai'); const { ethers } = require('hardhat'); describe('Storage', function () { let storage; let owner; let addr1; beforeEach(async function () { // Get signers [owner, addr1] = await ethers.getSigners(); // Deploy the Storage contract const Storage = await ethers.getContractFactory('Storage'); storage = await Storage.deploy(); await storage.waitForDeployment(); }); describe('Basic functionality', function () { it('Should return 0 initially', async function () { expect(await storage.retrieve()).to.equal(0); }); it('Should update when store is called', async function () { const testValue = 42; // Store a value await storage.store(testValue); // Check if the value was updated expect(await storage.retrieve()).to.equal(testValue); }); it('Should emit an event when storing a value', async function () { const testValue = 100; // Check if the NumberChanged event is emitted with the correct value await expect(storage.store(testValue)) .to.emit(storage, 'NumberChanged') .withArgs(testValue); }); it('Should allow storing sequentially increasing values', async function () { const values = [10, 20, 30, 40]; for (const value of values) { await storage.store(value); expect(await storage.retrieve()).to.equal(value); } }); }); }); ``` 2. Run the tests: ```bash npx hardhat test ``` 3. After running the above command, you will see the output showing that all tests have passed:
npx hardhat test Storage Basic functionality ✔ Should return 0 initially ✔ Should update when store is called (1126ms) ✔ Should emit an event when storing a value (1131ms) ✔ Should allow storing sequentially increasing values (12477ms) 4 passing (31s)
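The suite above requests two signers but only ever exercises the deployer account. As an optional extra — a sketch that reuses the `storage` and `addr1` fixtures already set up in the `beforeEach` hook — the following test verifies that a non-deployer account can also update the stored value; add it inside the `Basic functionality` block:

```javascript
it('Should allow a non-deployer account to store a value', async function () {
  const testValue = 7;

  // Send the store() transaction from addr1 instead of the deployer
  await storage.connect(addr1).store(testValue);

  // The contract has no access control, so the write should succeed
  expect(await storage.retrieve()).to.equal(testValue);
});
```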
## Deploying with Ignition [Hardhat's Ignition](https://hardhat.org/ignition/docs/getting-started#overview){target=\_blank} is a deployment system designed to make deployments predictable and manageable. Let's create a deployment script: 1. Create a new folder called `ignition/modules`. Add a new file named `StorageModule.js` with the following logic: ```javascript title="StorageModule.js" const { buildModule } = require('@nomicfoundation/hardhat-ignition/modules'); module.exports = buildModule('StorageModule', (m) => { const storage = m.contract('Storage'); return { storage }; }); ``` 2. Deploy to the local network: a. First, start a local node: ```bash npx hardhat node ``` b. Then, in a new terminal window, deploy the contract: ```bash npx hardhat ignition deploy ./ignition/modules/StorageModule.js --network localNode ``` c. If successful, output similar to the following will display in your terminal:
npx hardhat ignition deploy ./ignition/modules/StorageModule.js --network localNode ✔ Confirm deploy to network localNode (420420422)? … yes Hardhat Ignition 🚀 Deploying [ StorageModule ] Batch #1 Executed StorageModule#Storage [ StorageModule ] successfully deployed 🚀 Deployed Addresses StorageModule#Storage - 0xc01Ee7f10EA4aF4673cFff62710E1D7792aBa8f3
3. Deploy to the Polkadot Hub TestNet: a. Make sure your account has enough PAS tokens for gas fees, then run: ```bash npx hardhat ignition deploy ./ignition/modules/StorageModule.js --network passetHub ``` b. After deployment, you'll see the contract address in the console output. Save this address for future interactions.
npx hardhat ignition deploy ./ignition/modules/StorageModule.js --network passetHub ✔ Confirm deploy to network passetHub (420420422)? … yes Hardhat Ignition 🚀 Deploying [ StorageModule ] Batch #1 Executed StorageModule#Storage [ StorageModule ] successfully deployed 🚀 Deployed Addresses StorageModule#Storage - 0xE8693cE64b294E26765573398C7Ca5C700E9C85c
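If you misplace the deployed address, you don't need to redeploy: Ignition records each deployment on disk. The snippet below is a minimal sketch — it assumes Ignition's default `ignition/deployments/chain-<chainId>/deployed_addresses.json` layout and the `420420422` chain ID shown in the confirmation prompt above — that reads the address back with a few lines of Node:

```javascript
// read-address.js - recovers the address recorded by Hardhat Ignition.
// The directory layout below is Ignition's default; adjust if yours differs.
const fs = require('fs');

const chainId = 420420422; // chain ID from the deploy confirmation prompt
const path = `ignition/deployments/chain-${chainId}/deployed_addresses.json`;

// Keys take the form "<ModuleName>#<ContractName>"
const addresses = JSON.parse(fs.readFileSync(path, 'utf8'));
console.log('Storage deployed at:', addresses['StorageModule#Storage']);
```

Run it with `node read-address.js` from the project root.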
## Interacting with Your Deployed Contract To interact with your deployed contract: 1. Create a new folder named `scripts` and add an `interact.js` file with the following content: ```javascript title="interact.js" const hre = require('hardhat'); async function main() { // Replace with your deployed contract address const contractAddress = 'INSERT_DEPLOYED_CONTRACT_ADDRESS'; // Get the contract instance const Storage = await hre.ethers.getContractFactory('Storage'); const storage = await Storage.attach(contractAddress); // Get current value const currentValue = await storage.retrieve(); console.log('Current stored value:', currentValue.toString()); // Store a new value const newValue = 42; console.log(`Storing new value: ${newValue}...`); const tx = await storage.store(newValue); // Wait for transaction to be mined await tx.wait(); console.log('Transaction confirmed'); // Get updated value const updatedValue = await storage.retrieve(); console.log('Updated stored value:', updatedValue.toString()); } main() .then(() => process.exit(0)) .catch((error) => { console.error(error); process.exit(1); }); ``` Ensure that `INSERT_DEPLOYED_CONTRACT_ADDRESS` is replaced with the address obtained in the previous step. 2. Run the interaction script: ```bash npx hardhat run scripts/interact.js --network passetHub ``` 3. If successful, the terminal will show the following output:
npx hardhat run scripts/interact.js --network passetHub Current stored value: 0 Storing new value: 42... Transaction confirmed Updated stored value: 42
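Because `store` emits a `NumberChanged` event, you can also inspect the contract's history rather than just its current value. The script below is a sketch along the same lines as `interact.js` — the filename `events.js` is our own, and `INSERT_DEPLOYED_CONTRACT_ADDRESS` is again a placeholder for your deployed address — that queries past events with ethers:

```javascript
// scripts/events.js - lists past NumberChanged events for the deployed contract
const hre = require('hardhat');

async function main() {
  // Replace with your deployed contract address
  const contractAddress = 'INSERT_DEPLOYED_CONTRACT_ADDRESS';
  const storage = await hre.ethers.getContractAt('Storage', contractAddress);

  // Query all NumberChanged events; some RPC endpoints cap the queryable
  // block range, in which case pass explicit fromBlock/toBlock arguments
  const events = await storage.queryFilter(storage.filters.NumberChanged());

  for (const event of events) {
    console.log(`Block ${event.blockNumber}: number set to ${event.args.newNumber}`);
  }
}

main()
  .then(() => process.exit(0))
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });
```

Run it the same way as the interaction script: `npx hardhat run scripts/events.js --network passetHub`.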
## Conclusion Congratulations! You've successfully set up a Hardhat development environment, written comprehensive tests for your Storage contract, and deployed it both to a local network and to the Polkadot Hub TestNet. This tutorial covered the essential steps of smart contract development: configuration, testing, deployment, and interaction. To get started with a working example right away, you can clone the repository and navigate to the project directory: ```bash git clone https://github.com/polkadot-developers/polkavm-hardhat-examples.git -b v0.0.7 cd polkavm-hardhat-examples/storage-hardhat ``` --- END CONTENT ---