diff --git a/docs/05_daos/01_omnipool_listing.md b/docs/05_daos/01_omnipool_listing.md new file mode 100644 index 000000000..0fdfe3640 --- /dev/null +++ b/docs/05_daos/01_omnipool_listing.md @@ -0,0 +1,67 @@ +--- +id: omnipool_listing +title: Omnipool Listing +--- + +import useBaseUrl from '@docusaurus/useBaseUrl'; + +On this page you will find the **requirements and process** for listing a project's token in the Hydration Omnipool. These requirements have been defined by the Hydration DAO. + +All listings in the Omnipool must be approved individually by the governance of the Hydration DAO in a public referendum open to all HDX holders. Depending on the outcome of the referendum, the Hydration DAO may decline a listing even if all criteria have been met. + +## 00 Prerequisites + +With all tokens pooled into a single pool, we must take extra precautions to avoid exposing the Omnipool to large swings in token values due to market manipulation. The requirements for listing a token in the Omnipool are therefore set out below. There may be exceptions or additional criteria preventing a token from being listed in the Omnipool, since it is difficult to define absolute rules that apply in all situations. The actual decision lies with HDX token holders, who may still reject a token that meets all criteria and may accept a token that does not. + +1. The token must have already gone through price discovery for at least eight months. It must be listed on a DEX with at least \$100k liquidity and/or listed on a CEX with active market maker support maintaining a 2% depth of at least \$2k. Ideally the token is listed on at least two exchanges. +2. Sudo control of the chain/token must have been removed or revoked. This could involve the removal of the `sudo` pallet entirely, removing a registered sudo key, or in the case of AssetHub tokens the provable burning of any administrative control over the token. +3. The project must have active token holder governance.
Ideally, governance takes place through OpenGov. We recommend setting up automated governance notifications (such as web3alerts) to mitigate governance attacks. +4. The project must have transparent market data available, including the ability to inspect transactions via a block explorer. +5. The token must be sufficiently distributed to avoid individuals causing large price swings. At least 40% of the token supply must be distributed and in circulation. +6. The token must have a market cap of at least \$1M. +7. Initial liquidity + - For tokens with an FDV up to \$50M, the minimum team/treasury deposit into the Omnipool is \$300k worth of tokens. + - For tokens with an FDV above \$50M, the minimum team/treasury deposit into the Omnipool is \$500k worth of tokens. + +For new projects, a typical token launch sequence would be to perform a fair token launch using a [Liquidity Bootstrapping Pool (LBP)](https://docs.hydration.net/daos/lbp), followed by the LBP liquidity being deposited into an Isolated Pair. Isolated Pairs can be created permissionlessly, and swaps on the Hydration platform are automatically routed through both the Omnipool and Isolated Pairs, so new projects can still create their first DEX liquidity on Hydration even before their token qualifies for listing in the Omnipool. After eight months of continued price discovery, the team can apply for the token to be listed in the Omnipool if it meets the criteria above. + + +## 01 Listing Process + +Initial listing of a token in the Omnipool is controlled by Hydration governance. + +1. Open an XCM channel (bidirectional HRMP channels). +2. List the token in the Hydration asset registry. +3. The Hydration community votes on whether to allow listing the token in the Omnipool. The governance vote will not directly list the token but instead will include a remark to authorize listing in the Omnipool. The Hydration community will also decide what the cap will be for the token as a percentage of overall Omnipool TVL.
Currently, each parachain token is capped at 5% of the Omnipool; this cap will be lowered as the Omnipool asset list diversifies. +4. Initial token liquidity is transferred from the team or treasury into either the chain's sibling account on the Hydration chain or into the Omnipool account (`7L53bUTBbfuj14UpdCNPwmgzzHSsrsTWBHX5pys32mVWM3C1`). +5. The Hydration DAO passes a motion (including the initial asset price) to add the initial liquidity to the Omnipool and enable trading. The resulting ownership NFT will be placed in the chain's sibling account (or a designated project/team account for non-parachain tokens). + +Each community should consider depositing additional tokens later to bring the deposit up to \$1M or up to 2.5% of FDV, in line with other community deposits. Hydration's DCA feature allows swaps to be spread out, enabling larger trades over time, but loan liquidations need to happen within a single block, and therefore deeper token liquidity enables larger money markets for the token. + +## 02 Managing Treasury Deposits in the Omnipool + +Once the initial Omnipool listing has been performed, a parachain/token's governance body can control its deposit remotely over XCM. Note that as a security precaution no account can add or remove more than 5% of the liquidity of a token within a single block, and therefore large deposits and withdrawals must be performed incrementally unless assisted by Hydration governance. + +### Depositing Additional Liquidity + +The general process to make a large deposit remotely is: +1. Manually try to deposit tokens as liquidity in the Hydration UI to see what the current maximum deposit limit is. +2. Use `xtokens.transfer` or similar to transfer the new tokens to the chain's sibling account on Hydration (plus a small buffer, e.g. ~20 tokens, to pay XCM fees). +3. Schedule a call that repeats once every block, x times, to deposit a chunk of tokens into the Omnipool. LP NFTs will automatically be placed in the account that the deposit is made from.
If your chain does not have the ability to schedule XCM transactions (using the `scheduler` pallet), you should seek the assistance of Hydration governance in performing the deposit. + +**Example from Centrifuge:** +Transfer 750k CFG, then deposit 37500 tokens 20 times (costs 0.442 CFG in XCM fees) +encoded call data: `0x3e02083e0300016d6f646c70792f747273727900000000000000000000000000000000000000007c00000000904cbb5f69aad29e00000000000003010200c91f01007369626cef070000000000000000000000000000000000000000000000000000003f040100000001010000001400000000790003010100c91f0314000400010200bd1f060200010000000000000000000000000000000000000000000000000000000000000013000064a7b3b6e00d1300010200bd1f060200010000000000000000000000000000000000000000000000000000000000000013000064a7b3b6e00d0006010700f2052a0102000400583b020d0000000000701c7cf40ae1f007000000000000140d0100000101007369626cef070000000000000000000000000000000000000000000000000000` + +**Example from Phala:** +Transfer 4.08M PHA, then deposit 48000 PHA 85 times (costs 6.9 PHA in XCM fees) +encoded call data: `0x030208030300016d6f646c70792f747273727900000000000000000000000000000000000000005200000000001300407db892249f38010200c91f01007369626cf30700000000000000000000000000000000000000000000000000000007040100000001010000005500000001210003010100c91f0314000400010100cd1f00070010a5d4e81300010100cd1f00070010a5d4e80006010700f2052a0102000400583b0208000000000038e5be87aa000000000000000000140d0100000101007369626cf3070000000000000000000000000000000000000000000000000000` + +### Removing Liquidity + +Omnipool liquidity must be removed using the position ID of each specific LP NFT. Removal is again limited to 5% of the liquidity for that individual token per block. + +1. Use `assetRegistry->assetIds()` to find the index for your token in the Hydration asset registry. +2. Use `uniques->account(acct, 1337)` to find the IDs of the LP NFTs. +3. Use `omnipool->positions(ID)` to get the details about each LP NFT, specifically the `amount` of tokens it represents.
If the amount is greater than 5% of the token's liquidity in the Omnipool, you can't remove all of the liquidity of that LP position in a single block and will need to split the removal up. +4. Create a series of XCM messages to call `omnipool.removeLiquidity(positionID, amount)` for each of the LP NFTs. diff --git a/docs/05_daos/01_lbp.md b/docs/05_daos/03_lbp.md similarity index 100% rename from docs/05_daos/01_lbp.md rename to docs/05_daos/03_lbp.md diff --git a/docs/06_devs/03_remote_swaps.md b/docs/06_devs/03_remote_swaps.md new file mode 100644 index 000000000..13fed0ed0 --- /dev/null +++ b/docs/06_devs/03_remote_swaps.md @@ -0,0 +1,25 @@ +--- +id: remote_swaps +title: Remote swaps +--- + +import useBaseUrl from '@docusaurus/useBaseUrl'; + +Building cross-chain swaps. + +## Introduction {#intro} +With the _xcm::execute_ call being gradually whitelisted on various chains, it is now possible to use Hydration as a universal atomic swapping engine. In practice, that means withdrawing funds on one chain, sending them to Hydration, swapping one asset for another, and sending the result back or on to another chain in a single transaction. This enables cross-chain swaps and opens the door to use cases such as acquiring the fee payment asset for a chain before a transaction happens, allowing for a much better UX. + +## Example {#example} +For demonstration purposes, let's consider the following scenario: +A user has 100 DOT on the Polkadot relay chain and wants to swap it for USDT and have it available on Asset Hub. A user-facing application can offer a one-signature solution with a good UX and do the heavy lifting in the background. It just needs to construct an XCM message with the correct set of instructions, which will differ depending on the nativeness of the asset, or in other words, where the reserve of the swapped asset resides. + +In this example, the extrinsic would be constructed as a _polkadotXcm.send_ call containing the following [instructions](https://github.com/paritytech/xcm-format): +
+ +
 + +## Learn more +To learn more about remote swaps, you can head over to our GitHub repository, where you can find [integration tests](https://github.com/galacticcouncil/HydraDX-node/blob/769c33d63d24356791c2f0e276350ebdc2914005/integration-tests/src/exchange_asset.rs#L341) covering more advanced scenarios. + diff --git a/docs/06_devs/04_xchain.md b/docs/06_devs/04_xchain.md new file mode 100644 index 000000000..c8504c8e5 --- /dev/null +++ b/docs/06_devs/04_xchain.md @@ -0,0 +1,138 @@ +--- +id: xchain +title: Cross-Chain Integration +--- + +import useBaseUrl from '@docusaurus/useBaseUrl'; + +Pursuing its mission to **enable permissionless liquidity within and beyond the Polkadot ecosystem**, Hydration generally **welcomes integrations** with other projects that would like to leverage some of the functionalities that Hydration has to offer. + +This page provides a **step-by-step guide** that explains how to **integrate your chain and its assets** with Hydration. + +## Establishing cross-chain (XC) communication {#establishing-xc} +The Polkadot ecosystem was designed with multichain interoperability in mind from day 1. The protocol that allows two chains to exchange Cross-Consensus Messages (XCM) with each other is called **Cross-Chain Message Passing (XCMP)**. While full XCMP is still under development, a stop-gap protocol called **Horizontally Relay-routed Message Passing (HRMP)** is used by parachains to establish communication channels. An HRMP channel has the same capabilities as an XCMP channel but is more demanding on resources, as messages are not routed directly between parachains but need to first pass via the relay chain. + +## Onboarding projects to Hydration {#onboarding-assets} +As Hydration is a permissionless and decentralized protocol, anyone can propose a cross-chain integration. A common case would be to list tokens on Hydration, bootstrap liquidity, or enable DCA, but other use cases may also come to mind.
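Whatever the use case, a parachain acts on other chains through derived sovereign accounts. As a minimal sketch (assuming the standard polkadot-sdk convention of the ASCII prefix `sibl` followed by the little-endian para id, zero-padded to a 32-byte `AccountId32`; the raw bytes still need SS58 encoding for display), a sibling account can be derived as follows:

```typescript
// Sketch of deriving a parachain's raw sibling-account bytes as seen from
// another parachain. Assumption: the standard polkadot-sdk convention
// b"sibl" ++ u32_le(paraId), zero-padded to 32 bytes (AccountId32).

function siblingAccountHex(paraId: number): string {
  const out = Buffer.alloc(32);      // AccountId32, zero-padded
  out.write("sibl", 0, "ascii");     // type prefix for sibling parachains
  out.writeUInt32LE(paraId, 4);      // para id as little-endian u32
  return "0x" + out.toString("hex");
}

// e.g. Hydration's sibling account (para id 2034) on another parachain:
console.log(siblingAccountHex(2034));
```

The `sibl`-prefixed accounts visible in the encoded call data elsewhere in these docs (e.g. `7369626cef07…` for para id 2031) follow this derivation.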
 + +The procedure for proposing to open a channel to Hydration consists of the following steps: + +### Step 0: Spark a discussion with the community {#discussion} +Before deciding to open a new cross-chain channel, you should initiate a discussion with the broader Hydration community. This step is important because it allows users to express interest in tokens that they would like to see trading on our platform and to red-flag potentially toxic assets. + +To initiate the discussion, please [open a discussion thread on Subsquare](https://hydration.subsquare.io/posts/create) which touches upon the following points: +- introduction of your project +- how it plans to leverage the functionality offered by Hydration +- tokenomics +- any other important info + +After creating the thread, please post a link in **#gov-discussion** on the [Hydration Discord](https://discord.gg/hydration-net). + +### Step 1: Gather asset registry info {#asset-registry-info} +A chain's asset registry requires metadata about its tokens to function properly. For example, our native token HDX would be registered as follows: + +|Field|Example| +|-------------|:-----------:| +|name|Hydration| +|symbol|HDX| +|decimals |12| +|existential deposit |1 HDX| +|location| (X2, (Parachain(2034), GeneralIndex(0)))| + +Prepare this table for all the currencies you want to register. + +### Step 2: Integrate on Polkadot network {#live} + +:::important +Both parachain [sovereign accounts](https://substrate.stackexchange.com/questions/1200/how-to-calculate-sovereignaccount-for-parachain/1210) must have enough funds (approx. 10.1 DOT) on the relay chain to reserve a deposit for HRMP channels and to process the XCM messages. +::: + +:::warning +Always verify that the encoded call is valid on the appropriate chain; sending transaction data to an incorrect relay chain may lead to a loss of funds.
+::: + +##### 1) Your parachain +To initiate a request for opening a channel to Hydration on the relay chain, please follow these steps: + +- prepare encoded transact call that will be executed on the relay chain: +
+ +
 + + _Encoded: 0x3c00f2070000e803000000900100_ + +:::note +The following actions can be performed only from the root origin, via governance or the sudo module of the respective chain. +::: + +- send an XCM message from the parachain to the relay chain using the _polkadotXcm.send_ call containing the following [instructions](https://github.com/paritytech/xcm-format): + + - WithdrawAsset + - BuyExecution + - Transact (input previously prepared call here) + - RefundSurplus + - DepositAsset + +##### 2) Hydration +On the Hydration side, the following actions need to be performed: +- accept the Parachain → Hydration channel request; +- initiate a request for opening the Hydration → Parachain channel; +- register the Parachain's native asset(s) in the Hydration asset registry. + +You can find an example of this call [here](https://hydration.subsquare.io/democracy/referenda/158). + +Prepare a batch call that contains all the necessary actions and, before submitting, test its successful execution in [Chopsticks](https://github.com/AcalaNetwork/chopsticks). + +Once tested, submit the preimage via the _preimage.notePreimage_ extrinsic, choose the **Root** governance track and submit the referendum proposal using e.g. PolkadotJS Apps. +
+ +
+ +In order to queue the referendum for voting, a decision deposit needs to be placed. +
+ +
 + +##### 3) Your parachain +If the referendum in the previous step passed and was executed successfully, the HRMP channel also needs to be accepted on the other parachain. + - accept the Hydration → Parachain channel request on the relay chain with the following Transact call, analogously to step 1: +
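The accept call encodes only the `sender` para id after the pallet and call indices. As a minimal hand-encoding sketch, assuming pallet index `0x17` and call index `0x01` for `hrmp.hrmpAcceptOpenChannel` on the target relay chain (indices differ between runtimes, so always check the live metadata):

```typescript
// Hedged sketch: SCALE-encoding of hrmp.hrmpAcceptOpenChannel(sender).
// The pallet/call indices (0x17 / 0x01) are an assumption matching the
// example call data in this guide; they vary per runtime.

function u32le(n: number): string {
  const buf = Buffer.alloc(4);
  buf.writeUInt32LE(n, 0);    // SCALE encodes u32 as 4 little-endian bytes
  return buf.toString("hex");
}

function hrmpAcceptOpenChannel(sender: number): string {
  return "0x1701" + u32le(sender);
}

// 2090 is the sender para id decoded from the example call data in this
// guide; substitute the para id of the chain whose request you are accepting
// (Hydration is 2034 on Polkadot).
console.log(hrmpAcceptOpenChannel(2090));
// -> 0x17012a080000
```

The result matches the encoded call data shown below.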
+ +
 + + _Encoded: 0x17012a080000_ + +- send an XCM message from the parachain to the relay chain using the _polkadotXcm.send_ call containing the following [instructions](https://github.com/paritytech/xcm-format): + + - WithdrawAsset + - BuyExecution + - Transact (input previously prepared call here) + - RefundSurplus + - DepositAsset + + - optionally register HDX in your parachain's asset registry. + +##### 4) Polkadot +Wait for one session after each acceptance for the channels to be opened. + +##### 5) Add icons to the Hydration app +Open a new issue in the [Hydration UI repository](https://github.com/galacticcouncil/HydraDX-ui) with the title "Add icons for _projectname_" and attach icons for the chain and all assets. Icons should be at most 10kB in size and in SVG or PNG format. + +##### 6) Add tokens to cross-chain UI +To add your tokens to our [Cross-chain](https://app.hydration.net/cross-chain) page, it is necessary to open a pull request to the [sdk repository](https://github.com/galacticcouncil/sdk). + +1. **Fork the sdk repository**. +2. **Extend the xcm-cfg package**. + 1. If necessary, add a [new chain](https://github.com/galacticcouncil/sdk/blob/master/packages/xcm-cfg/src/chains/) + 2. Add new [assets](https://github.com/galacticcouncil/sdk/blob/master/packages/xcm-cfg/src/assets.ts) + 3. Add a new AssetRoute to both chain config files, [Hydration](https://github.com/galacticcouncil/sdk/blob/master/packages/xcm-cfg/src/configs/polkadot/hydration/index.ts) and [your chain's](https://github.com/galacticcouncil/sdk/tree/master/packages/xcm-cfg/src/configs/polkadot) +3. (Optional / Recommended) **Test your changes locally in the developer console**. + 1. Build the project by following the [README.md](https://github.com/galacticcouncil/sdk/blob/master/README.md) + 2. Change the current directory to `/examples/xcm-transfer/` + 3.
Adjust the chain, asset, address and balance definitions in the [index file](https://github.com/galacticcouncil/sdk/blob/master/examples/xcm-transfer/src/index.ts) + 4. Test your changes by running `npm run dev` and check the developer console output in your browser, typically at `localhost:3000` +4. **Open a PR from your fork to the main repository** and wait until the workflow is approved. A UI preview with your changes will be deployed and appear in the PR description. +5. **Try sending each of the registered tokens back and forth** from one chain to the other, and verify that the deposits were successful and the balance configuration is correct. +6. **Add a comment that the configuration is ready to be merged.** + +__Congratulations on registering your tokens on Hydration, and a warm welcome from Hydration!__ \ No newline at end of file diff --git a/docs/06_devs/03_collator_setup.md b/docs/06_devs/05_collators/01_collator_setup.md similarity index 100% rename from docs/06_devs/03_collator_setup.md rename to docs/06_devs/05_collators/01_collator_setup.md diff --git a/docs/06_devs/04_performance_benchmark.md b/docs/06_devs/05_collators/02_performance_benchmark.md similarity index 100% rename from docs/06_devs/04_performance_benchmark.md rename to docs/06_devs/05_collators/02_performance_benchmark.md diff --git a/docs/06_devs/05_node_monitoring.md b/docs/06_devs/05_collators/03_node_monitoring.md similarity index 100% rename from docs/06_devs/05_node_monitoring.md rename to docs/06_devs/05_collators/03_node_monitoring.md diff --git a/docs/06_devs/05_collators/_category_.json b/docs/06_devs/05_collators/_category_.json new file mode 100644 index 000000000..6f3da4f58 --- /dev/null +++ b/docs/06_devs/05_collators/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Collators", + "collapsible": true, + "collapsed": true, + "className": "red", + "link": { + "slug": "/collators", + "type": "generated-index", + "title": "Collators" + }, + "customProps": { + "description": "" + } +} diff
--git a/docs/06_devs/06_polkadotjs_apps_local.md b/docs/06_devs/06_polkadotjs_apps_local.md deleted file mode 100644 index ba9fce5d9..000000000 --- a/docs/06_devs/06_polkadotjs_apps_local.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -id: polkadotjs_apps_local -title: Connect to a Local Node ---- - -import useBaseUrl from '@docusaurus/useBaseUrl'; - -You can use the Polkadot/apps to connect to your local Hydration node. For this purpose, you need to have access to port `9944` which is used for RPC websocket connections. - -:::warning - -If you are running the node as a validator, we highly recommend that you blacklist port `9944` for remote connections. This port could be abused by third parties to degrade the performance of your node, which may result in slashing and involuntary loss of funds. You should use port `9944` to connect to your validator node only when the node is in your local network. - -::: - -### Accessing your local node using Polkadot/apps {#accessing-your-local-node-using-polkadotapps} - -To access your node, open [Polkadot/apps](https://polkadot.js.org/apps/) and click in the upper left corner to change the network. - -
- -
- -After opening the menu, click on **Development** and select **Local node**. -
- -
- -Adjust the IP if necessary and click on ***Switch*** to switch to your local node. - -
- -
- -Now you should be connected to your local node and be able to interact with it. diff --git a/docs/06_devs/07_polkadotjs_apps_public.md b/docs/06_devs/07_polkadotjs_apps_public.md deleted file mode 100644 index b838c9709..000000000 --- a/docs/06_devs/07_polkadotjs_apps_public.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -id: polkadotjs_apps_public -title: Connect to a Public Node ---- - -import useBaseUrl from '@docusaurus/useBaseUrl'; - -There are two public RPC nodes which are maintained by Hydration and our partners. You can use these nodes for interacting with Snakenet. You can directly connect to a public node with Polkadot/apps by clicking on the link below: - -* [RPC node hosted by Hydration](https://polkadot.js.org/apps/?rpc=wss%253A%252F%252Frpc.hydradx.cloud#/explorer) - - -## Connect manually to an RPC node {#connect-manually-to-an-rpc-node} - -To access a public RPC node manually, open the [Polkadot/apps](https://polkadot.js.org/apps/) and click in the upper left corner to change the network. - -
- -
- -Click on **LIVE NETWORKS** and select **Hydration**. - -
- -
- -Select one of the nodes and click **Switch**. - -
- -
- -Now you should be connected to the selected public RPC node. diff --git a/docs/06_devs/08_testnet_howto.md b/docs/06_devs/08_testnet_howto.md deleted file mode 100644 index 7dd923da5..000000000 --- a/docs/06_devs/08_testnet_howto.md +++ /dev/null @@ -1,351 +0,0 @@ -# Design and Automation of our Tesnet Deployment at Hydration - -In this article, we are going to show you how we designed and automated our pipeline to be able to deploy a new testnet (Parachain + Relaychain) within minutes using Kubernetes (EKS Fargate), AWS ACM, Route53, Terraform and Github Actions. - -## The choice of EKS with Fargate -### Why EKS with Fargate? -Our Parachain and Relaychain images are based on two separate images, which need one or more containers to run for each. Kubernetes being the standard of container automation and scaling in the industry today, we naturally made this choice (Docker Swarm has some serious scaling issues that we might talk about in a separate article, if interest be.) - -Now, since our infrastructure is partially based on AWS, we had the choice between having either EKS with EC2 instances running under the hood, or using Fargate. The difference between the two is that, with EC2, you have less flexibility as far as controlling the resources is concerned; if you have no idea about the number of pods you need to be running in the future, you most likely will have to overestimate the capacity (CPU / RAM power as well as the number) of your instances, which may result in useless capacity lost and higher bills. Another reason is that these EC2 instances need to be administrated, which needs time and resources. - -For these reasons, we came to the conclusion that the usage of Fargate might be a better solution for dealing with our deployments and to be able to scale (either up or down) them correctly. 
In Fargate, you don't need to worry about instances or servers, all you have to do (in a nutshell) is to write your Kubernetes Manifests, apply those, and AWS will take care of the rest; i.e. provisioning the servers, planning the pods, etc. - -To create a Kubernetes Instance in AWS, you can either use EKSCTL or Terraform. Nothing fancy here. Here is an example for creating a Fargate Cluster (from the documentation): - -```yaml -apiVersion: eksctl.io/v1alpha5 -kind: ClusterConfig - -metadata: - name: fargate-cluster - region: ap-northeast-1 - -nodeGroups: - - name: ng-1 - instanceType: m5.large - desiredCapacity: 1 - -fargateProfiles: - - name: fp-default - selectors: - # All workloads in the "default" Kubernetes namespace will be - # scheduled onto Fargate: - - namespace: default - # All workloads in the "kube-system" Kubernetes namespace will be - # scheduled onto Fargate: - - namespace: kube-system - - name: fp-dev - selectors: - # All workloads in the "dev" Kubernetes namespace matching the following - # label selectors will be scheduled onto Fargate: - - namespace: dev - labels: - env: dev - checks: passed -``` - -Once done, all we had to do is to create and apply our Kubernetes Objects. - -### Deployment of our Relaychain -#### Deployment Example: -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - namespace: YOUR_NAMESPACE - name: relaychain-alice-deployment -spec: - selector: - matchLabels: - app.kubernetes.io/name: relaychain-alice - replicas: 1 - template: - metadata: - labels: - app.kubernetes.io/name: relaychain-alice - spec: - containers: - - image: YOUR-IMAGE-HERE - imagePullPolicy: Always - name: relaychain-alice - command: ["/polkadot/polkadot"] - args: ["--chain", "/polkadot/config.json", ..."] - ports: - - containerPort: 9944 - - containerPort: 30333 -``` - -In this manifest, we choose the name of our node, the ports to expose, the command and its argument (please check Hydration docs) as well as the number of replicas. 
This parameter is important as we only want one replica per node, to avoid sync issues. Note that you can have as many nodes as necessary. - -#### Service Example -We use the Service object in Kubernetes for at least two purposes here: -1. First, so nodes can communicate with each other, please check [this link for more info](https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/) -2. To be able to expose the service to the outside world, if necessary, using an ingress controller. - -Nothing fancy, just yet another basic service: - -```yaml -apiVersion: v1 -kind: Service -metadata: - namespace: YOUR_NAMESPACE - name: SVC_NAME -spec: - ports: - - port: 9944 - name: websocket - targetPort: 9944 - protocol: TCP - - port: 30333 - name: custom-port - targetPort: 30333 - protocol: TCP - type: NodePort - selector: - app.kubernetes.io/name: relaychain-alice -``` - -Please note that, if you wish to expose the service to the outside world, the `selector` parameter becomes crucial. - -And voilà ! That's it. Now one last step is when we want to expose a Service (related to a given Deployment) to the outside world. 
For this, we use what we call an Ingress Object: - -#### Ingress Example: - -```yaml -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - namespace: YOUR_NAMESPACE - name: INGRESS_OBJECT_NAME - annotations: - kubernetes.io/ingress.class: alb - alb.ingress.kubernetes.io/scheme: internet-facing - alb.ingress.kubernetes.io/group.name: wstgroup2 - alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=4000 - alb.ingress.kubernetes.io/auth-session-timeout: '86400' - alb.ingress.kubernetes.io/target-type: ip - alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":443}, {"HTTPS":443}]' - alb.ingress.kubernetes.io/healthcheck-path: / - alb.ingress.kubernetes.io/healthcheck-port: '80' - alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=600 - alb.ingress.kubernetes.io/certificate-arn: YOUR_ARN - labels: - app: relaychain -spec: - rules: - - host: relaychain.hydration.cloud - http: - paths: - - path: /ws/ - backend: - serviceName: relaychain-bob-svc - servicePort: 80 - -``` - -This object, namely `Ingress`, is used so our service can be accessible from the outside world using the host address `relaychain.hydration.cloud`. For this, we use the ALB Controller Service of AWS [More information here](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html) - -Parameters of this Ingress are pretty much basic, and can be kept as is [for more info, please check this link](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/). The most important value to change, is the one of `alb.ingress.kubernetes.io/certificate-arn`, which is the identifier of the ACM Certificate you get when you create an entry in [ACM](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) for your `host`. More details later on in this article. 
- -### Deployment of our Parachain - -Since the steps are pretty much the same, here are simply samples for each object we used to deploy our Parachain: - -#### Deployment Example (collator): -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - namespace: YOUR_NAMESPACE - name: parachain-coll-01-deployment -spec: - selector: - matchLabels: - app.kubernetes.io/name: parachain-coll-01 - replicas: 1 - template: - metadata: - labels: - app.kubernetes.io/name: parachain-coll-01 - spec: - containers: - - image: YOUR_IMAGE - imagePullPolicy: Always - name: parachain-coll-01 - volumeMounts: - - mountPath: /tmp - name: persistent-storage - command: ["/basilisk/basilisk"] - args: ["--chain", "local", "--parachain-id", "", "--alice", "--base-path", "/basilisk/", "--node-key", "", "--bootnodes", "/dns/coll-01-svc.YOUR_NAMESPACE.svc.cluster.local/tcp/30333/p2p/KEY", "--", "--chain", "/tmp/rococo-local-raw.json", "--bootnodes", "/dns/coll-01-svc.YOUR_NAMESPACE.svc.cluster.local/tcp/30333/p2p/KEY", "--base-path", "/basilisk/", "--execution=wasm"] - ports: - - containerPort: 9944 - - containerPort: 9933 - - containerPort: 30333 - volumes: - - name: persistent-storage - persistentVolumeClaim: - claimName: efs-pv -``` -#### Service Example: - -```yaml -apiVersion: v1 -kind: Service -metadata: - namespace: NAMESPACE - name: coll-01-svc -spec: - ports: - - port: 9944 - name: websocket - targetPort: 9944 - protocol: TCP - - port: 30333 - name: custom-port - targetPort: 30333 - protocol: TCP - - port: 9933 - name: rpc-port - targetPort: 9933 - type: NodePort - selector: - app.kubernetes.io/name: parachain-coll-01 -``` - -#### Public RPC Service: -```yaml -apiVersion: v1 -kind: Service -metadata: - namespace: NAMESPACE - name: public-rpc-svc -spec: - ports: - - port: 80 - name: websocket - targetPort: 9944 - protocol: TCP - type: NodePort - selector: - app.kubernetes.io/name: public-rpc -``` -#### Ingress: -Ingress Manifest remains exactly the same. 
-### What are the challenges we faced? -Apart from the choice that we had to make between EC2 and Fargate instances, we had an issue that wasn't that easy to be dealt with: namely, the **volumes**. During our deployment, we found out that we had to pass a configuration to our Basilisk Command, which could not be stored in a `config-map`, since the configuration was more than 4MB in size, whereas config-maps can only store up to 1MB. Now the problem is that, this is something pretty straight forward to do in Kubernetes (create a Volume, put a file or folder inside and use it from other pods) with EC2, the task isn't so simple with Fargate. In Fargate, Volumes were not supported until August 2020, and the feature is still not mature. So if you have to heavily use volumes in your Kubernetes Deployment, please take this into account. We could however solve this issue following this [documentation, with AWS EFS](https://aws.amazon.com/blogs/aws/new-aws-fargate-for-amazon-eks-now-supports-amazon-efs/). This link will save your ass if you have to use volumes with Fargate, trust me. - - -## ACM and Route53 -If you need to expose your node to the outside world, with a nice and secured URL, you can use AWS ACM. Basically, all you need to do is to create a certificate with the name of your URL, validate it (via DNS), and get the result ARN. Then add it as a value of the `alb.ingress.kubernetes.io/certificate-arn` parameter in your Ingress Manifest file, and voilà ! - -## Terraform for Automated Deployment -Of course, the creation of your certificate can be done through Terraform, if you want to automate it in your CI (we didn't make this choice, but we will probably deploy it later). 
However, this `.tf` file might be of great help to you:
```hcl
provider "aws" {
  region = "eu-west-1"
}

# DNS Zone Name: hydration.cloud
variable "dns_zone" {
  description = "Specific to your setup, pick a domain you have in route53"
  default     = "hydration.cloud"
}

# Subdomain name
variable "domain_dns_name" {
  description = "domainname"
  default     = "YOUR_SUBDOMAIN"
}


# Create a datasource from the DNS zone name
data "aws_route53_zone" "public" {
  name         = var.dns_zone
  private_zone = false
}

resource "aws_acm_certificate" "myapp-cert" {
  domain_name = "${var.domain_dns_name}.${data.aws_route53_zone.public.name}"
  #subject_alternative_names = ["${var.alternate_dns_name}.${data.aws_route53_zone.public.name}"]
  validation_method = "DNS"
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.myapp-cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }
  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = data.aws_route53_zone.public.id
}

# This tells Terraform to cause the Route53 validation to happen
resource "aws_acm_certificate_validation" "cert" {
  certificate_arn         = aws_acm_certificate.myapp-cert.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}

output "acm-arn" {
  value = aws_acm_certificate.myapp-cert.arn
}
```

The output value of this TF is the ARN to be used in your `Ingress` Manifest file.

## Github Actions to wrap it all

Of course, you can just write your manifest files and deploy your Kubernetes Objects using `kubectl apply`, but you might as well want to do it through a CI/CD pipeline.
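To show where that ARN ends up, here is a sketch of an Ingress for the AWS Load Balancer Controller. The Ingress name, host, and backend are illustrative assumptions wired to the `public-rpc-svc` Service shown earlier; only the annotation keys are standard controller annotations.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: NAMESPACE
  name: public-rpc-ingress          # hypothetical name for this example
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip        # required for Fargate pods
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    # Paste the acm-arn output of the Terraform file here:
    alb.ingress.kubernetes.io/certificate-arn: _CERTIFICATEARN_
spec:
  rules:
    - host: YOUR_SUBDOMAIN.hydration.cloud
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: public-rpc-svc
                port:
                  number: 80
```

The ALB terminates TLS with the ACM certificate and forwards plain HTTP to the Service on port 80, which in turn targets the node's websocket port 9944.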
We use Github Actions, and it's pretty straightforward:

```yaml
name: deploy app to k8s and expose
on:
  push:
    branches:
      - main

jobs:
  deploy-prod:
    name: deploy
    runs-on: ubuntu-latest
    env:
      ACTIONS_ALLOW_UNSECURE_COMMANDS: true
      AWS_ACCESS_KEY_ID: ${{ secrets.K8S_AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.K8S_AWS_SECRET_KEY_ID }}
      AWS_REGION: ${{ secrets.AWS_REGION }}
      NAMESPACE: validators_namespace
      APPNAME1: validator1
      APPNAME2: validator2
      DOMAIN: hydration.cloud
      SUBDOMAIN: validator1
      IMAGENAME: YOUR_IMAGE
      CERTIFICATE_ARN: _CERTIFICATEARN_

    steps:
      - name: checkout code
        uses: actions/checkout@v2.1.0

      - name: run-everything
        run: |
          curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
          chmod +x ./kubectl
          sudo mv ./kubectl /usr/local/bin/kubectl
          export AWS_ACCESS_KEY_ID=${{ env.AWS_ACCESS_KEY_ID }}
          export AWS_SECRET_ACCESS_KEY=${{ env.AWS_SECRET_ACCESS_KEY }}
          curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
          sudo mv /tmp/eksctl /usr/local/bin
          eksctl version
          aws eks --region eu-west-1 update-kubeconfig --name CLUSTER_NAME
          kubectl delete all --all -n ${{ env.NAMESPACE }}
          eksctl create fargateprofile --cluster CLUSTER_NAME --region ${{ env.AWS_REGION }} --name ${{ env.NAMESPACE }} --namespace ${{ env.NAMESPACE }}
          sed -i 's/_NAMESPACE_/${{ env.NAMESPACE }}/g' components.yaml
          kubectl apply -f components.yaml
```
This workflow creates the Fargate profile and deploys your manifest file containing all your Kubernetes Objects to your chosen cluster. Of course, make sure you give it the right access and secret keys :)

Good luck!