Integrate new chain
In the rapidly evolving blockchain industry, it’s crucial to keep up with the pace of change. Here’s a quick overview of how we seamlessly integrate new chains into the Mochi system, with the caveat that this method is specific to EVM-based blockchains.
Product Preparation
In this section, most of the tasks require manual handling.
Add new chain to supported chain list
Go to the Mochi Pay repository and create a migration file as shown below:
INSERT INTO "public"."product_supported_chains" ("chain_id", "name", "symbol", "rpc", "explorer", "icon", "is_evm") VALUES
('1', 'Ethereum Mainnet', 'ETH', 'https://eth.llamarpc.com', 'https://etherscan.io', 'https://cdn.discordapp.com/emojis/928216430451761172.png?size=240&quality=lossless', 't');
Use the /chains command to see the new changes.
Add token emojis
To enhance the aesthetics of Mochi, we aim to display token emojis whenever possible. Currently, please reach out to @minh_cloud for assistance with this task.
Pull raw data into ClickHouse
To store raw blockchain event data in ClickHouse, follow these steps:
Go to the Infras repository and create a service with the following configuration, similar to the existing evm example:
chainName: base
chainId: 8453
rpcServers:
- https://mainnet.base.org
- https://base.blockpi.network/v1/rpc/public
syncFromBlockNumber: 2769582
Verify ClickHouse Database Data
Check the ClickHouse database to ensure it contains data for the new chain. Specifically, verify that the latest_block_timestamp table has the necessary records for the new chain, and that the latest_block_timestamp column is up to date and reflects the chain’s latest block timestamp. This step should work automatically: simply go to Mochi Clients and test some well-known tokens to ensure that everything is functioning as expected.
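For a quick sanity check, you can run a one-off query against that table. The following is a minimal sketch assuming a local ClickHouse instance and a chain_id column; only the latest_block_timestamp table and column names come from this guide.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ClickHouse/clickhouse-go/v2"
)

func main() {
	// Hypothetical address; point this at the real ClickHouse instance.
	conn, err := clickhouse.Open(&clickhouse.Options{Addr: []string{"localhost:9000"}})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Assumed schema detail: the table is filterable by chain_id (8453 = Base).
	var latest time.Time
	row := conn.QueryRow(context.Background(),
		"SELECT max(latest_block_timestamp) FROM latest_block_timestamp WHERE chain_id = ?", 8453)
	if err := row.Scan(&latest); err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest block timestamp for the new chain:", latest)
}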
Wallet tracking
Wallet tracking is managed within the account-telemetry repository; work through these tasks:
Verify ClickHouse database data: this should already have been done at step 1.
Update the utils package:
Use any token on the new chain to call the Mochi API api/v1/defi/custom-tokens?chain_id=%d&address=%s and see which key represents the new blockchain, then use that key in the next step. For example, given the following response, the new chain is Optimism, but the key provided by the API is optimistic-ethereum, so we must use this key as the chain name: in some places we need it to detect the token detail platform by chain ID. (TODO: rename these methods to be more compatible and readable.)
map[string]response.TokenDetailPlatform{
"optimistic-ethereum": response.TokenDetailPlatform{
DecimalPlace: 18,
ContractAddress: "0x9560e827af36c94d2ac33a39bce1fe78631088db",
},
}
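To reproduce such a response yourself, a minimal sketch of the lookup call follows; the host is a placeholder, the path and query format come from this guide, chain_id 10 is Optimism, and the address is the example above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Placeholder host; substitute the real Mochi API endpoint.
	url := fmt.Sprintf(
		"https://<mochi-api-host>/api/v1/defi/custom-tokens?chain_id=%d&address=%s",
		10, "0x9560e827af36c94d2ac33a39bce1fe78631088db")

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Inspect the JSON for the platform key, e.g. "optimistic-ethereum".
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}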
Update the internal/util/chain.go file to add support for the new chain. This involves adding the configurations and logic needed for data retrieval and notifications on the new chain; currently it is just a few utilities responsible for converting between chain ID, chain name, and so on, as sketched below. By completing these tasks, you’ll ensure that wallet tracking in the account-telemetry repository works seamlessly with the new chain, allowing accurate notifications to users when their tracked accounts receive new transactions.
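A minimal sketch of the kind of helpers this file holds; the function names are assumptions, while the chain IDs are public and the platform keys match the Mochi API response above.
package util

// Illustrative converters between a chain ID and the chain name used
// elsewhere in the system (use the API's platform key as the name).
func ChainIDToName(chainId int64) string {
	switch chainId {
	case 1:
		return "ethereum"
	case 10:
		return "optimistic-ethereum"
	case 8453:
		return "base" // newly added chain
	default:
		return ""
	}
}

func ChainNameToID(name string) int64 {
	switch name {
	case "ethereum":
		return 1
	case "optimistic-ethereum":
		return 10
	case "base":
		return 8453
	default:
		return 0
	}
}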
How to test
In the Mochi Bot, use the /wallet track command to track a wallet that you own on the new chain. Then try transferring a native/ERC-20 token from another wallet to the tracked wallet, and from the tracked wallet to another wallet.
Ticker, Watchlist, Token Info
NOTE: This is the primary step before doing any of the steps below. In this part, it is essential to ensure that all widely recognized tokens from the new chain are added to our database whitelist. To fulfill this requirement, we can follow these steps.
mochi-pay-api: Write a database migration file to seed the new chain’s info into the chains table, and add a script to the same migration file to seed the new chain’s tokens into the tokens table.
Find the listed tokens on CoinGecko via this page: https://www.coingecko.com/en/all-cryptocurrencies with a proper filter. For example, say I’m searching for tokens on the BASE chain. Because we have no API to fetch this coin information, we can cheat a little and extract a list of expected token symbols like []string{"KRAV", "USDT" ...} to feed into a small Golang script that generates the database migration for enriching tokens (see the sketch after this paragraph). For instance, I asked ChatGPT to help extract the symbols of all coins with a 24h volume greater than $1000. Sometimes the result can be wrong, or not a list of symbols at all; a few follow-up questions can refine it. Use the script’s output as the input token list to generate the migration file; check the mochi-pay-api repository under the /script folder
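A minimal sketch of such a generator; the tokens column names here are illustrative, and the real script lives under /script in mochi-pay-api.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Symbols hand-collected from CoinGecko (e.g. with ChatGPT's help).
	symbols := []string{"KRAV", "USDT"}

	// Emit one INSERT per symbol; check the existing migrations in
	// mochi-pay-api for the actual tokens schema.
	f, err := os.Create("seed_base_tokens.sql")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	for _, s := range symbols {
		fmt.Fprintf(f, "INSERT INTO tokens (symbol, chain_id) VALUES ('%s', 8453);\n", s)
	}
}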
for further information.
mochi-api: Update all the utils in the pkg/util folder to support the new chain; for now this simply means adding a new switch case. For example, in ConvertChainIdToChainName we must add the new chain to the switch and return the proper chain name for the provided chainId, as in the sketch below.
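The change is roughly the following; only the function name comes from the repo, and the signature and surrounding cases are illustrative.
package util

// Sketch: extend the existing switch with a case for the new chain.
func ConvertChainIdToChainName(chainId int64) string {
	switch chainId {
	// ... existing cases ...
	case 8453: // new: Base
		return "base"
	default:
		return ""
	}
}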
Show assets in profile
How to test
Firstly, you must deposit some tokens on the newly integrated chain into your Mochi wallet using the /deposit command. Then execute /profile and /bal to see the details and ensure that the new token is shown and counted in the results of these commands.
Transfer / Tip / Airdrop / Payme / Paylink
Everything here is already covered by step 4, except that you also need to update the list of chainIds in the ListOrCreate function of the mochi-pay-api repository to support creating the new in-app wallet used for transfer/deposit/withdraw in Mochi (see the sketch below).
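Conceptually, the update is just appending the new chain ID to the list that ListOrCreate consults; the variable name and the existing IDs below are illustrative.
// mochi-pay-api (sketch): the chain-ID list consulted by ListOrCreate must
// include the new chain so an in-app wallet is created for it.
var chainIDs = []int64{
	1,    // Ethereum
	56,   // BNB Chain
	137,  // Polygon
	8453, // Base, newly added
}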
How to test
/tip: make sure you can tip the new token to others and that the balances of the source and target change.
/airdrop: same as tip; have somebody collect the airdrop, then check their balance for the change.
/pay me: execute this command, make sure a new wallet address for the new chain is generated in the option list, then try paying some tokens via the given URL.
/pay link: execute this command and make sure you can claim tokens from the given URL.
Withdraw
This is actually done at step 4 and step 5; a token can be withdrawn once it is whitelisted in the DB.
NOTE: You must faucet native tokens to the centralized wallet to pay for the gas of withdraw transactions.
How to test
Try the /withdraw command and ensure that you receive the following two notifications:
Withdraw submitted
Withdraw success
NOTE: You must faucet native tokens to the centralized wallet to pay for the gas of sweeping native/token transactions.
Deposit
To enable support for deposits on a new chain, follow these essential steps:
Compile and Deploy Deposit Contract: Compile the deposit contract available in the consolelabs/contract-tip-bot repository and deploy it to the new chain. Ensure that the deployment is successful and that the contract functions as expected; you can test on a testnet first.
Change Ownership to Centralized Wallet Address: Change the ownership of the deployed deposit contract to a centralized wallet address.
Seed Deposit Contract Information: Update mochi_pay_api by adding information about the newly deployed deposit contract to the deposit_contracts table, including details such as the contract address and other relevant information (see the sketch after this list).
Test Deposits: Perform testing to ensure that deposits work smoothly on the new chain. Deposit a few tokens to verify that the functionality works as intended, and ensure that you receive the expected notification and that the balance changes.
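A minimal sketch of the seeding step; only the deposit_contracts table name comes from this guide, while the column names are assumptions and the address is a placeholder.
package seed

import "database/sql"

// seedDepositContract inserts the new chain's deposit contract row.
func seedDepositContract(db *sql.DB) error {
	_, err := db.Exec(
		`INSERT INTO deposit_contracts (chain_id, contract_address) VALUES ($1, $2)`,
		8453, // e.g. Base
		"0x0000000000000000000000000000000000000000", // placeholder address
	)
	return err
}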
Data layer
This data layer serves as the central source of information for various features such as tickers, price alerts, and token details. These features rely on the data within this layer to provide accurate and up-to-date information to users. In essence, it’s like the beating heart of the Mochi ecosystem, ensuring that all the essential data is readily available for users to access and use.
Raw Data
The raw data that powers our system originates from blockchain transaction data, which we can retrieve from various RPC sources. However, maintaining a stable and consistent RPC connection has proven challenging in our past experience. To address this issue, we’ve adopted a different approach: we pull all the raw data into our ClickHouse database, where we store it for 30 days before it’s automatically deleted.
This raw data plays a crucial role in providing transaction notifications to our users. Any transaction originating from wallets that we track is processed by our system. If a tracked wallet engages in a transaction, our system ensures that users are promptly notified about it. This process allows us to deliver real-time transaction updates and information to our users reliably.
Token Data
Token data encompasses several important aspects:
Token Info: This component provides project metadata, including details such as the project’s Twitter account, website, and a concise project description. We retrieve this information from sources like Etherscan or CoinGecko.
Price Info: Calculating price information ourselves can be redundant and resource-intensive. Instead, we rely on established sources like CoinGecko, which offers a stable API with reasonable costs. This allows us to provide accurate and up-to-date price data to our users without reinventing the wheel.
Dex Data: Dex data represents the most comprehensive information about each token. It includes details about which decentralized exchanges (dex) currently list the token and provides data such as the number of token holders or trading volume for each day. Rather than calculating this information in-house, we source it from multiple data sources and store it in our data layer. This approach ensures that our users have access to comprehensive dex-related data for each token without the need for extensive calculations on our part.
When it comes to building online services, speed is a top priority. As our data layer continues to grow, we’ve implemented several optimization strategies to ensure efficient and fast data retrieval:
Caching Frequently Accessed Data: To address the need for frequently queried data like token prices and metadata (such as emoji associated with tokens), we employ short-lived Redis caching. This allows us to store this data temporarily in a highly responsive cache, reducing the need to query the upstream services or databases repeatedly for the same information. Caching helps deliver swift responses to user requests.
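As a rough illustration of this pattern, here is a sketch using go-redis; the key name, value, and TTL are illustrative, not our production values.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Cache a token price for 30 seconds; repeated queries within the TTL
	// hit Redis instead of the upstream price source.
	if err := rdb.Set(ctx, "price:eth", "3200.15", 30*time.Second).Err(); err != nil {
		log.Fatal(err)
	}
	price, err := rdb.Get(ctx, "price:eth").Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("cached price:", price)
}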
Read Replicas and Data Partitioning: For data that cannot be cached due to its infrequent use or size, we’ve taken a two-pronged approach. First, we’ve created read replicas within our PostgreSQL layer. These replicas serve as copies of our main database, dedicated to handling read requests. This not only spreads the load but also improves query response times.
Additionally, we’ve optimized data storage and retrieval by partitioning our most frequently accessed tables. Partitioning involves breaking these tables into smaller, more manageable parts based on certain criteria (e.g., time, category, or other relevant factors). This partitioning strategy significantly enhances query performance, as it reduces the amount of data that needs to be scanned or processed during each query. This combination of read replicas and data partitioning ensures that we can maintain high-speed access to our data even as our data layer continues to expand.
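To illustrate the partitioning idea, here is a minimal sketch; the table and column names are invented purely for illustration and are not our actual schema.
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	// Hypothetical DSN; point this at the real PostgreSQL instance.
	db, err := sql.Open("postgres", "postgres://localhost/mochi?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Queries bounded to one month only scan that month's partition instead
	// of the whole table, which is what speeds up the hottest queries.
	ddl := `
CREATE TABLE wallet_txns (
    id         bigserial,
    wallet     text        NOT NULL,
    amount     numeric,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE wallet_txns_2023_09 PARTITION OF wallet_txns
    FOR VALUES FROM ('2023-09-01') TO ('2023-10-01');`
	if _, err := db.Exec(ddl); err != nil {
		log.Fatal(err)
	}
}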
Ensuring real-time data accuracy for crucial information like token prices is essential to avoid missing any market movements. However, for certain types of data like total assets and token info, real-time updates can be delayed without significantly impacting user experience. Here’s how we manage this balance:
Real-time Data for Price Updates: For data that requires real-time accuracy, such as token prices, we prioritize instant updates. This ensures that our users receive the most up-to-date information, especially those who rely on timely market data. Real-time data is essential to provide users with the information they need to make informed decisions in the fast-paced world of cryptocurrencies.
Delayed Data for Less Critical Information: To reduce stress on our system and enhance the overall user experience, we introduce a slight delay in updating less critical information, such as total assets and token details. This delay allows us to optimize data processing and avoid overwhelming our resources. From the perspective of most users, this data still appears to be served instantly, ensuring a seamless experience.
Prioritizing Real-time Data for Top Users: Recognizing the importance of real-time data for our top users, we’ve implemented a ranking system. The most active users or those with specific access privileges can enjoy the benefit of receiving data in real-time with minimal delay. This approach ensures that our highest-priority users have access to the most critical data without compromise.
Background Jobs for Data Handling: To manage these data updates efficiently, we handle most data processing as background jobs. This allows us to update and synchronize information without disrupting the user interface or slowing down the system’s responsiveness. By running these tasks in the background, we maintain a balance between real-time data needs and system stability.
In summary, our approach involves a careful balance between real-time data for critical information and slightly delayed updates for less critical data. This strategy, combined with a ranking system and background job processing, allows us to provide both real-time accuracy and a smooth user experience while managing system resources effectively.
Maintaining a reasonable cost for the data layer is crucial for the sustainability of an online service. Here’s how we approach cost management:
Start with Third-Party Services: When in doubt, opting for third-party services is often the most cost-effective and efficient choice. These services are specialized and can save time and resources in the short term. Even when they come with a monthly cost, they often offer a great value proposition by letting us focus on other aspects of the platform.
Continuous Evaluation: Periodically assess the cost-effectiveness of the third-party services in use. Determine whether the expense is justified by the benefits they provide. If a service becomes too costly, or we outgrow it, consider alternatives or building an in-house solution.
Building In-House Solutions: As the platform matures and our needs become more complex, it may become financially prudent to build certain services in-house. Doing so gives us more control over costs, scalability, and customization. However, this should be a well-considered decision, as it often involves a higher initial investment in development and ongoing maintenance.
Cost Monitoring and Optimization: Implement robust cost monitoring and optimization practices. Keep an eye on data usage, query efficiency, and infrastructure costs, and use cloud provider tools to track spending and optimize resources accordingly. Often, small adjustments can lead to significant cost savings.
Meaningful commit message
Have you ever been frustrated by reading an app’s changelog that’s full of unhelpful information? It’s like they don’t care about their product or their users. You might have seen changelogs like this:
2.11.4
or
Fix bugs and improve UI/UX
These changelogs are confusing and don’t provide any meaningful insights for users. Changelogs are meant for people, not machines. They serve as a way for us to communicate with our users. Whether you’re working on a big or small software project, it’s essential to create a useful changelog.
In the early stages, our commit messages were similar, not designed for humans, product teams, or even fellow engineers. They looked something like this:
🐞 Bugs
Correct releaserc cfg (4337ccb)
Disable webpage preview in tip message (1fe0fae)
Remove extra @ in tip message (e6ac302)
Sentry check ticker command (#64) (f81c9ca)
Initially, we didn’t see anything wrong with these messages; in fact, we considered them good. However, as time passed, we realized that we were missing essential context, such as which features, platforms, or affected areas these commits related to.
So, we decided to change the way we write commit messages and put in a bit more effort to enhance our notification platform. Now, with each release, we have clear and informative messages like this:
We’ve adopted a specific syntax when creating a new commit:
level(scope): message
level: This indicates the nature of the change, such as a refactor, improvement, chore, fix, or feature.
scope: This specifies which feature or area is affected by the changes.
message: This is where we explain to users what they need to know about the update.
Focus on what matters most to users: new features, bug fixes, and improvements in terms of UI/speed. Write messages with users in mind, not just for fellow engineers. Always consolidate your PRs into a single commit to maintain a clean and coherent commit history.
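Putting the syntax into practice, commits might look like this (hypothetical examples):
feat(tip): support tipping on the Base chain
fix(wallet): show newly added chains in /bal results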