search-doc.json
{"searchDocs":[{"title":"HwameiStor Comes Online","type":0,"sectionRef":"#","url":"/blog/1","content":"","keywords":"","version":null},{"title":"The local storage is smart, stable, and agile","type":1,"pageTitle":"HwameiStor Comes Online","url":"/blog/1#the-local-storage-is-smart-stable-and-agile","content":"Automatic operation and maintenance management Automatically discover, identify, manage, and allocate disks. Intelligently schedule applications and data based on affinity. Automatically monitor the disk status and give early warning in time. Highly available data Use inter-node replicas to synchronize data for high availability. When a problem occurs, the application will be automatically scheduled to a highly available data node to guarantee application continuity. Multiple data volume types are supported Aggregate HDD, SSD, and NVMe disks to provide data service with low latency and high throughput. Flexible and dynamic linear expansion A dynamic expansion is supported according to the cluster size, to flexibly meet the data persistence needs of applications. ","version":null,"tagName":"h2"},{"title":"Enrich scenarios and widely adapt to enterprise needs","type":1,"pageTitle":"HwameiStor Comes Online","url":"/blog/1#enrich-scenarios-and-widely-adapt-to-enterprise-needs","content":"Adapt to middleware with a highly available architecture Kafka, Elasticsearch, Redis, and other middleware applications have a highly available architecture and strict requirements for IO data access. The LVM-based single-replica local data volume provided by HwameiStor can well meet their requirements. Provide highly available data volumes for applications MySQL and other OLTP databases require the underlying storage to provide highly available data storage, which can quickly restore data in case of problems. At the same time, it is also required to guarantee high-performance data access. The highly available dual-replica data volume provided by HwameiStor can well meet such requirements. Automated operation and maintenance of traditional storage software MinIO, Ceph, and other storage software need to use the disks on a Kubernetes node. Such software can utilize PVC/PV to automatically use the single-replica local volume of HwameiStor through CSI drivers, quickly respond to the deployment, expansion, migration, and other requests from the business system, and realize automatic operation and maintenance based on Kubernetes. ","version":null,"tagName":"h2"},{"title":"Join us","type":1,"pageTitle":"HwameiStor Comes Online","url":"/blog/1#join-us","content":"If the coming future is an era of intelligent Internet, developers will be the pioneers to that milestone, and the open source community will become the "metaverse" of developers. If you have any questions about the HwameiStor cloud-native local storage system, welcome to join the community to explore this metaverse world dedicated to developers and grow together. ","version":null,"tagName":"h2"},{"title":"LV and LVReplica","type":0,"sectionRef":"#","url":"/blog/3","content":"","keywords":"","version":null},{"title":"LocalVolume","type":1,"pageTitle":"LV and LVReplica","url":"/blog/3#localvolume","content":"LocalVolume is a CRD defined by HwameiStor. It is the volume that HwameiStor provides for users. Each LocalVolume corresponds to a PersistentVolume of Kubernetes. Both are volumes, but LocalVolume stores HwameiStor-related information, while the other records information about Kubernetes itself and links it to LocalVolume. 
You can check details of LocalVolume with this command: # check status of local volume and volume replica $ kubectl get lv # or localvolume NAME POOL KIND REPLICAS CAPACITY ACCESSIBILITY STATE RESOURCE PUBLISHED AGE pvc-996b05e8-80f2-4240-ace4-5f5f250310e2 LocalStorage_PoolHDD LVM 1 1073741824 k8s-node1 Ready -1 22m Now that HwameiStor can use LocalVolume to provide a volume, why do we still need LocalVolumeReplica? ","version":null,"tagName":"h2"},{"title":"LocalVolumeReplica","type":1,"pageTitle":"LV and LVReplica","url":"/blog/3#localvolumereplica","content":"LocalVolumeReplica is another CRD defined by HwameiStor. It represents a replica of a volume. In HwameiStor, LocalVolume can specify one of its LocalVolumeReplica as the active replica. As a volume, LocalVolume can have many LocalVolumeReplica as its replicas. The replica in active state will be mounted by applications and others will stand by as high available replicas. You can check details of LocalVolumeReplica with this command: $ kubectl get lvr # or localvolumereplica NAME KIND CAPACITY NODE STATE SYNCED DEVICE AGE pvc-996b05e8-80f2-4240-ace4-5f5f250310e2-v5scm9 LVM 1073741824 k8s-node1 Ready true /dev/LocalStorage_PoolHDD/pvc-996b05e8-80f2-4240-ace4-5f5f250310e2 80s LocalVolumeReplica allows HwameiStor to support features like HA, migration, hot standby of volumes and fast recovery of Kubernetes applications, making it more competitive as a local storage tool. ","version":null,"tagName":"h2"},{"title":"Conclusion","type":1,"pageTitle":"LV and LVReplica","url":"/blog/3#conclusion","content":"LocalVolume and LocalVolumeReplica are common concepts in many storage products, but each product can have its own competitive and unique features based on these two concepts. A technical difficulty can be solved with different solutions, so these concepts are also suitable for different production scenarios. We will provide more capabilities for more scenarios in future releases. Both users and developers are welcome to join us! ","version":null,"tagName":"h2"},{"title":"Reliable Helper System for HwameiStor Is Online","type":0,"sectionRef":"#","url":"/blog/2","content":"","keywords":"","version":null},{"title":"Comprehensively enhance the operation and maintenance","type":1,"pageTitle":"Reliable Helper System for HwameiStor Is Online","url":"/blog/2#comprehensively-enhance-the-operation-and-maintenance","content":"","version":null,"tagName":"h2"},{"title":"Reliable, one-click replacement, alert reminder","type":1,"pageTitle":"Reliable Helper System for HwameiStor Is Online","url":"/blog/2#reliable-one-click-replacement-alert-reminder","content":"Reliable data migration and backfill Automatically recognize RAID disks and determine if the data migration and backfill is required to guarantee data reliability. One-click disk replacement This feature is implemented by using the disk uuid. Intuitive alert reminder If any exceptions occur in the process of one-click disk replacement, the system will raise an alert to remind you. ","version":null,"tagName":"h2"},{"title":"Join us","type":1,"pageTitle":"Reliable Helper System for HwameiStor Is Online","url":"/blog/2#join-us","content":"If the coming future is an era of intelligent Internet, developers will be the pioneers to that milestone, and the open source community will become the "metaverse" of developers. 
If you have any questions about the HwameiStor cloud-native local storage system, welcome to join the community to explore this metaverse world dedicated to developers and grow together. ","version":null,"tagName":"h2"},{"title":"Livestream Highlights: Cloud Native Storage","type":0,"sectionRef":"#","url":"/blog/live","content":"","keywords":"","version":null},{"title":"HwameiStor","type":1,"pageTitle":"Livestream Highlights: Cloud Native Storage","url":"/blog/live#hwameistor","content":"As an infrastructure for containerization in the cloud native stack, cloud-native storage exposes underlying storage services to containers and micro-services, and collects storage resources from different media. It can enable stateful workloads to run in containers by providing persistent volumes. CNCF's definition for cloud-native storage has three key points. First, it runs on Kubernetes as a container. Second, it uses Kubernetes object classes, mainly custom resource definitions (CRD). Last and most important, it uses a container storage interface (CSI). HwameiStor is developed based on these three points. It is an end-to-end cloud-native local storage system. The bottom layer of HwameiStor is a local disk manager (LDM). LDM automatically collects resources from various storage media (such as HDD, SSD and NVMe disks) to form a local storage resource pool for unified automatic management. After HwameiStor is deployed, it can allocate resources on various storage media to different pools. Above the resource pool, it uses the logical volume manager (LVM) for management. It also uses the CSI architecture to provide distributed local data volume service, so as to provide data persistence capabilities for cloud-native stateful applications or components. HwameiStor is a storage system designed for cloud native. It supports high availability, automation, rapid deployment, and high performance with low cost, making it a good alternative to expensive traditional storage area networks (SAN). It has three core components: Local Disk Management (LDM), which uses CRD to define and manage local data disks. LDM can explicitly get the attributes, size and other information of local disks in Kubernetes. Local Storage (LS), which uses LVM to allocate logical volumes (LV) to persistent volumes (PV) after LDM is implemented. Scheduler, which schedules containers to nodes with local data. The core of HwameiStor lies in the definition and implementation of CRDs. On top of the PersistentVolume (PV) and PersistentVolumeClaim (PVC) object classes in Kubernetes, HwameiStor defines a new object class that can associate PV/PVC in Kubernetes with local data disks. HwameiStor has four features: Automatic operation and management: it can automatically discover, identify, manage, and allocate disks, and schedule applications and data according to affinity. It can also automatically monitor disk status and provide timely warnings. Highly available data support: HwameiStor uses cross-node replicas to synchronize data and achieve high availability. When a node goes wrong, it will automatically schedule applications to highly available data nodes to ensure application continuity. Rich types of data volume: HwameiStor aggregates HDD, SSD, and NVMe disks to provide data services with low latency and high throughput. Flexible and dynamic linear expansion: HwameiStor can dynamically expand according to cluster size to meet applications' requirements for data persistence. 
HwameiStor is recommended in the following scenarios: Adapt to middleware with a highly available architecture Some middleware applications, like Kafka, ElasticSearch and Redis, have a highly available architecture and a high requirement for IO data access. The LVM-based single-replica local data volumes provided by HwameiStor are suitable for such applications. Provide highly available data volumes for applications MySQL and other OLTP databases require their underlying storage systems to provide highly available data storage for quick recovery. Meanwhile, data access should also have high performance. The highly available dual-replica data volumes can meet these requirements. Provide automatic operation and maintenance for traditional storage software MinIO, Ceph and other similar storage software need to use the disks on Kubernetes nodes. HwameiStor's single-replica local volumes can quickly respond to the business system's needs for deployment, expansion, and migration, thus realizing automatic operation and maintenance based on Kubernetes. This kind of local volume can be automatically used by PVC/PV through CSI drivers. ","version":null,"tagName":"h2"},{"title":"ZettaStor HASP","type":1,"pageTitle":"Livestream Highlights: Cloud Native Storage","url":"/blog/live#zettastor-hasp","content":"According to the CNCF 2020 Annual Report, stateful applications account for 55% of all container applications, making them the mainstream. About 29% of stateful applications listed storage as the main challenge for adopting container technology. A storage problem facing cloud native scenarios is that it is difficult to balance performance and availability. Using a single type of storage cannot meet all needs. Therefore, in the actual implementation of stateful applications, data storage technology is the key. ZettaStor HASP is a cloud-native data aggregation & storage platform. It is also a high-performance user-mode file system that supports redundancy protection based on multiple replicas across heterogeneous storage systems. Data replicas can flow between different types of storage systems, which is a big competitive advantage over traditional distributed storage systems. In addition, it also supports unified and flexible orchestration of storage resources, and can be tightly integrated with container platforms. For example: ZettaStor HASP has higher data access performance and can realize dynamic allocation and refined management of storage resources. This is desirable for distributed applications that have data redundancy mechanisms themselves and also require high performance, such as Kafka, Redis, MongoDB, and HDFS. ZettaStor HASP can achieve high data availability based on cross-node redundancy protection, and meanwhile ensure high performance of local storage. This is suitable for applications that have no data redundancy mechanisms and must rely on external storage, such as MySQL and PostgreSQL. Critical business should prevent the risk of simultaneous failure of two nodes. ZettaStor HASP's replica redundancy protection across local and external storage systems can ensure smooth running of critical business. ZettaStor HASP has a three-layer architecture. On the top is a high-performance distributed file system, which is also the core of HASP. This independently-developed file system is fully compatible with POSIX standards and supports zero data copy between user mode and kernel mode, giving full play to the performance of high-speed media, such as NVMe SSD/SCM/PM. 
In the middle is the data service layer, which can provide services across local storage systems on different nodes, across local storage and external storage systems, and across heterogeneous external storage systems. It can be customized into single-replica, multi-replica, strong-consistency, and weak-consistency solutions. At the bottom is a storage management layer, which is responsible for interacting with different storage devices. It can break device barriers and connect data islands with a unified data format, helping data and business get rid of device limitations. In addition, it can also schedule storage devices through CSI. ZettaStor HASP is a distributed storage platform closely integrated with container platforms. It supports hyper-converged deployment and CSI integration. It has node affinity, and can sense the running status of user's pods, making storage behavior more adaptable. ","version":null,"tagName":"h2"},{"title":"Round Table Discussion","type":1,"pageTitle":"Livestream Highlights: Cloud Native Storage","url":"/blog/live#round-table-discussion","content":"Q1: What is cloud-native storage? alexzhc: In a narrow sense, cloud-native storage needs to meet three standards. First, it should meet the CSI specification and connect well with CS. Second, it should be deployed on Kubernetes as a container. Third, information in the storage system should create a new object class through CRDs and eventually be stored in Kubernetes. fengqinah: Cloud-native storage has multiple features and can meet various demands. In addition to providing a unified storage platform that can offer different storage features for different applications, it should also connect CSI and establish a bridge between storage systems and Kubernetes for communication. niulechuan: The most common cloud-native storage solutions are based on cloud storage or distributed storage. Meanwhile, some service providers are also trying to extend the special capabilities of traditional storage. Q2: How should cloud-native storage support cloud-native applications? fengqinah: There are mainly two points. First, cloud-native storage should support features of cloud-native applications, because these features decide the application's requirements for storage. Second, it should meet CSI specifications to support special cloud native requirements. niulechuan: From a performance perspective, cloud-native storage needs to meet all requirements of the CSI architecture, so that it can fit diverse cloud-native scenarios. In order to provide good underlying support and response guarantee for cloud-native applications, cloud-native storage needs efficient operation and maintenance. In real cases, cost, portability, and technical support should also be taken into consideration when designing a cloud-native storage solution. Like cloud-native applications, cloud-native storage also requires "elasticity". It should have elastic computing capabilities, scalable data, and conversions between cold and hot data. In addition, cloud-native storage should also have an open ecosystem. Q3: What are the similarities and differences between cloud-native storage and traditional storage? How about their advantages and disadvantages? alexzhc: Cloud-native storage is aggregated with Kubernetes after being deployed, while traditional storage is often separated from Kubernetes after being deployed. Cloud-native storage runs on Kubernetes, making it convenient to develop micro-services. If using traditional storage, developers might need to extend the storage API. 
However, the aggregated form may also, to a certain extent, cause problems in Kubernetes to easily spill over into storage, bringing difficulties to operation and maintenance. Besides, cloud-native storage also has problems with network sharing and disk load. fengqinah: In external storage, storage nodes and computing nodes have a weak mutual impact. Back-end storage is generally distributed, with relatively high security and availability. However, in cloud native scenarios, only using external storage has certain disadvantages. It will increase network consumption, generate additional cost, and lack linkage with Kubernetes. Q4: How should cloud-native storage empower traditional storage? niulechuan: This is a very urgent need. Kubernetes-native capabilities, such as deletion, creation, and expansion, are mainly implemented through CRDs. The community plays an active role in this process. At the same time, some capabilities of traditional storage have not yet been realized in cloud-native storage, such as cron jobs and observability. How to make cloud-native storage better empower traditional storage on platforms, give full play to their advantages, and further advance cloud native storage? This is a long way to go, and our team is working on this. alexzhc: To add one more thing. To use a common way to connect Kubernetes and traditional storage, you should aggregate CSI drivers. However, although CSI defines some storage operation flows, it is an interface after all. Therefore, we should consider whether the CSI community should use CRDs to define some features of traditional storage, and whether service providers can define some high-level and special flows by CRDs? We should try to make Kubernetes more applicable in the storage field. ","version":null,"tagName":"h2"},{"title":"HwameiStor Supports MinIO","type":0,"sectionRef":"#","url":"/blog/minio","content":"","keywords":"","version":null},{"title":"MinIO introduction","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#minio-introduction","content":"MinIO is a high-performance object storage solution with native support for Kubernetes deployments. It can provide distributed, S3-compatible, and multi-cloud storage service in public cloud, private cloud, and edge computing scenarios. MinIO is a software-defined product and released under GNU Affero General Public License v3.0. It can also run well on x86 and other standard hardware. MinIO is designed to meet private cloud's requirements for high performance, in addition to all required features of object storage. MinIO is easy to use, cost-effective, and high-performance in providing scalable cloud-native object storage services. MinIO works well in traditional object storage scenarios, such as secondary storage, disaster recovery, and archiving. It also shows competitive capabilities in machine learning, big data, private cloud, hybrid cloud, and other emerging fields to well support data analysis, high performance workloads, and cloud-native applications. ","version":null,"tagName":"h2"},{"title":"MinIO architecture","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#minio-architecture","content":"MinIO is designed for the cloud-native architecture, so it can be run as a lightweight container and managed by external orchestration tools like Kubernetes. The MinIO package consists of static binary files of less than 100 MB. 
This small package enables it to efficiently use CPU and memory resources even with high workloads and can host a large number of tenants on shared hardware. MinIO's architecture is as follows: MinIO can run on standard servers with proper local drives (JBOD/JBOF) installed. A MinIO cluster has a totally symmetric architecture. In other words, each server provides the same functions, without any name node or metadata server. MinIO can write both data and metadata as objects, so there is no need to use metadata servers. MinIO provides erasure coding, bitrot protection, encryption and other features in a strict and consistent way. Each MinIO cluster is a set of distributed MinIO servers, one MinIO process running on each node. MinIO runs in user space as a single process, and it uses lightweight co-routines for high concurrency. It divides drives into erasure sets (generally 16 drives in each set), and uses a deterministic hash algorithm to place objects into these erasure sets. MinIO is specifically designed for large-scale and multi-datacenter cloud storage service. Tenants can run their own MinIO clusters separately from others, avoiding interruptions from upgrade or security problems. Tenants can scale up by connecting multiple clusters across geographical regions. ","version":null,"tagName":"h3"},{"title":"Build test environment","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#build-test-environment","content":"","version":null,"tagName":"h2"},{"title":"Deploy Kubernetes cluster","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#deploy-kubernetes-cluster","content":"A Kubernetes cluster was deployed with three virtual machines: one as the master node and two as worker nodes. The kubelet version is 1.22.0. ","version":null,"tagName":"h3"},{"title":"Deploy HwameiStor local storage","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#deploy-hwameistor-local-storage","content":"Deploy HwameiStor local storage on Kubernetes: Allocate five disks (SDB, SDC, SDD, SDE, and SDF) for each worker node to support HwameiStor local disk management: Check node status of local storage: Create storageClass: ","version":null,"tagName":"h3"},{"title":"Deploy distributed multi-tenant cluster (minio-operator)","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#deploy-distributed-multi-tenant-cluster-minio-operator","content":"This section will show how to deploy minio-operator, how to create a tenant, and how to configure HwameiStor local volumes. ","version":null,"tagName":"h2"},{"title":"Deploy minio-operator","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#deploy-minio-operator","content":"Copy the minio-operator repo to your local environment git clone <https://github.com/minio/operator.git> Enter the helm operator directory /root/operator/helm/operator Deploy the minio-operator instance helm install minio-operator \\ --namespace minio-operator \\ --create-namespace \\ --generate-name . --set persistence.storageClass=local-storage-hdd-lvm . 
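A minimal sketch of the status check mentioned in the next step, assuming the minio-operator namespace used in the install command above (release names will vary because of --generate-name):
kubectl get pods -n minio-operator
helm list -n minio-operator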
Check minio-operator running status ","version":null,"tagName":"h3"},{"title":"Create tenants","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#create-tenants","content":"Enter the /root/operator/examples/kustomization/base directory and change tenant.yaml Enter the /root/operator/helm/tenant/ directory and change values.yaml Enter the /root/operator/examples/kustomization/tenant-lite directory and change kustomization.yaml Change tenant.yaml Change tenantNamePatch.yaml Create a tenant kubectl apply -k . Check resource status of the tenant minio-t1 To create another new tenant, you can first create a new tenant directory (in this example tenant-lite-2) under /root/operator/examples/kustomization and change the files listed above Run kubectl apply -k . to create the new tenant minio-t2 ","version":null,"tagName":"h3"},{"title":"Configure HwameiStor local volumes","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#configure-hwameistor-local-volumes","content":"Run the following commands in sequence to finish this configuration: kubectl get statefulset.apps/minio-t1-pool-0 -nminio-tenant -oyaml kubectl get pvc -A kubectl get pvc export-minio6-0 -nminio-6 -oyaml kubectl get pv kubectl get pvc data0-minio-t1-pool-0-0 -nminio-tenant -oyaml kubectl get lv kubectl get lvr ","version":null,"tagName":"h3"},{"title":"Test HwameiStor's support for MinIO","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#test-hwameistors-support-for-minio","content":"With the above settings in place, now let's test basic features and tenant isolation. ","version":null,"tagName":"h2"},{"title":"Test basic features","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#test-basic-features","content":"Log in to the MinIO console: 10.6.163.52:30401/login Get the JWT by kubectl minio proxy -n minio-operator Browse and manage information about newly-created tenants Log in as tenant minio-t1 (Account: minio) Browse bucket bk-1 Create a new bucket bk-1-1 Create path path-1-2 Upload the file Upload the folder Create a user with read-only permission ","version":null,"tagName":"h3"},{"title":"Test tenant isolation","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#test-tenant-isolation","content":"Log in as tenant minio-t2 Only minio-t2 information is visible. You cannot see information about tenant minio-t1. Create bucket Create path Upload the file Create a user Configure user policies Delete a bucket ","version":null,"tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"HwameiStor Supports MinIO","url":"/blog/minio#conclusion","content":"In this test, we successfully deployed MinIO distributed object storage on the basis of Kubernetes 1.22 and HwameiStor local storage. We performed the basic feature test, system security test, and operation and maintenance management test. All tests passed, proving that HwameiStor can well support MinIO. ","version":null,"tagName":"h2"},{"title":"Welcome","type":0,"sectionRef":"#","url":"/blog/welcome","content":"Welcome to the HwameiStor blog space. Here you can keep up with the progress of the HwameiStor open source project and recent hot topics. We also plan to include release notes for major releases, guidance articles, community-related events, and possibly some development tips and interesting topics within the team. If you are interested in contributing to this open source project and would like to join the discussion or make some guest blog posts, please contact us. 
GitHub address is: https://github.com/hwameistor","keywords":"","version":null},{"title":"APIs","type":0,"sectionRef":"#","url":"/docs/apis","content":"","keywords":"","version":"Next"},{"title":"CRD Object Classes","type":1,"pageTitle":"APIs","url":"/docs/apis#crd-object-classes","content":"HwameiStor defines some object classes to associate PV/PVC with local disks. Name\tAbbr\tKind\tFunctionclusters\thmcluster\tCluster\tHwameiStor cluster events\tevt\tEvent\tAudit information of HwameiStor cluster localdiskclaims\tldc\tLocalDiskClaim\tFilter and allocate local data disks localdisknodes\tldn\tLocalDiskNode\tStorage pool for disk volumes localdisks\tld\tLocalDisk\tData disks on nodes and automatically find which disks are available localdiskvolumes\tldv\tLocalDiskVolume\tDisk volumes localstoragenodes\tlsn\tLocalStorageNode\tStorage pool for lvm volumes localvolumeconverts\tlvconvert\tLocalVolumeConvert\tConvert common LVM volume to highly available LVM volume localvolumeexpands\tlvexpand\tLocalVolumeExpand\tExpand local volume storage capacity localvolumegroups\tlvg\tLocalVolumeGroup\tLVM volume groups localvolumemigrates\tlvmigrate\tLocalVolumeMigrate\tMigrate LVM volume localvolumereplicas\tlvr\tLocalVolumeReplica\tReplicas of LVM volume localvolumereplicasnapshotrestores\tlvrsrestore,lvrsnaprestore\tLocalVolumeReplicaSnapshotRestore\tRestore snapshots of LVM volume Replicas localvolumereplicasnapshots\tlvrs\tLocalVolumeReplicaSnapshot\tSnapshots of LVM volume Replicas localvolumes\tlv\tLocalVolume\tLVM local volumes localvolumesnapshotrestores\tlvsrestore,lvsnaprestore\tLocalVolumeSnapshotRestore\tRestore snapshots of LVM volume localvolumesnapshots\tlvs\tLocalVolumeSnapshot\tSnapshots of LVM volume resizepolicies ResizePolicy\tPVC automatic expansion policy ","version":"Next","tagName":"h2"},{"title":"Use Disk Volume","type":0,"sectionRef":"#","url":"/docs/apps/disk","content":"Use Disk Volume HwameiStor provides another type of data volume known as raw disk data volume. This volume is based on the raw disk present on the node and can be directly mounted for container use. As a result, this type of data volume offers more efficient data read and write performance, thereby fully unleashing the performance of the disk. Create a nginx application and use hwameistor-disk-volume volume using the following command: $ cat << EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: containers: - name: nginx image: docker.io/library/nginx:latest imagePullPolicy: IfNotPresent volumeMounts: - name: data mountPath: /data ports: - containerPort: 80 volumes: - name: data persistentVolumeClaim: claimName: hwameistor-disk-volume EOF ","keywords":"","version":"Next"},{"title":"Use HA Volumes","type":0,"sectionRef":"#","url":"/docs/apps/ha","content":"","keywords":"","version":"Next"},{"title":"Verify StorageClass","type":1,"pageTitle":"Use HA Volumes","url":"/docs/apps/ha#verify-storageclass","content":"StorageClass "hwameistor-storage-lvm-hdd-ha" has parameter replicaNumber: "2", which indicates a DRBD replication pair. 
$ kubectl apply -f examples/sc_ha.yaml $ kubectl get sc hwameistor-storage-lvm-hdd-ha -o yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hwameistor-storage-lvm-hdd-ha parameters: replicaNumber: "2" convertible: "false" csi.storage.k8s.io/fstype: xfs poolClass: HDD poolType: REGULAR striped: "true" volumeKind: LVM provisioner: lvm.hwameistor.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true ","version":"Next","tagName":"h2"},{"title":"Create StatefulSet","type":1,"pageTitle":"Use HA Volumes","url":"/docs/apps/ha#create-statefulset","content":"With HwameiStor and its StorageClass ready, a MySQL StatefulSet and its volumes can be deployed by a single command: $ kubectl apply -f examples/sts-mysql_ha.yaml Please note the volumeClaimTemplates uses storageClassName: hwameistor-storage-lvm-hdd-ha: spec: volumeClaimTemplates: - metadata: name: data labels: app: sts-mysql-ha app.kubernetes.io/name: sts-mysql-ha spec: storageClassName: hwameistor-storage-lvm-hdd-ha accessModes: ["ReadWriteOnce"] resources: requests: storage: 1Gi ","version":"Next","tagName":"h2"},{"title":"Verify MySQL Pod and PVC/PV","type":1,"pageTitle":"Use HA Volumes","url":"/docs/apps/ha#verify-mysql-pod-and-pvcpv","content":"In this example, the pod is scheduled on node k8s-worker-1. $ kubectl get po -l app=sts-mysql-ha -o wide NAME READY STATUS RESTARTS AGE IP NODE sts-mysql-ha-0 2/2 Running 0 3m08s 10.1.15.151 k8s-worker-1 $ kubectl get pvc -l app=sts-mysql-ha NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE data-sts-mysql-ha-0 Bound pvc-5236ee6f-8212-4628-9876-1b620a4c4c36 1Gi RWO hwameistor-storage-lvm-hdd 3m Filesystem Attention: 1) By default, dual replicas require both nodes to meet pod affinity and taint tolerance. If you only want to back up data, you can configure the pvc annotation to ignore affinity and taint. In this case, only one node needs to be schedulable apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: hwameistor.io/skip-affinity-annotations: true ","version":"Next","tagName":"h2"},{"title":"Verify LocalVolume and LocalVolumeReplica objects","type":1,"pageTitle":"Use HA Volumes","url":"/docs/apps/ha#verify-localvolume-and-localvolumereplica-objects","content":"By listing LocalVolume(LV) objects with the same name as that of the PV, we can see that the LV object is created on two nodes: k8s-worker-1 and k8s-worker-2. $ kubectl get lv pvc-5236ee6f-8212-4628-9876-1b620a4c4c36 NAME POOL REPLICAS CAPACITY ACCESSIBILITY STATE RESOURCE PUBLISHED AGE pvc-5236ee6f-8212-4628-9876-1b620a4c4c36 LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-1 3m LocalVolumeReplica (LVR) further shows the backend logical volume devices on each node. $ kubectl get lvr NAME CAPACITY NODE STATE SYNCED DEVICE AGE 5236ee6f-8212-4628-9876-1b620a4c4c36-d2kn55 1073741824 k8s-worker-1 Ready true /dev/LocalStorage_PoolHDD-HA/5236ee6f-8212-4628-9876-1b620a4c4c36 4m 5236ee6f-8212-4628-9876-1b620a4c4c36-glm7rf 1073741824 k8s-worker-2 Ready true /dev/LocalStorage_PoolHDD-HA/5236ee6f-8212-4628-9876-1b620a4c4c36 4m ","version":"Next","tagName":"h2"},{"title":"lvm","type":0,"sectionRef":"#","url":"/docs/apps/lvm","content":"","keywords":"","version":"Next"},{"title":"Verify StorageClass","type":1,"pageTitle":"lvm","url":"/docs/apps/lvm#verify-storageclass","content":"Make sure the StorageClasses have been created successfully by the HwameiStor Operator. Then select one of them to provision the data volume for the application. 
$ kubectl get sc hwameistor-storage-lvm-hdd -o yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hwameistor-storage-lvm-hdd parameters: convertible: "false" csi.storage.k8s.io/fstype: xfs poolClass: HDD poolType: REGULAR replicaNumber: "1" striped: "true" volumeKind: LVM provisioner: lvm.hwameistor.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true ","version":"Next","tagName":"h2"},{"title":"Create StatefulSet","type":1,"pageTitle":"lvm","url":"/docs/apps/lvm#create-statefulset","content":"With HwameiStor and its StorageClass ready, a MySQL StatefulSet and its volumes can be deployed by a single command: $ kubectl apply -f sts-mysql_local.yaml Please note the volumeClaimTemplates uses storageClassName: hwameistor-storage-lvm-hdd: spec: volumeClaimTemplates: - metadata: name: data labels: app: sts-mysql-local app.kubernetes.io/name: sts-mysql-local spec: storageClassName: hwameistor-storage-lvm-hdd accessModes: ["ReadWriteOnce"] resources: requests: storage: 1Gi Please note the minimum PVC size need to be over 4096 blocks, for example, 16MB with 4KB block. ","version":"Next","tagName":"h2"},{"title":"Verify MySQL Pod and PVC/PV","type":1,"pageTitle":"lvm","url":"/docs/apps/lvm#verify-mysql-pod-and-pvcpv","content":"In this example, the pod is scheduled on node k8s-worker-3. $ kubectl get po -l app=sts-mysql-local -o wide NAME READY STATUS RESTARTS AGE IP NODE sts-mysql-local-0 2/2 Running 0 3m08s 10.1.15.154 k8s-worker-3 $ kubectl get pvc -l app=sts-mysql-local NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE data-sts-mysql-local-0 Bound pvc-accf1ddd-6f47-4275-b520-dc317c90f80b 1Gi RWO hwameistor-storage-lvm-hdd 3m Filesystem ","version":"Next","tagName":"h2"},{"title":"Verify LocalVolume objects","type":1,"pageTitle":"lvm","url":"/docs/apps/lvm#verify-localvolume-objects","content":"By listing LocalVolume(LV) objects with the same name as that of the PV, we can see that the local volume is also created on node k8s-worker-3 $ kubectl get lv pvc-accf1ddd-6f47-4275-b520-dc317c90f80b NAME POOL REPLICAS CAPACITY ACCESSIBILITY STATE RESOURCE PUBLISHED AGE pvc-accf1ddd-6f47-4275-b520-dc317c90f80b LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-3 3m ","version":"Next","tagName":"h2"},{"title":"[Optional] Scale MySQL out into a 3-node Cluster","type":1,"pageTitle":"lvm","url":"/docs/apps/lvm#optional-scale-mysql-out-into-a-3-node-cluster","content":"HwameiStor supports StatefulSet scaleout. Each pod of the StatefulSet will attach and mount an independent HwameiStor volume. 
$ kubectl scale sts/sts-mysql-local --replicas=3 $ kubectl get po -l app=sts-mysql-local -o wide NAME READY STATUS RESTARTS AGE IP NODE sts-mysql-local-0 2/2 Running 0 4h38m 10.1.15.154 k8s-worker-3 sts-mysql-local-1 2/2 Running 0 19m 10.1.57.44 k8s-worker-2 sts-mysql-local-2 0/2 Init:0/2 0 14s 10.1.42.237 k8s-worker-1 $ kubectl get pvc -l app=sts-mysql-local -o wide NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE data-sts-mysql-local-0 Bound pvc-accf1ddd-6f47-4275-b520-dc317c90f80b 1Gi RWO hwameistor-storage-lvm-hdd 3m07s Filesystem data-sts-mysql-local-1 Bound pvc-a4f8b067-9c1d-450f-aff4-5807d61f5d88 1Gi RWO hwameistor-storage-lvm-hdd 2m18s Filesystem data-sts-mysql-local-2 Bound pvc-47ee308d-77da-40ec-b06e-4f51499520c1 1Gi RWO hwameistor-storage-lvm-hdd 2m18s Filesystem $ kubectl get lv NAME POOL REPLICAS CAPACITY ACCESSIBILITY STATE RESOURCE PUBLISHED AGE pvc-47ee308d-77da-40ec-b06e-4f51499520c1 LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-1 2m50s pvc-a4f8b067-9c1d-450f-aff4-5807d61f5d88 LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-2 2m50s pvc-accf1ddd-6f47-4275-b520-dc317c90f80b LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-3 3m40s ","version":"Next","tagName":"h2"},{"title":"HwameiStor Capability, Security, Operation, and Maintenance Test","type":0,"sectionRef":"#","url":"/blog/test","content":"","keywords":"","version":null},{"title":"Introduction to TiDB","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#introduction-to-tidb","content":"TiDB is a distributed database product that supports OLTP (Online Transactional Processing), OLAP (Online Analytical Processing), and HTAP (Hybrid Transactional and Analytical Processing) services, compatible with key features such as MySQL 5.7 protocol and MySQL ecosystem. The goal of TiDB is to provide users with one-stop OLTP, OLAP, and HTAP solutions, which are suitable for various application scenarios such as high availability, strict requirements for strong consistency, and large data scale. ","version":null,"tagName":"h2"},{"title":"TiDB architecture","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#tidb-architecture","content":"The TiDB distributed database splits the overall architecture into multiple modules that can communicate with each other. The architecture diagram is as follows: TiDB Server The SQL layer exposes the connection endpoints of the MySQL protocol to the outside world, and is responsible for accepting connections from clients, performing SQL parsing and optimization and finally generating a distributed execution plan. The TiDB layer itself is stateless. In practice, you can start several TiDB instances. A unified access address is provided externally through load-balance components (such as LVS, HAProxy, or F5), and client connections can be evenly distributed on to these TiDB instances. The TiDB server itself does not store data, but only parses SQL and forwards the actual data read request to the underlying storage node, TiKV (or TiFlash). PD (Placement Driver) Server The metadata management module across a TiDB cluster is responsible for storing the real-time data distribution of each TiKV node and the overall topology of the cluster, providing the TiDB Dashboard management and control interface, and assigning transaction IDs to distributed transactions. 
Placement Driver (PD) not only stores metadata, but also issues data scheduling commands to specific TiKV nodes based on the real-time data distribution status reported by TiKV nodes, which can be said to be the "brain" of the entire cluster. In addition, the PD itself is also composed of at least 3 nodes and has high availability capabilities. It is recommended to deploy an odd number of PD nodes. Storage nodes TiKV Server: In charge of storing data. From the outside, TiKV is a distributed Key-Value storage engine that provides transactions. The basic unit for storing data is Region. Each Region is responsible for storing the data of a Key Range (the block between left-closed and right-open from StartKey to EndKey). Each TiKV node is responsible for multiple Regions. TiKV API provides native support for distributed transactions at the KV key-value pair level, and provides the levels of Snapshot Isolation (SI) by default, which is also the core of TiDB's support for distributed transactions at the SQL level. After the SQL layer of TiDB completes the SQL parsing, it will convert the SQL execution plan into the actual call to the TiKV API. Therefore, the data is stored in TiKV. In addition, the TiKV data will be automatically maintained in multiple replicas (the default is three replicas), which naturally supports high availability and automatic failover. TiFlash is a special storage node. Unlike ordinary TiKV nodes, data is stored in columns in TiFlash, and the main function is to accelerate analysis-based scenarios. ","version":null,"tagName":"h3"},{"title":"TiDB database storage","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#tidb-database-storage","content":" Key-Value Pair The choice of TiKV is the Key-Value model that provides an ordered traversal method. Two key points of TiKV data storage are: A huge Map (comparable to std::map in C++) that stores Key-Value Pairs. The Key-Value pairs in this Map are sorted by the binary order of the Key, that is, you can seek to the position of a certain Key, and then continuously call the Next method to obtain the Key-Value larger than this Key in an ascending order. Local storage (Rocks DB) In any persistent storage engine, data must be saved on disk after all, and TiKV is not different. However, TiKV does not choose to write data directly to the disk, but stores the data in RocksDB, and RocksDB is responsible for the specific data storage. The reason is that developing a stand-alone storage engine requires a lot of work, especially to make a high-performance stand-alone engine, which may require various meticulous optimizations. RocksDB is a very good stand-alone KV storage engine open sourced by Facebook. It can meet various requirements of TiKV for single engine. Here we can simply consider that RocksDB is a persistent Key-Value Map on a host. Raft protocol TiKV uses the Raft algorithm to ensure that data is not lost and error-free when a single host fails. In short, it is to replicate data to multiple hosts, so that if one host cannot provide services, replicas on other hosts can still provide services. This data replication scheme is reliable and efficient, and can deal with replica failures. Region TiKV divides the Range by Key. A certain segment of consecutive Keys are stored on a storage node. Divide the entire Key-Value space into many segments, each segment is a series of consecutive Keys, called a Region. Try to keep the data saved in each Region within a reasonable size. 
Currently, the default in TiKV is no more than 96 MB. Each Region can be described by a left-closed, right-open interval [StartKey, EndKey). MVCC TiKV implements Multi-Version Concurrency Control (MVCC). Distributed ACID transactions TiKV uses the transaction model used by Google in BigTable: Percolator. ","version":null,"tagName":"h3"},{"title":"Build the test environment","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#build-the-test-environment","content":"","version":null,"tagName":"h2"},{"title":"Kubernetes cluster","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#kubernetes-cluster","content":"In this test, we use three VM nodes to deploy the Kubernetes cluster, including one master node and two worker nodes. The kubelet version is 1.22.0. ","version":null,"tagName":"h3"},{"title":"HwameiStor local storage","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#hwameistor-local-storage","content":"Deploy the HwameiStor local storage in the Kubernetes cluster Configure a 100G local disk, sdb, for HwameiStor on two worker nodes respectively Create StorageClass ","version":null,"tagName":"h3"},{"title":"Deploy TiDB on Kubernetes","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#deploy-tidb-on-kubernetes","content":"TiDB can be deployed on Kubernetes using TiDB Operator. TiDB Operator is an automatic operation and maintenance system for TiDB clusters on Kubernetes. It provides full lifecycle management of TiDB including deployment, upgrade, scaling, backup and recovery, and configuration changes. With TiDB Operator, TiDB can run seamlessly on public cloud or privately deployed Kubernetes clusters. 
The compatibility between TiDB and TiDB Operator versions is as follows: TiDB version\tApplicable versions of TiDB Operator dev\tdev TiDB >= 5.4\t1.3 5.1 <= TiDB < 5.4\t1.3 (recommended), 1.2 3.0 <= TiDB < 5.1\t1.3 (recommended), 1.2, 1.1 2.1 <= TiDB < 3.0\t1.0 (maintenance stopped) Deploy TiDB Operator Install TiDB CRDs kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml Install TiDB Operator helm repo add pingcap https://charts.pingcap.org/ kubectl create namespace tidb-admin helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.3.2 \\ --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.3.2 \\ --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.3.2 \\ --set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler Check TiDB Operator components Deploy the TiDB cluster kubectl create namespace tidb-cluster && \\ kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml Connect the TiDB cluster yum -y install mysql-client kubectl port-forward -n tidb-cluster svc/basic-tidb 4000 > pf4000.out & Check and verify the TiDB cluster status Create the Hello_world table create table hello_world (id int unsigned not null auto_increment primary key, v varchar(32)); Check the TiDB version select tidb_version()\\G; Check the TiKV storage status select * from information_schema.tikv_store_status\\G; Configure the HwameiStor storage Create a PVC for tidb-tikv and tidb-pd from storageClass local-storage-hdd-lvm: kubectl get po basic-tikv-0 -oyaml kubectl get po basic-pd-0 -oyaml ","version":null,"tagName":"h3"},{"title":"Test procedure","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#test-procedure","content":"","version":null,"tagName":"h2"},{"title":"Basic SQL capability test","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#basic-sql-capability-test","content":"After the database cluster is deployed, we performed the following tests of basic capabilities. All passed successfully. 
Distributed transaction Test purpose: In the case of multiple isolation levels, check if the completeness constraints of distributed data operations are supported, such as atomicity, consistency, isolation, and durability (ACID) Test steps: Create the database: testdb Create the table t_test ( id int AUTO_INCREMENT, name varchar(32), PRIMARY KEY (id) ) Run a test script Test result: The completeness constraints of distributed data operations are supported, such as atomicity, consistency, isolation, and durability (ACID), in the case of multiple isolation levels Object isolation Test purpose: Check if object isolation can be implemented by using different schemas Test script: create database if not exists testdb; use testdb create table if not exists t_test ( id bigint, name varchar(200), sale_time datetime default current_timestamp, constraint pk_t_test primary key (id) ); insert into t_test(id,name) values (1,'a'),(2,'b'),(3,'c'); create user 'readonly'@'%' identified by "readonly"; grant select on testdb.* to readonly@'%'; select * from testdb.t_test; update testdb.t_test set name='aaa'; create user 'otheruser'@'%' identified by "otheruser"; Test result: Supported to create different schemas to implement object isolation Table operation support Test purpose: Check if you can create, delete, and modify table data, DML, columns, and partition tables Test steps: Run the test scripts step by step after connecting to the database Test script: # Create and delete table drop table if exists t_test; create table if not exists t_test ( id bigint default '0', name varchar(200) default '' , sale_time datetime default current_timestamp, constraint pk_t_test primary key (id) ); # Delete and modify insert into t_test(id,name) values (1,'a'),(2,'b'),(3,'c'),(4,'d'),(5,'e'); update t_test set name='aaa' where id=1; update t_test set name='bbb' where id=2; delete from t_test where id=5; # Modify, add, delete columns alter table t_test modify column name varchar(250); alter table t_test add column col varchar(255); insert into t_test(id,name,col) values(10,'test','new_col'); alter table t_test add column colwithdefault varchar(255) default 'aaaa'; insert into t_test(id,name) values(20,'testdefault'); insert into t_test(id,name,colwithdefault ) values(10,'test','non-default '); alter table t_test drop column colwithdefault; # Type of partition table (only listed part of scripts) CREATE TABLE employees ( id INT NOT NULL, fname VARCHAR(30), lname VARCHAR(30), hired DATE NOT NULL DEFAULT '1970-01-01', separated DATE NOT NULL DEFAULT '9999-12-31', job_code INT NOT NULL, store_id INT NOT NULL ) Test result: Supported to create, delete, and modify table data, DML, columns, and partition tables Index support Test purpose: Verify different indexes (unique, clustered, partitioned, bidirectional indexes, expression-based indexes, hash indexes, etc.) and index rebuild operations. 
Test script: alter table t_test add unique index udx_t_test (name); # The default is clustered index of primary key ADMIN CHECK TABLE t_test; create index time_idx on t_test(sale_time); alter table t_test drop index time_idx; admin show ddl jobs; admin show ddl job queries 156; create index time_idx on t_test(sale_time); Test result: Supported to create, delete, combine, and list indexes and supported for unique index Statements Test purpose: Check if the statements in distributed databases are supported such as if, case when, for loop, while loop, loop exit when (up to 5 kinds) Test script: SELECT CASE id WHEN 1 THEN 'first' WHEN 2 THEN 'second' ELSE 'OTHERS' END AS id_new FROM t_test; SELECT IF(id>2,'int2+','int2-') from t_test; Test result: supported for statements such as if, case when, for loop, while loop, and loop exit when (up to 5 kinds) Parsing execution plan Test purpose: Check if execution plan parsing is supported for distributed databases Test script: explain analyze select * from t_test where id NOT IN (1,2,4); explain analyze select * from t_test a where EXISTS (select * from t_test b where a.id=b.id and b.id<3); explain analyze SELECT IF(id>2,'int2+','int2-') from t_test; Test result: the execution plan is supported to parse Binding execution plan Test purpose: Verify the feature of binding execution plan for distributed databases Test steps: View the current execution plan of sql statements Use the binding feature View the execution plan after the sql statement is binded Delete the binding Test script: explain select * from employees3 a join employees4 b on a.id = b.id where a.lname='Johnson'; explain select /*+ hash_join(a,b) */ * from employees3 a join employees4 b on a.id = b.id where a.lname='Johnson'; Test result: It may not be hash_join when hint is not used, and it must be hash_join after hint is used. Common functions Test purpose: Verify standard functions of distributed databases Test result: Standard database functions are supported Explicit/implicit transactions Test purpose: Verify the transaction support of distributed databases Test result: Explict and implicit transactions are supported Character set Test purpose: Verify the data types supported by distributed database Test result: Only the UTF-8 mb4 character set is supported now Lock support Test purpose: Verify the lock implementation of distributed databases Test result: Described how the lock is implemented, what are blockage conditions in the case of R-R/R-W/W-W, and how the deadlock is handled Isolation levels Test purpose: Verify the transactional isolation levels of distributed databases Test result: Supported for si and rc isolation levels (4.0 GA version) Distributed complex query Test purpose: Verify the complex query capabilities of distributed databases Test result: Supported for the distributed complex queries and operations such as inter-node joins, and supported for window functions and hierarchical queries ","version":null,"tagName":"h3"},{"title":"System security test","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#system-security-test","content":"This section describes system security tests. After the database cluster is deployed, all the following tests are passed. 
Account management and permission test Test purpose: Verify the account permission management of distributed databases Test script: select host,user,authentication_string from mysql.user; create user tidb IDENTIFIED by 'tidb'; select host,user,authentication_string from mysql.user; set password for tidb =password('tidbnew'); select host,user,authentication_string,Select_priv from mysql.user; grant select on *.* to tidb; flush privileges ; select host,user,authentication_string,Select_priv from mysql.user; grant all privileges on *.* to tidb; flush privileges ; select * from mysql.user where user='tidb'; revoke select on *.* from tidb; flush privileges ; revoke all privileges on *.* from tidb; flush privileges ; grant select(id) on test.TEST_HOTSPOT to tidb; drop user tidb; Test results: Supported for creating, modifying, and deleting accounts, and configuring passwords, and supported for the separation of security, audit, and data management Based on different accounts, various permission controls for the database include: instance, library, table, and column Access control Test purpose: Verify the permission access control of distributed databases, and control the database data by granting basic CRUD (create, read, update, and delete) permissions Test script: mysql -u root -h 172.17.49.222 -P 4000 drop user tidb; drop user tidb1; create user tidb IDENTIFIED by 'tidb'; grant select on tidb.* to tidb; grant insert on tidb.* to tidb; grant update on tidb.* to tidb; grant delete on tidb.* to tidb; flush privileges; show grants for tidb; exit; mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'select * from aa;' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'insert into aa values(2);' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'update aa set id=3;' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'delete from aa where id=3;' Test result: Database data is controlled by granting the basic CRUD permissions Whitelist Test purpose: Verify the whitelist feature of distributed databases Test script: mysql -u root -h 172.17.49.102 -P 4000 drop user tidb; create user tidb@'127.0.0.1' IDENTIFIED by 'tidb'; flush privileges; select * from mysql.user where user='tidb'; mysql -u tidb -h 127.0.0.1 -P 4000 -ptidb mysql -u tidb -h 172.17.49.102 -P 4000 -ptidb Test result: Supported for the IP whitelist feature and supported for matching actions with IP segments Operation log Test purpose: Verify the monitoring capability of distributed databases Test script: kubectl -ntidb-cluster logs tidb-test-pd-2 --tail 22 Test result: Record key actions or misoperations performed by users through the operation and maintenance management console or API ","version":null,"tagName":"h3"},{"title":"Operation and maintenance test","type":1,"pageTitle":"HwameiStor Capability, Security, Operation, and Maintenance Test","url":"/blog/test#operation-and-maintenance-test","content":"This section describes the operation and maintenance test. After the database cluster is deployed, the following operation and maintenance tests all passed. 
Import and export data Test purpose: Verify tool support for importing and exporting data in distributed databases Test script: select * from sbtest1 into outfile '/sbtest1.csv'; load data local infile '/sbtest1.csv' into table test100; Test result: Importing and exporting tables, schemas, and databases are supported Slow log query Test purpose: Get SQL information from the slow query log Prerequisite: The SQL execution time must exceed the configured slow query threshold, and the SQL execution must be completed Test steps: Adjust the slow query threshold to 100 ms Run SQL View the slow query info from log, system table, or dashboard Test script: show variables like 'tidb_slow_log_threshold'; set tidb_slow_log_threshold=100; select query_time, query from information_schema.slow_query where is_internal = false order by query_time desc limit 3; Test result: The slow query information can be obtained. For details about test data, see TiDB on HwameiStor Deployment and Test Logs. ","version":null,"tagName":"h3"},{"title":"Contributing","type":0,"sectionRef":"#","url":"/docs/contribute/CONTRIBUTING","content":"","keywords":"","version":"Next"},{"title":"Community guidelines","type":1,"pageTitle":"Contributing","url":"/docs/contribute/CONTRIBUTING#community-guidelines","content":"HwameiStor is a sandbox project listed in the CNCF landscape. We follow the CNCF Community Code of Conduct. ","version":"Next","tagName":"h2"},{"title":"Requirements","type":1,"pageTitle":"Contributing","url":"/docs/contribute/CONTRIBUTING#requirements","content":"The source code of HwameiStor is written in golang and managed with git. To easily install HwameiStor, you may need to have helm installed. ","version":"Next","tagName":"h2"},{"title":"Getting involved","type":1,"pageTitle":"Contributing","url":"/docs/contribute/CONTRIBUTING#getting-involved","content":"Please check issues to see if there are any existing tasks you may be interested in. In particular, if you're just getting started, you may want to look for issues labeled with help wanted or kind/bug, which are standard labels in the cloud-native communities. If you're interested in working on any of these, please leave a comment to let us know! Note: Please open an issue or discussion before starting work on a new feature or large change. ","version":"Next","tagName":"h2"},{"title":"What's next","type":1,"pageTitle":"Contributing","url":"/docs/contribute/CONTRIBUTING#whats-next","content":"If you're planning to contribute code changes, the reviewers may need you to: follow good coding guidelines, write good commit messages, and break large changes into a logical series of smaller patches which individually make easily understandable changes, and in aggregate solve a broader issue. If you're looking to contribute documentation improvements, you'll want to read the documentation. ","version":"Next","tagName":"h2"},{"title":"Community","type":0,"sectionRef":"#","url":"/docs/community","content":"","keywords":"","version":"Next"},{"title":"GitHub","type":1,"pageTitle":"Community","url":"/docs/community#github","content":"Submit your feedback and issues via Issue. ","version":"Next","tagName":"h2"},{"title":"Slack","type":1,"pageTitle":"Community","url":"/docs/community#slack","content":"Join a user group #user or a developer group #developer ","version":"Next","tagName":"h2"},{"title":"Blog","type":1,"pageTitle":"Community","url":"/docs/community#blog","content":"The blog is updated on a regular basis. 
","version":"Next","tagName":"h2"},{"title":"Regular meetings","type":1,"pageTitle":"Community","url":"/docs/community#regular-meetings","content":"Join regular discussions with community developers. ","version":"Next","tagName":"h2"},{"title":"Commercial support","type":1,"pageTitle":"Community","url":"/docs/community#commercial-support","content":"Tel.: (+86) 400 002 6898 Email: [email protected] Join our Wechat group by scanning the QR code: ","version":"Next","tagName":"h2"},{"title":"Fast Failover","type":0,"sectionRef":"#","url":"/docs/fast_failover","content":"","keywords":"","version":"Next"},{"title":"How to use","type":1,"pageTitle":"Fast Failover","url":"/docs/fast_failover#how-to-use","content":"HwameiStor provides the fast failover considering the two cases: Node Failure When a node fails, all the Pods on this node can't work any more。As to the Pod using HwameiStor volume, it's necessary to reschedule to another healthy node with the associated data volume replica. You can trigger the fast failover for this node by: Add a label to this node: kubectl label node <nodeName> hwameistor.io/failover=start When the fast failover completes, the label will be modified as: hwameistor.io/failover=completed Pod Failure When a Pod fails, you can trigger the fast failover for it by adding a lable to this Pod: kubectl label pod <podName> hwameistor.io/failover=start When the fast failover completes, the old Pod will be deleted and then the new one will be created on a new node. ","version":"Next","tagName":"h2"},{"title":"Deploy with hwameistor-operator","type":0,"sectionRef":"#","url":"/docs/install/operator","content":"","keywords":"","version":"Next"},{"title":"Steps","type":1,"pageTitle":"Deploy with hwameistor-operator","url":"/docs/install/operator#steps","content":"Add hwameistor-operator Helm Repo helm repo add hwameistor-operator https://hwameistor.io/hwameistor-operator helm repo update hwameistor-operator Install HwameiStor with hwameistor-operator note If no available clean disk provided, the operator will not create StorageClass automatically. Operator will claim disk automatically while installing, the available disks will be added into pool of LocalStorageNode. If available clean disk provided after installing, it's needed to apply a LocalDiskClaim manually to add the disk into pool of LocalStorageNode. Once LocalStorageNode has any disk available in its pool, the operator will create StorageClass automatically. That is to say, no capacity, no StorageClass. helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace Optional installation parameters: Disk Reserve Available clean disk will be claimed and added into pool of LocalStorageNode by default. If you want to reserve some disks for other use before installing, you can set diskReserveConfigurations by helm values. Method 1: helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace \\ --set diskReserve\\[0\\].nodeName=node1 \\ --set diskReserve\\[0\\].devices={/dev/sdc\\,/dev/sdd} \\ --set diskReserve\\[1\\].nodeName=node2 \\ --set diskReserve\\[1\\].devices={/dev/sdc\\,/dev/sde} This is a example to set diskReserveConfigurations by helm install --set, it may be hard to write --set options like that. If it's possible, we suggest write the diskReserveConfigurations values into a file. 
Method 2: diskReserve: - nodeName: node1 devices: - /dev/sdc - /dev/sdd - nodeName: node2 devices: - /dev/sdc - /dev/sde For example, if you write values like this into a file called diskReserve.yaml, you can apply the file when running helm install. helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace -f diskReserve.yaml Enable authentication helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace \\ --set apiserver.authentication.enable=true \\ --set apiserver.authentication.accessId={YourName} \\ --set apiserver.authentication.secretKey={YourPassword} You can also enable authentication by editing deployment/apiserver. Install operator by using DaoCloud image registry: helm install hwameistor-operator hwameistor-operator/hwameistor-operator -n hwameistor --create-namespace \\ --set global.hwameistorImageRegistry=ghcr.m.daocloud.io \\ --set global.k8sImageRegistry=m.daocloud.io/registry.k8s.io ","version":"Next","tagName":"h2"},{"title":"HwameiStor Membership Roles","type":0,"sectionRef":"#","url":"/docs/contribute/membership","content":"","keywords":"","version":"Next"},{"title":"Role summary","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#role-summary","content":"Here is the set of roles we use within the HwameiStor community, the general responsibilities expected by individuals in each role, the requirements necessary to join or stay in a given role, and the concrete manifestation of the role in terms of permissions and privileges. Role\tResponsibilities\tRequirements\tPrivileges Collaborator\tCasual contributor to the project\tn/a Outside collaborator of the GitHub HwameiStor organization Can submit PRs and issues Read and commenting permission on the HwameiStor Team drive Member\tRegular active contributor in the community Has pushed at least one PR to an HwameiStor repository Member of the GitHub HwameiStor organization Edit permission on the HwameiStor Team drive Triage permission on the HwameiStor repos, allowing issues to be manipulated. Reviewer\tContent expert helping improve code and documentation.\tHighly experienced contributor to the HwameiStor documentation.\tLike a member, plus: Maintainers and Administrator prioritize approval of the content they reviewed. Maintainer\tApprove contributions from other members\tHighly experienced and active reviewer and contributor to an area\tLike a member, plus: Able to approve code changes in GitHub Voting rights in the context of working group decision-making Responsible for making sure that release notes and upgrade notes get added to pull requests with user facing changes Administrator\tManage and control permissions\tAppointed by the HwameiStor organization Admin privileges on various HwameiStor-related resources ","version":"Next","tagName":"h2"},{"title":"Collaborator","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#collaborator","content":"Individuals may be added as an outside collaborator (with READ access) to a repo in the HwameiStor GitHub organization without becoming a member. This allows them to be assigned issues and PRs until they become a member, but will not allow tests to be run against their PRs automatically nor allow them to interact with the PR bot. 
","version":"Next","tagName":"h2"},{"title":"Requirements","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#requirements","content":"Working on some contribution to the project that would benefit from the ability to have PRs or Issues to be assigned to the contributor. ","version":"Next","tagName":"h3"},{"title":"Member","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#member","content":"Established community members are expected to demonstrate their adherence to the principles in this document, familiarity with project organization, roles, policies, procedures, conventions, etc., and technical and/or writing ability. Members are continuously active contributors in the community. They can have issues and PRs assigned to them, participate in working group meetings, and pre-submit tests are automatically run for their PRs. Members are expected to remain active contributors to the community. All members are encouraged to help with the code review burden, although each PR must be reviewed by one or more official reviewers and maintainers for the area before being accepted into the source base. ","version":"Next","tagName":"h2"},{"title":"Requirements","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#requirements-1","content":"Has pushed at least one PR to the HwameiStor repositories within the last 6 months.Actively contributing to one or more areas. Members are expected to be active participants in the project on an on-going basis. If an individual doesn't contribute to the project for a 180 day period, that individual may lose membership. On-going contributions include: Successfully merging pull requestsTriaging issues or pull requestsCommenting on issues or pull requestsClosing issues or pull requests ","version":"Next","tagName":"h3"},{"title":"Becoming a member","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#becoming-a-member","content":"If you are interested in becoming a member and meet the requirements above, you can join the organization by adding yourself to the members list under members.yaml. Once that has been done, submit a Pull Request with the change and fill out the pull request template with all information requested. ","version":"Next","tagName":"h3"},{"title":"Responsibilities and privileges","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#responsibilities-and-privileges","content":"Responsive to issues and PRs assigned to themActive owner of code they have contributed (unless ownership is explicitly transferred) Code is well testedTests consistently passAddresses bugs or issues discovered after code is accepted Members who frequently contribute code are expected to proactively perform code reviews for the area that they are active in. ","version":"Next","tagName":"h3"},{"title":"Reviewer","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#reviewer","content":"A Reviewer is trusted to only approve content that meets the acceptance criteria described in thecontribution guides. ","version":"Next","tagName":"h2"},{"title":"Requirements","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#requirements-2","content":"To become a Reviewer, contributors must meet the following requirements: Be a Member of the HwameiStor community.Perform 5 substantial contributions to the HwameiStor.io repo. 
Substantial contributions include the following examples: New contentContent reviewsContent improvements Demonstrate a solid commitment to documentation quality and use of our style guide.Be sponsored by an HwameiStor Maintainer or WG Lead. ","version":"Next","tagName":"h3"},{"title":"Responsibilities","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#responsibilities","content":"Review PRs in hwameistor/hwameistor.Ensure the relevant technical Working Group is added as a reviewer and ensure a maintainer or administrator has approved the PR. ","version":"Next","tagName":"h3"},{"title":"Privileges","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#privileges","content":"Content approved by a Reviewer gets prioritized by Maintainers or Administrator.Reviewers can place a /lgtm label to notify Maintainers to expedite publication of the reviewed content. Reviewers can't merge content into the hwameistor/hwameistor main; only Maintainers and Administrator can merge content into main. ","version":"Next","tagName":"h3"},{"title":"Maintainer","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#maintainer","content":"Maintainers review and approve code contributions. While code review is focused on code quality and correctness, approval is focused on holistic acceptance of a contribution including: backwards / forwards compatibility, adhering to API and flag conventions, subtle performance and correctness issues, interactions with other parts of the system, etc. Maintainer status is scoped to a part of the codebase and is reflected in a CODEOWNERS file. ","version":"Next","tagName":"h2"},{"title":"Requirements","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#requirements-3","content":"The following apply to the part of the codebase for which one would be a maintainer: Member for at least 3 monthsContributed at least 30 substantial PRs to the codebaseMust remain an active participant in the community by contributing code, performing reviews, triaging issues, etc.Knowledgeable about the codebaseSponsored by a working group lead with no objections from other leads If a maintainer becomes inactive in the project for an extended period of time, the individual will transition to being an emeritus maintainer. Emeritus maintainers lose their ability to approve code contributions, but retain their voting rights for up to one year. After one year, emeritus maintainers revert back to being normal members with no voting rights. 
Maintainers contribute to the parts of the project they are responsible for by: Successfully merging pull requestsTriaging issues or pull requestsClosing issues or pull requests ","version":"Next","tagName":"h3"},{"title":"Responsibilities and privileges","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#responsibilities-and-privileges-1","content":"The following apply to the part of the codebase for which one would be a maintainer: Maintainer status may be a precondition to accepting large code contributionsDemonstrates sound technical judgementResponsible for project quality control via code reviews Focus on code quality and correctness, including testing and factoringFocus on holistic acceptance of contribution such as dependencies with other features, backwards / forwards compatibility, API and flag definitions, etc Expected to be responsive to review requests as per community expectationsMay approve code contributions for acceptanceMaintainers in an area get a vote when a working group needs to make decisions. ","version":"Next","tagName":"h3"},{"title":"Administrator","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#administrator","content":"Administrators are responsible for the bureaucratic aspects of the project. ","version":"Next","tagName":"h2"},{"title":"Requirements","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#requirements-4","content":"Appointed by the HwameiStor organization. ","version":"Next","tagName":"h3"},{"title":"Responsibilities and privileges","type":1,"pageTitle":"HwameiStor Membership Roles","url":"/docs/contribute/membership#responsibilities-and-privileges-2","content":"Manage a variety of infrastructure support for the HwameiStor projectAlthough admins may have the authority to override any policy and cut corners, we expect admins to generally abide by the overall rules of the project. For example, unless strictly necessary, admins should not approve and/or commit PRs they aren't entitled to if they were not admins. 
","version":"Next","tagName":"h3"},{"title":"Post-Check after Deployment","type":0,"sectionRef":"#","url":"/docs/install/post_check","content":"","keywords":"","version":"Next"},{"title":"Check Components","type":1,"pageTitle":"Post-Check after Deployment","url":"/docs/install/post_check#check-components","content":"The following pods should be up and running: $ kubectl -n hwameistor get pod NAME READY STATUS RESTARTS AGE drbd-adapter-k8s-master-rhel7-gtk7t 0/2 Completed 0 23m drbd-adapter-k8s-node1-rhel7-gxfw5 0/2 Completed 0 23m drbd-adapter-k8s-node2-rhel7-lv768 0/2 Completed 0 23m hwameistor-admission-controller-dc766f976-mtlvw 1/1 Running 0 23m hwameistor-apiserver-86d6c9b7c8-v67gg 1/1 Running 0 23m hwameistor-auditor-54f46fcbc6-jb4f4 1/1 Running 0 23m hwameistor-exporter-6498478c57-kr8r4 1/1 Running 0 23m hwameistor-failover-assistant-cdc6bd665-56wbw 1/1 Running 0 23m hwameistor-local-disk-csi-controller-6587984795-fztcd 2/2 Running 0 23m hwameistor-local-disk-manager-7gg9x 2/2 Running 0 23m hwameistor-local-disk-manager-kqkng 2/2 Running 0 23m hwameistor-local-disk-manager-s66kn 2/2 Running 0 23m hwameistor-local-storage-csi-controller-5cdff98f55-jj45w 6/6 Running 0 23m hwameistor-local-storage-mfqks 2/2 Running 0 23m hwameistor-local-storage-pnfpx 2/2 Running 0 23m hwameistor-local-storage-whg9t 2/2 Running 0 23m hwameistor-pvc-autoresizer-86dc79d57-s2l68 1/1 Running 0 23m hwameistor-scheduler-6db69957f-r58j6 1/1 Running 0 23m hwameistor-ui-744cd78d84-vktjq 1/1 Running 0 23m hwameistor-volume-evictor-5db99cf979-4674n 1/1 Running 0 23m info The components of local-disk-manager and local-storage are DaemonSets, and should have one pod on each Kubernetes node. ","version":"Next","tagName":"h2"},{"title":"Check APIs","type":1,"pageTitle":"Post-Check after Deployment","url":"/docs/install/post_check#check-apis","content":"HwameiStor CRDs create the following APIs. $ kubectl api-resources --api-group hwameistor.io NAME SHORTNAMES APIVERSION NAMESPACED KIND clusters hmcluster hwameistor.io/v1alpha1 false Cluster events evt hwameistor.io/v1alpha1 false Event localdiskclaims ldc hwameistor.io/v1alpha1 false LocalDiskClaim localdisknodes ldn hwameistor.io/v1alpha1 false LocalDiskNode localdisks ld hwameistor.io/v1alpha1 false LocalDisk localdiskvolumes ldv hwameistor.io/v1alpha1 false LocalDiskVolume localstoragenodes lsn hwameistor.io/v1alpha1 false LocalStorageNode localvolumeconverts lvconvert hwameistor.io/v1alpha1 false LocalVolumeConvert localvolumeexpands lvexpand hwameistor.io/v1alpha1 false LocalVolumeExpand localvolumegroups lvg hwameistor.io/v1alpha1 false LocalVolumeGroup localvolumemigrates lvmigrate hwameistor.io/v1alpha1 false LocalVolumeMigrate localvolumereplicas lvr hwameistor.io/v1alpha1 false LocalVolumeReplica localvolumereplicasnapshotrestores lvrsrestore,lvrsnaprestore hwameistor.io/v1alpha1 false LocalVolumeReplicaSnapshotRestore localvolumereplicasnapshots lvrs hwameistor.io/v1alpha1 false LocalVolumeReplicaSnapshot localvolumes lv hwameistor.io/v1alpha1 false LocalVolume localvolumesnapshotrestores lvsrestore,lvsnaprestore hwameistor.io/v1alpha1 false LocalVolumeSnapshotRestore localvolumesnapshots lvs hwameistor.io/v1alpha1 false LocalVolumeSnapshot resizepolicies hwameistor.io/v1alpha1 false ResizePolicy Please refer to CRDs for details. 
","version":"Next","tagName":"h2"},{"title":"Check LocalDiskNodes and localDisks","type":1,"pageTitle":"Post-Check after Deployment","url":"/docs/install/post_check#check-localdisknodes-and-localdisks","content":"HwameiStor automatically scans each node and registers each disk as CRD LocalDisk(ld). The unused disks are displayed with PHASE: Available. $ kubectl get localdisknodes NAME FREECAPACITY TOTALCAPACITY TOTALDISK STATUS AGE k8s-master Ready 28h k8s-node1 Ready 28h k8s-node2 Ready 28h $ kubectl get localdisks NAME NODEMATCH DEVICEPATH PHASE AGE localdisk-2307de2b1c5b5d051058bc1d54b41d5c k8s-node1 /dev/sdb Available 28h localdisk-311191645ea00c62277fe709badc244e k8s-node2 /dev/sdb Available 28h localdisk-37a20db051af3a53a1c4e27f7616369a k8s-master /dev/sdb Available 28h localdisk-b57b108ad2ccc47f4b4fab6f0b9eaeb5 k8s-node2 /dev/sda Bound 28h localdisk-b682686c65667763bda58e391fbb5d20 k8s-master /dev/sda Bound 28h localdisk-da121e8f0dabac9ee1bcb6ed69840d7b k8s-node1 /dev/sda Bound 28h ","version":"Next","tagName":"h2"},{"title":"Check LocalStorageNodes and Storage Pools","type":1,"pageTitle":"Post-Check after Deployment","url":"/docs/install/post_check#check-localstoragenodes-and-storage-pools","content":"HwameiStor automatically generates the LocalStorageNode (i.e. LSN) resource for each node. Each LSN will record the resources and status of the node, including Storage Pool, Volumes, etc. $ kubectl get lsn NAME IP STATUS AGE 10-6-234-40 10.6.234.40 Ready 3m52s 10-6-234-41 10.6.234.41 Ready 3m54s 10-6-234-42 10.6.234.42 Ready 3m55s $ kubectl get lsn 10-6-234-41 -o yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode metadata: creationTimestamp: "2023-04-11T06:46:52Z" generation: 1 name: 10-6-234-41 resourceVersion: "13575433" uid: 4986f7b8-6fe1-43f1-bdca-e68b6fa53f92 spec: hostname: 10-6-234-41 storageIP: 10.6.234.41 topogoly: region: default zone: default status: pools: LocalStorage_PoolHDD: class: HDD disks: - capacityBytes: 10733223936 devPath: /dev/sdb state: InUse type: HDD - capacityBytes: 1069547520 devPath: /dev/sdc state: InUse type: HDD - capacityBytes: 1069547520 devPath: /dev/sdd state: InUse type: HDD - capacityBytes: 1069547520 devPath: /dev/sde state: InUse type: HDD - capacityBytes: 1069547520 devPath: /dev/sdf state: InUse type: HDD - capacityBytes: 1069547520 devPath: /dev/sdg state: InUse type: HDD freeCapacityBytes: 16080961536 freeVolumeCount: 1000 name: LocalStorage_PoolHDD totalCapacityBytes: 16080961536 totalVolumeCount: 1000 type: REGULAR usedCapacityBytes: 0 usedVolumeCount: 0 volumeCapacityBytesLimit: 16080961536 state: Ready ","version":"Next","tagName":"h2"},{"title":"Check StorageClass","type":1,"pageTitle":"Post-Check after Deployment","url":"/docs/install/post_check#check-storageclass","content":"The Operator will automatically create the StorageClasses as following according to the HwameiStor system's configuration (e.g. HA enabled or not, disk type, and more.) 
$ kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE hwameistor-storage-lvm-hdd lvm.hwameistor.io Delete WaitForFirstConsumer false 23h hwameistor-storage-lvm-hdd-convertible lvm.hwameistor.io Delete WaitForFirstConsumer false 23h hwameistor-storage-lvm-hdd-ha lvm.hwameistor.io Delete WaitForFirstConsumer false 23h ","version":"Next","tagName":"h2"},{"title":"FAQs","type":0,"sectionRef":"#","url":"/docs/faqs","content":"","keywords":"","version":"Next"},{"title":"Q1: How does hwameistor-scheduler work in a Kubernetes platform?","type":1,"pageTitle":"FAQs","url":"/docs/faqs#q1-how-does-hwameistor-scheduler-work-in-a-kubernetes-platform","content":"The hwameistor-scheduler is deployed as a pod in the hwameistor namespace. Once the applications (Deployment or StatefulSet) are created, the pod will be scheduled to the worker nodes on which HwameiStor is already configured. ","version":"Next","tagName":"h2"},{"title":"Q2: How to schedule applications with multi-replica workloads?","type":1,"pageTitle":"FAQs","url":"/docs/faqs#q2-how-to-schedule-applications-with-multi-replica-workloads","content":"This question can be extended to: How does HwameiStor schedule applications with multi-replica workloads and how does it differ from traditional shared storage (NFS/block)? To efficiently schedule applications with multi-replica workloads, we highly recommend using StatefulSet. StatefulSet ensures that replicas are deployed on the same worker node as the original pod. It also creates a PV data volume for each replica. If you need to deploy replicas on different worker nodes, manual configuration with pod affinity is necessary. We suggest using a single pod for deployment because the block data volumes can not be shared. ","version":"Next","tagName":"h2"},{"title":"Q3: How to maintain a Kubernetes node?","type":1,"pageTitle":"FAQs","url":"/docs/faqs#q3-how-to-maintain-a-kubernetes-node","content":"HwameiStor provides the volume eviction/migration functions to keep the Pods and HwameiStor volumes' data running when retiring/rebooting a node. ","version":"Next","tagName":"h2"},{"title":"Remove a node","type":1,"pageTitle":"FAQs","url":"/docs/faqs#remove-a-node","content":"Before you remove a node from a Kubernetes cluster, the Pods and volumes on the node should be rescheduled and migrated to another available node, and keep the Pods/volumes running. Follow these steps to remove a node: Drain node. kubectl drain NODE --ignore-daemonsets=true. --ignore-daemonsets=true This command can evict and reschedule Pods on the node. It also automatically triggers HwameiStor's data volume eviction behavior. HwameiStor will automatically migrate all replicas of the data volumes from that node to other nodes, ensuring data availability. Check the migration progress. kubectl get localstoragenode NODE -o yaml The output may look like: apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode metadata: name: NODE spec: hostname: NODE storageIP: 10.6.113.22 topogoly: region: default zone: default status: ... 
pools: LocalStorage_PoolHDD: class: HDD disks: - capacityBytes: 17175674880 devPath: /dev/sdb state: InUse type: HDD freeCapacityBytes: 16101933056 freeVolumeCount: 999 name: LocalStorage_PoolHDD totalCapacityBytes: 17175674880 totalVolumeCount: 1000 type: REGULAR usedCapacityBytes: 1073741824 usedVolumeCount: 1 volumeCapacityBytesLimit: 17175674880 ## **** make sure volumes is empty **** ## volumes: state: Ready At the same time, HwameiStor will automatically reschedule the evicted Pods to the other node which has the associated volume replica, and continue to run. Remove the NODE from the cluster. kubectl delete nodes NODE ","version":"Next","tagName":"h3"},{"title":"Reboot a node","type":1,"pageTitle":"FAQs","url":"/docs/faqs#reboot-a-node","content":"It usually takes a long time (~10minutes) to reboot a node. All the Pods and volumes on the node will not work until the node is back online. For some applications like DataBase, the long downtime is very costly and even unacceptable. HwameiStor can immediately reschedule the Pod to another available node with associated volume data and bring the Pod back to running in very short time (~ 10 seconds for the Pod using a HA volume, and longer time for the Pod with non-HA volume depends on the data size). If users wish to keep data volumes on a specific node, accessible even after the node restarts, they can add the following labels to the node. This prevents the system from migrating the data volumes from that node. However, the system will still immediately schedule Pods on other nodes that have replicas of the data volumes. Add a label (optional) If it is not required to migrate the volumes during the node reboots, you can add the following label to the node before draining it. kubectl label node NODE hwameistor.io/eviction=disable Drain the node. kubectl drain NODE --ignore-daemonsets=true. --ignore-daemonsets=true If Step 1 has been performed, you can reboot the node after Step 2 is successful.If Step 1 has not been performed, you should check if the data migration is complete after Step 2 is successful (similar to Step 2 in remove node). After the data migration is complete, you can reboot the node. After the first two steps are successful, you can reboot the node and wait for the node system to return to normal. Bring the node back to normal. kubectl uncordon NODE ","version":"Next","tagName":"h3"},{"title":"Traditional shared storage","type":1,"pageTitle":"FAQs","url":"/docs/faqs#traditional-shared-storage","content":"StatefulSet, which is used for stateful applications, prioritizes deploying replicated replicas to different nodes to distribute the workload. However, it creates a PV data volume for each Pod replica. Only when the number of replicas exceeds the number of worker nodes, multiple replicas will be deployed on the same node. On the other hand, Deployments, which are used for stateless applications, prioritize deploying replicated replicas to different nodes to distribute the workload. All Pods share a single PV data volume (currently only supports NFS). Similar to StatefulSets, multiple replicas will be deployed on the same node only when the number of replicas exceeds the number of worker nodes. For block storage, as data volumes cannot be shared, it is recommended to use a single replica. 
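To make the StatefulSet pattern above concrete, here is a minimal sketch (not part of the original page; the workload name, image, and size are placeholders) of a StatefulSet that requests one HwameiStor LVM volume per replica through volumeClaimTemplates, using the hwameistor-storage-lvm-hdd StorageClass created by the Operator:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-sts
spec:
  serviceName: demo-sts
  replicas: 2
  selector:
    matchLabels:
      app: demo-sts
  template:
    metadata:
      labels:
        app: demo-sts
    spec:
      containers:
      - name: app
        image: nginx:1.25          # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:            # one PVC (and one HwameiStor volume) per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: hwameistor-storage-lvm-hdd
      resources:
        requests:
          storage: 1Gi

Each replica gets its own PVC (data-demo-sts-0, data-demo-sts-1, ...), and the HwameiStor scheduler places each Pod on the node that holds its volume replica, so no NodeAffinity needs to be written by hand.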
","version":"Next","tagName":"h3"},{"title":"Q4: How to handle the error when encountering \"LocalStorageNode\" during inspection?","type":1,"pageTitle":"FAQs","url":"/docs/faqs#q4-how-to-handle-the-error-when-encountering-localstoragenode-during-inspection","content":"When encountering the following error while inspecting LocalStorageNode: Possible causes of the error: The node does not have LVM2 installed. You can install it using the following command: rpm -qa | grep lvm2 # Check if LVM2 is installed yum install lvm2 # Install LVM on each node Ensure that the proper disk on the node has GPT partitioning. blkid /dev/sd* # Confirm if the disk partitions are clean wipefs -a /dev/sd* # Clean the disk ","version":"Next","tagName":"h2"},{"title":"Q5: Why is StorageClasses not automatically created after installation using Hwameistor-operator?","type":1,"pageTitle":"FAQs","url":"/docs/faqs#q5-why-is-storageclasses-not-automatically-created-after-installation-using-hwameistor-operator","content":"Probable reasons: The node has no remaining bare disks that can be automatically managed. You can check it by running the following command: kubectl get ld # Check disk kubectl get lsn <node-name> -o yaml # Check whether the disk is managed normally The hwameistor related components are not working properly. You can check it by running the following command: drbd-adapter is only needed when HA is enabled, if not, ignore the related error. kubectl get pod -n hwameistor # Confirm whether the pod is running kubectl get hmcluster -o yaml # View the health field ","version":"Next","tagName":"h2"},{"title":"Q6: How to expand the storage pool manually?","type":1,"pageTitle":"FAQs","url":"/docs/faqs#q6-how-to-expand-the-storage-pool-manually","content":"When is manually expanding storage needed: To use the disk partition (Issue #1387)Same serial number is shared between different disks (Issue #1450,Issue #1449) Run lsblk -o +SERIAL to check serial number. Manual expansion steps: Create and expand storage pool vgcreate LocalStorage_PoolHDD /dev/sdb LocalStorage_PoolHDD is the StoragePool name for HDD type disk. Other optional names are LocalStorage_PoolSSD for SSD type and LocalStorage_PoolNVMe for NVMe type. If you want to expand the storage pool with disk partition, you can use the following command: vgcreate LocalStorage_PoolHDD /dev/sdb1 If storage pool is already exist, you can use the following command: vgextend LocalStorage_PoolHDD /dev/sdb1 Check the status of the node storage pool and confirm that the disk is added to the storage pool like this: $ kubectl get lsn node1 -o yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode ... pools: LocalStorage_PoolHDD: class: HDD disks: - capacityBytes: 17175674880 devPath: /dev/sdb ... ","version":"Next","tagName":"h2"},{"title":"Q7: How to manually recycle a data volume?","type":1,"pageTitle":"FAQs","url":"/docs/faqs#q7-how-to-manually-recycle-a-data-volume","content":"When do you need to manually recycle a data volume?: The reclaim policy of StorageClass is set to Retain. After deleting the PVC, the volume fails to be automatically reclaimed.The data volume is not automatically recycled after the PVC is deleted under abnormal circumstances Manually reclaim data volumes: Check the mapping table between LV (data volume) and PVC, and find the PVC that is no longer in use. 
The corresponding LV should be recycled.: $ kubectl get lv | awk '{print $1}' | grep -v NAME | xargs -I {} kubectl get lv {} -o jsonpath='{.metadata.name} -> {.spec.pvcNamespace}/{.spec.pvcName}{"\\n"}' pvc-be53be2a-1c4b-430e-a45b-05777c791957 -> default/data-nginx-sts-0 Check whether the PVC exists and delete it if it does. Check if a PV with the same name as the LV exists, and if so, delete it. Edit LV, modify spec.delete=true $ kubectl edit lv pvc-be53be2a-1c4b-430e-a45b-05777c791957 ... spec: delete: true ","version":"Next","tagName":"h2"},{"title":"Q8: Why are there residual LocalVolume resources?","type":1,"pageTitle":"FAQs","url":"/docs/faqs#q8-why-are-there-residual-localvolume-resources","content":"If you delete PV first and then PVC, LocalVolume resources will not be reclaimed normally. You need to enable the HonorPVReclaimPolicy feature to reclaim them normally. note Reference Documents: https://kubernetes.io/blog/2021/12/15/kubernetes-1-23-prevent-persistentvolume-leaks-when-deleting-out-of-order/ Steps to enable HonorPVReclaimPolicy: Modify kube-controller-manager: $ vi /etc/kubernetes/manifests/kube-controller-manager.yaml ... spec: containers: - command: - kube-controller-manager - --allocate-node-cidrs=false - --feature-gates=HonorPVReclaimPolicy=true Modify csi-provisioner: $ kubectl edit -n hwameistor deployment.apps/hwameistor-local-storage-csi-controller ... containers: - args: - --v=5 - --csi-address=$(CSI_ADDRESS) - --leader-election=true - --feature-gates=Topology=true - --strict-topology - --extra-create-metadata=true - --feature-gates=HonorPVReclaimPolicy=true env: - name: CSI_ADDRESS value: /csi/csi.sock image: k8s.m.daocloud.io/sig-storage/csi-provisioner:v3.5.0 Check whether the configuration is effective: You can check whether the finalizers of the existing pv contain external-provisioner.volume.kubernetes.io/finalizer $ kubectl get pv pvc-a7b7e3ba-f837-45ba-b243-dec7d8aaed53 -o yaml ... apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com creationTimestamp: "2021-11-17T19:28:56Z" finalizers: - kubernetes.io/pv-protection - external-attacher/lvm-hwameistor-io - external-provisioner.volume.kubernetes.io/finalizer ","version":"Next","tagName":"h2"},{"title":"What is HwameiStor","type":0,"sectionRef":"#","url":"/docs/intro","content":"","keywords":"","version":"Next"},{"title":"Features","type":1,"pageTitle":"What is HwameiStor","url":"/docs/intro#features","content":"Automated Maintenance Disks can be automatically discovered, identified, managed, and allocated. Smart scheduling of applications and data based on affinity. Automatically monitor disk status and give early warning. High Availability Use cross-node replicas to synchronize data for high availability. When a problem occurs, the application will be automatically scheduled to the high-availability data node to ensure the continuity of the application. Full-Range support of Storage Medium Aggregate HDD, SSD, and NVMe disks to provide low-latency, high-throughput data services. Agile Linear Scalability Dynamically expand the cluster according to flexibly meet the data persistence requirements of the application. 
","version":"Next","tagName":"h2"},{"title":"Uninstall (For test purposes only, not for production use)","type":0,"sectionRef":"#","url":"/docs/install/uninstall","content":"","keywords":"","version":"Next"},{"title":"Uninstall but retain data volumes","type":1,"pageTitle":"Uninstall (For test purposes only, not for production use)","url":"/docs/install/uninstall#uninstall-but-retain-data-volumes","content":"If you want to uninstall the HwameiStor components, but still keep the existing data volumes working with the applications, perform the following steps: $ kubectl get cluster.hwameistor.io NAME AGE cluster-sample 21m $ kubectl delete clusters.hwameistor.io hwameistor-cluster Finally, all the HwameiStor's components (i.e. Pods) will be deleted. Check by: kubectl -n hwameistor get pod ","version":"Next","tagName":"h2"},{"title":"Uninstall and delete data volumes","type":1,"pageTitle":"Uninstall (For test purposes only, not for production use)","url":"/docs/install/uninstall#uninstall-and-delete-data-volumes","content":"danger Before you start to perform actions, make sure you reallly want to delete all your data. If you confirm to delete your data volumes and uninstall HwameiStor, perform the following steps: Clean up stateful applications. Delete stateful applications. Delete PVCs. The relevant PVs, LVs, LVRs, LVGs will also been deleted. Clean up HwameiStor components. Delete HwameiStor components. kubectl delete clusters.hwameistor.io hwameistor-cluster Delete hwameistor namespace. kubectl delete ns hwameistor Delete CRD, Hook, and RBAC. kubectl get crd,mutatingwebhookconfiguration,clusterrolebinding,clusterrole -o name \\ | grep hwameistor \\ | xargs -t kubectl delete Delete StorageClass. kubectl get sc -o name \\ | grep hwameistor-storage-lvm- \\ | xargs -t kubectl delete Delete hwameistor-operator. helm uninstall hwameistor-operator -n hwameistor Finally, you still need to clean up the LVM configuration on each node, and also data on the disks by tools like wipefs. wipefs -a /dev/sdx blkid /dev/sdx ","version":"Next","tagName":"h2"},{"title":"Prerequisites","type":0,"sectionRef":"#","url":"/docs/install/prereq","content":"","keywords":"","version":"Next"},{"title":"Kubernetes","type":1,"pageTitle":"Prerequisites","url":"/docs/install/prereq#kubernetes","content":"Kubernetes 1.18+CoreDNS is deployed ","version":"Next","tagName":"h2"},{"title":"Unsupported platforms","type":1,"pageTitle":"Prerequisites","url":"/docs/install/prereq#unsupported-platforms","content":"OpenShiftRancher note The aforementioned platforms are currently unsupported; however, they will be supported in the future. ","version":"Next","tagName":"h3"},{"title":"Hosts","type":1,"pageTitle":"Prerequisites","url":"/docs/install/prereq#hosts","content":"","version":"Next","tagName":"h2"},{"title":"Supported linux distributions","type":1,"pageTitle":"Prerequisites","url":"/docs/install/prereq#supported-linux-distributions","content":"CentOS/RHEL 7.4+Rocky Linux 8.4+Ubuntu 18+Kylin V10 ","version":"Next","tagName":"h3"},{"title":"Supported processors","type":1,"pageTitle":"Prerequisites","url":"/docs/install/prereq#supported-processors","content":"x86_64ARM64 ","version":"Next","tagName":"h3"},{"title":"Required dependencies","type":1,"pageTitle":"Prerequisites","url":"/docs/install/prereq#required-dependencies","content":"LVM2 is installed. For HA features, it is necessary to install kernel-devel with a compatible version to the current kernel. 
The HA features module cannot be automatically installed on nodes with certain kernel versions and needs to be installed manually. Click to view all compatible kernel versions 5.4.247-1.el7.elrepo 5.14.0-284.11.1.el9_2 5.15.0-204.147.6.2.el8uek --------------------------------------------------- 5.8.0-1043-azure 5.8.0-1042-azure 5.8.0-1041-azure 5.4.17-2102.205.7.2.el7uek 5.4.17-2011.0.7.el8uek 5.4.0-91 5.4.0-90 5.4.0-89 5.4.0-88 5.4.0-86 5.4.0-84 5.4.0-1064-azure 5.4.0-1063-azure 5.4.0-1062-azure 5.4.0-1061-azure 5.4.0-1060-aws 5.4.0-1059-azure 5.4.0-1059-aws 5.4.0-1058-azure 5.4.0-1058-aws 5.4.0-1057-aws 5.4.0-1056-aws 5.4.0-1055-aws 5.3.18-57.3 5.3.18-22.2 5.14.0-1.7.1.el9 5.11.0-1022-azure 5.11.0-1022-aws 5.11.0-1021-azure 5.11.0-1021-aws 5.11.0-1020-azure 5.11.0-1020-aws 5.11.0-1019-aws 5.11.0-1017-aws 5.11.0-1016-aws 5.10.0-8 5.10.0-7 5.10.0-6 4.9.215-36.el7 4.9.212-36.el7 4.9.206-36.el7 4.9.199-35.el7 4.9.188-35.el7 4.4.92-6.30.1 4.4.74-92.38.1 4.4.52-2.1 4.4.27-572.565306 4.4.0-217 4.4.0-216 4.4.0-214 4.4.0-213 4.4.0-210 4.4.0-1133-aws 4.4.0-1132-aws 4.4.0-1131-aws 4.4.0-1128-aws 4.4.0-1121-aws 4.4.0-1118-aws 4.19.19-5.0.8 4.19.0-8 4.19.0-6 4.19.0-5 4.19.0-16 4.18.0-80.1.2.el8_0 4.18.0-348.el8 4.18.0-305.el8 4.18.0-240.1.1.el8_3 4.18.0-193.el8 4.18.0-147.el8 4.15.0-163 4.15.0-162 4.15.0-161 4.15.0-159 4.15.0-158 4.15.0-156 4.15.0-112-lowlatency 4.15.0-1113-azure 4.15.0-1040-azure 4.15.0-1036-azure 4.14.35-2047.502.5.el7uek 4.14.35-1902.4.8.el7uek 4.14.35-1818.3.3.el7uek 4.14.248-189.473.amzn2 4.14.128-112.105.amzn2 4.13.0-1018-azure 4.12.14-95.3.1 4.12.14-25.25.1 4.12.14-197.29 4.12.14-120.1 4.1.12-124.49.3.1.el7uek 4.1.12-124.26.3.el6uek 4.1.12-124.21.1.el6uek 3.10.0-957.el7 3.10.0-862.el7 3.10.0-693.el7 3.10.0-693.21.1.el7 3.10.0-693.17.1.el7 3.10.0-514.6.2.el7 3.10.0-514.36.5.el7 3.10.0-327.el7 3.10.0-229.1.2.el7 3.10.0-123.20.1.el7 3.10.0-1160.el7 3.10.0-1127.el7 3.10.0-1062.el7 3.10.0-1049.el7 3.0.101-108.13.1 2.6.32-754.el6 2.6.32-696.el6 2.6.32-696.30.1.el6 2.6.32-696.23.1.el6 2.6.32-642.1.1.el6 2.6.32-573.1.1.el6 2.6.32-504.el6 For VolumeResize features, a tool to resize the filesystem is required. By default, xfs is used as the volume filesystem. Therefore, you need to install xfs_growfs on the host. CentOS/RHEL, Rocky, and KylinUbuntu yum install -y lvm2 yum install -y kernel-devel-$(uname -r) yum install -y xfsprogs ","version":"Next","tagName":"h3"},{"title":"Secure Boot","type":1,"pageTitle":"Prerequisites","url":"/docs/install/prereq#secure-boot","content":"The HA feature does not support Secure Boot currently. Make sure Secure Boot is disabled: $ mokutil --sb-state SecureBoot disabled $ dmesg | grep secureboot [ 0.000000] secureboot: Secure boot disabled ","version":"Next","tagName":"h3"},{"title":"Data disks","type":1,"pageTitle":"Prerequisites","url":"/docs/install/prereq#data-disks","content":"HwameiStor supports HDD, SSD, and NVMe. For test, each host must have at least one unused drive with a minimal size of 10GiB. For production, it is recommended to have at least one unused drive, protected by RAID1 or RAID5/6, with a minimal size of 200GiB. note For virtual machine environments, please ensure that the disk serial number feature is enabled for each virtual machine. This will assist HwameiStor in better identifying and managing the disks on the host. To avoid disk identification conflicts, please ensure that the provided virtual disk serial numbers are not duplicated. 
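To verify this before installing, you can list each node's disks together with their serial numbers, for example (a minimal check, not part of the original page; device names will differ per host):

lsblk -d -o NAME,SIZE,TYPE,SERIAL   # every data disk should report a unique, non-empty serial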
","version":"Next","tagName":"h3"},{"title":"Network","type":1,"pageTitle":"Prerequisites","url":"/docs/install/prereq#network","content":"For production, it is recommended to have a redundant 10Giga TCP/IP network, if the HA feature is enabled. ","version":"Next","tagName":"h3"},{"title":"DataLoad Manager","type":0,"sectionRef":"#","url":"/docs/modules/dlm","content":"","keywords":"","version":"Next"},{"title":"Applicable scenarios","type":1,"pageTitle":"DataLoad Manager","url":"/docs/modules/dlm#applicable-scenarios","content":"DataloadManager supports multiple data loading protocols: s3, nfs, ftp, http, ssh In AI data training scenarios, data can be loaded into local cache volumes faster. Especially when the data set supports s3 protocol pull, p2p technology can be combined to significantly improve data loading. ","version":"Next","tagName":"h2"},{"title":"Usage with DataLoad Manager","type":1,"pageTitle":"DataLoad Manager","url":"/docs/modules/dlm#usage-with-dataload-manager","content":"DataLoad Manager is a component of HwameiStor and must work with the DataLoad Manager module. ","version":"Next","tagName":"h2"},{"title":"admission-controller","type":0,"sectionRef":"#","url":"/docs/modules/admission_controller","content":"","keywords":"","version":"Next"},{"title":"How to identify a HwameiStor volume?","type":1,"pageTitle":"admission-controller","url":"/docs/modules/admission_controller#how-to-identify-a-hwameistor-volume","content":"admission-controller gets all the PVCs used by a pod, and checks the provisioner of each PVC in turn. If the suffix of the provisioner name is *.hwameistor.io, it is believed that the pod is using the volume provided by HwameiStor. ","version":"Next","tagName":"h2"},{"title":"Which resources will be verified?","type":1,"pageTitle":"admission-controller","url":"/docs/modules/admission_controller#which-resources-will-be-verified","content":"Only Pod resources will be verified, and the verification process occurs at the time of creation. info In order to ensure that the pods of HwameiStor can be started smoothly, the pods in the namespace where HwameiStor is deployed will not be verified. ","version":"Next","tagName":"h2"},{"title":"DataSet Manager","type":0,"sectionRef":"#","url":"/docs/modules/dsm","content":"DataSet Manager DataSet Manager is one of the modules of DataStor which is a cloud-native local storage system acceleration solution in AI scenarios. It provides high-performance local cache volumes for data sets required by AI applications Supported volume managers: LVM. Supported storage medium: HDD, SSD, NVMe.","keywords":"","version":"Next"},{"title":"Evictor","type":0,"sectionRef":"#","url":"/docs/modules/evictor","content":"Evictor The Evictor is used to automatically migrate HwameiStor volumes in case of node or pod eviction. When a node or pod is evicted as either Planned or Unplanned, the associated HwameiStor volumes, which have a replica on the node, will be detected and migrated out of this node automatically. A scheduler will work for both LVM and Disk volumes. 
The Evictor should be deployed with the HA mode in the cluster, which is a best practice for production.","keywords":"","version":"Next"},{"title":"Exporter","type":0,"sectionRef":"#","url":"/docs/modules/exporter","content":"Exporter Exporter is HwameiStor's metrics server, which collects the metrics for system resources, such as Disks, Volumes, Operations, etc., and supplies them to the Prometheus module.","keywords":"","version":"Next"},{"title":"GUI","type":0,"sectionRef":"#","url":"/docs/modules/gui","content":"GUI HwameiStor has a module for a Graphical User Interface. It provides users with an easy way to manage the HwameiStor system. The GUI can be deployed by the Operator.","keywords":"","version":"Next"},{"title":"LDA Controller","type":0,"sectionRef":"#","url":"/docs/modules/lda_controller","content":"LDA Controller The LDA controller provides a separate CRD - LocalDiskAction, which is used to match the localdisk and execute the specified action. Its yaml definition is as follows: apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskAction metadata: name: forbidden-1 spec: action: reserve rule: minCapacity: 1024 devicePath: /dev/rbd* --- apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskAction metadata: name: forbidden-2 spec: action: reserve rule: maxCapacity: 1048576 devicePath: /dev/sd* The above yaml indicates: Localdisks smaller than 1024 bytes and whose devicePath meets the /dev/rbd* matching condition will be reserved. Localdisks larger than 1048576 bytes and whose devicePath meets the /dev/sd* matching condition will be reserved. Note that currently the only supported action is reserve.","keywords":"","version":"Next"},{"title":"Local Storage","type":0,"sectionRef":"#","url":"/docs/modules/ls","content":"","keywords":"","version":"Next"},{"title":"Applicable scenarios","type":1,"pageTitle":"Local Storage","url":"/docs/modules/ls#applicable-scenarios","content":"HwameiStor provisions two kinds of local volumes: LVM and Disk. As a component of HwameiStor, Local Storage provisions two types of local LVM volumes: HA and non-HA. For the non-HA local LVM volume, it's the best solution for data persistency in the following use cases: Databases with HA capability, such as MongoDB, etc. Messaging systems with HA capability, such as Kafka, RabbitMQ, etc. Key-value stores with HA capability, such as Redis, etc. Others with HA capability For the HA local LVM volume, it's the best solution for data persistency in the following use cases: Databases, such as MySQL, PostgreSQL, etc. Other applications which require the data with HA features. ","version":"Next","tagName":"h2"},{"title":"Usage with Local Disk Manager","type":1,"pageTitle":"Local Storage","url":"/docs/modules/ls#usage-with-local-disk-manager","content":"Local Storage is a component of HwameiStor and must work with the Local Disk Manager module. ","version":"Next","tagName":"h2"},{"title":"Roadmap","type":1,"pageTitle":"Local Storage","url":"/docs/modules/ls#roadmap","content":"Roadmap provides a release plan about local storage and its features. ","version":"Next","tagName":"h2"},{"title":"Local Disk Manager","type":0,"sectionRef":"#","url":"/docs/modules/ldm","content":"","keywords":"","version":"Next"},{"title":"Concepts","type":1,"pageTitle":"Local Disk Manager","url":"/docs/modules/ldm#concepts","content":"LocalDisk (LD): LDM abstracts disk resources into objects in kubernetes. An LD resource object represents the disk resources on the host. LocalDiskClaim (LDC): This is a way to use disks. 
A user can add a disk description to select a disk for use. At present, LDC supports the following options to describe a disk: NodeName, Capacity, DiskType (such as HDD/SSD/NVMe) ","version":"Next","tagName":"h2"},{"title":"Usage","type":1,"pageTitle":"Local Disk Manager","url":"/docs/modules/ldm#usage","content":"Get the LocalDisk information. kubectl get localdisk NAME NODEMATCH PHASE 10-6-118-11-sda 10-6-118-11 Available 10-6-118-11-sdb 10-6-118-11 Available Get locally discovered disk resource information with three columns displayed. NAME: represents how this disk is displayed in the cluster resources. NODEMATCH: indicates which host this disk is on. PHASE: represents the current state of the disk. Use kubectl get localdisk <name> -o yaml to view more information about disks. Claim available disks. Apply a LocalDiskClaim. cat << EOF | kubectl apply -f - apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskClaim metadata: name: <localDiskClaimName> spec: description: # e.g. HDD,SSD,NVMe diskType: <diskType> # the node where disks attached nodeName: <nodeName> # the owner of the allocated disks e.g. local-storage,local-disk-manager owner: <ownerName> EOF Allocate available disks by issuing a disk usage request. In the request description, you can add more requirements about the disk, such as disk type and capacity. Get the LocalDiskClaim information. kubectl get localdiskclaim <name> Once the LDC is processed successfully, it will be cleaned up by the system automatically. The result will be recorded in the LocalStorageNode if the owner is local-storage. ","version":"Next","tagName":"h2"},{"title":"Scheduler","type":0,"sectionRef":"#","url":"/docs/modules/scheduler","content":"Scheduler The Scheduler is used to automatically schedule the Pod to the correct node which is associated with the HwameiStor volume. With the scheduler, the Pod does not need the NodeAffinity or NodeSelector field to select the node. A scheduler will work for both LVM and Disk volumes. The Scheduler should be deployed with the HA mode in the cluster, which is a best practice for production.","keywords":"","version":"Next"},{"title":"Disk Storage Node","type":0,"sectionRef":"#","url":"/docs/nodes_and_disks/disk_nodes","content":"","keywords":"","version":"Next"},{"title":"Steps","type":1,"pageTitle":"Disk Storage Node","url":"/docs/nodes_and_disks/disk_nodes#steps","content":"","version":"Next","tagName":"h2"},{"title":"1. Prepare a disk storage node","type":1,"pageTitle":"Disk Storage Node","url":"/docs/nodes_and_disks/disk_nodes#1-prepare-a-disk-storage-node","content":"Add the node to the Kubernetes cluster or select a Kubernetes node. For example, suppose you have a new node with the following information: name: k8s-worker-2 devPath: /dev/sdb diskType: SSD disk After the new node is added into the Kubernetes cluster, make sure the following HwameiStor pods are already running on this node. $ kubectl get node NAME STATUS ROLES AGE VERSION k8s-master-1 Ready master 96d v1.24.3-2+63243a96d1c393 k8s-worker-1 Ready worker 96h v1.24.3-2+63243a96d1c393 k8s-worker-2 Ready worker 96h v1.24.3-2+63243a96d1c393 $ kubectl -n hwameistor get pod -o wide | grep k8s-worker-2 hwameistor-local-disk-manager-sfsf1 2/2 Running 0 19h 10.6.128.150 k8s-worker-2 <none> <none> # check LocalDiskNode resource $ kubectl get localdisknode k8s-worker-2 NAME FREECAPACITY TOTALCAPACITY TOTALDISK STATUS AGE k8s-worker-2 Ready 21d ","version":"Next","tagName":"h3"},{"title":"2. 
Add the storage node into HwameiStor","type":1,"pageTitle":"Disk Storage Node","url":"/docs/nodes_and_disks/disk_nodes#2-add-the-storage-node-into-hwameistor","content":"First, change the owner information of the disk sdb to local-disk-manager as below: $ kubectl edit ld localdisk-2307de2b1c5b5d051058bc1d54b41d5c apiVersion: hwameistor.io/v1alpha1 kind: LocalDisk metadata: name: localdisk-2307de2b1c5b5d051058bc1d54b41d5c spec: devicePath: /dev/sdb nodeName: k8s-worker-2 + owner: local-disk-manager ... Create the storage pool of the node by adding a LocalStorageClaim CR as below: $ kubectl apply -f - <<EOF apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskClaim metadata: name: k8s-worker-2 spec: nodeName: k8s-worker-2 owner: local-disk-manager description: diskType: SSD EOF ","version":"Next","tagName":"h3"},{"title":"3. Post check","type":1,"pageTitle":"Disk Storage Node","url":"/docs/nodes_and_disks/disk_nodes#3-post-check","content":"Finally, check if the node has created the storage pool by checking the LocalDiskNode CR. kubectl get localstoragenode k8s-worker-2 -o yaml The output may look like: apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskNode metadata: name: k8s-worker-2 spec: nodeName: k8s-worker-2 status: pools: LocalDisk_PoolSSD: class: SSD disks: - capacityBytes: 214744170496 devPath: /dev/sdb state: Available type: SSD freeCapacityBytes: 214744170496 freeVolumeCount: 1 totalCapacityBytes: 214744170496 totalVolumeCount: 1 type: REGULAR usedCapacityBytes: 0 usedVolumeCount: 0 volumeCapacityBytesLimit: 214744170496 volumes: state: Ready ","version":"Next","tagName":"h3"},{"title":"System Audit","type":0,"sectionRef":"#","url":"/docs/system_audit","content":"","keywords":"","version":"Next"},{"title":"How to use","type":1,"pageTitle":"System Audit","url":"/docs/system_audit#how-to-use","content":"HwameiStor designs a new CRD for every resource as below: apiVersion: hwameistor.io/v1alpha1 kind: Event name: spec: resourceType: <Cluster | Node | StoragePool | Volume> resourceName: records: - action: actionContent: # in JSON format time: state: stateContent: # in JSON format For instance, let's look at audit information of a volume: apiVersion: hwameistor.io/v1alpha1 kind: Event metadata: creationTimestamp: "2023-08-08T15:52:55Z" generation: 5 name: volume-pvc-34e3b086-2d95-4980-beb6-e175fd79a847 resourceVersion: "10221888" uid: d3ebaffb-eddb-4c84-93be-efff350688af spec: resourceType: Volume resourceName: pvc-34e3b086-2d95-4980-beb6-e175fd79a847 records: - action: Create actionContent: '{"requiredCapacityBytes":5368709120,"volumeQoS":{},"poolName":"LocalStorage_PoolHDD","replicaNumber":2,"convertible":true,"accessibility":{"nodes":["k8s-node1","k8s-master"],"zones":["default"],"regions":["default"]},"pvcNamespace":"default","pvcName":"mysql-data-volume","volumegroup":"db890e34-a092-49ac-872b-f2a422439c81"}' time: "2023-08-08T15:52:55Z" - action: Mount actionContent: '{"allocatedCapacityBytes":5368709120,"replicas":["pvc-34e3b086-2d95-4980-beb6-e175fd79a847-krp927","pvc-34e3b086-2d95-4980-beb6-e175fd79a847-wm7p56"],"state":"Ready","publishedNode":"k8s-node1","fsType":"xfs","rawblock":false}' time: "2023-08-08T15:53:07Z" - action: Unmount actionContent: '{"allocatedCapacityBytes":5368709120,"usedCapacityBytes":33783808,"totalInode":2621120,"usedInode":3,"replicas":["pvc-34e3b086-2d95-4980-beb6-e175fd79a847-krp927","pvc-34e3b086-2d95-4980-beb6-e175fd79a847-wm7p56"],"state":"Ready","publishedNode":"k8s-node1","fsType":"xfs","rawblock":false}' time: "2023-08-08T16:03:03Z" - action: 
Delete actionContent: '{"requiredCapacityBytes":5368709120,"volumeQoS":{},"poolName":"LocalStorage_PoolHDD","replicaNumber":2,"convertible":true,"accessibility":{"nodes":["k8s-node1","k8s-master"],"zones":["default"],"regions":["default"]},"pvcNamespace":"default","pvcName":"mysql-data-volume","volumegroup":"db890e34-a092-49ac-872b-f2a422439c81","config":{"version":1,"volumeName":"pvc-34e3b086-2d95-4980-beb6-e175fd79a847","requiredCapacityBytes":5368709120,"convertible":true,"resourceID":2,"readyToInitialize":true,"initialized":true,"replicas":[{"id":1,"hostname":"k8s-node1","ip":"10.6.113.101","primary":true},{"id":2,"hostname":"k8s-master","ip":"10.6.113.100","primary":false}]},"delete":true}' time: "2023-08-08T16:03:38Z" ","version":"Next","tagName":"h2"},{"title":"Disk Expansion","type":0,"sectionRef":"#","url":"/docs/nodes_and_disks/disk_expansion","content":"","keywords":"","version":"Next"},{"title":"Prepare a new storage disk","type":1,"pageTitle":"Disk Expansion","url":"/docs/nodes_and_disks/disk_expansion#prepare-a-new-storage-disk","content":"Select a storage node from the HwameiStor system, and add a new disk into it. For example, the storage node and new disk information are as follows: name: k8s-worker-4devPath: /dev/sdcdiskType: SSD After the new disk is added into the storage node k8s-worker-4, you can check the disk status as below. Check if the new disk is added into the node successfully ssh root@k8s-worker-4 lsblk | grep sdc The output may look like: sdc 8:32 0 20G 1 disk Check if the LocalDisk CR already exists for the new disk and the status is "Unclaimed" kubectl get localdisk | grep k8s-worker-4 | grep sdc The output may look like: k8s-worker-4-sdc k8s-worker-4 Available ","version":"Next","tagName":"h2"},{"title":"Add the new disk into the node's storage pool","type":1,"pageTitle":"Disk Expansion","url":"/docs/nodes_and_disks/disk_expansion#add-the-new-disk-into-the-nodes-storage-pool","content":"The new disk should be added into the existing SSD storage pool of the node. If the storage pool doesn't exist, it will be constructed automatically and the new disk should be added into it. 
$ kubectl apply -f - <<EOF apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskClaim metadata: name: k8s-worker-4-expand spec: nodeName: k8s-worker-4 owner: local-storage description: diskType: SSD EOF ","version":"Next","tagName":"h2"},{"title":"Post check","type":1,"pageTitle":"Disk Expansion","url":"/docs/nodes_and_disks/disk_expansion#post-check","content":"Finally, check if the new disk has been added into the node's storage pool successfully by checking the LocalStorageNode CR: kubectl get localstoragenode k8s-worker-4 The output may look like: apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode metadata: name: k8s-worker-4 spec: hostname: k8s-worker-4 storageIP: 10.6.182.103 topogoly: region: default zone: default status: pools: LocalStorage_PoolSSD: class: SSD disks: - capacityBytes: 214744170496 devPath: /dev/sdb state: InUse type: SSD - capacityBytes: 214744170496 devPath: /dev/sdc state: InUse type: SSD freeCapacityBytes: 429488340992 freeVolumeCount: 1000 name: LocalStorage_PoolSSD totalCapacityBytes: 429488340992 totalVolumeCount: 1000 type: REGULAR usedCapacityBytes: 0 usedVolumeCount: 0 volumeCapacityBytesLimit: 429488340992 volumes: state: Ready ","version":"Next","tagName":"h2"},{"title":"CAS","type":0,"sectionRef":"#","url":"/docs/terms/cas","content":"","keywords":"","version":"Next"},{"title":"Advantages of CAS","type":1,"pageTitle":"CAS","url":"/docs/terms/cas#advantages-of-cas","content":"","version":"Next","tagName":"h2"},{"title":"Agility","type":1,"pageTitle":"CAS","url":"/docs/terms/cas#agility","content":"Each storage volume in CAS has a containerized storage controller and corresponding containerized replicas. Hence, maintenance and tuning of the resources around these components are truly agile. The capability of Kubernetes for rolling upgrades enables seamless upgrades of storage controllers and storage replicas. Resources such as CPU and memory can be tuned using container cgroups. ","version":"Next","tagName":"h3"},{"title":"Granularity of Storage Policies","type":1,"pageTitle":"CAS","url":"/docs/terms/cas#granularity-of-storage-policies","content":"Containerizing the storage software and dedicating the storage controller to each volume brings maximum granularity in storage policies. With CAS architecture, you can configure all storage policies on a per-volume basis. In addition, you can monitor storage parameters of every volume and dynamically update storage policies to achieve the desired result for each workload. The control of storage throughput, IOPS, and latency increases with this additional level of granularity in the volume storage policies. ","version":"Next","tagName":"h3"},{"title":"Avoid Lock-in","type":1,"pageTitle":"CAS","url":"/docs/terms/cas#avoid-lock-in","content":"Avoiding cloud vendor lock-in is a common goal for many Kubernetes users. However, the data of stateful applications often remains dependent on the cloud provider and technology or on an underlying traditional shared storage system, NAS or SAN. With the CAS approach, storage controllers can migrate the data in the background per workload and live migration becomes simpler. In other words, the granularity of control of CAS simplifies the movement of stateful workloads from one Kubernetes cluster to another in a non-disruptive way. 
","version":"Next","tagName":"h3"},{"title":"Cloud Native","type":1,"pageTitle":"CAS","url":"/docs/terms/cas#cloud-native","content":"CAS containerizes the storage software and uses Kubernetes Custom Resource Definitions (CRDs) to represent low-level storage resources, such as disks and storage pools. This model enables storage to be integrated into other cloud-native tools seamlessly. The storage resources can be provisioned, monitored, and managed using cloud-native tools such as Prometheus, Grafana, Fluentd, Weavescope, Jaeger, and others. Similar to hyperconverged systems, storage and performance of a volume in CAS are scalable. As each volume has it's own storage controller, the storage can scale up within the permissible limits of a storage capacity of a node. As the number of container applications increases in a given Kubernetes cluster, more nodes are added, which increases the overall availability of storage capacity and performance, thereby making the storage available to the new application containers. ","version":"Next","tagName":"h3"},{"title":"Lower Blast Radius","type":1,"pageTitle":"CAS","url":"/docs/terms/cas#lower-blast-radius","content":"Because the CAS architecture is per workload and components are loosely coupled, CAS has a much smaller blast radius than a typical distributed storage architecture. CAS can deliver high availability through synchronous replication from storage controllers to storage replicas. The metadata required to maintain the replicas is simplified to store the information of the nodes that have replicas and information about the status of replicas to help with quorum. If a node fails, the storage controller, which is a stateless container in this case, is spun on a node where second or third replica is running and data continues to be available. Hence, with CAS the blast radius is much lower and also localized to the volumes that have replicas on that node. ","version":"Next","tagName":"h3"},{"title":"LVM Storage Node","type":0,"sectionRef":"#","url":"/docs/nodes_and_disks/lvm_nodes","content":"","keywords":"","version":"Next"},{"title":"Steps","type":1,"pageTitle":"LVM Storage Node","url":"/docs/nodes_and_disks/lvm_nodes#steps","content":"","version":"Next","tagName":"h2"},{"title":"1. Prepare a storage node","type":1,"pageTitle":"LVM Storage Node","url":"/docs/nodes_and_disks/lvm_nodes#1-prepare-a-storage-node","content":"Add the node into the Kubernetes cluster or select a Kubernetes node. The node should have all the required items described in Prerequisites. For example, suppose you have a new node with the following information: name: k8s-worker-4devPath: /dev/sdbdiskType: SSD disk After the new node is already added into the Kubernetes cluster, make sure the following HwameiStor pods are already running on this node. 
$ kubectl get node NAME STATUS ROLES AGE VERSION k8s-master-1 Ready master 96d v1.24.3-2+63243a96d1c393 k8s-worker-1 Ready worker 96h v1.24.3-2+63243a96d1c393 k8s-worker-2 Ready worker 96h v1.24.3-2+63243a96d1c393 k8s-worker-3 Ready worker 96d v1.24.3-2+63243a96d1c393 k8s-worker-4 Ready worker 1h v1.24.3-2+63243a96d1c393 $ kubectl -n hwameistor get pod -o wide | grep k8s-worker-4 hwameistor-local-disk-manager-c86g5 2/2 Running 0 19h 10.6.182.105 k8s-worker-4 <none> <none> hwameistor-local-storage-s4zbw 2/2 Running 0 19h 192.168.140.82 k8s-worker-4 <none> <none> # check if LocalStorageNode exists $ kubectl get localstoragenode k8s-worker-4 NAME IP ZONE REGION STATUS AGE k8s-worker-4 10.6.182.103 default default Ready 8d ","version":"Next","tagName":"h3"},{"title":"2. Add the storage node into HwameiStor","type":1,"pageTitle":"LVM Storage Node","url":"/docs/nodes_and_disks/lvm_nodes#2-add-the-storage-node-into-hwameistor","content":"Construct the storage pool of the node by adding a LocalStorageClaim CR as below: $ kubectl apply -f - <<EOF apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskClaim metadata: name: k8s-worker-4 spec: nodeName: k8s-worker-4 owner: local-storage description: diskType: SSD EOF ","version":"Next","tagName":"h3"},{"title":"3. Post check","type":1,"pageTitle":"LVM Storage Node","url":"/docs/nodes_and_disks/lvm_nodes#3-post-check","content":"Finally, check if the node has constructed the storage pool by checking the LocalStorageNode CR. kubectl get localstoragenode k8s-worker-4 -o yaml The output may look like: apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode metadata: name: k8s-worker-4 spec: hostname: k8s-worker-4 storageIP: 10.6.182.103 topogoly: region: default zone: default status: pools: LocalStorage_PoolSSD: class: SSD disks: - capacityBytes: 214744170496 devPath: /dev/sdb state: InUse type: SSD freeCapacityBytes: 214744170496 freeVolumeCount: 1000 name: LocalStorage_PoolSSD totalCapacityBytes: 214744170496 totalVolumeCount: 1000 type: REGULAR usedCapacityBytes: 0 usedVolumeCount: 0 volumeCapacityBytesLimit: 214744170496 volumes: state: Ready ","version":"Next","tagName":"h3"},{"title":"CRD and CR","type":0,"sectionRef":"#","url":"/docs/terms/crd_and_cr","content":"","keywords":"","version":"Next"},{"title":"CRD","type":1,"pageTitle":"CRD and CR","url":"/docs/terms/crd_and_cr#crd","content":"CRD is the abbreviation of Custom Resource Definition, and is a resource type natively provided by Kubernetes. It is the definition of Custom Resource (CR) to describe what a custom resource is. A CRD can register a new resource with the Kubernetes cluster to extend the capabilities of the Kubernetes cluster. With CRD, you can define the abstraction of the underlying infrastructure, customize resource types based on business needs, and use the existing resources and capabilities of Kubernetes to define higher-level abstractions through a Lego-like building blocks. ","version":"Next","tagName":"h2"},{"title":"CR","type":1,"pageTitle":"CRD and CR","url":"/docs/terms/crd_and_cr#cr","content":"CR is the abbreviation of Custom Resource. In practice, it is an instance of CRD, a resource description that matches with the field format in CRD. ","version":"Next","tagName":"h2"},{"title":"CRDs + Controllers","type":1,"pageTitle":"CRD and CR","url":"/docs/terms/crd_and_cr#crds--controllers","content":"We all know that Kubernetes has powerful scalability, but only CRD is not useful. 
It also needs the support of a controller (Custom Controller) to realize the value of a CRD. A Custom Controller can listen to CRUD events of CRs to implement custom business logic. In Kubernetes, CRDs + Controllers = Everything. See also the official documentation provided by Kubernetes: CustomResource and CustomResourceDefinition ","version":"Next","tagName":"h2"},{"title":"K8s Storage","type":0,"sectionRef":"#","url":"/docs/terms/k8s_storage","content":"","keywords":"","version":"Next"},{"title":"Container Storage Interface","type":1,"pageTitle":"K8s Storage","url":"/docs/terms/k8s_storage#container-storage-interface","content":"The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Using CSI, third-party storage providers like HwameiStor can write and deploy plugins exposing new storage volumes like HwameiStor Local and Replicated Volumes in Kubernetes without ever having to touch the core Kubernetes code. When cluster administrators install HwameiStor, the required HwameiStor CSI driver components are installed into the Kubernetes cluster. Prior to CSI, Kubernetes supported adding storage providers using out-of-tree provisioners referred to as external provisioners. Kubernetes in-tree volumes pre-date the external provisioners. There is an ongoing effort in the Kubernetes community to deprecate in-tree volumes in favor of CSI-based volumes. ","version":"Next","tagName":"h2"},{"title":"Storage Classes and Dynamic Provisioning","type":1,"pageTitle":"K8s Storage","url":"/docs/terms/k8s_storage#storage-classes-and-dynamic-provisioning","content":"A StorageClass provides a way for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. This concept is sometimes called "profiles" in other storage systems. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users. The implementation of dynamic volume provisioning is based on the StorageClass abstraction. A cluster administrator can define as many StorageClass objects as needed, each specifying a volume plugin (aka provisioner) that provisions a volume and the set of parameters to pass to that provisioner when provisioning. A cluster administrator can define and expose multiple flavors of storage (from the same or different storage systems) within a cluster, each with a custom set of parameters. This design also ensures that end users don't have to worry about the complexity and nuances of how storage is provisioned, but still have the ability to select from multiple storage options. When HwameiStor is installed, it ships with a couple of default storage classes that allow users to create either local (HwameiStor LocalVolume) or replicated (HwameiStor LocalVolumeReplica) volumes. The cluster administrator can enable the required storage engines and then create Storage Classes for the required Data Engines. ","version":"Next","tagName":"h2"},{"title":"Persistent Volume Claims","type":1,"pageTitle":"K8s Storage","url":"/docs/terms/k8s_storage#persistent-volume-claims","content":"PersistentVolumeClaim (PVC) is a user’s storage request that is served by a StorageClass offered by the cluster administrator. 
An application running in a container can request a certain type of storage. For example, a container can specify the size of storage it needs or the way it needs to access the data (read only, read/write, read-write many, etc.). Beyond storage size and access mode, administrators create Storage Classes to provide PVs with custom properties, such as the type of disk (HDD vs. SSD), the level of performance, or the storage tier (regular or cold storage). ","version":"Next","tagName":"h2"},{"title":"Persistent Volume","type":1,"pageTitle":"K8s Storage","url":"/docs/terms/k8s_storage#persistent-volume","content":"The PersistentVolume (PV) is dynamically provisioned by the storage providers when users request a PVC. PV contains the details on how the storage can be consumed by the container. Kubernetes and the Volume Drivers use the details in the PV to attach/detach the storage to the node where the container is running and mount/unmount storage to a container. HwameiStor Control Plane dynamically provisions HwameiStor Local and Replicated volumes and helps in creating the PV objects in the cluster. ","version":"Next","tagName":"h2"},{"title":"StatefulSets and Deployments","type":1,"pageTitle":"K8s Storage","url":"/docs/terms/k8s_storage#statefulsets-and-deployments","content":"Kubernetes provides several built-in workload resources such as StatefulSets and Deployments that let application developers define an application running on Kubernetes. You can run a stateful application by creating a Kubernetes Deployment/StatefulSet and connecting it to a PersistentVolume using a PersistentVolumeClaim. For example, you can create a MySQL Deployment YAML that references a PersistentVolumeClaim. The MySQL PersistentVolumeClaim referenced by the Deployment should be created with the requested size and StorageClass. Once the HwameiStor control plane provisions a PersistentVolume for the required StorageClass and requested capacity, the claim is set as satisfied. Kubernetes will then mount the PersistentVolume and launch the MySQL Deployment. ","version":"Next","tagName":"h2"},{"title":"LVM","type":0,"sectionRef":"#","url":"/docs/terms/lvm","content":"","keywords":"","version":"Next"},{"title":"Basic functionality","type":1,"pageTitle":"LVM","url":"/docs/terms/lvm#basic-functionality","content":"Physical media (PM): LVM storage media can be partitions, disks, RAID arrays, or SAN disks. Physical volume (PV): A physical volume is the basic storage logical block of LVM; compared with basic physical storage media (such as partitions, disks, etc.), it contains management parameters related to LVM. A physical volume can be a disk partition or the whole disk itself. Disks must be initialized as LVM physical volumes to be used with LVM. Volume groups (VG): A volume group pools one or more physical volumes into a single storage space. It can be resized online by absorbing new physical volumes (PVs) or ejecting existing ones. Logical volumes (LV): A logical volume is allocated from a volume group and is what gets formatted and mounted. It can be resized online by concatenating extents onto it or truncating extents from it. Physical extents (PE): The smallest storage unit that can be allocated in the physical volume. The size of PE can be specified, and the default is 4MB. Logical extents (LE): The smallest storage unit that can be allocated in a logical volume. In the same volume group, the size of LE is the same as that of PE, and there is a one-to-one correspondence. 
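To make these terms concrete, here is a minimal command sketch (the device names /dev/sdb and /dev/sdc and the names vg-data and lv-data are placeholders only):
$ pvcreate /dev/sdb /dev/sdc # initialize the disks as physical volumes, i.e. split them into PEs
$ vgcreate vg-data /dev/sdb /dev/sdc # pool the physical volumes into one volume group
$ lvcreate -n lv-data -L 10G vg-data # allocate a logical volume from the volume group's PEs
$ mkfs.xfs /dev/vg-data/lv-data # format the logical volume
$ mount /dev/vg-data/lv-data /mnt/data # mount it for use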
","version":"Next","tagName":"h2"},{"title":"Advantages","type":1,"pageTitle":"LVM","url":"/docs/terms/lvm#advantages","content":"Use volume groups to make multiple hard drives look like one big hard driveUsing logical volumes, partitions can span multiple hard disk spaces sdb1 sdb2 sdc1 sdd2 sdfUsing logical volumes, you can dynamically resize it if the storage space is insufficientWhen resizing a logical volume, you need not to consider the location of the logical volume on a hard disk, and you need not to worry about no contiguous space availableLV and VG can be created, deleted, and resized online, and the file system on LVM also needs to be resizedYou can create snapshots, which can be used to back up file systemsRAID + LVM combined: LVM is a software method of volume management, while RAID is a method of disk management. For important data, RAID is used to protect physical disks from failures and services are not interrupted, and LVM is used to achieve a good volume management and better use of disk resources. ","version":"Next","tagName":"h2"},{"title":"Basic procedure to use LVM","type":1,"pageTitle":"LVM","url":"/docs/terms/lvm#basic-procedure-to-use-lvm","content":"Format a physical disk as PVs, that is, the space is divided into PEs. A PV contains multiple PEs.Add different PVs to the same VG, that is, the PEs of different PVs all enter the PE pool of the VG. A VG contains multiple PVs.Create logical volumes in the VG. This creation process is based on PE, so the PEs that make up the LV may come from different physical disks. LV is created based on PE.Directly format the LV and mount it for use.The scaling in / out of an LV is actually to increase or decrease the number of PEs that make up the LV without losing the original data.Format the LV and mount it for use. ","version":"Next","tagName":"h2"},{"title":"LV expansion","type":1,"pageTitle":"LVM","url":"/docs/terms/lvm#lv-expansion","content":"First, determine if there is available space for expansion, because space is created from VG, and LVs cannot be expanded across VGs. If the VG has no capacity, you need to expand the VG first. Perform the following steps: $ vgs VG #PV #LV #SN Attr VSize VFree vg-sdb1 1 8 1 wz--n- <16.00g <5.39g $ lvextend -L +100M -r /dev/vg-sdb1/lv-sdb1 #将 /dev/vg-sdb1/lv-sdb 扩容 100M ","version":"Next","tagName":"h2"},{"title":"VG expansion","type":1,"pageTitle":"LVM","url":"/docs/terms/lvm#vg-expansion","content":"If there is not sufficient space in the VG and you need to add a new disk, run the following commands in sequence: pvcreate /dev/sdc vgextend vg-sdb1 /dev/sdb3 ","version":"Next","tagName":"h2"},{"title":"LV snapshots","type":1,"pageTitle":"LVM","url":"/docs/terms/lvm#lv-snapshots","content":"The LVM mechanism provides the function of snapshotting LVs to obtain a state-consistent backup of the file system. LVM adopts Copy-On-Write (COW) technology, which can be backed up without stopping the service or setting the logical volume as read-only. Using the LVM, snapshot function can enable consistent backup without affecting the availability of the server. The copy-on-write adopted by LVM means that when creating an LVM snapshot, only the metadata in the original volume is copied. In other words, when an LVM logical volume is created, no physical replication of the data occurs. In another words, only the metadata is copied, not the physical data, so the snapshot creation is almost real-time. 
When a write operation is performed on the original volume, the snapshot will track the changes to the blocks in the original volume. At this time, the data that will be changed on the original volume will be copied to the space reserved by the snapshot before the change. ","version":"Next","tagName":"h2"},{"title":"PV and PVC","type":0,"sectionRef":"#","url":"/docs/terms/pv_pvc","content":"PV and PVC The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: PersistentVolume (PV) and PersistentVolumeClaim (PVC). A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes). While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource. It is used to mark storage resources and performance, and dynamically provision appropriate PV resources based on PVC demand. After the mechanism of StorageClass and dynamic provisioning developed for storage resources, the on-demand creation of volumes is realized, which is an important step in the automatic management process of shared storage. See also the official documentation provided by Kubernetes: Persistent VolumeStorageClassDynamic Volume Provisioning","keywords":"","version":"Next"},{"title":"CSI","type":0,"sectionRef":"#","url":"/docs/terms/csi","content":"","keywords":"","version":"Next"},{"title":"Extend CSI and Kubernetes","type":1,"pageTitle":"CSI","url":"/docs/terms/csi#extend-csi-and-kubernetes","content":"In order to enable the feature of expanding volume atop Kubernetes, we should extend several components including CSI specification, “in-tree” volume plugin, external-provisioner and external-attacher. ","version":"Next","tagName":"h2"},{"title":"Extend CSI spec","type":1,"pageTitle":"CSI","url":"/docs/terms/csi#extend-csi-spec","content":"The feature of expanding volume is still undefined in latest CSI 0.2.0. The new 3 RPCs, including RequiresFSResize, ControllerResizeVolume and NodeResizeVolume, should be introduced. 
service Controller { rpc CreateVolume (CreateVolumeRequest) returns (CreateVolumeResponse) {} …… rpc RequiresFSResize (RequiresFSResizeRequest) returns (RequiresFSResizeResponse) {} rpc ControllerResizeVolume (ControllerResizeVolumeRequest) returns (ControllerResizeVolumeResponse) {} } service Node { rpc NodeStageVolume (NodeStageVolumeRequest) returns (NodeStageVolumeResponse) {} …… rpc NodeResizeVolume (NodeResizeVolumeRequest) returns (NodeResizeVolumeResponse) {} } ","version":"Next","tagName":"h2"},{"title":"Extend “In-Tree” Volume Plugin","type":1,"pageTitle":"CSI","url":"/docs/terms/csi#extend-in-tree-volume-plugin","content":"In addition to extending the CSI specification, the csiPlugin interface within Kubernetes should also implement expandablePlugin. The csiPlugin interface expands the PersistentVolumeClaim on behalf of the ExpanderController. type ExpandableVolumePlugin interface { VolumePlugin ExpandVolumeDevice(spec Spec, newSize resource.Quantity, oldSize resource.Quantity) (resource.Quantity, error) RequiresFSResize() bool } ","version":"Next","tagName":"h2"},{"title":"Implement Volume Driver","type":1,"pageTitle":"CSI","url":"/docs/terms/csi#implement-volume-driver","content":"Finally, to abstract the complexity of the implementation, we should hard-code the separate storage provider management logic into the following functions, which are well-defined in the CSI specification: CreateVolume, DeleteVolume, ControllerPublishVolume, ControllerUnpublishVolume, ValidateVolumeCapabilities, ListVolumes, GetCapacity, ControllerGetCapabilities, RequiresFSResize, ControllerResizeVolume ","version":"Next","tagName":"h3"},{"title":"Demonstration","type":1,"pageTitle":"CSI","url":"/docs/terms/csi#demonstration","content":"Let’s demonstrate this feature with a concrete use case. Create a storage class for the CSI storage provisioner allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-qcfs parameters: csiProvisionerSecretName: orain-test csiProvisionerSecretNamespace: default provisioner: csi-qcfsplugin reclaimPolicy: Delete volumeBindingMode: Immediate Deploy the CSI Volume Driver including the storage provisioner csi-qcfsplugin across the Kubernetes cluster Create PVC qcfs-pvc which will be dynamically provisioned by storage class csi-qcfs apiVersion: v1 kind: PersistentVolumeClaim metadata: name: qcfs-pvc namespace: default .... accessModes: - ReadWriteOnce resources: requests: storage: 300Gi storageClassName: csi-qcfs Create a MySQL 5.7 instance to use PVC qcfs-pvc In order to mirror the exact same production-level scenario, there are actually two different types of workloads: batch inserts to make MySQL consume more file system capacity, and surge query requests. Dynamically expand the volume capacity by editing the PVC qcfs-pvc configuration ","version":"Next","tagName":"h2"},{"title":"Volume","type":0,"sectionRef":"#","url":"/docs/terms/volume","content":"Volume On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. One problem is the loss of files when a container crashes. The kubelet restarts the container but with a clean state. A second problem occurs when sharing files between containers running together in a Pod. The Kubernetes volume abstraction solves both of these problems. Kubernetes supports many types of volumes. A Pod can use any number of volume types simultaneously. Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond the lifetime of a pod. 
When a pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes. For any kind of volume in a given pod, data is preserved across container restarts. At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used. To use a volume, specify the volumes to provide for the Pod in .spec.volumes and declare where to mount those volumes into containers in .spec.containers[*].volumeMounts. See also the official documentation provided by Kubernetes: Volume, Persistent Volume, and Ephemeral Volume","keywords":"","version":"Next"},{"title":"Overview","type":0,"sectionRef":"#","url":"/docs/use_cases/overview","content":"","keywords":"","version":"Next"},{"title":"Hot backup","type":1,"pageTitle":"Overview","url":"/docs/use_cases/overview#hot-backup","content":"","version":"Next","tagName":"h2"},{"title":"Intra-node hot backup","type":1,"pageTitle":"Overview","url":"/docs/use_cases/overview#intra-node-hot-backup","content":"It provides protection with RAID 5, which can tolerate the failure of a disk within a set. Control flow is independent of data flow to ensure the stability of data access. ","version":"Next","tagName":"h3"},{"title":"Inter-node hot backup","type":1,"pageTitle":"Overview","url":"/docs/use_cases/overview#inter-node-hot-backup","content":"It provides protection with RAID 5 plus an active/standby mechanism. The dce-storage, a dedicated network logical interface for HA, is designed to synchronize storage traffic between nodes. It can synchronously replicate data across nodes to guarantee a hot backup of data. ","version":"Next","tagName":"h3"},{"title":"Rebalance data","type":1,"pageTitle":"Overview","url":"/docs/use_cases/overview#rebalance-data","content":"Balanced placement of data in the cluster is achieved through data volume migration technology. Move data online to nodes with more storage space. ","version":"Next","tagName":"h2"},{"title":"Change volume type","type":1,"pageTitle":"Overview","url":"/docs/use_cases/overview#change-volume-type","content":"In order to support some special scenarios, a single-replica volume is allowed to be changed to multiple replicas, and inter-node hot backup is supported. ","version":"Next","tagName":"h2"},{"title":"MinIO","type":0,"sectionRef":"#","url":"/docs/use_cases/minio","content":"","keywords":"","version":"Next"},{"title":"Introduction to MinIO","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#introduction-to-minio","content":"MinIO is a high performance object storage solution with native support for Kubernetes deployments. It can provide distributed, S3-compatible, and multi-cloud storage services in public cloud, private cloud, and edge computing scenarios. MinIO is a software-defined product released under GNU Affero General Public License v3.0. It can also run well on x86 and other standard hardware. MinIO is designed to meet private cloud's requirements for high performance, in addition to all required features of object storage. MinIO is easy to use, cost-effective, and high-performance in providing scalable cloud-native object storage services. MinIO works well in traditional object storage scenarios, such as secondary storage, disaster recovery, and archiving. 
It also shows competitive capabilities in machine learning, big data, private cloud, hybrid cloud, and other emerging fields to well support data analysis, high-performance workloads, and cloud-native applications. ","version":"Next","tagName":"h2"},{"title":"MinIO architecture","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#minio-architecture","content":"MinIO is designed for the cloud-native architecture, so it can be run as a lightweight container and managed by external orchestration tools like Kubernetes. The MinIO package comprises static binary files of less than 100 MB. This small package enables it to efficiently use CPU and memory resources even with high workloads and can host a large number of tenants on shared hardware. MinIO's architecture is as follows: MinIO can run on standard servers with proper local drives installed (JBOD/JBOF). A MinIO cluster has a totally symmetric architecture. In other words, each server provides the same functions, without any name node or metadata server. MinIO can write both data and metadata as objects, so there is no need to use metadata servers. MinIO provides erasure coding, bitrot protection, encryption, and other features in a strict and consistent way. Each MinIO cluster is a set of distributed MinIO servers, with one MinIO process running on each node. MinIO runs in userspace as a single process, and it uses lightweight co-routines for high concurrency. It divides drives into erasure sets (generally 16 drives in each set), and uses a deterministic hash algorithm to place objects into these erasure sets. MinIO is specifically designed for large-scale and multi-datacenter cloud storage services. Tenants can run their own MinIO clusters separately from others, avoiding interruptions from upgrades or security problems. Tenants can scale up by connecting multiple clusters across geographical regions. ","version":"Next","tagName":"h3"},{"title":"Build test environment","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#build-test-environment","content":"","version":"Next","tagName":"h2"},{"title":"Deploy Kubernetes cluster","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#deploy-kubernetes-cluster","content":"A Kubernetes cluster was deployed with three virtual machines: one as the master node and two as worker nodes. The kubelet version is 1.22.0. ","version":"Next","tagName":"h3"},{"title":"Deploy HwameiStor local storage","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#deploy-hwameistor-local-storage","content":"Deploy HwameiStor local storage on Kubernetes: Allocate five disks (SDB, SDC, SDD, SDE, and SDF) for each worker node to support HwameiStor local disk management: Check node status of local storage: Create storageClass: ","version":"Next","tagName":"h3"},{"title":"Deploy distributed multi-tenant cluster (minio-operator)","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#deploy-distributed-multi-tenant-cluster-minio-operator","content":"This section will show how to deploy minio-operator, how to create a tenant, and how to configure HwameiStor local volumes. 
","version":"Next","tagName":"h2"},{"title":"Deploy minio-operator","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#deploy-minio-operator","content":"Copy minio-operator repo to your local environment git clone <https://github.com/minio/operator.git> Enter helm operator directory /root/operator/helm/operator Deploy the minio-operator instance helm install minio-operator \\ --namespace minio-operator \\ --create-namespace \\ --generate-name . --set persistence.storageClass=local-storage-hdd-lvm . Check minio-operator running status ","version":"Next","tagName":"h3"},{"title":"Create tenants","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#create-tenants","content":"Enter the /root/operator/examples/kustomization/base directory and change tenant.yaml Enter the /root/operator/helm/tenant/ directory and change values.yaml Enter /root/operator/examples/kustomization/tenant-lite directory and change kustomization.yaml Change tenant.yaml Change tenantNamePatch.yaml Create a tenant kubectl apply –k . Check resource status of the tenant minio-t1 To create another new tenant, you can first create a new directory tenant (in this example tenant-lite-2) under /root/operator/examples/kustomization and change the files listed above Run kubectl apply –k . to create the new tenant minio-t2 ","version":"Next","tagName":"h3"},{"title":"Configure HwameiStor local volumes","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#configure-hwameistor-local-volumes","content":"Run the following commands in sequence to finish this configuration: kubectl get statefulset.apps/minio-t1-pool-0 -nminio-tenant -oyaml kubectl get pvc –A kubectl get pvc export-minio6-0 -nminio-6 -oyaml kubectl get pv kubectl get pvc data0-minio-t1-pool-0-0 -nminio-tenant -oyaml kubectl get lv kubect get lvr ","version":"Next","tagName":"h3"},{"title":"Test HwameiStor's support for MinIo","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#test-hwameistors-support-for-minio","content":"With the above settings in place, now let's test basic features and tenant isolation. ","version":"Next","tagName":"h2"},{"title":"Test basic features","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#test-basic-features","content":"Log in to minio console:10.6.163.52:30401/login Get JWT by kubectl minio proxy -n minio-operator Browse and manage information about newly-created tenants Log in as tenant minio-t1 (Account: minio) Browse bucket bk-1 Create a new bucket bk-1-1 Create path path-1-2 Upload the file Upload the folder Create a user with read-only permission ","version":"Next","tagName":"h3"},{"title":"Test tenant isolation","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#test-tenant-isolation","content":"Log in as tenant minio-t2 Only minio-t2 information is visible. You cannot see information about tenant minio-t1. Create bucket Create path Upload the file Create a user Configure user policies Delete a bucket ","version":"Next","tagName":"h3"},{"title":"Conclusion","type":1,"pageTitle":"MinIO","url":"/docs/use_cases/minio#conclusion","content":"In this test, we successfully deployed MinIO distributed object storage on the basis of Kubernetes 1.22 and the HwameiStor local storage. We performed the basic feature test, system security test, and operation and maintenance management test. All tests are passed, proving HwameiStor can well support for MinIO. 
","version":"Next","tagName":"h2"},{"title":"Expand Volumes","type":0,"sectionRef":"#","url":"/docs/volumes/expand","content":"","keywords":"","version":"Next"},{"title":"Verify StorageClass","type":1,"pageTitle":"Expand Volumes","url":"/docs/volumes/expand#verify-storageclass","content":"Verify if the StorageClass has the parameter allowVolumeExpansion: true. $ kubectl get pvc data-sts-mysql-local-0 -o jsonpath='{.spec.storageClassName}' hwameistor-storage-lvm-hdd $ kubectl get sc hwameistor-storage-lvm-hdd -o jsonpath='{.allowVolumeExpansion}' true ","version":"Next","tagName":"h2"},{"title":"Edit PVC size","type":1,"pageTitle":"Expand Volumes","url":"/docs/volumes/expand#edit-pvc-size","content":"$ kubectl edit pvc data-sts-mysql-local-0 ... spec: resources: requests: storage: 2Gi ... ","version":"Next","tagName":"h2"},{"title":"Observe the process","type":1,"pageTitle":"Expand Volumes","url":"/docs/volumes/expand#observe-the-process","content":"The larger the volume, the longer it takes to expand the volume. You may observe the process from PVC events. $ kubectl describe pvc data-sts-mysql-local-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ExternalExpanding 34s volume_expand Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC. Warning VolumeResizeFailed 33s external-resizer lvm.hwameistor.io resize volume "pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8" by resizer "lvm.hwameistor.io" failed: rpc error: code = Unknown desc = volume expansion not completed yet Normal Resizing 32s (x2 over 33s) external-resizer lvm.hwameistor.io External resizer is resizing volume pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 Normal FileSystemResizeRequired 32s external-resizer lvm.hwameistor.io Require file system resize of volume on node Normal FileSystemResizeSuccessful 11s kubelet MountVolume.NodeExpandVolume succeeded for volume "pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8" k8s-worker-3 ","version":"Next","tagName":"h2"},{"title":"Verify the size of PVC/PV after expansion","type":1,"pageTitle":"Expand Volumes","url":"/docs/volumes/expand#verify-the-size-of-pvcpv-after-expansion","content":"$ kubectl get pvc data-sts-mysql-local-0 NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-sts-mysql-local-0 Bound pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 2Gi RWO hwameistor-storage-lvm-hdd 96m $ kubectl get pv pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 2Gi RWO Delete Bound default/data-sts-mysql-local-0 hwameistor-storage-lvm-hdd 96m ","version":"Next","tagName":"h2"},{"title":"Migrate Volumes","type":0,"sectionRef":"#","url":"/docs/volumes/migrate","content":"","keywords":"","version":"Next"},{"title":"Basic concepts","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#basic-concepts","content":"LocalVolumeGroup(LVG) management is an important function in HwameiStor. When an application Pod applies for multiple data volume PVCs, in order to ensure the correct operation of the Pod, these data volumes must have certain attributes, such as the number of copies of the data volume and the node where the copies are located. Properly managing these associated data volumes through the data volume group management function is a very important capability in HwameiStor. 
","version":"Next","tagName":"h2"},{"title":"Prerequisites","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#prerequisites","content":"LocalVolumeMigrate needs to be deployed in the Kubernetes system, and the deployed application needs to meet the following conditions: Support lvm type volumes When migrating based on LocalVolume granularity, the data volumes belonging to the same LocalVolumeGroup by default will not be migrated together (if they are migrated together, you need to configure the switch MigrateAllVols: true) ","version":"Next","tagName":"h2"},{"title":"Step 1: Create convertible StorageClass","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#step-1-create-convertible-storageclass","content":"$ cd ../../deploy/ $ kubectl apply -f storageclass-convertible-lvm.yaml ","version":"Next","tagName":"h2"},{"title":"Step 2: Create multiple PVCs","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#step-2-create-multiple-pvcs","content":"$ kubectl apply -f pvc-multiple-lvm.yaml ","version":"Next","tagName":"h2"},{"title":"Step 3: Deploy multi-volume pod","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#step-3-deploy-multi-volume-pod","content":"$ kubectl apply -f nginx-multiple-lvm.yaml ","version":"Next","tagName":"h2"},{"title":"Step 4: Detach multi-volume pod","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#step-4-detach-multi-volume-pod","content":"$ kubectl -n hwameistor scale --current-replicas=1 --replicas=0 deployment/nginx-local-storage-lvm ","version":"Next","tagName":"h2"},{"title":"Step 5: Create migration tasks","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#step-5-create-migration-tasks","content":"$ cat << EOF | kubectl apply -f - apiVersion: hwameistor.io/v1alpha1 kind: LocalVolumeMigrate metadata: namespace: hwameistor name: <localVolumeMigrateName> annotations: hwameistor.io/replica-affinity: <need/forbid> spec: sourceNode: <sourceNodeName> targetNodesSuggested: - <targetNodesName1> - <targetNodesName2> volumeName: <volName> migrateAllVols: <true/false> EOF Attentions: 1) HwameiStor will select a target node from targetNodesSuggested to migrate. If all the candidates don't have enough storage space, the migrate will fail. 2) If targetNodesSuggested is emtpy or not set, HwameiStore will automatically select a propriate node for the migrate. If there is no valid candidate, the migrate will fail. 3) If you do not plan to consider the affinity of the pod during the migration process, just back up the data once. 
hwameistor.io/replica-affinity: forbid $ cat << EOF | kubectl apply -f - apiVersion: hwameistor.io/v1alpha1 kind: LocalVolumeMigrate metadata: namespace: hwameistor name: <localVolumeMigrateName> spec: sourceNode: <sourceNodeName> targetNodesSuggested: [] volumeName: <volName> migrateAllVols: <true/false> EOF ","version":"Next","tagName":"h2"},{"title":"Step 6: Check migration Status","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#step-6-check-migration-status","content":"$ kubectl get LocalVolumeMigrate localvolumemigrate-1 -o yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalVolumeMigrate metadata: generation: 1 name: localvolumemigrate-1 namespace: hwameistor resourceVersion: "12828637" uid: 78af7f1b-d701-4b03-84de-27fafca58764 spec: abort: false migrateAllVols: true sourceNode: k8s-172-30-40-61 targetNodesSuggested: - k8s-172-30-45-223 volumeName: pvc-1a0913ac-32b9-46fe-8258-39b4e3b696a4 status: originalReplicaNumber: 1 targetNode: k8s-172-30-45-223 state: Completed message: ","version":"Next","tagName":"h2"},{"title":"Step 7: Verify migration results","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#step-7-verify-migration-results","content":"$ kubectl get lvr NAME CAPACITY NODE STATE SYNCED DEVICE AGE pvc-1a0913ac-32b9-46fe-8258-39b4e3b696a4-9cdkkn 1073741824 k8s-172-30-45-223 Ready true /dev/LocalStorage_PoolHDD-HA/pvc-1a0913ac-32b9-46fe-8258-39b4e3b696a4 77s pvc-d9d3ae9f-64af-44de-baad-4c69b9e0744a-7ppmrx 1073741824 k8s-172-30-45-223 Ready true /dev/LocalStorage_PoolHDD-HA/pvc-d9d3ae9f-64af-44de-baad-4c69b9e0744a 77s ","version":"Next","tagName":"h2"},{"title":"Step 8: Reattach/Remount volume","type":1,"pageTitle":"Migrate Volumes","url":"/docs/volumes/migrate#step-8-reattachremount-volume","content":"$ kubectl -n hwameistor scale --current-replicas=0 --replicas=1 deployment/nginx-local-storage-lvm ","version":"Next","tagName":"h2"},{"title":"Local Cache Volumes","type":0,"sectionRef":"#","url":"/docs/volumes/cache","content":"","keywords":"","version":"Next"},{"title":"Install Dragonfly","type":1,"pageTitle":"Local Cache Volumes","url":"/docs/volumes/cache#install-dragonfly","content":"Configure /etc/hosts according to the cluster. $ vi /etc/hosts host1-IP hostName1 host2-IP hostName2 host3-IP hostName3 To install Dragonfly components, ensure a default storage class is configured, as it is required to create storage volumes. kubectl patch storageclass hwameistor-storage-lvm-hdd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' Install dragonfly using helm. helm repo add dragonfly https://dragonflyoss.github.io/helm-charts/ helm install --create-namespace --namespace dragonfly-system dragonfly dragonfly/dragonfly --version 1.1.63 Configure dragonfly-dfdaemon. kubectl -n dragonfly-system get ds kubectl -n dragonfly-system edit ds dragonfly-dfdaemon ... spec: spec: containers: - image: docker.io/dragonflyoss/dfdaemon:v2.1.45 ... securityContext: capabilities: add: - SYS_ADMIN privileged: true volumeMounts: ... - mountPath: /var/run name: host-run - mountPath: /mnt mountPropagation: Bidirectional name: host-mnt ... volumes: ... - hostPath: path: /var/run type: DirectoryOrCreate name: host-run - hostPath: path: /mnt type: DirectoryOrCreate name: host-mnt ... Install the dfget client command line tool. 
Each node executes: wget https://github.com/dragonflyoss/Dragonfly2/releases/download/v2.1.44/dfget-2.1.44-linux-amd64.rpm rpm -ivh dfget-2.1.44-linux-amd64.rpm To avoid issues, cancel the previously configured default storage class. kubectl patch storageclass hwameistor-storage-lvm-hdd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' ","version":"Next","tagName":"h2"},{"title":"Verify dragonfly","type":1,"pageTitle":"Local Cache Volumes","url":"/docs/volumes/cache#verify-dragonfly","content":"$ kubectl -n dragonfly-system get pod -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dragonfly-dfdaemon-d2fzp 1/1 Running 0 19h 200.200.169.158 hwameistor-test-1 <none> <none> dragonfly-dfdaemon-p7smf 1/1 Running 0 19h 200.200.29.171 hwameistor-test-3 <none> <none> dragonfly-dfdaemon-tcwkr 1/1 Running 0 19h 200.200.39.71 hwameistor-test-2 <none> <none> dragonfly-manager-5479bf9bc9-tp4g5 1/1 Running 1 (19h ago) 19h 200.200.29.174 hwameistor-test-3 <none> <none> dragonfly-manager-5479bf9bc9-wpbr6 1/1 Running 0 19h 200.200.39.92 hwameistor-test-2 <none> <none> dragonfly-manager-5479bf9bc9-zvrdj 1/1 Running 0 19h 200.200.169.142 hwameistor-test-1 <none> <none> dragonfly-mysql-0 1/1 Running 0 19h 200.200.29.178 hwameistor-test-3 <none> <none> dragonfly-redis-master-0 1/1 Running 0 19h 200.200.169.137 hwameistor-test-1 <none> <none> dragonfly-redis-replicas-0 1/1 Running 0 19h 200.200.39.72 hwameistor-test-2 <none> <none> dragonfly-redis-replicas-1 1/1 Running 0 19h 200.200.29.130 hwameistor-test-3 <none> <none> dragonfly-redis-replicas-2 1/1 Running 0 19h 200.200.169.134 hwameistor-test-1 <none> <none> dragonfly-scheduler-0 1/1 Running 0 19h 200.200.169.190 hwameistor-test-1 <none> <none> dragonfly-scheduler-1 1/1 Running 0 19h 200.200.39.76 hwameistor-test-2 <none> <none> dragonfly-scheduler-2 1/1 Running 0 19h 200.200.29.163 hwameistor-test-3 <none> <none> dragonfly-seed-peer-0 1/1 Running 1 (19h ago) 19h 200.200.169.138 hwameistor-test-1 <none> <none> dragonfly-seed-peer-1 1/1 Running 0 19h 200.200.39.80 hwameistor-test-2 <none> <none> dragonfly-seed-peer-2 1/1 Running 0 19h 200.200.29.151 hwameistor-test-3 <none> <none> ","version":"Next","tagName":"h2"},{"title":"Verify DataSet","type":1,"pageTitle":"Local Cache Volumes","url":"/docs/volumes/cache#verify-dataset","content":"Take minio as an example: apiVersion: datastore.io/v1alpha1 kind: DataSet metadata: name: dataset-test spec: refresh: true type: minio minio: endpoint: Your service ip address:9000 bucket: BucketName/Dir # Defined according to the directory level where your dataset is located secretKey: minioadmin accessKey: minioadmin region: ap-southeast-2 ","version":"Next","tagName":"h2"},{"title":"Create DataSet","type":1,"pageTitle":"Local Cache Volumes","url":"/docs/volumes/cache#create-dataset","content":"kubectl apply -f dataset.yaml Confirm that the cache volume has been created successfully $ k get dataset NAME TYPE LASTREFRESHTIME CONNECTED AGE ERROR dataset-test minio 4m38s $ k get lv NAME POOL REPLICAS CAPACITY USED STATE PUBLISHED AGE dataset-test LocalStorage_PoolHDD 211812352 Ready 4m27s $ k get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE dataset-test 202Mi ROX Retain Available 35s The size of PV is determined by the size of your data set. 
","version":"Next","tagName":"h2"},{"title":"Create a PVC and bind it to dataset PV","type":1,"pageTitle":"Local Cache Volumes","url":"/docs/volumes/cache#create-a-pvc-and-bind-it-to-dataset-pv","content":"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: hwameistor-dataset namespace: default spec: accessModes: - ReadOnlyMany resources: requests: storage: 202Mi # dataset size volumeMode: Filesystem volumeName: dataset-test # dataset name Confirm that the pvc has been created successfully. $ k get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE hwameistor-dataset Bound dataset-test 202Mi ROX 4s ","version":"Next","tagName":"h2"},{"title":"Create StatefulSet","type":1,"pageTitle":"Local Cache Volumes","url":"/docs/volumes/cache#create-statefulset","content":"kubectl apply -f sts-nginx-AI.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: nginx-dataload namespace: default spec: serviceName: nginx-dataload replicas: 1 selector: matchLabels: app: nginx-dataload template: metadata: labels: app: nginx-dataload spec: hostNetwork: true hostPID: true hostIPC: true containers: - name: nginx image: docker.io/library/nginx:latest imagePullPolicy: IfNotPresent securityContext: privileged: true env: - name: DATASET_NAME value: dataset-test volumeMounts: - name: data mountPath: /data ports: - containerPort: 80 volumes: - name: data persistentVolumeClaim: claimName: hwameistor-dataset info claimName uses the name of the pvc bound to the dataset. env: DATASET_NAME=datasetName ","version":"Next","tagName":"h2"},{"title":"Verify Nginx Pod","type":1,"pageTitle":"Local Cache Volumes","url":"/docs/volumes/cache#verify-nginx-pod","content":"$ kubectl get pod NAME READY STATUS RESTARTS AGE nginx-dataload-0 1/1 Running 0 3m58s $ kubectl logs nginx-dataload-0 hwameistor-dataloader Created custom resource Custom resource deleted, exiting DataLoad execution time: 1m20.24310857s According to the log, loading data took 1m20.24310857s. ","version":"Next","tagName":"h2"},{"title":"[Optional] Scale Nginx out into a 3-node Cluster","type":1,"pageTitle":"Local Cache Volumes","url":"/docs/volumes/cache#optional-scale-nginx-out-into-a-3-node-cluster","content":"HwameiStor cache volumes support horizontal expansion of StatefulSet. Each pod of StatefulSet will attach and mount a HwameiStor cache volume bound to the same dataset. $ kubectl scale sts/sts-nginx-AI --replicas=3 $ kubectl get pod -o wide NAME READY STATUS RESTARTS AGE nginx-dataload-0 1/1 Running 0 41m nginx-dataload-1 1/1 Running 0 37m nginx-dataload-2 1/1 Running 0 35m $ kubectl logs nginx-dataload-1 hwameistor-dataloader Created custom resource Custom resource deleted, exiting DataLoad execution time: 3.24310857s $ kubectl logs nginx-dataload-2 hwameistor-dataloader Created custom resource Custom resource deleted, exiting DataLoad execution time: 2.598923144s According to the log, the second and third loading of data only took 3.24310857s and 2.598923144s respectively. Compared with the first loading, the speed has been greatly improved. 
","version":"Next","tagName":"h2"},{"title":"PVC Autoresizing","type":0,"sectionRef":"#","url":"/docs/volumes/pvc_autoresizing","content":"","keywords":"","version":"Next"},{"title":"ResizePolicy","type":1,"pageTitle":"PVC Autoresizing","url":"/docs/volumes/pvc_autoresizing#resizepolicy","content":"An example of CR is as below: apiVersion: hwameistor.io/v1alpha1 kind: ResizePolicy metadata: name: resizepolicy1 spec: warningThreshold: 60 resizeThreshold: 80 nodePoolUsageLimit: 90 The three fields warningThreshold, resizeThreshold, and nodePoolUsageLimit are all of type integer and represent percentages. warningThreshold currently does not have any associated alert actions. It serves as a target ratio, indicating that the usage rate of the volume will be below this percentage after resizing is completed.resizeThreshold indicates a usage rate at which the resizing action will be triggered when the volume's usage rate reaches this percentage.nodePoolUsageLimit represents the upper limit of storage pool usage on a node. If the usage rate of a pool reaches this percentage, volumes assigned to that pool will not automatically resize. ","version":"Next","tagName":"h2"},{"title":"Match Rules","type":1,"pageTitle":"PVC Autoresizing","url":"/docs/volumes/pvc_autoresizing#match-rules","content":"This is an examle of CR with label selectors. apiVersion: hwameistor.io/v1alpha1 kind: ResizePolicy metadata: name: example-policy spec: warningThreshold: 60 resizeThreshold: 80 nodePoolUsageLimit: 90 storageClassSelector: matchLabels: pvc-resize: auto namespaceSelector: matchLabels: pvc-resize: auto pvcSelector: matchLabels: pvc-resize: auto The ResizePolicy has three label selectors: pvcSelector indicates that PVCs selected by this selector will automatically resize according to the policy that selected them.namespaceSelector indicates that PVCs under namespaces selected by this selector will automatically resize according to this policy.storageClassSelector indicates that PVCs created from storage classes selected by this selector will automatically resize according to this policy. These three selectors have an "AND" relationship. If you specify multiple selectors in a ResizePolicy, the PVCs must match all of the selectors in order to be associated with that policy. If no selectors are specified in the ResizePolicy, it becomes a cluster-wide ResizePolicy, acting as the default policy for all PVCs in the entire cluster. ","version":"Next","tagName":"h2"},{"title":"Volume Clone","type":0,"sectionRef":"#","url":"/docs/volumes/volume_clone","content":"","keywords":"","version":"Next"},{"title":"1. Create Clone Volume","type":1,"pageTitle":"Volume Clone","url":"/docs/volumes/volume_clone#1-create-clone-volume","content":"You can create a pvc to perform a cloning operation on a data volume. The details are as follows: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: hwameistor-lvm-volume-clone spec: storageClassName: hwameistor-storage-lvm-ssd dataSource: # Bound data volumes must be provided name: data-sts-mysql-local-0 kind: PersistentVolumeClaim apiGroup: "" accessModes: - ReadWriteOnce resources: requests: storage: 1Gi ","version":"Next","tagName":"h2"},{"title":"2. 
Use Clone Volume","type":1,"pageTitle":"Volume Clone","url":"/docs/volumes/volume_clone#2-use-clone-volume","content":"Use the following command to create an nginx application and use the data volume hwameistor-lvm-volume-clone: cat << EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: containers: - name: nginx image: docker.io/library/nginx:latest imagePullPolicy: IfNotPresent volumeMounts: - name: data mountPath: /data ports: - containerPort: 80 volumes: - name: data persistentVolumeClaim: claimName: hwameistor-lvm-volume-clone EOF ","version":"Next","tagName":"h2"},{"title":"Volume Provisioned IO","type":0,"sectionRef":"#","url":"/docs/volumes/volume_provisioned_io","content":"","keywords":"","version":"Next"},{"title":"Requirements (if you want to limit non-direct io)","type":1,"pageTitle":"Volume Provisioned IO","url":"/docs/volumes/volume_provisioned_io#requirements-if-you-want-to-limit-non-direct-io","content":"cgroup v2 has the following requirements: OS distribution enables cgroup v2Linux Kernel version is 5.8 or later More info, please refer to the Kubernetes website ","version":"Next","tagName":"h2"},{"title":"Create a new StorageClass with the maximum IOPS and throughput parameters","type":1,"pageTitle":"Volume Provisioned IO","url":"/docs/volumes/volume_provisioned_io#create-a-new-storageclass-with-the-maximum-iops-and-throughput-parameters","content":"By default, HwameiStor won't auto-create such a StorageClass during the installation, so you need to create it manually. A sample StorageClass is as follows: allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hwameistor-storage-lvm-hdd-sample parameters: convertible: "false" csi.storage.k8s.io/fstype: xfs poolClass: HDD poolType: REGULAR provision-iops-on-creation: "100" provision-throughput-on-creation: 1Mi replicaNumber: "1" striped: "true" volumeKind: LVM provisioner: lvm.hwameistor.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer Compare to the regular StorageClass created by HwameiStor installer, the following parameters are added: provision-iops-on-creation: It specifies the maximum IOPS of the volume on creation.provision-throughput-on-creation: It specifies the maximum throughput of the volume on creation. After the StorageClass is created, you can use it to create a PVC. ","version":"Next","tagName":"h2"},{"title":"Create a PVC with the StorageClass","type":1,"pageTitle":"Volume Provisioned IO","url":"/docs/volumes/volume_provisioned_io#create-a-pvc-with-the-storageclass","content":"A sample PVC is as follows: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-sample spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: hwameistor-storage-lvm-hdd-sample After the PVC is created, you can create a deployment to use it. 
","version":"Next","tagName":"h2"},{"title":"Create a Deployment with the PVC","type":1,"pageTitle":"Volume Provisioned IO","url":"/docs/volumes/volume_provisioned_io#create-a-deployment-with-the-pvc","content":"A sample Deployment is as follows: apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: pod-sample name: pod-sample spec: replicas: 1 selector: matchLabels: app: pod-sample strategy: {} template: metadata: creationTimestamp: null labels: app: pod-sample spec: volumes: - name: data persistentVolumeClaim: claimName: pvc-sample containers: - command: - sleep - "100000" image: busybox name: busybox resources: {} volumeMounts: - name: data mountPath: /data status: {} After the Deployment is created, you can test the volume's IOPS and throughput by using the following command: shell 1: kubectl exec -it pod-sample-5f5f8f6f6f-5q4q5 -- /bin/sh dd if=/dev/zero of=/data/test bs=4k count=1000000 oflag=direct shell 2: /dev/LocalStorage_PoolHDD/pvc-c623054b-e7e9-41d7-a987-77acd8727e66 is the path of the volume on the node. you can find it by using the kubectl get lvr command. iostat -d /dev/LocalStorage_PoolHDD/pvc-c623054b-e7e9-41d7-a987-77acd8727e66 -x -k 2 note Due to the cgroupv1 limitation, the settings of the maximum IOPS and throughput may not take effect on non-direct IO. However, it will take effect on non-direct IO in cgroupv2. ","version":"Next","tagName":"h2"},{"title":"How to change the maximum IOPS and throughput of a volume","type":1,"pageTitle":"Volume Provisioned IO","url":"/docs/volumes/volume_provisioned_io#how-to-change-the-maximum-iops-and-throughput-of-a-volume","content":"The maximum IOPS and throughput are specified on the parameters of the StorageClass, you can not change it directly because it is immutable today. Different from the other storage vendors, HwameiStor is a Native Kubernetes storage solution and it defines a set of operation primitives based on the Kubernetes CRDs. It means that you can modify the related CRD to change the actual maximum IOPS and throughput of a volume. The following steps show how to change the maximum IOPS and throughput of a volume. ","version":"Next","tagName":"h2"},{"title":"Find the corresponding LocalVolume CR for the given PVC","type":1,"pageTitle":"Volume Provisioned IO","url":"/docs/volumes/volume_provisioned_io#find-the-corresponding-localvolume-cr-for-the-given-pvc","content":"$ kubectl get pvc pvc-sample NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE demo Bound pvc-c354a56a-5cf4-4ff6-9472-4e24c7371e10 10Gi RWO hwameistor-storage-lvm-hdd 5d23h pvc-sample Bound pvc-cac82087-6f6c-493a-afcd-09480de712ed 10Gi RWO hwameistor-storage-lvm-hdd-sample 5d23h $ kubectl get localvolume NAME POOL REPLICAS CAPACITY USED STATE RESOURCE PUBLISHED FSTYPE AGE pvc-c354a56a-5cf4-4ff6-9472-4e24c7371e10 LocalStorage_PoolHDD 1 10737418240 33783808 Ready -1 master xfs 5d23h pvc-cac82087-6f6c-493a-afcd-09480de712ed LocalStorage_PoolHDD 1 10737418240 33783808 Ready -1 master xfs 5d23h According to the print out, the LocalVolume CR for the PVC is pvc-cac82087-6f6c-493a-afcd-09480de712ed. ","version":"Next","tagName":"h3"},{"title":"Modify the LocalVolume CR","type":1,"pageTitle":"Volume Provisioned IO","url":"/docs/volumes/volume_provisioned_io#modify-the-localvolume-cr","content":"kubectl edit localvolume pvc-cac82087-6f6c-493a-afcd-09480de712ed In the editor, find the spec.volumeQoS section and modify the iops and throughput fields. By the way, an empty value means no limit. 
Finally, save the changes and exit the editor. The settings will take effect in a few seconds. note In the future, we will allow users to modify the maximum IOPS and throughput of a volume directly once Kubernetes supports it. ","version":"Next","tagName":"h3"},{"title":"How to check the actual IOPS and throughput of a volume","type":1,"pageTitle":"Volume Provisioned IO","url":"/docs/volumes/volume_provisioned_io#how-to-check-the-actual-iops-and-throughput-of-a-volume","content":"HwameiStor uses cgroup v1 or cgroup v2 to limit the IOPS and throughput of a volume, so you can use the following commands to check the actual IOPS and throughput of a volume. $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 160G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 159G 0 part ├─centos-root 253:0 0 300G 0 lvm / ├─centos-swap 253:1 0 7.9G 0 lvm └─centos-home 253:2 0 101.1G 0 lvm /home sdb 8:16 0 100G 0 disk ├─LocalStorage_PoolHDD-pvc--cac82087--6f6c--493a--afcd--09480de712ed 253:3 0 10G 0 lvm /var/lib/kubelet/pods/3d6bc980-68ae-4a65-a1c8-8b410b7d240f/v └─LocalStorage_PoolHDD-pvc--c354a56a--5cf4--4ff6--9472--4e24c7371e10 253:4 0 10G 0 lvm /var/lib/kubelet/pods/521fd7b4-3bef-415b-8720-09225f93f231/v sdc 8:32 0 300G 0 disk └─sdc1 8:33 0 300G 0 part └─centos-root 253:0 0 300G 0 lvm / sr0 11:0 1 973M 0 rom # if cgroup version is v1. $ cat /sys/fs/cgroup/blkio/blkio.throttle.read_iops_device 253:3 100 $ cat /sys/fs/cgroup/blkio/blkio.throttle.write_iops_device 253:3 100 $ cat /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device 253:3 1048576 $ cat /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device 253:3 1048576 # if cgroup version is v2. # cat /sys/fs/cgroup/kubepods.slice/io.max 253:0 rbps=1048576 wbps=1048576 riops=100 wiops=100 ","version":"Next","tagName":"h2"},{"title":"Eviction","type":0,"sectionRef":"#","url":"/docs/volumes/volume_eviction","content":"","keywords":"","version":"Next"},{"title":"Node Eviction","type":1,"pageTitle":"Eviction","url":"/docs/volumes/volume_eviction#node-eviction","content":"In a Kubernetes cluster, a node can be drained by using the following procedure, so that all the pods and volume replicas on this node are evicted and the services continue on other available nodes. kubectl drain k8s-node-1 --ignore-daemonsets=true Check whether all the volume migrations are complete by running: kubectl get LocalStorageNode k8s-node-1 -o yaml The output may look like: apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode metadata: creationTimestamp: "2022-10-11T07:41:58Z" generation: 1 name: k8s-node-1 resourceVersion: "6402198" uid: c71cc6ac-566a-4e0b-8687-69679b07471f spec: hostname: k8s-node-1 storageIP: 10.6.113.22 topogoly: region: default zone: default status: ... 
pools: LocalStorage_PoolHDD: class: HDD disks: - capacityBytes: 17175674880 devPath: /dev/sdb state: InUse type: HDD freeCapacityBytes: 16101933056 freeVolumeCount: 999 name: LocalStorage_PoolHDD totalCapacityBytes: 17175674880 totalVolumeCount: 1000 type: REGULAR usedCapacityBytes: 1073741824 usedVolumeCount: 1 volumeCapacityBytesLimit: 17175674880 # ** make sure volumes is empty ** # volumes: state: Ready Check whether any volume replica is still located on the evicted node by running: kubectl get localvolumereplica The output may look like: NAME CAPACITY NODE STATE SYNCED DEVICE AGE pvc-1427f36b-adc4-4aef-8d83-93c59064d113-957f7g 1073741824 k8s-node-3 Ready true /dev/LocalStorage_PoolHDD-HA/pvc-1427f36b-adc4-4aef-8d83-93c59064d113 20h pvc-1427f36b-adc4-4aef-8d83-93c59064d113-qlpbmq 1073741824 k8s-node-2 Ready true /dev/LocalStorage_PoolHDD-HA/pvc-1427f36b-adc4-4aef-8d83-93c59064d113 30m pvc-6ca4c0d4-da10-4e2e-83b2-19cbf5c5e3e4-scrxjb 1073741824 k8s-node-2 Ready true /dev/LocalStorage_PoolHDD/pvc-6ca4c0d4-da10-4e2e-83b2-19cbf5c5e3e4 30m pvc-f8f017f9-eb09-4fbe-9795-a6e2d6873148-5t782b 1073741824 k8s-node-2 Ready true /dev/LocalStorage_PoolHDD-HA/pvc-f8f017f9-eb09-4fbe-9795-a6e2d6873148 30m In some cases, users don't want to migrate the volumes when draining a node. They can add a label to the node before draining it. kubectl label node k8s-node-1 hwameistor.io/eviction=disable ","version":"Next","tagName":"h2"},{"title":"Pod Eviction","type":1,"pageTitle":"Eviction","url":"/docs/volumes/volume_eviction#pod-eviction","content":"When a Kubernetes node is overloaded, it will evict some low-priority pods to reclaim system resources and keep other pods safe. HwameiStor detects the evicted pod and migrates the associated volumes to another available node, so that the pod can continue to run there. ","version":"Next","tagName":"h2"},{"title":"Pod Migration","type":1,"pageTitle":"Eviction","url":"/docs/volumes/volume_eviction#pod-migration","content":"The migration can be proactively triggered on the pod and its associated HwameiStor volumes by using either of the following methods. Method #1 kubectl label pod mysql-pod hwameistor.io/eviction=start kubectl delete pod mysql-pod Method #2 $ cat << EOF | kubectl apply -f - apiVersion: hwameistor.io/v1alpha1 kind: LocalVolumeMigrate metadata: name: migrate-pvc-6ca4c0d4-da10-4e2e-83b2-19cbf5c5e3e4 spec: sourceNode: k8s-node-1 targetNodesSuggested: - k8s-node-2 - k8s-node-3 volumeName: pvc-6ca4c0d4-da10-4e2e-83b2-19cbf5c5e3e4 migrateAllVols: true EOF $ kubectl delete pod mysql-pod ","version":"Next","tagName":"h2"},{"title":"Encrypt Volume","type":0,"sectionRef":"#","url":"/docs/volumes/volume_encrypt","content":"","keywords":"","version":"Next"},{"title":"1. 
Create a Secret for encrypting the volume","type":1,"pageTitle":"Encrypt Volume","url":"/docs/volumes/volume_encrypt#1-create-a-secret-for-encrypting-the-volume","content":"The details are as follows: apiVersion: v1 data: key: H4sIAAAAAAAA/+xVP2/bPhDd9SluyGAHUJTgtzFTgPyKLkmDZiwK4kxebNYUSRwp10Xb715IkWzLf5II6eAC3iTe3eO9R/Le1PoJWpEB0AJthcl4J41LxAu0Av67jBlAVIyBdpZpmYgdWmlxQjbWIACBfUlpRlUUEDyn756XRVjm6/WtNMkUrFEo4Gz08OlW3t/c/T/OuLIkn4ylKLIcCkqqWJcUdTRuLOS9DfI63NTml8X5xQ8sbdZSUN49mWmD+c1Pp MOSBETihVF0551Jnot1193HZQYw885zxxSe0EbKAObVhNhRoiijXqMD5MDekgByOnjjUs2aqSnvp0VfsaKehE1vzVdCnlJ6DgqQMpVBbijXUWiAUNVnJ2BOFJrivchSlpRQbvb9zP45r4N7Q2pgiuTSuoJpSksBo0628XXi6n29derJGnNnp7DMMZjDKr6Ah1ozxShbgefG6cFFq3YiBWRMngVcb/Z37zVdjy7Ox+1isKioJJcEnP28+r3nhJ3XdLx8HrweRid4P YRN3UAMqGifMhuxNwN293XFrI/ZhocgBq8PoQ0kWyMp7xIaR3wIc5XQe0Wa/aBXVG8VZhj7z/QDGkv612OlFJEmPf6Lynbza98lybdyu+u4W/D6++5usDw4LWcYZ02w9LqytSldNb+dlnW8fPnkejCtemejx483n2/HPax2upWU2Ci5Pe3hy9dhrnN1cp0jdZ35Qk+Od0yfbOdkOyfbeZftvPbA/zXf+RMAAP//IK/8i+YNAAA= kind: Secret metadata: name: hwameistor-encrypt-secret namespace: hwameistor type: Opaque ","version":"Next","tagName":"h2"},{"title":"2. Create a StorageClass","type":1,"pageTitle":"Encrypt Volume","url":"/docs/volumes/volume_encrypt#2-create-a-storageclass","content":"Use the following command to create a StorageClass and specify the Secret created above: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-storage-hdd-encrypt provisioner: lvm.hwameistor.io volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete parameters: encryptSecret: hwameistor/hwameistor-encrypt-secret encryptType: LUKS replicaNumber: "1" poolClass: "HDD" poolType: "REGULAR" volumeKind: "LVM" striped: "true" csi.storage.k8s.io/fstype: "xfs" ","version":"Next","tagName":"h2"},{"title":"3. Create a PVC and a Deployment","type":1,"pageTitle":"Encrypt Volume","url":"/docs/volumes/volume_encrypt#3-create-a-pvc-and-a-deployment","content":"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: local-storage-pvc-encrypt spec: accessModes: - ReadWriteOnce storageClassName: local-storage-hdd-encrypt resources: requests: storage: 1Gi apiVersion: apps/v1 kind: Deployment metadata: name: nginx-local-storage-lvm labels: app: nginx-local-storage-lvm spec: replicas: 1 selector: matchLabels: app: nginx-local-storage-lvm template: metadata: labels: app: nginx-local-storage-lvm name: nginx-local-storage-lvm spec: restartPolicy: Always terminationGracePeriodSeconds: 0 containers: - image: nginx:latest imagePullPolicy: IfNotPresent name: nginx ports: - containerPort: 80 command: - sh - -xc - | VOL="$( df | grep /usr/share/nginx/html | awk '{print $1,$NF}' )" echo "<center><h1>Demo volume: ${VOL}</h1></center>" > /usr/share/nginx/html/index.html nginx -g "daemon off;" volumeMounts: - name: html-root mountPath: /usr/share/nginx/html resources: limits: cpu: "100m" memory: "100Mi" volumes: - name: html-root persistentVolumeClaim: claimName: local-storage-pvc-encrypt ","version":"Next","tagName":"h2"},{"title":"4. Check pod status","type":1,"pageTitle":"Encrypt Volume","url":"/docs/volumes/volume_encrypt#4-check-pod-status","content":"# kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-local-storage-lvm-79886d9dd-44fsg 1/1 Running 0 20m 100.111.156.91 k8s-node1 ","version":"Next","tagName":"h2"},{"title":"5. 
Check whether the volume is encrypted","type":1,"pageTitle":"Encrypt Volume","url":"/docs/volumes/volume_encrypt#5-check-whether-the-volume-is-encrypted","content":"You can use the lsblk command to check whether the TYPE of the volume is crypt. # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 160G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 159G 0 part ├─centos-root 253:0 0 50G 0 lvm / ├─centos-swap 253:1 0 7.9G 0 lvm └─centos-home 253:2 0 101.1G 0 lvm /home sdb 8:16 0 200G 0 disk └─LocalStorage_PoolHDD-pvc--2c097032--690d--4510--99ad--54119b6b650c 253:3 0 1G 0 lvm └─pvc-2c097032-690d-4510-99ad-54119b6b650c-encrypt 253:4 0 1008M 0 crypt /var/lib/kubelet/pods/4c2b76f3-a84f-4e62-88c8-a71abeb68efd/volumes/kubernetes.io~csi/pvc-2c097032-690d-4510-99ad-54119b6b650c/mount sr0 11:0 1 1024M 0 rom You can use the blkid command to check whether the TYPE of the volume is crypto_LUKS: # blkid /dev/LocalStorage_PoolHDD/pvc-2c097032-690d-4510-99ad-54119b6b650c /dev/LocalStorage_PoolHDD/pvc-2c097032-690d-4510-99ad-54119b6b650c: UUID="a1910adf-f1dc-45a4-aeb3-6a8cf045bb9d" TYPE="crypto_LUKS" ","version":"Next","tagName":"h2"},{"title":"TiDB","type":0,"sectionRef":"#","url":"/docs/use_cases/tidb","content":"","keywords":"","version":"Next"},{"title":"Introduction to TiDB","type":1,"pageTitle":"TiDB","url":"/docs/use_cases/tidb#introduction-to-tidb","content":"TiDB is a distributed database product that supports OLTP (Online Transactional Processing), OLAP (Online Analytical Processing), and HTAP (Hybrid Transactional and Analytical Processing) services, and is compatible with key features such as the MySQL 5.7 protocol and the MySQL ecosystem. The goal of TiDB is to provide users with a one-stop OLTP, OLAP, and HTAP solution, which is suitable for various application scenarios with requirements such as high availability, strong consistency, and large data scale. ","version":"Next","tagName":"h2"},{"title":"TiDB architecture","type":1,"pageTitle":"TiDB","url":"/docs/use_cases/tidb#tidb-architecture","content":"The TiDB distributed database splits the overall architecture into multiple modules that can communicate with each other. The architecture diagram is as follows: TiDB Server The SQL layer exposes the connection endpoints of the MySQL protocol to the outside world, and is responsible for accepting connections from clients, performing SQL parsing and optimization, and finally generating a distributed execution plan. The TiDB layer itself is stateless. In practice, you can start several TiDB instances. A unified access address is provided externally through load-balancing components (such as LVS, HAProxy, or F5), and client connections can be evenly distributed onto these TiDB instances. The TiDB server itself does not store data, but only parses SQL and forwards the actual data read request to the underlying storage node, TiKV (or TiFlash). PD (Placement Driver) Server The metadata management module across a TiDB cluster is responsible for storing the real-time data distribution of each TiKV node and the overall topology of the cluster, providing the TiDB Dashboard management and control interface, and assigning transaction IDs to distributed transactions. Placement Driver (PD) not only stores metadata, but also issues data scheduling commands to specific TiKV nodes based on the real-time data distribution status reported by TiKV nodes, which can be said to be the "brain" of the entire cluster. In addition, the PD itself is also composed of at least 3 nodes and has high availability capabilities. 
It is recommended to deploy an odd number of PD nodes. Storage nodes TiKV Server: In charge of storing data. From the outside, TiKV is a distributed Key-Value storage engine that provides transactions. The basic unit for storing data is the Region. Each Region is responsible for storing the data of a Key Range (a left-closed, right-open interval from StartKey to EndKey). Each TiKV node is responsible for multiple Regions. The TiKV API provides native support for distributed transactions at the KV key-value pair level, and provides the Snapshot Isolation (SI) level by default, which is also the core of TiDB's support for distributed transactions at the SQL level. After the SQL layer of TiDB completes the SQL parsing, it will convert the SQL execution plan into actual calls to the TiKV API. Therefore, the data is stored in TiKV. In addition, the TiKV data will be automatically maintained in multiple replicas (the default is three replicas), which naturally supports high availability and automatic failover. TiFlash is a special storage node. Unlike ordinary TiKV nodes, data is stored in columns in TiFlash, and the main function is to accelerate analysis-based scenarios. ","version":"Next","tagName":"h3"},{"title":"TiDB database storage","type":1,"pageTitle":"TiDB","url":"/docs/use_cases/tidb#tidb-database-storage","content":" Key-Value Pair The choice of TiKV is the Key-Value model that provides an ordered traversal method. Two key points of TiKV data storage are: A huge Map (comparable to std::map in C++) that stores Key-Value pairs. The Key-Value pairs in this Map are sorted by the binary order of the Key, that is, you can seek to the position of a certain Key, and then continuously call the Next method to obtain the Key-Value pairs larger than this Key in ascending order. Local storage (RocksDB) In any persistent storage engine, data must be saved on disk after all, and TiKV is no different. However, TiKV does not write data directly to the disk, but stores the data in RocksDB, and RocksDB is responsible for the specific data storage. The reason is that developing a stand-alone storage engine requires a lot of work, especially to make a high-performance stand-alone engine, which may require various meticulous optimizations. RocksDB is a very good stand-alone KV storage engine open sourced by Facebook. It can meet TiKV's various requirements for a single engine. Here we can simply consider that RocksDB is a persistent Key-Value Map on a host. Raft protocol TiKV uses the Raft algorithm to ensure that data is not lost and error-free when a single host fails. In short, data is replicated to multiple hosts, so that if one host cannot provide services, replicas on other hosts can still provide services. This data replication scheme is reliable and efficient, and can deal with replica failures. Region TiKV divides the Range by Key: a certain segment of consecutive Keys is stored on a storage node. The entire Key-Value space is divided into many segments; each segment is a series of consecutive Keys, called a Region. TiKV tries to keep the data saved in each Region within a reasonable size; currently, the default is no more than 96 MB. Each Region can be described by a left-closed, right-open interval such as [StartKey, EndKey). MVCC TiKV implements Multi-Version Concurrency Control (MVCC). Distributed ACID transactions TiKV uses the transaction model used by Google in BigTable: Percolator. 
","version":"Next","tagName":"h3"},{"title":"Build the test environment","type":1,"pageTitle":"TiDB","url":"/docs/use_cases/tidb#build-the-test-environment","content":"","version":"Next","tagName":"h2"},{"title":"Kubernetes cluster","type":1,"pageTitle":"TiDB","url":"/docs/use_cases/tidb#kubernetes-cluster","content":"In this test, we use three VM nodes to deploy the Kubernetes cluster, including one master node and two worker nodes. kubelet version is 1.22.0. ","version":"Next","tagName":"h3"},{"title":"HwameiStor local storage","type":1,"pageTitle":"TiDB","url":"/docs/use_cases/tidb#hwameistor-local-storage","content":"Deploy the HwameiStor local storage in the Kubernetes cluster Configure a 100G local disk, sdb, for HwameiStor on two worker nodes respectively Create StorageClass ","version":"Next","tagName":"h3"},{"title":"Deploy TiDB on Kubernetes","type":1,"pageTitle":"TiDB","url":"/docs/use_cases/tidb#deploy-tidb-on-kubernetes","content":"TiDB can be deployed on Kubernetes using TiDB Operator. TiDB Operator is an automatic operation and maintenance system for TiDB clusters on Kubernetes. It provides full lifecycle management of TiDB including deployment, upgrade, scaling, backup and recovery, and configuration changes. With TiDB Operator, TiDB can run seamlessly on public cloud or privately deployed Kubernetes clusters. The compatibility between TiDB and TiDB Operator versions is as follows: TiDB version\tApplicable versions of TiDB Operatordev\tdev TiDB >= 5.4\t1.3 5.1 <= TiDB < 5.4\t1.3 (recommended), 1.2 3.0 <= TiDB < 5.1\t1.3 (recommended), 1.2, 1.1 2.1 <= TiDB < 3.0\t1.0 (maintenance stopped) Deploy TiDB Operator Install TiDB CRDs kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml Install TiDB Operator helm repo add pingcap https://charts.pingcap.org/ kubectl create namespace tidb-admin helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.3.2 \\ --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.3.2 \\ --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.3.2 \\ --set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler Check TiDB Operator components Deploy the TiDB cluster kubectl create namespace tidb-cluster && \\ kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com /pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml Connect the TiDB cluster yum -y install mysql-client kubectl port-forward -n tidb-cluster svc/basic-tidb 4000 > pf4000.out & Check and verify the TiDB cluster status Create the Hello_world table create table hello_world (id int unsigned not null auto_increment primary key, v varchar(32)); Check the TiDB version select tidb_version()\\G; Check the Tikv storage status select * from information_schema.tikv_store_status\\G; Configure the HwameiStor storage Create a PVC for tidb-tikv and tidb-pd from storageClass local-storage-hdd-lvm: kubectl get po basic-tikv-0 -oyaml kubectl get po basic-pd-0 -oyaml ","version":"Next","tagName":"h3"},{"title":"Test procedure","type":1,"pageTitle":"TiDB","url":"/docs/use_cases/tidb#test-procedure","content":"","version":"Next","tagName":"h2"},{"title":"Basic SQL capability test","type":1,"pageTitle":"TiDB","url":"/docs/use_cases/tidb#basic-sql-capability-test","content":"After the database cluster 
is deployed, we performed the following tests of basic capabilities. All of them passed successfully. Distributed transaction Test purpose: In the case of multiple isolation levels, check if the completeness constraints of distributed data operations are supported, such as atomicity, consistency, isolation, and durability (ACID) Test steps: Create the database: testdb Create the table t_test ( id int AUTO_INCREMENT, name varchar(32), PRIMARY KEY (id) ) Run a test script Test result: The completeness constraints of distributed data operations are supported, such as atomicity, consistency, isolation, and durability (ACID), in the case of multiple isolation levels Object isolation Test purpose: Check if object isolation can be implemented by using different schemas Test script: create database if not exists testdb; use testdb create table if not exists t_test ( id bigint, name varchar(200), sale_time datetime default current_timestamp, constraint pk_t_test primary key (id) ); insert into t_test(id,name) values (1,'a'),(2,'b'),(3,'c'); create user 'readonly'@'%' identified by "readonly"; grant select on testdb.* to readonly@'%'; select * from testdb.t_test; update testdb.t_test set name='aaa'; create user 'otheruser'@'%' identified by "otheruser"; Test result: Supported creating different schemas to implement object isolation Table operation support Test purpose: Check if you can create, delete, and modify table data, DML, columns, and partition tables Test steps: Run the test scripts step by step after connecting to the database Test script: # Create and delete table drop table if exists t_test; create table if not exists t_test ( id bigint default '0', name varchar(200) default '' , sale_time datetime default current_timestamp, constraint pk_t_test primary key (id) ); # Delete and modify insert into t_test(id,name) values (1,'a'),(2,'b'),(3,'c'),(4,'d'),(5,'e'); update t_test set name='aaa' where id=1; update t_test set name='bbb' where id=2; delete from t_test where id=5; # Modify, add, delete columns alter table t_test modify column name varchar(250); alter table t_test add column col varchar(255); insert into t_test(id,name,col) values(10,'test','new_col'); alter table t_test add column colwithdefault varchar(255) default 'aaaa'; insert into t_test(id,name) values(20,'testdefault'); insert into t_test(id,name,colwithdefault ) values(10,'test','non-default '); alter table t_test drop column colwithdefault; # Type of partition table (only listed part of scripts) CREATE TABLE employees ( id INT NOT NULL, fname VARCHAR(30), lname VARCHAR(30), hired DATE NOT NULL DEFAULT '1970-01-01', separated DATE NOT NULL DEFAULT '9999-12-31', job_code INT NOT NULL, store_id INT NOT NULL ) Test result: Supported creating, deleting, and modifying table data, DML, columns, and partition tables Index support Test purpose: Verify different indexes (unique, clustered, partitioned, bidirectional, expression-based, hash, etc.) and index rebuild operations. 
Test script: alter table t_test add unique index udx_t_test (name); # The default is a clustered index on the primary key ADMIN CHECK TABLE t_test; create index time_idx on t_test(sale_time); alter table t_test drop index time_idx; admin show ddl jobs; admin show ddl job queries 156; create index time_idx on t_test(sale_time); Test result: Supported creating, deleting, combining, and listing indexes, including unique indexes Statements Test purpose: Check if statements such as if, case when, for loop, while loop, and loop exit when (up to 5 kinds) are supported in distributed databases Test script: SELECT CASE id WHEN 1 THEN 'first' WHEN 2 THEN 'second' ELSE 'OTHERS' END AS id_new FROM t_test; SELECT IF(id>2,'int2+','int2-') from t_test; Test result: Supported for statements such as if, case when, for loop, while loop, and loop exit when (up to 5 kinds) Parsing execution plan Test purpose: Check if execution plan parsing is supported for distributed databases Test script: explain analyze select * from t_test where id NOT IN (1,2,4); explain analyze select * from t_test a where EXISTS (select * from t_test b where a.id=b.id and b.id<3); explain analyze SELECT IF(id>2,'int2+','int2-') from t_test; Test result: Parsing the execution plan is supported Binding execution plan Test purpose: Verify the feature of binding an execution plan for distributed databases Test steps: View the current execution plan of SQL statements Use the binding feature View the execution plan after the SQL statement is bound Delete the binding Test script: explain select * from employees3 a join employees4 b on a.id = b.id where a.lname='Johnson'; explain select /*+ hash_join(a,b) */ * from employees3 a join employees4 b on a.id = b.id where a.lname='Johnson'; Test result: The plan may not use hash_join when the hint is not used, and it must use hash_join after the hint is used. Common functions Test purpose: Verify standard functions of distributed databases Test result: Standard database functions are supported Explicit/implicit transactions Test purpose: Verify the transaction support of distributed databases Test result: Explicit and implicit transactions are supported Character set Test purpose: Verify the character sets supported by the distributed database Test result: Only the utf8mb4 character set is supported now Lock support Test purpose: Verify the lock implementation of distributed databases Test result: Described how locks are implemented, what the blocking conditions are in the R-R/R-W/W-W cases, and how deadlocks are handled Isolation levels Test purpose: Verify the transactional isolation levels of distributed databases Test result: The SI and RC isolation levels are supported (4.0 GA version) Distributed complex query Test purpose: Verify the complex query capabilities of distributed databases Test result: Distributed complex queries and operations such as inter-node joins are supported, as well as window functions and hierarchical queries 
Account management and permission test Test purpose: Verify the account permission management of distributed databases Test script: select host,user,authentication_string from mysql.user; create user tidb IDENTIFIED by 'tidb'; select host,user,authentication_string from mysql.user; set password for tidb =password('tidbnew'); select host,user,authentication_string,Select_priv from mysql.user; grant select on *.* to tidb; flush privileges ; select host,user,authentication_string,Select_priv from mysql.user; grant all privileges on *.* to tidb; flush privileges ; select * from mysql.user where user='tidb'; revoke select on *.* from tidb; flush privileges ; revoke all privileges on *.* from tidb; flush privileges ; grant select(id) on test.TEST_HOTSPOT to tidb; drop user tidb; Test results: Creating, modifying, and deleting accounts and configuring passwords are supported, as well as the separation of security, audit, and data management. Based on different accounts, permissions can be controlled at the instance, library, table, and column levels Access control Test purpose: Verify the permission access control of distributed databases, and control the database data by granting basic CRUD (create, read, update, and delete) permissions Test script: mysql -u root -h 172.17.49.222 -P 4000 drop user tidb; drop user tidb1; create user tidb IDENTIFIED by 'tidb'; grant select on tidb.* to tidb; grant insert on tidb.* to tidb; grant update on tidb.* to tidb; grant delete on tidb.* to tidb; flush privileges; show grants for tidb; exit; mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'select * from aa;' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'insert into aa values(2);' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'update aa set id=3;' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'delete from aa where id=3;' Test result: Database data is controlled by granting the basic CRUD permissions Whitelist Test purpose: Verify the whitelist feature of distributed databases Test script: mysql -u root -h 172.17.49.102 -P 4000 drop user tidb; create user tidb@'127.0.0.1' IDENTIFIED by 'tidb'; flush privileges; select * from mysql.user where user='tidb'; mysql -u tidb -h 127.0.0.1 -P 4000 -ptidb mysql -u tidb -h 172.17.49.102 -P 4000 -ptidb Test result: The IP whitelist feature is supported, and matching by IP segment is supported Operation log Test purpose: Verify the monitoring capability of distributed databases Test script: kubectl -ntidb-cluster logs tidb-test-pd-2 --tail 22 Test result: Key actions or misoperations performed by users are recorded through the operation and maintenance management console or API 
Import and export data Test purpose: Verify the tool support for importing and exporting data in distributed databases Test script: select * from sbtest1 into outfile '/sbtest1.csv'; load data local infile '/sbtest1.csv' into table test100; Test result: Importing and exporting tables, schemas, and databases are supported Slow log query Test purpose: Get the SQL info from slow queries Prerequisite: The SQL execution time shall be longer than the configured threshold for slow queries, and the SQL execution has completed Test steps: Adjust the slow query threshold to 100 ms Run SQL View the slow query info from the log, system table, or dashboard Test script: show variables like 'tidb_slow_log_threshold'; set tidb_slow_log_threshold=100; select query_time, query from information_schema.slow_query where is_internal = false order by query_time desc limit 3; Test result: The slow query info can be obtained. For details about test data, see TiDB on HwameiStor Deployment and Test Logs. ","version":"Next","tagName":"h3"},{"title":"Volume Snapshot","type":0,"sectionRef":"#","url":"/docs/volumes/volume_snapshot","content":"","keywords":"","version":"Next"},{"title":"Create VolumeSnapshotClass","type":1,"pageTitle":"Volume Snapshot","url":"/docs/volumes/volume_snapshot#create-volumesnapshotclass","content":"By default, HwameiStor does not automatically create a VolumeSnapshotClass during installation, so you need to create a VolumeSnapshotClass manually. A sample VolumeSnapshotClass is as follows: kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 metadata: name: hwameistor-storage-lvm-snapshot annotations: snapshot.storage.kubernetes.io/is-default-class: "true" parameters: snapsize: "1073741824" driver: lvm.hwameistor.io deletionPolicy: Delete snapsize: It specifies the size of the VolumeSnapshot note If the snapsize parameter is not specified, the size of the created snapshot is consistent with the size of the source volume. After you create a VolumeSnapshotClass, you can use it to create a VolumeSnapshot. ","version":"Next","tagName":"h2"},{"title":"Create a VolumeSnapshot using the VolumeSnapshotClass","type":1,"pageTitle":"Volume Snapshot","url":"/docs/volumes/volume_snapshot#create-a-volumesnapshot-using-the-volumesnapshotclass","content":"A sample VolumeSnapshot is as follows: apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-local-storage-pvc-lvm spec: volumeSnapshotClassName: hwameistor-storage-lvm-snapshot source: persistentVolumeClaimName: local-storage-pvc-lvm persistentVolumeClaimName: It specifies the PVC from which to create the VolumeSnapshot After creating a VolumeSnapshot, you can check the VolumeSnapshot using the following command. $ kubectl get vs NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE snapshot-local-storage-pvc-lvm true local-storage-pvc-lvm 1Gi hwameistor-storage-lvm-snapshot snapcontent-0fc17697-68ea-49ce-8e4c-7a791e315110 53y 2m57s After creating a VolumeSnapshot, you can check the HwameiStor LocalVolumeSnapshot using the following command. 
$ kubectl get lvs NAME CAPACITY SOURCEVOLUME STATE MERGING INVALID AGE snapcontent-0fc17697-68ea-49ce-8e4c-7a791e315110 1073741824 pvc-967baffd-ce10-4739-b996-87c9ed24e635 Ready 5m31s CAPACITY: The capacity size of the snapshot. SOURCEVOLUME: The source volume name of the snapshot. MERGING: Whether the snapshot is in a merging state (usually triggered by a rollback operation). INVALID: Whether the snapshot is invalidated (usually triggered when the snapshot capacity is full). AGE: The actual creation time of the snapshot (different from the CR creation time; this is the creation time of the underlying snapshot data volume). After creating a VolumeSnapshot, you can restore and roll back the VolumeSnapshot. ","version":"Next","tagName":"h2"},{"title":"Restore VolumeSnapshot","type":1,"pageTitle":"Volume Snapshot","url":"/docs/volumes/volume_snapshot#restore-volumesnapshot","content":"You can create a PVC to restore a VolumeSnapshot, as follows: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: local-storage-pvc-lvm-restore spec: storageClassName: local-storage-hdd-lvm dataSource: name: snapshot-local-storage-pvc-lvm kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io accessModes: - ReadWriteOnce resources: requests: storage: 1Gi ","version":"Next","tagName":"h2"},{"title":"Rollback VolumeSnapshot","type":1,"pageTitle":"Volume Snapshot","url":"/docs/volumes/volume_snapshot#rollback-volumesnapshot","content":"note To roll back a snapshot, you must first stop the I/O of the source volume, for example by stopping the application. Wait for the rollback operation to complete and confirm data consistency before using the rolled-back data volume. A VolumeSnapshot can be rolled back by creating a LocalVolumeSnapshotRestore resource, as follows: apiVersion: hwameistor.io/v1alpha1 kind: LocalVolumeSnapshotRestore metadata: name: rollback-test spec: sourceVolumeSnapshot: snapcontent-0fc17697-68ea-49ce-8e4c-7a791e315110 restoreType: "rollback" sourceVolumeSnapshot: It specifies the LocalVolumeSnapshot to be rolled back. By observing the created LocalVolumeSnapshotRestore, you can follow the entire rollback process through its state. After the rollback is complete, the corresponding LocalVolumeSnapshotRestore will be deleted. $ kubectl get LocalVolumeSnapshotRestore -w NAME TARGETVOLUME SOURCESNAPSHOT STATE AGE restore-test2 pvc-967baffd-ce10-4739-b996-87c9ed24e635 snapcontent-0fc17697-68ea-49ce-8e4c-7a791e315110 Submitted 0s restore-test2 pvc-967baffd-ce10-4739-b996-87c9ed24e635 snapcontent-81a1f605-c28a-4e60-8c78-a3d504cbf6d9 InProgress 0s restore-test2 pvc-967baffd-ce10-4739-b996-87c9ed24e635 snapcontent-81a1f605-c28a-4e60-8c78-a3d504cbf6d9 Completed 2s ","version":"Next","tagName":"h2"}],"options":{"languages":["en","zh"],"id":"default"}}