Huawei Cloud UCS — Improving Dual-Cluster Reliability: Steps for a Highly Available UCS Deployment with kubectl
Prerequisites:
- You have connected to the cluster federation with kubectl. For details, see Using kubectl to Connect to a Cluster.
- A dedicated ELB instance is available and bound to an EIP. For details, see Buying a Dedicated Load Balancer.

All YAML below is sample content; adjust the parameters to your environment.
Procedure:
A UCS high-availability deployment requires a resource propagation rule. Example YAML:
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: karmada-global-policy            # policy name
spec:
  resourceSelectors:                     # resources the policy propagates; multiple kinds may be listed
    - apiVersion: apps/v1                # group/version
      kind: Deployment                   # resource kind
    - apiVersion: apps/v1
      kind: DaemonSet
    - apiVersion: v1
      kind: Service
    - apiVersion: v1
      kind: Secret
    - apiVersion: v1
      kind: ConfigMap
    - apiVersion: v1
      kind: ResourceQuota
    - apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
    - apiVersion: autoscaling/v2beta2
      kind: HorizontalPodAutoscaler
    - apiVersion: autoscaling.cce.io/v2alpha1
      kind: CronHorizontalPodAutoscaler
  priority: 0                            # a larger value means higher priority
  conflictResolution: Overwrite          # behavior when the propagated resource already exists in a
                                         # target cluster; the default "Abort" stops propagation to
                                         # avoid accidental overwrites, "Overwrite" forces replacement
  placement:                             # where the selected resources are placed
    clusterAffinity:                     # cluster affinity
      clusterNames:                      # select clusters by name
        - ucs01                          # cluster name: replace with a real cluster in your environment
        - ucs02                          # cluster name: replace with a real cluster in your environment
    replicaScheduling:                   # replica scheduling strategy
      replicaSchedulingType: Divided     # split replicas across clusters
      replicaDivisionPreference: Weighted  # split by weight
      weightPreference:                  # weight options
        staticWeightList:                # static weights: ucs01 and ucs02 each weigh 1,
                                         # so each receives roughly half of the replicas
          - targetCluster:               # target cluster
              clusterNames:
                - ucs01                  # cluster name: replace with a real cluster in your environment
            weight: 1                    # weight 1
          - targetCluster:               # target cluster
              clusterNames:
                - ucs02                  # cluster name: replace with a real cluster in your environment
            weight: 1                    # weight 1
    clusterTolerations:                  # cluster tolerations: when a cluster's control plane is
                                         # unhealthy or unreachable, do not evict the workloads
      - key: cluster.karmada.io/not-ready
        operator: Exists
        effect: NoExecute
      - key: cluster.karmada.io/unreachable
        operator: Exists
        effect: NoExecute
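The Divided/Weighted placement above splits each workload's replica count across the member clusters in proportion to the static weights. The arithmetic can be sketched as follows (a simplified illustration only; `divide_replicas` is a hypothetical helper, and Karmada's real scheduler may handle remainders and ties differently):

```python
def divide_replicas(total, weights):
    """Simplified sketch of Karmada's Divided/Weighted replica split.

    Each cluster receives floor(total * weight / total_weight); any
    remainder is handed out in declaration order.
    """
    total_weight = sum(weights.values())
    shares = {name: total * w // total_weight for name, w in weights.items()}
    remainder = total - sum(shares.values())
    for name in list(weights)[:remainder]:
        shares[name] += 1
    return shares

# With the policy's 1:1 weights, a 2-replica Deployment lands one replica per cluster:
print(divide_replicas(2, {"ucs01": 1, "ucs02": 1}))  # {'ucs01': 1, 'ucs02': 1}
```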
If an HPA is created, its minimum replica count must also be split across clusters. The following example YAML can be used (optional):

In the YAML below, clusterNum is the number of member clusters; the example uses 2. Set it to match your environment.
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: hpa-min-replica-split-ric
spec:
  customizations:
    replicaResource:
      luaScript: |
        function GetReplicas(obj)
          clusterNum = 2
          replica = obj.spec.minReplicas
          if ( obj.spec.minReplicas == 1 ) then
            replica = clusterNum
          end
          return replica, nil
        end
    replicaRevision:
      luaScript: |
        function ReviseReplica(obj, desiredReplica)
          obj.spec.minReplicas = desiredReplica
          return obj
        end
  target:
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
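The two Lua hooks can be read as follows (a Python paraphrase for illustration only; Karmada executes the Lua above, not this code):

```python
CLUSTER_NUM = 2  # mirrors clusterNum in the Lua script; set to your member-cluster count

def get_replicas(hpa):
    # GetReplicas: report the HPA's minReplicas as its replica count,
    # bumping 1 up to the cluster count so that, after division, every
    # member cluster still ends up with minReplicas >= 1.
    replica = hpa["spec"]["minReplicas"]
    if replica == 1:
        replica = CLUSTER_NUM
    return replica

def revise_replica(hpa, desired):
    # ReviseReplica: write the divided share back into the member
    # cluster's copy of the HPA.
    hpa["spec"]["minReplicas"] = desired
    return hpa

print(get_replicas({"spec": {"minReplicas": 1}}))  # 2
```

Without this customization, an HPA with minReplicas 1 would be split so that one cluster receives 0, defeating the high-availability goal.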
Create a ConfigMap. Example YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-configmap
  namespace: default   # namespace, "default" by default
data:
  foo: bar
Create a Deployment. Example YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: default   # namespace, "default" by default
  labels:
    app: demo
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      affinity:
        podAntiAffinity:   # spread replicas across nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - demo
              topologyKey: kubernetes.io/hostname
      containers:
        - name: demo
          image: nginx   # for an image from the public image center, the image name is enough;
                         # for an image under "My Images", use the full address obtained from SWR
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
          envFrom:
            - configMapRef:
                name: demo-configmap
          command:
            - /bin/bash
          args:
            - '-c'
            - 'sed -i "s/nginx/podname: $POD_NAME podIP: $POD_IP/g" /usr/share/nginx/html/index.html;nginx "-g" "daemon off;"'
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 100m
              memory: 100Mi
Create an HPA. Example YAML:
apiVersion: autoscaling/v2   # use autoscaling/v2 so the HPA matches the ResourceInterpreterCustomization target above
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa
  namespace: default   # namespace, "default" by default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo
  minReplicas: 2
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 30
  behavior:
    scaleDown:
      policies:
        - type: Pods
          value: 2
          periodSeconds: 100
        - type: Percent
          value: 10
          periodSeconds: 100
      selectPolicy: Min
      stabilizationWindowSeconds: 300
    scaleUp:
      policies:
        - type: Pods
          value: 2
          periodSeconds: 15
        - type: Percent
          value: 20
          periodSeconds: 15
      selectPolicy: Max
      stabilizationWindowSeconds: 0
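Putting the pieces together for demo-hpa: with minReplicas 2 and 1:1 static weights, each member cluster receives an HPA whose minReplicas is 1. A back-of-the-envelope check, assuming two equal-weight clusters:

```python
min_replicas = 2                      # demo-hpa spec.minReplicas
weights = {"ucs01": 1, "ucs02": 1}    # static weights from the propagation policy

# Each cluster's share under an even 1:1 split:
total_weight = sum(weights.values())
per_cluster = {c: min_replicas * w // total_weight for c, w in weights.items()}
print(per_cluster)  # {'ucs01': 1, 'ucs02': 1}
```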
Create a Service. Example YAML:
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
  namespace: default   # namespace, "default" by default
spec:
  type: ClusterIP
  selector:
    app: demo
  sessionAffinity: None
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
Create an MCI (MultiClusterIngress). Example YAML:
apiVersion: networking.karmada.io/v1alpha1
kind: MultiClusterIngress
metadata:
  name: demo-mci       # MCI name
  namespace: default   # namespace, "default" by default
  annotations:
    karmada.io/elb.id: xxx                  # TODO: ELB instance ID
    karmada.io/elb.projectid: xxx           # TODO: project ID of the ELB instance
    karmada.io/elb.port: "8080"             # TODO: ELB listener port
    karmada.io/elb.health-check-flag: "on"
    karmada.io/elb.health-check-option.demo-svc: '{"protocol":"TCP"}'
spec:
  ingressClassName: public-elb     # ELB type, fixed value
  rules:
    - host: demo.localdev.me       # externally exposed domain name; TODO: replace with the real address
      http:
        paths:
          - backend:
              service:
                name: demo-svc     # name of the exposed Service
                port:
                  number: 8080     # exposed Service port
            path: /
            pathType: Prefix       # prefix match
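The MCI routes on the HTTP Host header, so a request that verifies the deployment through the ELB must carry the rule's host even when it addresses the listener by EIP. A sketch of building that verification request (203.0.113.10 is a placeholder, not a real EIP; substitute your ELB's public address):

```python
elb_eip = "203.0.113.10"      # placeholder: your ELB instance's public address
listener_port = 8080          # karmada.io/elb.port from the MCI
host = "demo.localdev.me"     # rule host from the MCI

# The Host header must match the MCI rule, or the request is not routed:
curl_cmd = f"curl -H 'Host: {host}' http://{elb_eip}:{listener_port}/"
print(curl_cmd)
```

Repeating the request several times should return pages with different podname/podIP values, confirming traffic is served by replicas in both clusters.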