Deploying the Latest Milvus Release on Kubernetes in Standalone Mode

  • Environment Requirements
  • Install Docker
  • Install MiniKube
  • Install kubectl
  • Install Helm
  • Online Installation of Milvus Standalone
  • Verifying Milvus Standalone
  • Offline Installation of Milvus Standalone
  • Appendix
    • values.yaml default configuration
    • milvus_manifest.yaml default configuration

Environment Requirements

  • Official requirements: https://milvus.io/docs/prerequisite-helm.md
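
Before going further, a quick host sanity check against the linked requirements can save time later; the commands below are only an illustrative sketch (the actual thresholds are on the official page above):

# CPU cores and memory available on the host
nproc
free -h

# Kernel and distribution, for matching against the supported platform list
uname -r
head -n 2 /etc/os-release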

Install Docker

  • Docker 24.0.5 deployment guide: https://blog.csdn.net/weixin_42598916/article/details/135725610?spm=1001.2014.3001.5502
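
After following the linked guide, it is worth confirming that the Docker daemon is installed and running before moving on (these are standard Docker/systemd commands, nothing specific to this setup):

# Client and server versions
docker version

# The daemon should report "active"
systemctl is-active docker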

Install MiniKube

  • Official installation docs: https://minikube.sigs.k8s.io/docs/start/
# Create the service directory
mkdir -p /data/service/minikube/

# Download the MiniKube package
curl -L https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm -o /data/service/minikube/minikube-latest.x86_64.rpm

# Install the MiniKube package
rpm -ivh /data/service/minikube/minikube-latest.x86_64.rpm

# Start the MiniKube service
# By default MiniKube refuses to start as root, so when running it as root the check must be skipped with --force
minikube start --force
minikube v1.31.2 on Centos 7.9.2009
minikube skips various validations when --force is supplied; this may lead to unexpected behavior
Automatically selected the docker driver. Other choices: none, ssh
The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
If you are running minikube within a VM, consider using --driver=none:
  https://minikube.sigs.k8s.io/docs/reference/drivers/none/
Using Docker driver with root privileges
Starting control plane node minikube in cluster minikube
Pulling base image ...
Downloading Kubernetes v1.27.4 preload ...
    > preloaded-images-k8s-v18-v1...:  393.21 MiB / 393.21 MiB  100.00% 345.67
    > gcr.io/k8s-minikube/kicbase...:  447.62 MiB / 447.62 MiB  100.00% 364.39
Creating docker container (CPUs=2, Memory=2200MB) ...
Preparing Kubernetes v1.27.4 on Docker 24.0.4 ...
    - Generating certificates and keys ...
    - Booting up control plane ...
    - Configuring RBAC rules ...
Configuring bridge CNI (Container Networking Interface) ...
    - Using image gcr.io/k8s-minikube/storage-provisioner:v5
Verifying Kubernetes components...
Enabled addons: storage-provisioner, default-storageclass
kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

# Check the pod status to confirm that MiniKube started correctly
minikube kubectl -- get po -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE
kube-system   coredns-5d78c9869d-grxl4           1/1     Running   0             41s
kube-system   etcd-minikube                      1/1     Running   0             52s
kube-system   kube-apiserver-minikube            1/1     Running   0             52s
kube-system   kube-controller-manager-minikube   1/1     Running   0             52s
kube-system   kube-proxy-dp2dv                   1/1     Running   0             41s
kube-system   kube-scheduler-minikube            1/1     Running   0             52s
kube-system   storage-provisioner                1/1     Running   1 (10s ago)   49s
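
Note that the defaults above give the MiniKube node only 2 CPUs and 2200MB of memory, which is tight for Milvus. If the host has spare capacity, the node can be recreated with more resources; the figures below are just an example, not a requirement from this guide:

# Remove the existing node and start a larger one (still as root, hence --force)
minikube delete
minikube start --force --cpus=4 --memory=8g

# Confirm the node came back up
minikube status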

Install kubectl

  • Official installation docs: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
# Download the latest kubectl binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install the kubectl binary
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Check the kubectl version to confirm the installation
kubectl version --client
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
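
minikube start writes its connection details to ~/.kube/config, so the kubectl installed here should already point at the MiniKube cluster; a quick way to confirm:

# The current context should be "minikube"
kubectl config current-context

# The single MiniKube node should be Ready
kubectl get nodes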

Install Helm

  • Official installation docs: https://helm.sh/docs/intro/install/
# Download the Helm 3.13.1 package
wget https://get.helm.sh/helm-v3.13.1-linux-amd64.tar.gz

# Unpack the Helm 3.13.1 package
tar -xf helm-v3.13.1-linux-amd64.tar.gz

# Move the helm binary to /usr/local/bin
mv linux-amd64/helm /usr/local/bin/helm

# Run helm help to confirm the installation
helm help
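
helm version can also be used to confirm the exact release that was unpacked:

helm version --short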

Online Installation of Milvus Standalone

  • Official installation docs:

    • https://artifacthub.io/packages/helm/milvus/milvus

    • https://milvus.io/docs/install_standalone-helm.md

# Add the official Milvus chart repository to Helm
helm repo add milvus https://zilliztech.github.io/milvus-helm/

# Update the Helm repositories
helm repo update

# Install Milvus Standalone
#   - my-release: name the release my-release
#   - milvus/milvus: install the milvus chart from the milvus repository
#   - cluster.enabled=false: disable cluster mode
#   - etcd.replicaCount=1: run a single replica of etcd, the distributed key-value store used by Milvus
#   - minio.mode=standalone: run MinIO, the object storage used by Milvus, in standalone mode
#   - pulsar.enabled=false: do not deploy Pulsar, the event streaming platform
helm install my-release milvus/milvus --set cluster.enabled=false --set etcd.replicaCount=1 --set minio.mode=standalone --set pulsar.enabled=false
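
Once the install command returns, the release itself can be inspected with Helm before checking the pods (covered in the next section):

# my-release should be listed with STATUS "deployed"
helm list

# Show the status and notes of the release
helm status my-release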

Verifying Milvus Standalone

# Check the Milvus pod status
kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
my-release-etcd-0                               1/1     Running   0          21m
my-release-milvus-standalone-7c85f5bdcd-hlzwp   1/1     Running   0          21m
my-release-minio-67c5bcb8d7-7bs7v               1/1     Running   0          21m

# Check the Milvus service status
kubectl get services
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
my-release-etcd            ClusterIP   10.110.20.83    <none>        2379/TCP,2380/TCP    17m
my-release-etcd-headless   ClusterIP   None            <none>        2379/TCP,2380/TCP    17m
my-release-milvus          ClusterIP   10.97.190.120   <none>        19530/TCP,9091/TCP   17m
my-release-minio           ClusterIP   10.103.74.140   <none>        9000/TCP             17m

# Manually delete the pod my-release-milvus-standalone-7c85f5bdcd-hlzwp
kubectl delete pod my-release-milvus-standalone-7c85f5bdcd-hlzwp -n default

# Check the Milvus pods again and confirm that a replacement standalone pod was created automatically and is Running
kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
my-release-etcd-0                               1/1     Running   0          23m
my-release-milvus-standalone-7c85f5bdcd-ddw4r   1/1     Running   0          100s
my-release-minio-67c5bcb8d7-7bs7v               1/1     Running   0          23m

# Check the Milvus services again
kubectl get services
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
kubernetes                 ClusterIP   10.96.0.1       <none>        443/TCP              27m
my-release-etcd            ClusterIP   10.110.20.83    <none>        2379/TCP,2380/TCP    24m
my-release-etcd-headless   ClusterIP   None            <none>        2379/TCP,2380/TCP    24m
my-release-milvus          ClusterIP   10.97.190.120   <none>        19530/TCP,9091/TCP   24m
my-release-minio           ClusterIP   10.103.74.140   <none>        9000/TCP             24m
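
Because my-release-milvus is a ClusterIP service, it is not reachable from outside the cluster by default. For a quick functional check from the host, the service ports can be forwarded locally: 19530 is the gRPC port used by the SDKs, and 9091 exposes the metrics/health endpoint that the deployment's probes already use:

# Forward the Milvus ports to the local machine (run in a separate terminal or background it)
kubectl port-forward service/my-release-milvus 19530:19530 9091:9091 &

# The health endpoint should return OK
curl http://localhost:9091/healthz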

Offline Installation of Milvus Standalone

  • Official installation docs: https://milvus.io/docs/install_offline-helm.md
# Add the official Milvus chart repository to Helm
helm repo add milvus https://zilliztech.github.io/milvus-helm/

# Update the Helm repositories
helm repo update

# Render the Milvus Standalone manifests to a YAML file
# - helm template my-release: name the release my-release
# - set cluster.enabled=false: disable cluster mode
# - set etcd.replicaCount=1: run a single replica of etcd, the distributed key-value store used by Milvus
# - set minio.mode=standalone: run MinIO, the object storage used by Milvus, in standalone mode
# - set pulsar.enabled=false: do not deploy Pulsar, the event streaming platform
# - milvus/milvus: render the milvus chart from the milvus repository
# - milvus_manifest.yaml: write the Kubernetes resource YAML generated by Helm to a file named milvus_manifest.yaml
helm template my-release --set cluster.enabled=false --set etcd.replicaCount=1 --set minio.mode=standalone --set pulsar.enabled=false milvus/milvus > milvus_manifest.yaml

# If you need to adjust configuration parameters, do so through the values.yaml file
# Download the values.yaml file
wget https://raw.githubusercontent.com/milvus-io/milvus-helm/master/charts/milvus/values.yaml

# Regenerate milvus_manifest.yaml from values.yaml
helm template -f values.yaml my-release milvus/milvus > milvus_manifest.yaml

# Download the remaining helper scripts
wget https://raw.githubusercontent.com/milvus-io/milvus/master/deployments/offline/requirements.txt
wget https://raw.githubusercontent.com/milvus-io/milvus/master/deployments/offline/save_image.py

# Pull and save the images
pip3 install -r requirements.txt
python3 save_image.py --manifest milvus_manifest.yaml

# Load the images
cd images/; for image in $(find . -type f -name "*.tar.gz"); do gunzip -c $image | docker load; done

# Deploy Milvus from the milvus_manifest.yaml manifests
kubectl apply -f milvus_manifest.yaml
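
When the offline deployment is no longer needed, it can be removed with the same manifest file:

# Tear down everything that was created from milvus_manifest.yaml
kubectl delete -f milvus_manifest.yaml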

Appendix

values.yaml default configuration

## Enable or disable Milvus Cluster mode
cluster:
  enabled: true
image:
  all:
    repository: milvusdb/milvus
    tag: v2.2.13
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  tools:
    repository: milvusdb/milvus-config-tool
    tag: v0.1.1
    pullPolicy: IfNotPresent
# Global node selector
# If set, this will apply to all milvus components
# Individual components can be set to a different node selector
nodeSelector: {}
# Global tolerations
# If set, this will apply to all milvus components
# Individual components can be set to a different tolerations
tolerations: []
# Global affinity
# If set, this will apply to all milvus components
# Individual components can be set to a different affinity
affinity: {}
# Global labels and annotations
# If set, this will apply to all milvus components
labels: {}
annotations: {}
# Extra configs for milvus.yaml
# If set, this config will merge into milvus.yaml
# Please follow the config structure in the milvus.yaml
# at https://github.com/milvus-io/milvus/blob/master/configs/milvus.yaml
# Note: this config will be the top priority which will override the config
# in the image and helm chart.
extraConfigFiles:
  user.yaml: |+
    #    For example enable rest http for milvus proxy
    #    proxy:
    #      http:
    #        enabled: true
## Expose the Milvus service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##
service:
  type: ClusterIP
  port: 19530
  portName: milvus
  nodePort: ""
  annotations: {}
  labels: {}
  ## List of IP addresses at which the Milvus service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []
  #   - externalIp1
  # LoadBalancerSourcesRange is a list of allowed CIDR values, which are combined with ServicePort to
  # set allowed inbound rules on the security group assigned to the master load balancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  # Optionally assign a known public LB IP
  # loadBalancerIP: 1.2.3.4
ingress:
  enabled: false
  annotations:
    # Annotation example: set nginx ingress type
    # kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/listen-ports-ssl: '[19530]'
    nginx.ingress.kubernetes.io/proxy-body-size: 4m
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  labels: {}
  rules:
    - host: "milvus-example.local"
      path: "/"
      pathType: "Prefix"
    # - host: "milvus-example2.local"
    #   path: "/otherpath"
    #   pathType: "Prefix"
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - milvus-example.local
serviceAccount:
  create: false
  name:
  annotations:
  labels:
metrics:
  enabled: true
  serviceMonitor:
    # Set this to `true` to create ServiceMonitor for Prometheus operator
    enabled: false
    interval: "30s"
    scrapeTimeout: "10s"
    # Additional labels that can be used so ServiceMonitor will be discovered by Prometheus
    additionalLabels: {}
livenessProbe:
  enabled: true
  initialDelaySeconds: 90
  periodSeconds: 30
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5
readinessProbe:
  enabled: true
  initialDelaySeconds: 90
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5
log:
  level: "info"
  file:
    maxSize: 300    # MB
    maxAge: 10    # day
    maxBackups: 20
  format: "text"    # text/json
  persistence:
    mountPath: "/milvus/logs"
    ## If true, create/use a Persistent Volume Claim
    ## If false, use emptyDir
    ##
    enabled: false
    annotations:
      helm.sh/resource-policy: keep
    persistentVolumeClaim:
      existingClaim: ""
      ## Milvus Logs Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.
      ## ReadWriteMany access mode required for milvus cluster.
      ##
      storageClass:
      accessModes: ReadWriteMany
      size: 10Gi
      subPath: ""
## Heaptrack traces all memory allocations and annotates these events with stack traces.
## See more: https://github.com/KDE/heaptrack
## Enable heaptrack in production is not recommended.
heaptrack:
  image:
    repository: milvusdb/heaptrack
    tag: v0.1.0
    pullPolicy: IfNotPresent
standalone:
  replicas: 1  # Run standalone mode with replication disabled
  resources: {}
  # Set local storage size in resources
  # limits:
  #    ephemeral-storage: 100Gi
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  disk:
    enabled: true
    size:
      enabled: false  # Enable local storage size limit
  profiling:
    enabled: false  # Enable live profiling
  ## Default message queue for milvus standalone
  ## Supported value: rocksmq, pulsar and kafka
  messageQueue: rocksmq
  persistence:
    mountPath: "/var/lib/milvus"
    ## If true, alertmanager will create/use a Persistent Volume Claim
    ## If false, use emptyDir
    ##
    enabled: true
    annotations:
      helm.sh/resource-policy: keep
    persistentVolumeClaim:
      existingClaim: ""
      ## Milvus Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.
      ##
      storageClass:
      accessModes: ReadWriteOnce
      size: 50Gi
      subPath: ""
proxy:
  enabled: true
  replicas: 1
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  http:
    enabled: true  # whether to enable http rest server
    debugMode:
      enabled: false
rootCoordinator:
  enabled: true
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1  # Run Root Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  activeStandby:
    enabled: false  # Enable active-standby when you set multiple replicas for root coordinator
  service:
    port: 53100
    annotations: {}
    labels: {}
    clusterIP: ""
queryCoordinator:
  enabled: true
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1  # Run Query Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  activeStandby:
    enabled: false  # Enable active-standby when you set multiple replicas for query coordinator
  service:
    port: 19531
    annotations: {}
    labels: {}
    clusterIP: ""
queryNode:
  enabled: true
  replicas: 1
  resources: {}
  # Set local storage size in resources
  # limits:
  #    ephemeral-storage: 100Gi
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  disk:
    enabled: true  # Enable querynode load disk index, and search on disk index
    size:
      enabled: false  # Enable local storage size limit
  profiling:
    enabled: false  # Enable live profiling
indexCoordinator:
  enabled: true
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1   # Run Index Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  activeStandby:
    enabled: false  # Enable active-standby when you set multiple replicas for index coordinator
  service:
    port: 31000
    annotations: {}
    labels: {}
    clusterIP: ""
indexNode:
  enabled: true
  replicas: 1
  resources: {}
  # Set local storage size in resources
  # limits:
  #    ephemeral-storage: 100Gi
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  disk:
    enabled: true  # Enable index node build disk vector index
    size:
      enabled: false  # Enable local storage size limit
dataCoordinator:
  enabled: true
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1           # Run Data Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: true  # Enable live profiling
  activeStandby:
    enabled: false  # Enable active-standby when you set multiple replicas for data coordinator
  service:
    port: 13333
    annotations: {}
    labels: {}
    clusterIP: ""
dataNode:
  enabled: true
  replicas: 1
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
## mixCoordinator contains all coord
## If you want to use mixcoord, enable this and disable all of other coords
mixCoordinator:
  enabled: false
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1           # Run Mixture Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  activeStandby:
    enabled: false  # Enable active-standby when you set multiple replicas for Mixture coordinator
  service:
    annotations: {}
    labels: {}
    clusterIP: ""
attu:
  enabled: false
  name: attu
  image:
    repository: zilliz/attu
    tag: v2.2.8
    pullPolicy: IfNotPresent
  service:
    annotations: {}
    labels: {}
    type: ClusterIP
    port: 3000
    # loadBalancerIP: ""
  resources: {}
  ingress:
    enabled: false
    annotations: {}
    # Annotation example: set nginx ingress type
    # kubernetes.io/ingress.class: nginx
    labels: {}
    hosts:
      - milvus-attu.local
    tls: []
    #  - secretName: chart-attu-tls
    #    hosts:
    #      - milvus-attu.local
## Configuration values for the minio dependency
## ref: https://github.com/minio/charts/blob/master/README.md
##
minio:
  enabled: true
  name: minio
  mode: distributed
  image:
    tag: "RELEASE.2023-03-20T20-16-18Z"
    pullPolicy: IfNotPresent
  accessKey: minioadmin
  secretKey: minioadmin
  existingSecret: ""
  bucketName: "milvus-bucket"
  rootPath: file
  useIAM: false
  iamEndpoint: ""
  podDisruptionBudget:
    enabled: false
  resources:
    requests:
      memory: 2Gi
  gcsgateway:
    enabled: false
    replicas: 1
    gcsKeyJson: "/etc/credentials/gcs_key.json"
    projectId: ""
  service:
    type: ClusterIP
    port: 9000
  persistence:
    enabled: true
    existingClaim: ""
    storageClass:
    accessMode: ReadWriteOnce
    size: 500Gi
  livenessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 5
  startupProbe:
    enabled: true
    initialDelaySeconds: 0
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 60
## Configuration values for the etcd dependency
## ref: https://artifacthub.io/packages/helm/bitnami/etcd
##
etcd:
  enabled: true
  name: etcd
  replicaCount: 3
  pdb:
    create: false
  image:
    repository: "milvusdb/etcd"
    tag: "3.5.5-r2"
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 2379
    peerPort: 2380
  auth:
    rbac:
      enabled: false
  persistence:
    enabled: true
    storageClass:
    accessMode: ReadWriteOnce
    size: 10Gi
  ## Enable auto compaction
  ## compaction by every 1000 revision
  ##
  autoCompactionMode: revision
  autoCompactionRetention: "1000"
  ## Increase default quota to 4G
  ##
  extraEnvVars:
  - name: ETCD_QUOTA_BACKEND_BYTES
    value: "4294967296"
  - name: ETCD_HEARTBEAT_INTERVAL
    value: "500"
  - name: ETCD_ELECTION_TIMEOUT
    value: "2500"
## Configuration values for the pulsar dependency
## ref: https://github.com/apache/pulsar-helm-chart
##
pulsar:
  enabled: true
  name: pulsar
  fullnameOverride: ""
  persistence: true
  maxMessageSize: 5242880  # 5 * 1024 * 1024 Bytes, Maximum size of each message in pulsar.
  rbac:
    enabled: false
    psp: false
    limit_to_namespace: true
  affinity:
    anti_affinity: false
## enableAntiAffinity: no
  components:
    zookeeper: true
    bookkeeper: true
    # bookkeeper - autorecovery
    autorecovery: true
    broker: true
    functions: false
    proxy: true
    toolset: false
    pulsar_manager: false
  monitoring:
    prometheus: false
    grafana: false
    node_exporter: false
    alert_manager: false
  images:
    broker:
      repository: apachepulsar/pulsar
      pullPolicy: IfNotPresent
      tag: 2.8.2
    autorecovery:
      repository: apachepulsar/pulsar
      tag: 2.8.2
      pullPolicy: IfNotPresent
    zookeeper:
      repository: apachepulsar/pulsar
      pullPolicy: IfNotPresent
      tag: 2.8.2
    bookie:
      repository: apachepulsar/pulsar
      pullPolicy: IfNotPresent
      tag: 2.8.2
    proxy:
      repository: apachepulsar/pulsar
      pullPolicy: IfNotPresent
      tag: 2.8.2
    pulsar_manager:
      repository: apachepulsar/pulsar-manager
      pullPolicy: IfNotPresent
      tag: v0.1.0
  zookeeper:
    resources:
      requests:
        memory: 1024Mi
        cpu: 0.3
    configData:
      PULSAR_MEM: >
        -Xms1024m
        -Xmx1024m
      PULSAR_GC: >
         -Dcom.sun.management.jmxremote
         -Djute.maxbuffer=10485760
         -XX:+ParallelRefProcEnabled
         -XX:+UnlockExperimentalVMOptions
         -XX:+DoEscapeAnalysis
         -XX:+DisableExplicitGC
         -XX:+PerfDisableSharedMem
         -Dzookeeper.forceSync=no
    pdb:
      usePolicy: false
  bookkeeper:
    replicaCount: 3
    volumes:
      journal:
        name: journal
        size: 100Gi
      ledgers:
        name: ledgers
        size: 200Gi
    resources:
      requests:
        memory: 2048Mi
        cpu: 1
    configData:
      PULSAR_MEM: >
        -Xms4096m
        -Xmx4096m
        -XX:MaxDirectMemorySize=8192m
      PULSAR_GC: >
        -Dio.netty.leakDetectionLevel=disabled
        -Dio.netty.recycler.linkCapacity=1024
        -XX:+UseG1GC -XX:MaxGCPauseMillis=10
        -XX:+ParallelRefProcEnabled
        -XX:+UnlockExperimentalVMOptions
        -XX:+DoEscapeAnalysis
        -XX:ParallelGCThreads=32
        -XX:ConcGCThreads=32
        -XX:G1NewSizePercent=50
        -XX:+DisableExplicitGC
        -XX:-ResizePLAB
        -XX:+ExitOnOutOfMemoryError
        -XX:+PerfDisableSharedMem
        -XX:+PrintGCDetails
      nettyMaxFrameSizeBytes: "104867840"
    pdb:
      usePolicy: false
  broker:
    component: broker
    podMonitor:
      enabled: false
    replicaCount: 1
    resources:
      requests:
        memory: 4096Mi
        cpu: 1.5
    configData:
      PULSAR_MEM: >
        -Xms4096m
        -Xmx4096m
        -XX:MaxDirectMemorySize=8192m
      PULSAR_GC: >
        -Dio.netty.leakDetectionLevel=disabled
        -Dio.netty.recycler.linkCapacity=1024
        -XX:+ParallelRefProcEnabled
        -XX:+UnlockExperimentalVMOptions
        -XX:+DoEscapeAnalysis
        -XX:ParallelGCThreads=32
        -XX:ConcGCThreads=32
        -XX:G1NewSizePercent=50
        -XX:+DisableExplicitGC
        -XX:-ResizePLAB
        -XX:+ExitOnOutOfMemoryError
      maxMessageSize: "104857600"
      defaultRetentionTimeInMinutes: "10080"
      defaultRetentionSizeInMB: "-1"
      backlogQuotaDefaultLimitGB: "8"
      ttlDurationDefaultInSeconds: "259200"
      subscriptionExpirationTimeMinutes: "3"
      backlogQuotaDefaultRetentionPolicy: producer_exception
    pdb:
      usePolicy: false
  autorecovery:
    resources:
      requests:
        memory: 512Mi
        cpu: 1
  proxy:
    replicaCount: 1
    podMonitor:
      enabled: false
    resources:
      requests:
        memory: 2048Mi
        cpu: 1
    service:
      type: ClusterIP
    ports:
      pulsar: 6650
    configData:
      PULSAR_MEM: >
        -Xms2048m -Xmx2048m
      PULSAR_GC: >
        -XX:MaxDirectMemorySize=2048m
      httpNumThreads: "100"
    pdb:
      usePolicy: false
  pulsar_manager:
    service:
      type: ClusterIP
  pulsar_metadata:
    component: pulsar-init
    image:
      # the image used for running `pulsar-cluster-initialize` job
      repository: apachepulsar/pulsar
      tag: 2.8.2
## Configuration values for the kafka dependency
## ref: https://artifacthub.io/packages/helm/bitnami/kafka
##
kafka:
  enabled: false
  name: kafka
  replicaCount: 3
  image:
    repository: bitnami/kafka
    tag: 3.1.0-debian-10-r52
  ## Increase graceful termination for kafka graceful shutdown
  terminationGracePeriodSeconds: "90"
  pdb:
    create: false
  ## Enable startup probe to prevent pod restart during recovering
  startupProbe:
    enabled: true
  ## Kafka Java Heap size
  heapOpts: "-Xmx4096m -Xms4096m"
  maxMessageBytes: _10485760
  defaultReplicationFactor: 3
  offsetsTopicReplicationFactor: 3
  ## Only enable time based log retention
  logRetentionHours: 168
  logRetentionBytes: _-1
  extraEnvVars:
  - name: KAFKA_CFG_MAX_PARTITION_FETCH_BYTES
    value: "5242880"
  - name: KAFKA_CFG_MAX_REQUEST_SIZE
    value: "5242880"
  - name: KAFKA_CFG_REPLICA_FETCH_MAX_BYTES
    value: "10485760"
  - name: KAFKA_CFG_FETCH_MESSAGE_MAX_BYTES
    value: "5242880"
  - name: KAFKA_CFG_LOG_ROLL_HOURS
    value: "24"
  persistence:
    enabled: true
    storageClass:
    accessMode: ReadWriteOnce
    size: 300Gi
  metrics:
    ## Prometheus Kafka exporter: exposes complimentary metrics to JMX exporter
    kafka:
      enabled: false
      image:
        repository: bitnami/kafka-exporter
        tag: 1.4.2-debian-10-r182
    ## Prometheus JMX exporter: exposes the majority of Kafkas metrics
    jmx:
      enabled: false
      image:
        repository: bitnami/jmx-exporter
        tag: 0.16.1-debian-10-r245
    ## To enable serviceMonitor, you must enable either kafka exporter or jmx exporter.
    ## And you can enable them both
    serviceMonitor:
      enabled: false
  service:
    type: ClusterIP
    ports:
      client: 9092
  zookeeper:
    enabled: true
    replicaCount: 3
## Configuration values for the mysql dependency
## ref: https://artifacthub.io/packages/helm/bitnami/mysql
##
## MySQL used for meta store is testing internally
mysql:
  enabled: false
  name: mysql
  image:
    repository: bitnami/mysql
    tag: 8.0.23-debian-10-r84
  architecture: replication
  auth:
    rootPassword: "ChangeMe"
    createDatabase: true
    database: "milvus_meta"
  maxOpenConns: 20
  maxIdleConns: 5
  primary:
    name: primary
    resources:
      limits: {}
      requests: {}
    persistence:
      enabled: true
      storageClass: ""
      accessModes:
        - ReadWriteOnce
      size: 100Gi
  secondary:
    name: secondary
    replicaCount: 1
    resources:
      limits: {}
      requests: {}
    persistence:
      enabled: true
      storageClass: ""
      accessModes:
        - ReadWriteOnce
      size: 100Gi
###################################
# External S3
# - these configs are only used when `externalS3.enabled` is true
###################################
externalS3:
  enabled: false
  host: ""
  port: ""
  accessKey: ""
  secretKey: ""
  useSSL: false
  bucketName: ""
  rootPath: ""
  useIAM: false
  cloudProvider: "aws"
  iamEndpoint: ""
###################################
# GCS Gateway
# - these configs are only used when `minio.gcsgateway.enabled` is true
###################################
externalGcs:
  bucketName: ""
###################################
# External etcd
# - these configs are only used when `externalEtcd.enabled` is true
###################################
externalEtcd:
  enabled: false
  ## the endpoints of the external etcd
  ##
  endpoints:
    - localhost:2379
###################################
# External pulsar
# - these configs are only used when `externalPulsar.enabled` is true
###################################
externalPulsar:
  enabled: false
  host: localhost
  port: 6650
  maxMessageSize: 5242880  # 5 * 1024 * 1024 Bytes, Maximum size of each message in pulsar.
  tenant: public
  namespace: default
  authPlugin: ""
  authParams: ""
###################################
# External kafka
# - these configs are only used when `externalKafka.enabled` is true
###################################
externalKafka:
  enabled: false
  brokerList: localhost:9092
  securityProtocol: SASL_SSL
  sasl:
    mechanisms: PLAIN
    username: ""
    password: ""
###################################
# External mysql
# - these configs are only used when `externalMysql.enabled` is true
###################################
externalMysql:
  enabled: false
  username: ""
  password: ""
  address: localhost
  port: 3306
  dbName: milvus_meta
  maxOpenConns: 20
  maxIdleConns: 5

milvus_manifest.yaml default configuration

---
# Source: milvus/charts/minio/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "my-release-minio"
  namespace: "default"
  labels:
    app: minio
    chart: minio-8.0.17
    release: "my-release"
---
# Source: milvus/charts/minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-release-minio
  labels:
    app: minio
    chart: minio-8.0.17
    release: my-release
    heritage: Helm
type: Opaque
data:
  accesskey: "bWluaW9hZG1pbg=="
  secretkey: "bWluaW9hZG1pbg=="
---
# Source: milvus/charts/minio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-minio
  labels:
    app: minio
    chart: minio-8.0.17
    release: my-release
    heritage: Helm
data:
  initialize: |-
    #!/bin/sh
    set -e ; # Have script exit in the event of a failed command.
    MC_CONFIG_DIR="/etc/minio/mc/"
    MC="/usr/bin/mc --insecure --config-dir ${MC_CONFIG_DIR}"
    # connectToMinio
    # Use a check-sleep-check loop to wait for Minio service to be available
    connectToMinio() {
      SCHEME=$1
      ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
      set -e ; # fail if we can't read the keys.
      ACCESS=$(cat /config/accesskey) ; SECRET=$(cat /config/secretkey) ;
      set +e ; # The connections to minio are allowed to fail.
      echo "Connecting to Minio server: $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT" ;
      MC_COMMAND="${MC} config host add myminio $SCHEME://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
      $MC_COMMAND ;
      STATUS=$? ;
      until [ $STATUS = 0 ]
      do
        ATTEMPTS=`expr $ATTEMPTS + 1` ;
        echo "Failed attempts: $ATTEMPTS" ;
        if [ $ATTEMPTS -gt $LIMIT ]; then
          exit 1 ;
        fi ;
        sleep 2 ; # 1 second intervals between attempts
        $MC_COMMAND ;
        STATUS=$? ;
      done ;
      set -e ; # reset `e` as active
      return 0
    }
    # checkBucketExists ($bucket)
    # Check if the bucket exists, by using the exit code of `mc ls`
    checkBucketExists() {
      BUCKET=$1
      CMD=$(${MC} ls myminio/$BUCKET > /dev/null 2>&1)
      return $?
    }
    # createBucket ($bucket, $policy, $purge)
    # Ensure bucket exists, purging if asked to
    createBucket() {
      BUCKET=$1
      POLICY=$2
      PURGE=$3
      VERSIONING=$4
      # Purge the bucket, if set & exists
      # Since PURGE is user input, check explicitly for `true`
      if [ $PURGE = true ]; then
        if checkBucketExists $BUCKET ; then
          echo "Purging bucket '$BUCKET'."
          set +e ; # don't exit if this fails
          ${MC} rm -r --force myminio/$BUCKET
          set -e ; # reset `e` as active
        else
          echo "Bucket '$BUCKET' does not exist, skipping purge."
        fi
      fi
      # Create the bucket if it does not exist
      if ! checkBucketExists $BUCKET ; then
        echo "Creating bucket '$BUCKET'"
        ${MC} mb myminio/$BUCKET
      else
        echo "Bucket '$BUCKET' already exists."
      fi
      # set versioning for bucket
      if [ ! -z $VERSIONING ] ; then
        if [ $VERSIONING = true ] ; then
            echo "Enabling versioning for '$BUCKET'"
            ${MC} version enable myminio/$BUCKET
        elif [ $VERSIONING = false ] ; then
            echo "Suspending versioning for '$BUCKET'"
            ${MC} version suspend myminio/$BUCKET
        fi
      else
          echo "Bucket '$BUCKET' versioning unchanged."
      fi
      # At this point, the bucket should exist, skip checking for existence
      # Set policy on the bucket
      echo "Setting policy of bucket '$BUCKET' to '$POLICY'."
      ${MC} policy set $POLICY myminio/$BUCKET
    }
    # Try connecting to Minio instance
    scheme=http
    connectToMinio $scheme
---
# Source: milvus/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-milvus
data:
  default.yaml: |+
    # Copyright (C) 2019-2021 Zilliz. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance
    # with the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software distributed under the License
    # is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
    # or implied. See the License for the specific language governing permissions and limitations under the License.
    etcd:
      endpoints:
        - my-release-etcd:2379
    metastore:
      type: etcd
    minio:
      address: my-release-minio
      port: 9000
      accessKeyID: minioadmin
      secretAccessKey: minioadmin
      useSSL: false
      bucketName: milvus-bucket
      rootPath: file
      useIAM: false
      iamEndpoint:
      region:
      useVirtualHost: false
    mq:
      type: rocksmq
    messageQueue: rocksmq
    rootCoord:
      address: localhost
      port: 53100
      enableActiveStandby: false  # Enable rootcoord active-standby
    proxy:
      port: 19530
      internalPort: 19529
    queryCoord:
      address: localhost
      port: 19531
      enableActiveStandby: false  # Enable querycoord active-standby
    queryNode:
      port: 21123
      enableDisk: true # Enable querynode load disk index, and search on disk index
    indexCoord:
      address: localhost
      port: 31000
      enableActiveStandby: false  # Enable indexcoord active-standby
    indexNode:
      port: 21121
      enableDisk: true # Enable index node build disk vector index
    dataCoord:
      address: localhost
      port: 13333
      enableActiveStandby: false  # Enable datacoord active-standby
    dataNode:
      port: 21124
    log:
      level: info
      file:
        rootPath: ""
        maxSize: 300
        maxAge: 10
        maxBackups: 20
      format: text
  user.yaml: |-
    #    For example enable rest http for milvus proxy
    #    proxy:
    #      http:
    #        enabled: true
---
# Source: milvus/charts/minio/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-release-minio
  annotations:
    helm.sh/resource-policy: keep
  labels:
    app: minio
    chart: minio-8.0.17
    release: my-release
    heritage: Helm
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "500Gi"
---
# Source: milvus/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-release-milvus
  annotations:
    helm.sh/resource-policy: keep
  labels:
    helm.sh/chart: milvus-4.1.8
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.3.2"
    app.kubernetes.io/managed-by: Helm
spec:
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: 50Gi
---
# Source: milvus/charts/etcd/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-etcd-headless
  namespace: default
  labels:
    app.kubernetes.io/name: etcd
    helm.sh/chart: etcd-6.3.3
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/managed-by: Helm
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: "client"
      port: 2379
      targetPort: client
    - name: "peer"
      port: 2380
      targetPort: peer
  selector:
    app.kubernetes.io/name: etcd
    app.kubernetes.io/instance: my-release
---
# Source: milvus/charts/etcd/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-etcd
  namespace: default
  labels:
    app.kubernetes.io/name: etcd
    helm.sh/chart: etcd-6.3.3
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  type: ClusterIP
  ports:
    - name: "client"
      port: 2379
      targetPort: client
      nodePort: null
    - name: "peer"
      port: 2380
      targetPort: peer
      nodePort: null
  selector:
    app.kubernetes.io/name: etcd
    app.kubernetes.io/instance: my-release
---
# Source: milvus/charts/minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-minio
  labels:
    app: minio
    chart: minio-8.0.17
    release: my-release
    heritage: Helm
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    app: minio
    release: my-release
---
# Source: milvus/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus
  labels:
    helm.sh/chart: milvus-4.1.8
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.3.2"
    app.kubernetes.io/managed-by: Helm
    component: "standalone"
spec:
  type: ClusterIP
  ports:
    - name: milvus
      port: 19530
      protocol: TCP
      targetPort: milvus
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "standalone"
---
# Source: milvus/charts/minio/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-minio
  labels:
    app: minio
    chart: minio-8.0.17
    release: my-release
    heritage: Helm
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0
  selector:
    matchLabels:
      app: minio
      release: my-release
  template:
    metadata:
      name: my-release-minio
      labels:
        app: minio
        release: my-release
      annotations:
        checksum/secrets: aa3f4f64eb45653d3fee45079a6e55a9869ce76297baa50df3bc33192434d05e
        checksum/config: ed4d4467dd70f3e0ed89e5d2bc3c4414b02a52fcf7db7f1b66b675b9016664a7
    spec:
      serviceAccountName: "my-release-minio"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: minio
          image: "minio/minio:RELEASE.2023-03-20T20-16-18Z"
          imagePullPolicy: IfNotPresent
          command:
            - "/bin/sh"
            - "-ce"
            - "/usr/bin/docker-entrypoint.sh minio -S /etc/minio/certs/ server /export"
          volumeMounts:
            - name: export
              mountPath: /export
          ports:
            - name: http
              containerPort: 9000
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: http
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 5
          startupProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 60
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: my-release-minio
                  key: accesskey
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: my-release-minio
                  key: secretkey
          resources:
            requests:
              memory: 2Gi
      volumes:
        - name: export
          persistentVolumeClaim:
            claimName: my-release-minio
        - name: minio-user
          secret:
            secretName: my-release-minio
---
# Source: milvus/templates/standalone-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-standalone
  labels:
    helm.sh/chart: milvus-4.1.8
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.3.2"
    app.kubernetes.io/managed-by: Helm
    component: "standalone"
  annotations:
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "standalone"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "standalone"
      annotations:
        checksum/config: bc6a7e82027efa3ad0df3b00d07c3ad1dcc40d7665f314e6d13a000fdd26aec2
    spec:
      serviceAccountName: default
      initContainers:
      - name: config
        command:
        - /cp
        - /run-helm.sh,/merge
        - /milvus/tools/run-helm.sh,/milvus/tools/merge
        image: "milvusdb/milvus-config-tool:v0.1.1"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /milvus/tools
          name: tools
      containers:
      - name: standalone
        image: "milvusdb/milvus:v2.3.2"
        imagePullPolicy: IfNotPresent
        args: [ "/milvus/tools/run-helm.sh", "milvus", "run", "standalone" ]
        ports:
          - name: milvus
            containerPort: 19530
            protocol: TCP
          - name: metrics
            containerPort: 9091
            protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        resources:
          {}
        env:
        volumeMounts:
        - mountPath: /milvus/tools
          name: tools
        - name: milvus-config
          mountPath: /milvus/configs/default.yaml
          subPath: default.yaml
          readOnly: true
        - name: milvus-config
          mountPath: /milvus/configs/user.yaml
          subPath: user.yaml
          readOnly: true
        - name: milvus-data-disk
          mountPath: "/var/lib/milvus"
          subPath:
        - mountPath: /var/lib/milvus/data
          name: disk
      volumes:
      - emptyDir: {}
        name: tools
      - name: milvus-config
        configMap:
          name: my-release-milvus
      - name: milvus-data-disk
        persistentVolumeClaim:
          claimName: my-release-milvus
      - name: disk
        emptyDir: {}
---
# Source: milvus/charts/etcd/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-release-etcd
  namespace: default
  labels:
    app.kubernetes.io/name: etcd
    helm.sh/chart: etcd-6.3.3
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: etcd
      app.kubernetes.io/instance: my-release
  serviceName: my-release-etcd-headless
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: etcd
        helm.sh/chart: etcd-6.3.3
        app.kubernetes.io/instance: my-release
        app.kubernetes.io/managed-by: Helm
      annotations:
    spec:
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: etcd
                    app.kubernetes.io/instance: my-release
                namespaces:
                  - "default"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      securityContext:
        fsGroup: 1001
      serviceAccountName: "default"
      containers:
        - name: etcd
          image: docker.io/milvusdb/etcd:3.5.5-r2
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ETCDCTL_API
              value: "3"
            - name: ETCD_ON_K8S
              value: "yes"
            - name: ETCD_START_FROM_SNAPSHOT
              value: "no"
            - name: ETCD_DISASTER_RECOVERY
              value: "no"
            - name: ETCD_NAME
              value: "$(MY_POD_NAME)"
            - name: ETCD_DATA_DIR
              value: "/bitnami/etcd/data"
            - name: ETCD_LOG_LEVEL
              value: "info"
            - name: ALLOW_NONE_AUTHENTICATION
              value: "yes"
            - name: ETCD_ADVERTISE_CLIENT_URLS
              value: "http://$(MY_POD_NAME).my-release-etcd-headless.default.svc.cluster.local:2379"
            - name: ETCD_LISTEN_CLIENT_URLS
              value: "http://0.0.0.0:2379"
            - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
              value: "http://$(MY_POD_NAME).my-release-etcd-headless.default.svc.cluster.local:2380"
            - name: ETCD_LISTEN_PEER_URLS
              value: "http://0.0.0.0:2380"
            - name: ETCD_AUTO_COMPACTION_MODE
              value: "revision"
            - name: ETCD_AUTO_COMPACTION_RETENTION
              value: "1000"
            - name: ETCD_QUOTA_BACKEND_BYTES
              value: "4294967296"
            - name: ETCD_HEARTBEAT_INTERVAL
              value: "500"
            - name: ETCD_ELECTION_TIMEOUT
              value: "2500"
          envFrom:
          ports:
            - name: client
              containerPort: 2379
              protocol: TCP
            - name: peer
              containerPort: 2380
              protocol: TCP
          livenessProbe:
            exec:
              command:
                - /opt/bitnami/scripts/etcd/healthcheck.sh
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            exec:
              command:
                - /opt/bitnami/scripts/etcd/healthcheck.sh
            initialDelaySeconds: 60
            periodSeconds: 20
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 5
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: data
              mountPath: /bitnami/etcd
      volumes:
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "10Gi"