[BUG] vastbase cluster pod role is null after drain node #8986

Open
JashBook opened this issue Feb 27, 2025 · 0 comments

Describe the bug
After draining the node that hosts the primary pod, the evicted pod is recreated on another node but its role stays null (shown as <none> by kbcli), and the cluster remains in Updating status. The pod's startup script waits indefinitely for the CM server to report Normal, and lorry reports the role as Unknown.

kbcli version
Kubernetes: v1.28.15-vke.18
KubeBlocks: 0.9.4-beta.0
kbcli: 0.9.3

To Reproduce
Steps to reproduce the behavior:

  1. Create the cluster:
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: vastbase-kdvxho
  namespace: default
  annotations:
    kubeblocks.io/host-network: "vastbase"
spec:
  clusterDefinitionRef: vastbase
  topology: replication
  terminationPolicy: Halt
  componentSpecs:
    - name: vastbase
      serviceVersion: 2.2.15
      replicas: 3
      resources:
        requests:
          cpu: 500m
          memory: 2Gi
        limits:
          cpu: 500m
          memory: 2Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
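The manifest above can be saved and applied, then the pods waited on before checking roles. A minimal sketch (the file name cluster.yaml is assumed; the instance label comes from the lorry pod selector shown in the logs below):

kubectl apply -f cluster.yaml --namespace default
kubectl wait pods --namespace default \
  -l app.kubernetes.io/instance=vastbase-kdvxho \
  --for=condition=Ready --timeout=600s
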
kbcli cluster list-instances vastbase-kdvxho --namespace default
NAME                         NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE      ACCESSMODE   AZ              CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                      CREATED-TIME                 
vastbase-kdvxho-vastbase-0   default     vastbase-kdvxho   vastbase    Running   Standby   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.37/172.16.0.37   Feb 27,2025 17:33 UTC+0800   
vastbase-kdvxho-vastbase-1   default     vastbase-kdvxho   vastbase    Running   Standby   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.7/172.16.0.7     Feb 27,2025 17:33 UTC+0800   
vastbase-kdvxho-vastbase-2   default     vastbase-kdvxho   vastbase    Running   Primary   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.44/172.16.0.44   Feb 27,2025 17:33 UTC+0800   

  2. Drain the node hosting the primary pod (172.16.0.44):
kubectl drain 172.16.0.44 --delete-local-data --ignore-daemonsets --force --grace-period 0 --timeout 60s 
evicting pod default/vastbase-kdvxho-vastbase-2
pod/vastbase-kdvxho-vastbase-2 evicted
node/172.16.0.44 drained
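
To follow the eviction and rescheduling while the drain runs, and to make the node schedulable again afterwards, something like this can be used (the uncordon step is optional for reproducing the bug):

kubectl get pods --namespace default -o wide -w
kubectl uncordon 172.16.0.44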
  3. See the error:
kubectl get cluster 
NAME              CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     AGE
vastbase-kdvxho   vastbase                       Halt                 Updating   17m

kbcli cluster list-instances vastbase-kdvxho --namespace default
NAME                         NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE      ACCESSMODE   AZ              CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                      CREATED-TIME                 
vastbase-kdvxho-vastbase-0   default     vastbase-kdvxho   vastbase    Running   Standby   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.37/172.16.0.37   Feb 27,2025 17:33 UTC+0800   
vastbase-kdvxho-vastbase-1   default     vastbase-kdvxho   vastbase    Running   Primary   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.7/172.16.0.7     Feb 27,2025 17:33 UTC+0800   
vastbase-kdvxho-vastbase-2   default     vastbase-kdvxho   vastbase    Running   <none>    <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.18/172.16.0.18   Feb 27,2025 17:34 UTC+0800   
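
The ROLE column printed by kbcli reflects a label on the pod, so the missing role can also be confirmed directly. A sketch, assuming the standard KubeBlocks role label kubeblocks.io/role; it prints nothing for pod 2 while the bug is present:

kubectl get pod vastbase-kdvxho-vastbase-2 --namespace default \
  -o jsonpath='{.metadata.labels.kubeblocks\.io/role}'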

Pod logs

kubectl logs vastbase-kdvxho-vastbase-2      
Defaulted container "vastbase" out of: vastbase, exporter, lorry, config-manager, init-lorry (init)
+ CONFIG_VOLUME=/etc/vastbase
+ VASTBASE_CM_SERVER_PORT=1038
+ VASTBASE_DB_PORT=1035
+ VASTBASE_DB_REPL_PORT=1036
+ VASTBASE_DB_HEARTBEAT_PORT=1037
+ db_conn_str=postgres://%2Ftmp:1035
+ pwd
+ id
/
uid=1000(vastbase) gid=1000(vastbase) groups=1000(vastbase)
+ hostname=vastbase-kdvxho-vastbase-2
+ hostname_underscore=vastbase_kdvxho_vastbase_2
++ echo vastbase-kdvxho-vastbase-2
++ awk -F - '{print $(NF)}'
+ ordinal=2
+ export NODE_ID=3
+ NODE_ID=3
+ export DB_DIR=/var/lib/vastbase/dn
+ DB_DIR=/var/lib/vastbase/dn
+ export GAUSSLOG=/var/lib/vastbase/log
+ GAUSSLOG=/var/lib/vastbase/log
+ echo 'export GAUSSLOG="/var/lib/vastbase/log"'
+ mkdir -p /tmp
++ ls -A /var/lib/vastbase/dn
+ '[' -z 'PG_VERSION
asp_data
base
bin
data dir not empty, skip initialization
dbe_perf_standby
disable_conn_file
gaussdb.state
global
gs_gazelle.conf
gs_profile
gswlm_userinfo.cfg
license
mem_log
pg_audit
pg_clog
pg_csnlog
pg_ctl.lock
pg_errorinfo
pg_hba.conf
pg_hba.conf.bak
pg_hba.conf.lock
pg_ident.conf
pg_llog
pg_location
pg_log
pg_logical
pg_multixact
pg_notify
pg_perf
pg_replslot
pg_serial
pg_snapshots
pg_stat_tmp
pg_tblspc
pg_twophase
pg_xlog
postgresql.conf
postgresql.conf.guc.bak
postgresql.conf.lock
postmaster.opts
postmaster.pid.lock
sql_monitor
term_file
undo' ']'
+ echo 'data dir not empty, skip initialization'
+ license_file=/scripts/license
+ '[' -f /scripts/license ']'
+ echo 'using license /scripts/license'
using license /scripts/license
+ cat /scripts/license
+ base64 -d
+ gs_guc set -D /var/lib/vastbase/dn -c 'license_path='\''/var/lib/vastbase/dn/license'\'''
The vb_guc run with the following arguments: [gs_guc -D /var/lib/vastbase/dn -c license_path='/var/lib/vastbase/dn/license' set ].
expected instance path: [/var/lib/vastbase/dn/postgresql.conf]
gs_guc set: license_path='/var/lib/vastbase/dn/license': [/var/lib/vastbase/dn/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

+ gs_guc set -D /var/lib/vastbase/dn -c port=1035
The vb_guc run with the following arguments: [gs_guc -D /var/lib/vastbase/dn -c port=1035 set ].
expected instance path: [/var/lib/vastbase/dn/postgresql.conf]
gs_guc set: port=1035: [/var/lib/vastbase/dn/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

+ gs_guc set -D /var/lib/vastbase/dn -h 'host replication root 0.0.0.0/0 sha256'
realpath(/home/vastbase/app/bin/cluster_static_config) failed : No such file or directory!
The vb_guc run with the following arguments: [gs_guc -D /var/lib/vastbase/dn -h host replication root 0.0.0.0/0 sha256 set ].
expected instance path: [/var/lib/vastbase/dn/pg_hba.conf]
gs_guc sethba: host replication root 0.0.0.0/0 sha256: [/var/lib/vastbase/dn/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

+ gs_guc set -D /var/lib/vastbase/dn -h 'host all all 0.0.0.0/0 sha256'
realpath(/home/vastbase/app/bin/cluster_static_config) failed : No such file or directory!
The vb_guc run with the following arguments: [gs_guc -D /var/lib/vastbase/dn -h host all all 0.0.0.0/0 sha256 set ].
expected instance path: [/var/lib/vastbase/dn/pg_hba.conf]
gs_guc sethba: host all all 0.0.0.0/0 sha256: [/var/lib/vastbase/dn/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

+ '[' '!' -d /var/lib/vastbase/cm/ ']'
+ reload_cm_config
++ cat /etc/vastbase/hscale/pods.conf
++ awk -F = '{print $(NF)}'
++ tr , '\n'
++ xargs echo -n
+ hosts='vastbase-kdvxho-vastbase-0.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
vastbase-kdvxho-vastbase-1.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
vastbase-kdvxho-vastbase-2.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local'
+ gen_config_cmd='python3 /scripts/generateconfig.py --cm-server-port 1038 '
+ all_ips=()
+ for host in '${hosts}'
++ echo vastbase-kdvxho-vastbase-0.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
++ awk -F . '{print $(1)}'
+ replica=vastbase-kdvxho-vastbase-0
++ echo vastbase-kdvxho-vastbase-0
++ awk -F - '{print $(NF)}'
+ host_ordinal=0
+ retry getent ahosts vastbase-kdvxho-vastbase-0.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
+ command='getent ahosts vastbase-kdvxho-vastbase-0.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local'
+ true
+ getent ahosts vastbase-kdvxho-vastbase-0.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
172.16.0.37     STREAM vastbase-kdvxho-vastbase-0.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
172.16.0.37     DGRAM  
172.16.0.37     RAW    
+ echo 'command '\''getent ahosts vastbase-kdvxho-vastbase-0.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local'\'' succeeded.'
+ return
command 'getent ahosts vastbase-kdvxho-vastbase-0.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local' succeeded.
++ getent ahosts vastbase-kdvxho-vastbase-0.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
++ awk '{print $1}'
++ head -1
+ remote_ip=172.16.0.37
+ all_ips+=("$remote_ip")
+ '[' 0 == 0 ']'
+ gen_config_cmd+='--primary-ip 172.16.0.37 --primary-hostname vastbase-kdvxho-vastbase-0 '
+ for host in '${hosts}'
++ echo vastbase-kdvxho-vastbase-1.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
++ awk -F . '{print $(1)}'
+ replica=vastbase-kdvxho-vastbase-1
++ echo vastbase-kdvxho-vastbase-1
++ awk -F - '{print $(NF)}'
+ host_ordinal=1
+ retry getent ahosts vastbase-kdvxho-vastbase-1.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
+ command='getent ahosts vastbase-kdvxho-vastbase-1.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local'
+ true
+ getent ahosts vastbase-kdvxho-vastbase-1.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
172.16.0.7      STREAM vastbase-kdvxho-vastbase-1.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
172.16.0.7      DGRAM  
172.16.0.7      RAW    
+ echo 'command '\''getent ahosts vastbase-kdvxho-vastbase-1.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local'\'' succeeded.'
+ return
command 'getent ahosts vastbase-kdvxho-vastbase-1.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local' succeeded.
++ getent ahosts vastbase-kdvxho-vastbase-1.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
++ awk '{print $1}'
++ head -1
+ remote_ip=172.16.0.7
+ all_ips+=("$remote_ip")
+ '[' 1 == 0 ']'
+ gen_config_cmd+='--standby-ip 172.16.0.7 --standby-hostname vastbase-kdvxho-vastbase-1 '
+ for host in '${hosts}'
++ echo vastbase-kdvxho-vastbase-2.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
++ awk -F . '{print $(1)}'
+ replica=vastbase-kdvxho-vastbase-2
++ echo vastbase-kdvxho-vastbase-2
++ awk -F - '{print $(NF)}'
+ host_ordinal=2
+ retry getent ahosts vastbase-kdvxho-vastbase-2.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
+ command='getent ahosts vastbase-kdvxho-vastbase-2.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local'
+ true
+ getent ahosts vastbase-kdvxho-vastbase-2.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
172.16.0.18     STREAM vastbase-kdvxho-vastbase-2.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
172.16.0.18     DGRAM  
172.16.0.18     RAW    
+ echo 'command '\''getent ahosts vastbase-kdvxho-vastbase-2.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local'\'' succeeded.'
+ return
command 'getent ahosts vastbase-kdvxho-vastbase-2.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local' succeeded.
++ getent ahosts vastbase-kdvxho-vastbase-2.vastbase-kdvxho-vastbase-headless.default.svc.cluster.local
++ awk '{print $1}'
++ head -1
+ remote_ip=172.16.0.18
+ all_ips+=("$remote_ip")
+ '[' 2 == 0 ']'
+ gen_config_cmd+='--standby-ip 172.16.0.18 --standby-hostname vastbase-kdvxho-vastbase-2 '
+ eval 'python3 /scripts/generateconfig.py --cm-server-port 1038 --primary-ip 172.16.0.37 --primary-hostname vastbase-kdvxho-vastbase-0 --standby-ip 172.16.0.7 --standby-hostname vastbase-kdvxho-vastbase-1 --standby-ip 172.16.0.18 --standby-hostname vastbase-kdvxho-vastbase-2 '
++ python3 /scripts/generateconfig.py --cm-server-port 1038 --primary-ip 172.16.0.37 --primary-hostname vastbase-kdvxho-vastbase-0 --standby-ip 172.16.0.7 --standby-hostname vastbase-kdvxho-vastbase-1 --standby-ip 172.16.0.18 --standby-hostname vastbase-kdvxho-vastbase-2
+ gs_om -t generateconf -X /home/vastbase/config.xml
Generating static configuration files for all nodes.
Creating temp directory to store static configuration files.
Successfully created the temp directory.
Generating static configuration files.
Successfully generated static configuration files.
Static configuration files for all nodes are saved in /home/vastbase/app/script/static_config_files.
+ cp /home/vastbase/app/script/static_config_files/cluster_static_config_vastbase-kdvxho-vastbase-2 /home/vastbase/app/bin/cluster_static_config
+ rm -rf /var/lib/vastbase/cm/dcf_data
+ rm -rf /home/vastbase/app/bin/cluster_dynamic_config
+ /bin/killall -9 has_server
has_server: no process found
+ true
+ /bin/killall -9 has_agent
has_agent: no process found
+ true
0
+ repl_index=1
+ (( i = 0 ))
+ (( i < 3 ))
+ echo 0
+ '[' 172.16.0.18 '!=' 172.16.0.37 ']'
+ gs_guc set -D /var/lib/vastbase/dn -c 'replconninfo1='\''localhost=172.16.0.18 localport=1036 localheartbeatport=1037 remotehost=172.16.0.37 remoteport=1036 remoteheartbeatport=1037'\'''
The vb_guc run with the following arguments: [gs_guc -D /var/lib/vastbase/dn -c replconninfo1='localhost=172.16.0.18 localport=1036 localheartbeatport=1037 remotehost=172.16.0.37 remoteport=1036 remoteheartbeatport=1037' set ].
expected instance path: [/var/lib/vastbase/dn/postgresql.conf]
gs_guc set: replconninfo1='localhost=172.16.0.18 localport=1036 localheartbeatport=1037 remotehost=172.16.0.37 remoteport=1036 remoteheartbeatport=1037': [/var/lib/vastbase/dn/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

+ repl_index=2
+ gs_guc set -D /var/lib/vastbase/dn -h 'host all vastbase 172.16.0.37/32 trust'
The vb_guc run with the following arguments: [gs_guc -D /var/lib/vastbase/dn -h host all vastbase 172.16.0.37/32 trust set ].
expected instance path: [/var/lib/vastbase/dn/pg_hba.conf]
gs_guc sethba: host all vastbase 172.16.0.37/32 trust: [/var/lib/vastbase/dn/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

+ (( i++ ))
+ (( i < 3 ))
1
+ echo 1
+ '[' 172.16.0.18 '!=' 172.16.0.7 ']'
+ gs_guc set -D /var/lib/vastbase/dn -c 'replconninfo2='\''localhost=172.16.0.18 localport=1036 localheartbeatport=1037 remotehost=172.16.0.7 remoteport=1036 remoteheartbeatport=1037'\'''
The vb_guc run with the following arguments: [gs_guc -D /var/lib/vastbase/dn -c replconninfo2='localhost=172.16.0.18 localport=1036 localheartbeatport=1037 remotehost=172.16.0.7 remoteport=1036 remoteheartbeatport=1037' set ].
expected instance path: [/var/lib/vastbase/dn/postgresql.conf]
gs_guc set: replconninfo2='localhost=172.16.0.18 localport=1036 localheartbeatport=1037 remotehost=172.16.0.7 remoteport=1036 remoteheartbeatport=1037': [/var/lib/vastbase/dn/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

+ repl_index=3
+ gs_guc set -D /var/lib/vastbase/dn -h 'host all vastbase 172.16.0.7/32 trust'
The vb_guc run with the following arguments: [gs_guc -D /var/lib/vastbase/dn -h host all vastbase 172.16.0.7/32 trust set ].
expected instance path: [/var/lib/vastbase/dn/pg_hba.conf]
gs_guc sethba: host all vastbase 172.16.0.7/32 trust: [/var/lib/vastbase/dn/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

+ (( i++ ))
+ (( i < 3 ))
2
+ echo 2
+ '[' 172.16.0.18 '!=' 172.16.0.18 ']'
+ gs_guc set -D /var/lib/vastbase/dn -h 'host all vastbase 172.16.0.18/32 trust'
The vb_guc run with the following arguments: [gs_guc -D /var/lib/vastbase/dn -h host all vastbase 172.16.0.18/32 trust set ].
expected instance path: [/var/lib/vastbase/dn/pg_hba.conf]
gs_guc sethba: host all vastbase 172.16.0.18/32 trust: [/var/lib/vastbase/dn/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

+ (( i++ ))
+ (( i < 3 ))
+ vb_ctl reload -D /var/lib/vastbase/dn
[2025-02-27 09:35:52.130][172][][vb_ctl]: vb_ctl reload ,datadir is /var/lib/vastbase/dn 
[2025-02-27 09:35:52.130][172][][vb_ctl]:  PID file "/var/lib/vastbase/dn/postmaster.pid" does not exist
[2025-02-27 09:35:52.130][172][][vb_ctl]: Is server running?
+ true
+ cp /home/vastbase/app/version.cfg /home/vastbase/app/bin/upgrade_version
+ '[' -f /var/lib/vastbase/dn/recovery.conf ']'
+ has_ctl set --param --server -k 'log_dir='\''/var/lib/vastbase/log/cm/cm_server'\''' -n 3
has_ctl: set cm_server.conf success.
+ has_ctl set --param --agent -k 'log_dir='\''/var/lib/vastbase/log/cm/cm_agent'\''' -n 3
has_ctl: set cm_agent.conf success.
+ has_ctl set --param --agent -k 'unix_socket_directory='\''/home/vastbase/app'\''' -n 3
has_ctl: set cm_agent.conf success.
+ has_ctl set --param --agent -k enable_ssl=off -n 3
has_ctl: set cm_agent.conf success.
+ has_ctl set --param --server -k enable_ssl=off -n 3
has_ctl: set cm_server.conf success.
waiting for db ready
+ has_monitor -L /var/lib/vastbase/log/has_monitor
+ trap _reload HUP
+ vb_agent
+ trap _stop INT TERM
+ wait_for_db_ready
+ echo 'waiting for db ready'
++ grep Normal
++ has_ctl query -Cvwid
++ grep vastbase-kdvxho-vastbase-2
+ [[ -z '' ]]
+ has_ctl query -Cvwid
[  CMServer State   ]

node                          node_ip         instance                            state
-----------------------------------------------------------------------------------------
1  vastbase-kdvxho-vastbase-0 172.16.0.37     1    /var/lib/vastbase/cm/cm_server Down
2  vastbase-kdvxho-vastbase-1 172.16.0.7      2    /var/lib/vastbase/cm/cm_server Down
3  vastbase-kdvxho-vastbase-2 172.16.0.18     3    /var/lib/vastbase/cm/cm_server Down

has_ctl: can't connect to has_server.
Maybe has_server is not running, or timeout expired. Please try again.
+ true
+ sleep 5
++ has_ctl query -Cvwid
++ grep Normal
++ grep vastbase-kdvxho-vastbase-2
+ [[ -z '' ]]
+ has_ctl query -Cvwid
[  CMServer State   ]

node                          node_ip         instance                            state
-----------------------------------------------------------------------------------------
1  vastbase-kdvxho-vastbase-0 172.16.0.37     1    /var/lib/vastbase/cm/cm_server Down
2  vastbase-kdvxho-vastbase-1 172.16.0.7      2    /var/lib/vastbase/cm/cm_server Down
3  vastbase-kdvxho-vastbase-2 172.16.0.18     3    /var/lib/vastbase/cm/cm_server Standby

has_ctl: can't connect to has_server.
Maybe has_server is not running, or timeout expired. Please try again.
+ true
+ sleep 5
++ has_ctl query -Cvwid
++ grep Normal
++ grep vastbase-kdvxho-vastbase-2
+ [[ -z '' ]]
+ has_ctl query -Cvwid
[  CMServer State   ]

node                          node_ip         instance                            state
-----------------------------------------------------------------------------------------
1  vastbase-kdvxho-vastbase-0 172.16.0.37     1    /var/lib/vastbase/cm/cm_server Down
2  vastbase-kdvxho-vastbase-1 172.16.0.7      2    /var/lib/vastbase/cm/cm_server Down
3  vastbase-kdvxho-vastbase-2 172.16.0.18     3    /var/lib/vastbase/cm/cm_server Standby

has_ctl: can't connect to has_server.
Maybe has_server is not running, or timeout expired. Please try again.
+ true
+ sleep 5
++ has_ctl query -Cvwid
++ grep Normal
++ grep vastbase-kdvxho-vastbase-2
+ [[ -z '' ]]
+ has_ctl query -Cvwid
[  CMServer State   ]
...
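
The startup script keeps looping because `has_ctl query -Cvwid | grep vastbase-kdvxho-vastbase-2 | grep Normal` never matches: the local cm_server only reaches Standby while the instances on pods 0 and 1 stay Down. The same query can be run from the surviving replicas to see whether they agree (container name vastbase, as in the log header above):

kubectl exec vastbase-kdvxho-vastbase-0 -c vastbase -- has_ctl query -Cvwid
kubectl exec vastbase-kdvxho-vastbase-1 -c vastbase -- has_ctl query -Cvwid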

Lorry logs

kubectl logs vastbase-kdvxho-vastbase-2 lorry
2025-02-27T09:36:04Z	INFO	Initialize DB manager
2025-02-27T09:36:04Z	INFO	KB_WORKLOAD_TYPE ENV not set
2025-02-27T09:36:04Z	INFO	Volume-Protection	succeed to init volume protection	{"pod": "vastbase-kdvxho-vastbase-2", "spec": {"highWatermark":"0","volumes":[]}}
2025-02-27T09:36:04Z	INFO	HTTPServer	Starting HTTP Server
2025-02-27T09:36:04Z	INFO	HTTPServer	API route path	{"method": "POST", "path": ["/v1.0/rebuild", "/v1.0/createuser", "/v1.0/joinmember", "/v1.0/exec", "/v1.0/grantuserrole", "/v1.0/datadump", "/v1.0/leavemember", "/v1.0/volumeprotection", "/v1.0/switchover", "/v1.0/revokeuserrole", "/v1.0/lockinstance", "/v1.0/unlockinstance", "/v1.0/getlag", "/v1.0/deleteuser", "/v1.0/preterminate", "/v1.0/checkrunning", "/v1.0/dataload", "/v1.0/postprovision"]}
2025-02-27T09:36:04Z	INFO	HTTPServer	API route path	{"method": "GET", "path": ["/v1.0/describeuser", "/v1.0/listusers", "/v1.0/healthycheck", "/v1.0/getrole", "/v1.0/checkrole", "/v1.0/listsystemaccounts", "/v1.0/query"]}
2025-02-27T09:36:04Z	INFO	cronjobs	env is not set	{"env": "KB_CRON_JOBS"}
2025-02-27T09:36:11Z	INFO	DCS-K8S	pod selector: app.kubernetes.io/instance=vastbase-kdvxho,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=vastbase
2025-02-27T09:36:11Z	INFO	DCS-K8S	podlist: 3
2025-02-27T09:36:11Z	INFO	DCS-K8S	Leader configmap is not found	{"configmap": "vastbase-kdvxho-vastbase-leader"}
2025-02-27T09:36:11Z	INFO	event	send event: map[event:Success operation:checkRole originalRole:waitForStart role:Unknown]
2025-02-27T09:36:11Z	INFO	event	send event success	{"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"waitForStart\",\"role\":\"Unknown\"}"}
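
Lorry cannot determine a role either: the leader configmap is missing and checkRole returns Unknown. Both can be inspected directly; a sketch assuming lorry listens on its default port 3501 and that curl is available in the container:

kubectl get configmap vastbase-kdvxho-vastbase-leader --namespace default
kubectl exec vastbase-kdvxho-vastbase-2 -c lorry -- curl -s http://localhost:3501/v1.0/checkrole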

Expected behavior
After the drain, the evicted pod should rejoin the cluster with a valid role (Primary or Standby), and the cluster status should return to Running.


@JashBook JashBook added the kind/bug Something isn't working label Feb 27, 2025
@JashBook JashBook added this to the Release 0.9.4 milestone Feb 27, 2025